
Need to Process 2 GB of Data Through PI

former_member200386
Active Participant

Hi Experts,

I am working on a Proxy to JDBC scenario where BW is my source system. It will process 2,400,000 records on average (this will run on a monthly basis).

When I executed the proxy program, all 2,400,000 records arrived in a single payload. I want to send only 1,000 records at a time to JDBC. Is there any way to control the output from the PI end?

If I change the message interface occurrence and the interface mapping occurrence to 1..unbounded, what will happen?

Thanks in Advance

Pavan

Accepted Solutions (1)


former_member181985
Active Contributor

You can chunk the data on the source system itself. Change the ABAP report to send 1,000 records in each message until all records have been read and sent.
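For illustration, the sender-side loop Praveen describes would live in the ABAP report around the outbound proxy call; this is just a hedged Python sketch of the pattern, with `send` standing in for the proxy method:

```python
def chunked(records, size=1000):
    """Yield successive batches of at most `size` records."""
    for start in range(0, len(records), size):
        yield records[start:start + size]

def send_in_batches(records, send, size=1000):
    """Call `send` once per batch instead of once for the whole payload.

    `send` is a placeholder for the outbound proxy call; in the real
    scenario this loop sits in the ABAP report, firing one PI message
    per batch until all records are sent.
    """
    for batch in chunked(records, size):
        send(batch)

# Example: 2,500 records -> three messages (1,000 + 1,000 + 500)
batches = list(chunked(list(range(2500))))
```

With 2,400,000 records and a batch size of 1,000, this produces 2,400 messages instead of one 2 GB payload.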

former_member200386
Active Participant

Thanks for the quick response, Praveen.

Date is the input parameter used to execute the proxy program in SE38. I just want to know whether there is any possibility of handling the message size from the PI end.

Thanks

pavan

Former Member

Hi,

I doubt you can chunk the message in PI, but what you can do is use a stored procedure (SP) on the target side, pass the entire source message to it in a single call (as an XML string), and let the database handle the splitting logic.
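For reference, the JDBC receiver adapter can call a stored procedure using `action="EXECUTE"`; the message sent to the adapter has roughly this shape (the statement element, procedure name, and parameter name below are placeholders, and passing the whole payload as one CLOB parameter is just a sketch of Amit's suggestion):

```xml
<StatementName>
  <storedProcedure action="EXECUTE">
    <table>MY_LOAD_PROC</table>               <!-- placeholder: real SP name -->
    <payload isInput="true" type="CLOB">      <!-- whole source message, escaped -->
      &lt;Records&gt;...&lt;/Records&gt;
    </payload>
  </storedProcedure>
</StatementName>
```

The database procedure would then parse the XML and insert the rows in whatever batch size it chooses.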

Thanks

Amit Srivastava

arkesh_sharma
Active Participant

Hi Pavan,

You can split messages in PI during mapping by starting a new record whenever the counter reaches 1,000.

So, when you send the data out from PI, 2,400 files will be created with 1,000 records each.

Please let me know if you are expecting a different functionality.

Thanks,

Arkesh

former_member200386
Active Participant

Hi Arkesh, thanks for your reply.

How can we split the messages in message mapping? Please help me with this.

thanks & regards

pavan

arkesh_sharma
Active Participant

Hi Pavan,

Sorry for the delay in responding. You can split the message at the root node by using a condition and a counter. For example, if your target root node is T_MT and its occurrence is 0..unbounded, add a condition so that when the counter over the source node reaches 1,000, a new T_MT node is created by inserting a context change after every 1,000 records.
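In the actual graphical mapping this would be a counter UDF plus context changes; the effect, modeled here in Python purely for illustration (T_MT is the target root node named in the post), is grouping the flat record list into one target root per 1,000 records:

```python
def split_into_targets(source_records, limit=1000):
    """Group a flat list of source records into sub-lists, one per
    target T_MT root node. A new group starts each time the counter
    reaches `limit`, i.e. a context change is inserted after every
    `limit` records."""
    targets = []
    current, counter = [], 0
    for rec in source_records:
        current.append(rec)
        counter += 1
        if counter == limit:          # counter hit the limit:
            targets.append(current)   # close the current T_MT node...
            current, counter = [], 0  # ...and open a new one
    if current:                       # leftover records form the last node
        targets.append(current)
    return targets
```

Each sub-list then maps to its own T_MT occurrence, so the one large payload becomes many bounded target messages.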

markangelo_dihiansan
Active Contributor

Hello,

Chunking is only available for file-to-file scenarios. Also, you should implement the fix on the sender side, since there is no guarantee that your PI system will remain stable right after it receives a message of around 2 GB.

Hope this helps,

Mark

Answers (2)


Former Member

Solve this by splitting on the sender side, because PI is not a data warehouse system. Such huge messages could make PI unstable.

CSY

nageshwar_reddy
Contributor

I believe the best approach here is to change the proxy to send only N records at a time (an optimal number should be decided based on record size). Modifying the proxy will be fairly easy compared to the other solutions. Since you already have the proxy built, you just need to modify the code to wrap the proxy method call in a loop over each batch of N records.

I have used this approach when multiple records need to be sent but the target system is capable of accepting only one record at a time.