
Processing a 2GB file with message mapping in SAP PI

Former Member
0 Kudos

Hi All, I have a requirement to process a 2 GB file through SAP PI, with mapping transformations happening in PI, and post it to an SAP ECC system. The data comes from a POS (Point of Sale) system. The recommended volume for PI interfaces is not more than 5 MB. Can anyone explain an approach for this in PI?

Accepted Solutions (0)

Answers (7)

Former Member
0 Kudos

Please write a script at the OS level to split the file into multiple record sets as per your needs, keeping the same name with a numeric extension. At the file adapter level, execute the OS command before processing and you won't run into any issues.
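For illustration only, a script along these lines could do the record-count split described above and be invoked from the sender file channel's OS-command option; the path, file name, and records-per-part value are placeholders, not details from this thread:

```python
# Hypothetical OS-level split script: breaks a large flat file into numbered
# part files of N records each, keeping record boundaries intact.
# SOURCE path and RECORDS_PER_PART are placeholder values to adapt.
import os

SOURCE = "/interfaces/pos/inbound/pos_sales.txt"   # file dropped by the legacy batch job (assumed path)
RECORDS_PER_PART = 2000                             # records per split file (assumed value)

def split_file(source, records_per_part):
    part, count, out = 1, 0, None
    with open(source, "r") as src:
        for line in src:                            # one line = one record (assumed)
            if out is None:
                out = open("%s.%03d" % (source, part), "w")
            out.write(line)
            count += 1
            if count >= records_per_part:
                out.close()
                out, count, part = None, 0, part + 1
    if out is not None:
        out.close()
    os.remove(source)   # remove the original so only the numbered parts are picked up

if __name__ == "__main__":
    split_file(SOURCE, RECORDS_PER_PART)
```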

Former Member
0 Kudos

My File adapter is going to throw a timeout error if I read a 2 GB file with Recordsets per Message set to 2000 records. Please suggest.

Former Member
0 Kudos

As per your responses, I need to chunk the file into pieces of 5 MB with one interface and then pick up each file and process it in a new interface.

My other approach could be the File adapter with Recordsets per Message, with "n" records processed for each Message ID. Which is the best approach to follow?

nabendu_sen
Active Contributor
0 Kudos

I think Recordset per message would be better.

Former Member
0 Kudos

Hi Nabendu,

If I go with Recordsets per Message, n records, is the Adapter Engine going to hold the file adapter until processing is complete? If there is an error processing one of my message IDs in the middle, will it stop the remaining messages from being processed? Please suggest.

rajasekhar_reddy14
Active Contributor
0 Kudos

Hi Shyam,

If you are receiving a flat file from the POSDM system, then you could use the Recordsets per Message option to split the file.

If it is XML, you need a different approach; if possible, split the file at the POSDM system itself.

Thank you,

Raj

Former Member
0 Kudos

Hi Raja,

If I go with Recordsets per Message, is the Adapter Engine going to hold the file adapter until processing is complete? If there is an error processing one of my message IDs in the middle, will it stop the remaining messages from being processed? Please suggest.

Unfortunately, in my case the POSDM system is a legacy database system that can only send text files. It cannot send XML files, and we need to schedule the File adapter in PI after the batch job on the legacy system places the file.

nabendu_sen
Active Contributor
0 Kudos

As soon as the records are divided into independent messages, there is no dependency: if one message fails, the others will not be stopped. The concern is that if the source resends the failed record after correction along with the other, already successful records, it may cause duplicates on the receiver side. You need to check with the receiver system whether it can handle duplicates or not.
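To illustrate what "handling duplicates" could mean on the receiving side, here is a minimal sketch of a key-based duplicate check; the pipe-delimited layout, the transaction-id field, and the processed-keys file are all assumptions, not details from this thread:

```python
# Minimal sketch of a key-based duplicate filter, assuming each POS record
# carries a unique transaction id in its first pipe-delimited field.
# The processed-keys store (a plain text file here) is a placeholder.
def load_processed_keys(path):
    try:
        with open(path) as f:
            return set(line.strip() for line in f)
    except FileNotFoundError:
        return set()

def filter_duplicates(records, processed, store_path):
    fresh = []
    with open(store_path, "a") as store:
        for rec in records:
            key = rec.split("|", 1)[0]      # assumed: transaction id is the first field
            if key in processed:
                continue                    # drop the resent, already-posted record
            processed.add(key)
            store.write(key + "\n")
            fresh.append(rec)
    return fresh
```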

udo_martens
Active Contributor
0 Kudos

Hi,

a file adapter can theoretically process 2 GB, but any mapping would collapse. You need to chunk the file.

/Udo

Former Member
0 Kudos

HI Udo,

We can chunk the file and send it to the SAP system with no mapping transformations in PI, but SAP does not want a simple file transfer from legacy to SAP; it needs the data in IDoc format, with mapping transformations happening in PI before posting the data.

If we use chunk mode to split the file into pieces and process those split files separately, we don't know how chunk mode splits the file: it may break a record unevenly, and data could be lost when the limit specified in chunk mode (for example, 5 MB) is reached.
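If the concern is only that a fixed byte limit could cut a record in half, a size-driven split can still be done safely outside chunk mode by starting a new part file only at a record (line) boundary. A rough sketch, again with an assumed path and an assumed 5 MB target:

```python
# Sketch: split by approximate size (e.g. 5 MB) but only at record boundaries,
# so no record is ever cut in half. File name and size limit are assumptions.
SOURCE = "/interfaces/pos/inbound/pos_sales.txt"
MAX_BYTES = 5 * 1024 * 1024

def split_by_size(source, max_bytes):
    part, written, out = 1, 0, None
    with open(source, "rb") as src:
        for record in src:                      # iterate record by record (line-terminated)
            if out is None or written + len(record) > max_bytes:
                if out:
                    out.close()
                out = open("%s.%03d" % (source, part), "wb")
                part, written = part + 1, 0
            out.write(record)
            written += len(record)
    if out:
        out.close()

split_by_size(SOURCE, MAX_BYTES)
```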

nabendu_sen
Active Contributor
0 Kudos

Hi Shyam,

As per SAP, the PI File adapter can process up to 1 GB, but I am still skeptical. I would go with Baskar's suggestion: split it. Also, always perform bulk load testing for these types of interfaces.

baskar_gopalakrishnan2
Active Contributor
0 Kudos

A 2 GB file is a large size. If possible, split the file data into smaller chunks and do multiple transactions. This is the better approach.