
File Adapter: Size of your processed messages

udo_martens
Active Contributor

Hi,

Yesterday I opened a thread about the performance of the file adapter, but I didn't get a satisfying answer. So please give me the following information:

What was the biggest size of message which was processed by your file adapter?

Every answer is welcome, even if the size seems to be small.

Regards,

Udo

Accepted Solutions (1)


Former Member

Udo,

With content conversion, we have processed a file of 64 MB as one single message; it takes about 6 to 10 minutes for our entire processing to complete (to go through as an IDoc to SAP in our case).

Well, it will get worse if you have a ccBPM in between. We had one initially (for validating and controlling the flow) and it took close to 2 hours to go through. It was not guaranteed either: the J2EE engine would auto-restart (heap dump!) and the processing would terminate at times. We followed up with SAP, tried tuning the heap size and using large queues, but that did not resolve the issue; we removed the ccBPM and it is OK now.

So Udo, just having the file adapter process a LARGE message will not solve your issue; rather, I believe it may add to your headache once the file adapter delivers the message to the Integration Server.

Wherever possible we limited the message size using "Recordsets per Message"; when there was no choice, we had to take the file in as one single message. This is strictly my personal experience, and I may be wrong; if so, feel free to point it out, and I'll learn.

Regards

Saravana

udo_martens
Active Contributor

Hi Saravana,

(I couldn't give you "very helpful"; maybe that is only possible for 2 answers?)

Obviously I should describe my project:

I want to use XI for a mapping task

CSV File -> XI -> CSV File

Definitely no BPM. A row conversion seems to be necessary because the source is not XML. The mapping task should be quite easy; maybe I can use the graphical mapping, maybe I have to use XSL. I want to know the performance beforehand, because I can ask the producer of the source files to divide them into small pieces.
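For readers following along: a CSV row conversion like this is set up in the sender file adapter's File Content Conversion section. A minimal sketch, assuming a single repeating record type and comma-separated fields (the names and values here are illustrative, not taken from this thread):

```
Document Name:          MT_Source
Recordset Structure:    Row,*
Row.fieldNames:         field1,field2,field3
Row.fieldSeparator:     ,
Row.endSeparator:       'nl'
Recordsets per Message: 1000
```

"Recordsets per Message" is the parameter Saravana mentions for capping the size of each generated XI message.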

Do you think the file adapter is not the bottleneck, but rather the Integration Server?

Regards,

Udo

Former Member

Udo,

We had a very simple content conversion in our case (if you look at what Manish says, the more intense the content conversion, the more likely it becomes a bottleneck as well).

We read the CSV file as is into XI, mapped each line onto a <row> XML tag, and later used Java + GUI mapping to convert it into an IDoc hierarchical structure.
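The line-to-<row> conversion described here can be illustrated outside XI too. A standalone Java sketch (this is not the XI mapping API; the class and element names are mine), showing why streaming line by line keeps memory use flat regardless of file size:

```java
import java.io.*;

// Illustrative only: wrap each CSV line in a <row> element, streaming
// line by line instead of loading the whole file into memory.
public class CsvToRowXml {

    // Escape the characters that are significant in XML text content
    static String escape(String s) {
        return s.replace("&", "&amp;").replace("<", "&lt;").replace(">", "&gt;");
    }

    public static void convert(BufferedReader in, Writer out) throws IOException {
        out.write("<rows>\n");
        String line;
        while ((line = in.readLine()) != null) {
            out.write("  <row>" + escape(line) + "</row>\n");
        }
        out.write("</rows>\n");
    }
}
```

Because only one line is held in memory at a time, the memory footprint stays constant; it is building a full DOM of a huge message (as parts of the XI pipeline do) that hurts.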

I don't think the "type" of mapping used will have a drastic effect on your performance, but when the message is thrown and caught and thrown back between the J2EE engine (adapters), ccBPM (workflow on ABAP), and the mapping runtime (mostly Java), it may create issues when you have a very large message.

Regards

Saravana

manish_bhalla2
Contributor

Hi Udo and Saravana,

<i>We had a very simple content conversion in our case(If u see what Manish says, more intense the content conversion, then that could be a bottleneck as well)</i>

I think this might very well have been the bottleneck, since we had a fixed-length file coming in with many different row types, and a different conversion for each row type. Unfortunately we did not try out the 'generic' conversion technique Saravana describes.

Cheers

Manish

Answers (4)


Former Member

Hi all,

My scenario is: File -> XI -> IDoc.

Unfortunately, I'm dealing with the same problem! Our customer has a huge file that must be sent to SAP through XI. This file can be 400 MB in size and is a flat file that must be converted into XML, so I need to use File Content Conversion in the sender file adapter; a multi-mapping is also required to generate 3 different types of IDocs…

So, summing up…

-Content conversion is required;

-Multi Mapping is required (1x3);

-No BPM is required, I hope! This interface isn't finalized yet…

My question is: Which is the best approach to handle this kind of issue?

Thanks in advance,

Ricardo.

Former Member

Hey

>>No BPM is required, I hope so!!!

Why? To split a file into several IDocs you need BPM (you wouldn't need BPM for 1-to-n if it were anything else but IDoc).

1-to-n multi-mapping without BPM only works on the Adapter Engine (and of course IDoc and HTTP are not part of that).

So you need BPM in your case.

Thanks

Ahmad


Former Member

Hi Ahmad,

Now I understand. So, I have a tricky scenario on my hands :s

Thanks for your answer.

Cheers,

Ricardo.

Former Member

Hi Udo,

It is always better if you can divide the source file into smaller chunks. XSL mapping is not advisable, as we faced many performance problems with it; Java mapping is better in that case. The best option is message mapping. We have processed files of around 50 MB without any issue using message mapping and Java mapping.

Thanks,

Prateek

manish_bhalla2
Contributor

Hi Udo,

We ran into problems with the J2EE engine crashing while processing text files of more than 2 MB. But this was with intensive content conversion.

We did raise this with SAP, and we tried everything they told us to do, but we could not manage more than that.

Cheers

Manish

udo_martens
Active Contributor

Hi Manish,

OK, I have a CSV file, so I need a row conversion. Does that mean I can't work with files bigger than 2 MB? Is your experience from a recent XI 3.0 system? Did you increase the Java heap size? As you read, other XI systems could process 120 MB!? Can you imagine any reason for that big difference?

Regards,

Udo

manish_bhalla2
Contributor

Hi Udo,

Yes, this was with an XI 3.0 system. I think it was SP9 or thereabouts.

We were able to process 2 MB files so that they went through smoothly with no perceptible time lags. Larger files started taking more and more time, and the J2EE engine started crashing, restarting, and crashing again. We contacted SAP and they suggested we increase the heap size; we were already higher than what they had suggested.

I had done a lot of research into the problem at the time, and I came across people who had processed larger files. However we were not able to replicate the same on our system, and we could never find out why.

In the end, because of time pressures on the project, we settled for a workaround: a script that splits the incoming file into 2 MB chunks before the file adapter picks it up. This actually worked quite well for us, because it really improved the time taken to process the messages.
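A pre-processing split along these lines can be sketched in plain Java (the thread does not show Manish's actual script; the 2 MB limit, class name, and file-name pattern below are illustrative). Splitting on line boundaries rather than raw bytes keeps every chunk a well-formed CSV:

```java
import java.io.*;

// Illustrative pre-processing step: split a large CSV into ~2 MB pieces
// on line boundaries, so each piece the file adapter polls is valid CSV.
public class CsvSplitter {
    static final long MAX_BYTES = 2L * 1024 * 1024; // ~2 MB per chunk

    // Returns the number of chunk files written to outDir
    public static int split(File input, File outDir) throws IOException {
        outDir.mkdirs();
        int part = 0;
        long written = Long.MAX_VALUE; // forces a new chunk on the first line
        BufferedWriter out = null;
        try (BufferedReader in = new BufferedReader(new FileReader(input))) {
            String line;
            while ((line = in.readLine()) != null) {
                if (written >= MAX_BYTES) { // current chunk full: start a new one
                    if (out != null) out.close();
                    out = new BufferedWriter(new FileWriter(
                            new File(outDir, String.format("part_%03d.csv", part++))));
                    written = 0;
                }
                out.write(line);
                out.newLine();
                written += line.length() + 1;
            }
        } finally {
            if (out != null) out.close();
        }
        return part;
    }
}
```

In practice the sender file adapter would then poll the output directory with a `part_*.csv` file-name mask and process each chunk as an independent message.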

Cheers

Manish

Former Member

Hi,

We had processed a 120 MB file, but there was no mapping involved.

Thanks,

Prateek

udo_martens
Active Contributor
0 Kudos

Hi Prateek,

Can you describe your scenario? Did you have a row conversion? Which mapping did you use?

Regards,

Udo

Shabarish_Nair
Active Contributor

Hi Udo,

One of my scenarios worked well with a 150 MB file by avoiding content conversion and mapping (Ref: /people/shabarish.vijayakumar/blog/2006/04/03/xi-in-the-role-of-a-ftp)

I got a bit ambitious after that and tried to transfer a 250 MB WMV file... the next thing I knew, "my server went for a toss".