Java HTTP Worker threads hang in nativeRead using XISOAPAdapter

Former Member
0 Kudos

I am seeing all our Java HTTP Worker threads hang waiting on FCAConnection.nativeRead under heavy load. They do not time out, and only a server restart frees them.

I am running SAP PI 7.10 SP08. I have several Integrated Configurations that receive XML files through the SOAP adapter and send them out through the File adapter. The problem occurs with several different Operation Mappings, all custom Java. Input file sizes are around 10 MB. The system runs fine under light load and runs for a variable time under heavy load (more than 20 simultaneous incoming files), but usually only a couple of minutes before all the threads are hung and all incoming message processing stops. The corresponding Java web sessions also remain open.
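For context, the load that triggers this is nothing more exotic than many concurrent HTTP POSTs of roughly 10 MB XML payloads against the SOAP adapter's MessageServlet. A rough reproduction sketch is below; the endpoint URL, channel name, and file path are made up for illustration and would have to match your own configuration (plain java.net only, no extra libraries):

// Rough load-reproduction sketch: POST one XML file many times in parallel.
// The endpoint URL, channel name, and file path are hypothetical placeholders.
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SoapAdapterLoadTest {
    public static void main(String[] args) {
        final String endpoint =
            "http://pihost:50000/XISOAPAdapter/MessageServlet?channel=:BS_SENDER:CC_SOAP_IN";
        final int parallelRequests = 25; // above the ~20 where the hang was observed
        for (int i = 0; i < parallelRequests; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        HttpURLConnection con =
                            (HttpURLConnection) new URL(endpoint).openConnection();
                        con.setDoOutput(true);
                        con.setRequestMethod("POST");
                        con.setRequestProperty("Content-Type", "text/xml; charset=UTF-8");
                        con.setChunkedStreamingMode(8192); // avoid buffering 10 MB in memory
                        OutputStream out = con.getOutputStream();
                        InputStream in = new FileInputStream("/tmp/sample_10mb.xml");
                        byte[] buf = new byte[8192];
                        int n;
                        while ((n = in.read(buf)) != -1) {
                            out.write(buf, 0, n);
                        }
                        in.close();
                        out.close();
                        System.out.println("HTTP " + con.getResponseCode());
                        con.disconnect();
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            }).start();
        }
    }
}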


I found one similar forum question that suggested updating to SP05. I was at SP07 and updated to SP08 just to check, but there was no change.

Details from one thread:

Name: HTTP Worker [14]
Pool: Application
Task: Servlet MessageServlet in XISOAPAdapter
State: processing
Class: com.sap.engine.services.httpserver.server.Processor$FCAProcessor

Stack trace:

Thread 'HTTP Worker [14]', process 'server0', index '33'
"HTTP Worker [14]" Id=51 is RUNNABLE (running in native)
  at com.sap.bc.proj.jstartup.fca.impl1.FCAConnection.nativeRead(Native Method)
  at com.sap.bc.proj.jstartup.fca.impl1.FCAConnection.read(FCAConnection.java:272)
  at com.sap.bc.proj.jstartup.fca.impl1.FCAInputStream.read(FCAInputStream.java:102)
  at com.sap.bc.proj.jstartup.fca.impl1.FCAInputStream.read(FCAInputStream.java:67)
  at com.sap.engine.services.httpserver.server.io.HttpInputStreamImpl.read(HttpInputStreamImpl.java:167)
  at java.io.FilterInputStream.read(FilterInputStream.java:121)
  at java.io.PushbackInputStream.read(PushbackInputStream.java:161)
  at java.io.FilterInputStream.read(FilterInputStream.java:97)
  at com.sap.aii.af.sdk.xi.net.MIMEInputSource$MIMEReader.readContent(MIMEInputSource.java:713)
  at com.sap.aii.af.sdk.xi.net.MIMEInputSource.readBody(MIMEInputSource.java:342)
  at com.sap.aii.af.sdk.xi.net.MIMEServletInputSource.parse(MIMEServletInputSource.java:58)
  at com.sap.aii.adapter.soap.web.MessageServlet.doPost(MessageServlet.java:388)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:727)
  at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)


Accepted Solutions (1)


Former Member
0 Kudos

Hi,

Have you referred to SAP Note 1295227? There is a dev_server0.txt attached to that note; you can compare it with yours and then apply the note. Also, what kernel level are you at? Try to update the kernel to the latest available, or one level lower than that.

- Thanks

Sai

Former Member
0 Kudos

Sai,

Thanks for the quick reply. We do not have any deadlock reported in the dev_server0 file. During the test the only entries are garbage collection entries. The next-to-last one is fairly long (about 5 seconds), but there is nothing else to match the SAP Note.

J Mon Jul 13 11:04:58 2009
: 2367654K->1015941K(2368380K), 3.9982460 secs] 2724725K->1015941K(3154812K), 4.9786670 secs] [Times: user=16.42 sys=0.16, real=4.97 secs]
J  87581.079: [GC 87581.079: [ParNewJ
J Mon Jul 13 11:04:59 2009
: 524288K->262144K(786432K), 0.4061110 secs] 1540229K->1401042K(3141180K), 0.4063840 secs] [Times: user=5.21 sys=0.01, real=0.40 secs]
J
J Mon Jul 13 11:05:02 2009
J  87584.688: [GC 87584.689: [ParNew: 780838K->259204K(786432K), 0.1973300 secs] 1919736K->1694745K(3141180K), 0.1976200 secs] [Times: user=2.20 sys=0.11, real=0.20 secs]

Former Member
0 Kudos

Sai,

I started looking through all the files that had been touched in the <instance>/work directory and found the following in dev_icm:

[Thr 1116821824] Mon Jul 13 11:04:06 2009
[Thr 1116821824] *** WARNING => IcmReadFromConn(id=53/4438): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2568]
[Thr 1114708288] *** WARNING => IcmReadFromConn(id=27/4429): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2568]
[Thr 1085831488] *** WARNING => IcmReadFromConn(id=28/4432): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2568]

[Thr 1083189568] *** ERROR => P4RecvRequest(id=9/3963): IcmMplxAllocBuf failed 50 times - giving up (ICMENOMEM) [p4_plg_mt.c  1452]
[Thr 1083189568] *** WARNING => P4PlugInReadHandler(id=9/3963): P4RecvRequest failed: No more memory(-3) [p4_plg_mt.c  1205]

[Thr 1088473408] *** WARNING => IcmReadFromConn(id=48/4428): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2568]
[Thr 1115236672] *** WARNING => IcmReadFromConn(id=47/4434): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2568]
[Thr 1085303104] *** WARNING => IcmReadFromConn(id=50/4437): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2568]
[Thr 1084774720] *** WARNING => IcmReadFromConn(id=1/4431): temporarily out of MPI buffers -> roll out [icxxthrio_mt 2568]

Former Member
0 Kudos

Hi Jason,

Please create a thread dump of your server nodes according to this note: Note 1044373 - Thread Dump Viewer for SAP J2EE Engine 7.1

Please post the corresponding log/trace files so we can investigate further. Also, please go through SAP Note 937159 - XI Adapter Engine is stuck; it might help you resolve the issue. The max application threads value can go as high as 350 on PI systems.
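If the viewer from that note is not at hand, a dump can usually also be triggered with kill -3 <server0 PID> on Unix (it typically ends up in the server's std_server0.out), or the stacks can be captured with the standard Java management API when run inside the affected VM (for example from a small test servlet). The class below is only a generic sketch for illustration, not the tool from Note 1044373:

// Generic thread-stack capture sketch using standard java.lang.management APIs.
// Must run inside the VM whose threads you want to see; shown here as a plain main.
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class ThreadDumpSketch {
    public static void main(String[] args) {
        ThreadMXBean tmx = ManagementFactory.getThreadMXBean();
        // Integer.MAX_VALUE depth returns the full stack for each thread.
        ThreadInfo[] infos = tmx.getThreadInfo(tmx.getAllThreadIds(), Integer.MAX_VALUE);
        for (ThreadInfo info : infos) {
            if (info == null) {
                continue; // thread ended between the two calls
            }
            System.out.println("\"" + info.getThreadName() + "\" Id=" + info.getThreadId()
                    + " " + info.getThreadState());
            for (StackTraceElement frame : info.getStackTrace()) {
                System.out.println("    at " + frame);
            }
            System.out.println();
        }
    }
}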

Thanks,

Sai.

Former Member
0 Kudos

I found these errors listed in SAP Note 1013166. The patch is delivered for PI 7.10 in SAPKB71008, which I installed last week to fix a different issue, so I would think I already have the patch.

Former Member
0 Kudos

Sai,

I verified we have the patch installed that mentions this failure.

I will change the parameters per SAP Note 937159 and test again. I'll post the dump afterward.

Thanks,

-Jason

Former Member
0 Kudos

While tracking down these warnings and errors, we found that we needed to increase the amount of MPI memory using the profile parameter:

mpi/total_size_MB

Using transaction SMICM -> Goto -> Memory Pipes, we can see that the number of buffers peaks at a value much higher than is normally in use. In our case it peaked at over 3,800 buffers when traffic started, but at a steady heavy load it was always under 2,000.
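For anyone else hitting this: the change itself is just one line in the instance profile. The value below is purely illustrative; size it to the peak buffer usage you actually observe in SMICM rather than copying it as-is, and restart the instance afterwards so the ICM picks up the new memory pipe size:

# instance profile (illustrative value only, not a recommendation)
mpi/total_size_MB = 500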

We no longer see the MPI messages and we no longer get the hang, so I believe our problem was solved with that change.

Thanks for the responses, which helped us find our way to the solution.

-Jason

Answers (0)