SAP Note #723909 Clarification needed: > 2G heap for server nodes


We are upgrading from Enterprise 4.7 to ERP 6.0 SP12. For our ESS/MSS we will use a standalone Java NW 7.0 SP14 with XSS SP12. Our portal servers and ERP ABAP run on Solaris 9 or 10, 64-bit. The portal servers are 4 CPU x 16 GB. We run three server nodes on our central instance and three server nodes on our dialog instance. All server nodes have a 2 GB heap.

Recent load tests indicate we are reaching the memory limits of the server nodes. This causes frequent full GCs in which tenured objects are collected (very time consuming). We want to increase the heap size beyond 2 GB. Note 723909 seems to say this can be done on 64-bit systems, but SAP appears to recommend adding server nodes with 2 GB heaps instead. I suspect this is because of GC performance with large heaps.
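For reference, a minimal sketch of the GC logging options we plan to enable per server node to confirm the full GC pattern (assuming the Sun HotSpot JVM shipped with NW 7.0; the log path is only a placeholder):

    -verbose:gc
    -Xloggc:/tmp/gc.log
    -XX:+PrintGCDetails
    -XX:+PrintGCTimeStamps

With timestamps in the log you can see how often full collections run and how long each one pauses the node.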

- Are we supported with a heap > 2 GB?

- Are there still concerns with GC when the heap is > 2 GB?

- Are there other reasons why the heap should be no higher than 2 GB?

Thanks in advance

Accepted Solutions (0)

Answers (1)

markus_doehr2
Active Contributor

- Are we supported with a heap > 2 GB?

- Are there still concerns with GC when the heap is > 2 GB?

- Are there other reasons why the heap should be no higher than 2 GB?

You've answered your questions yourself.

To have a fast J2EE engine you absolutely need to make sure the entire heap is backed by physical memory. No swapping should take place at all. Once the system starts paging in and out, you will lose a lot of performance.

How big is your Oracle SGA? If you can decrease it so that all the heaps (plus the dispatcher heap) fit in physical memory, you can increase the heap further. We use 2.5 - 3 GB heaps (for BI) with three server nodes.
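If you want to verify that no paging is going on under load, a quick check with standard Solaris tools could look like this (just an illustration, nothing SAP-specific):

    swap -s      (summary of swap space reserved and used)
    vmstat 5     (watch the "sr" column: a sustained non-zero page
                  scan rate means the box is short on physical memory)

As long as sr stays at or near 0 while all heaps are fully in use, the heaps fit in RAM.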

Markus

Former Member

Hi,

I was expecting more discussion around this.

I see everyone saying to stick to the 2 GB limit and add more nodes... but what about when a customer needs to run something that will use 6 GB and is causing "out of memory" errors?

They ask to increase the Java heap to 6 GB (Linux/Oracle), and there are no solid arguments against it!

So, how do you deal with a process that is known to take more than 6 GB?

Increase the heap? Or create 3x 2 GB nodes?

bodo_lange
Advisor

Not a trivial question and definitely no easy answer. In general, the heap discussion depends largely on the objects, the usage of the system, and how intense cross-service messaging can get. There is a trade-off between the number of nodes and the heap per node, so do the math.
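To illustrate that math with the numbers from this thread (a rough budget only, not a sizing recommendation):

    Option A: 6 nodes x 2 GB heap = 12 GB Java heap, shorter full GC per node
    Option B: 2 nodes x 6 GB heap = 12 GB Java heap, a single big job fits,
                                    but each full GC pauses a 6 GB heap
    On a 16 GB box, either option still has to leave room for the
    dispatcher heap, per-process overhead outside the heap, the
    Oracle SGA and the OS.

Same total memory, very different pause and failure characteristics: losing one of six small nodes costs you fewer user sessions than losing one of two big ones.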

Well, I would try a trial-and-error approach or a more detailed heap analysis. You need to understand how your objects are distributed across the different heap areas and what their pin ratio or survival rate is. Do a cache analysis and check your cache eviction rate before taking a decision.

Sometimes, with well-scripted load tests, you can simulate your portal load and do a sophisticated heap analysis to understand how to configure your heap. I have seen customers running 64-bit RH Linux on AMD with 5 GB per node and full GCs in less than 8 seconds. But this was after several cycles of fine-tuning, raising the survival rate of specific objects in the heap.
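As an illustration of that kind of fine-tuning, these are standard HotSpot generation-sizing options (the values here are placeholders, not the ones from that customer):

    -Xms5120m -Xmx5120m              fixed 5 GB heap
    -XX:NewSize=1024m -XX:MaxNewSize=1024m
    -XX:SurvivorRatio=6              ratio of eden to one survivor space
    -XX:MaxTenuringThreshold=15      let objects age before tenuring
    -XX:TargetSurvivorRatio=90       allowed survivor-space occupancy

The idea is to keep short-lived objects in the young generation long enough that they die there instead of being tenured, which is what brings the full GC times down.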

Well, I guess the trick is monitoring the caches during the load tests and understanding the traversal rates and your heap.

Regards,

Bodo