Can the memory allocation sequence be changed permanently?


Dear All,

In Windows Server, is there any method to permanently change the memory allocation sequence for batch processes, so that heap memory is assigned first, before extended memory?

Reason :

We are aware that on a 2-processor server, some memory-access-intensive batch jobs (which do a lot of in-memory sorting) sometimes run much slower and sometimes much faster (by a factor of around 2-3). We suspect this is due to the NUMA effect of varying memory performance, which is now undergoing further verification through repetitive testing (running on a server with 1 processor socket vs. a server with 2 processor sockets); we are still awaiting the results.

As heap memory should be "owned" by the worker process itself, I am guessing the OS would take NUMA into account and prefer to assign memory "local" to the processor, as long as the requested memory size fits in that NUMA node (vs. extended memory, which is assigned for the whole instance and thus probably cannot take care of locality for every running process). We will also test this further with the RSMEMORY program.
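As a sanity check on the NUMA suspicion, the topology Windows actually sees can be inspected with Microsoft's Sysinternals Coreinfo tool (a free download, not part of a default Windows installation; assumed here to be copied onto the server):

```
:: Sketch only - run from an elevated command prompt on the server under test.
:: -n prints the mapping of logical processors to NUMA nodes, so you can
:: confirm how memory and cores are split across the two sockets.
coreinfo.exe -n
```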

However, even if the new sequence proves to have an effect, the RSMEMORY setting is gone every time the instance restarts. So I would like to know if there is a way to persist the sequence.
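For reference (as an assumption to be verified, not a confirmed answer): RSMEMORY changes values that also exist as instance profile parameters, so if the new sequence does prove helpful, persisting it would presumably mean setting the corresponding parameters in the instance profile, along these lines:

```
# Instance profile sketch - the parameter names below come from SAP's memory
# management documentation, but the values are illustrative only and must be
# sized and verified for your own kernel release and instance:
ztta/roll_extension_nondia = 0          # give non-dialog WPs no extended memory quota, so heap is used first
abap/heap_area_nondia      = 2000000000 # allow batch WPs up to ~2 GB of heap memory
```

Changing these globally affects all background work processes, so any such change should be validated carefully before applying it to a production system.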

Accepted Solutions (0)

Answers (2)


Former Member

Eric,

According to the information in SAP Note 1612283, two-socket servers should still perform very well in terms of NUMA. That is one thing we have already worked out in various benchmarks. OK - the benchmarks primarily focused on system throughput, not on single-thread performance, which is probably your problem. But 2-3 times slower because of memory locality - that is a number I would not expect.

Are you sure there are no external effects (other processes, page-in/out activity, number of objects to be processed by the report) causing the differences in runtime?


Can we please have some more information about the hardware and operating system version you are using?

Is the system running on a hypervisor (VMware or Hyper-V)?

Kind regards,

Peter


Dear Peter,

Thanks for your feedback !

The Application Team noticed the issue on the LIVE platform first, and subsequently reproduced the FAST/SLOW behavior in the UAT environment, where the OS and HW are quite close to the LIVE environment. In the UAT environment we have much more control over the load, so even though 2 SAP instances run on each server, we ensured that no other major load occurred while the tests were being conducted.

Environment details (All servers are not virtualized)

     A) Server with 2 processor

                    OS : Win 2008 Standard

                    RAM : 48 GB installed, 32 GB accessible

                    CPU : 2 x Intel E5-2640 (Sandy Bridge EP)

          Note : The OS / RAM mismatch only occurs in UAT;

                         in the LIVE environment, we use Win 2008 Enterprise and have no such mismatch

     B) Server with 1 processor

                    OS : Win 2008 Standard

                    RAM : 24 GB installed, 24 GB accessible

                    CPU : 1 x Intel E5640 (Westmere EP)

Test performed

     1) The same job with the same parameters has been re-run on Server A for 11 days:

                    5 times : ~ 16000 secs

                    6 times : ~ 38000 secs

               (the pattern of different run times is not consecutive/alternating, but seems random)

         a) From STAD, the time spent on DB access in all such runs remained at ~ 800-900 secs

         b) While the job ran, SM50 showed the process consuming ~ 500-600 MB of extended memory

     2) The same job with the same parameters has been re-run on Server B for 3 days, and is still continuing

          So far, all 3 runs finished in ~ 18000 secs

          Note : as these tests have run for only a few days, we are not yet sure

                         if the NUMA effect is really the root cause

     3) The same job with the same parameters has been re-run on Server A (test just started today),

          using RSMEMORY to alter the allocation sequence to Heap first, Extended memory later

          Results to be observed

Rgds,

Eric


Dear All,

Just to update on the test results we performed.

After the past month of testing, the root cause of the fluctuating run time of our program has been found.
It was caused by an application issue and is not related to the NUMA architecture.

Sorry for any confusion I may have caused.

Regards,
Eric


P.S.

Just for info.

The longer runtime occurred much more sparsely in the past month of testing, though it still happened,
      so we used /SDF/MON to keep continuous monitoring of the memory consumed by the job,
      and found that the longer-running jobs showed a different memory footprint at certain moments.

Through debugging in SM50, we noticed that looping on one code segment occurs in the longer-running jobs,
     while that looping does not occur in the shorter-running jobs.
We thus believe that even though the job parameters did not change, some subtle change in the data environment
     could lead to a severe change in program behavior.

We have thus passed the case back to the Application Team for further logic review.


I saw the description of the allocation sequence in the post below,

and followed the moderator's suggestion to post the question here,

as I guess my question may suit this forum's title more.