on 01-30-2014 10:52 AM
Hello,
We are facing memory dumps and slow SAP performance on our PRODUCTION ECC EHP4 system.
Many work processes also frequently go into PRIV mode.
At the same time, however, our response time is still very good.
I checked ST06 and see:
CPU idle % = 40
I/O wait % = 16
CPU count = 8
Number of requests waiting to be processed (load average):
1 min = 7
5 min = 6
15 min = 6
We have physical memory = 50 GB
Free memory = only 300 MB
We have the following parameters set in RZ10:
em/initial_size_MB = 4 GB
ztta/roll_extension = 2 GB
abap/heap_area_dia = 5 GB
abap/heap_area_nondia = 5 GB
abap/heap_area_total = 10 GB
rdisp/wp_no_dia = 20
rdisp/wp_no_btc = 8
Please guide
Regards,
Hello,
I increased the extended memory (em/initial_size_MB) to 16 GB
and decreased the heap memory by 2 GB.
Having watched the system for two weeks now, I no longer see any work processes in PRIV mode.
Thanks,
Jigar
To avoid PRIV mode you should increase the Extended Memory area.
Regards
Clebio Dossa
First investigate your storage subsystem performance. Can you provide the output of the vmstat command?
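The figure to watch in vmstat output is the wa (I/O wait) column. A minimal Python sketch of how one might average it over a few samples; the vmstat output below is fabricated for illustration, in practice you would feed in the real output of something like vmstat 5 12:

```python
# Parse vmstat-style output and average the I/O-wait ("wa") column.
# The sample text is made up for illustration only.
sample = """\
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 7  2 102400 307200  51200 812000    0    0   220   410 1200 2600 35  9 40 16  0
 6  3 102400 301056  51200 810000    0    0   310   520 1300 2800 33 10 39 18  0
 6  2 102400 299008  51200 809000    0    0   180   350 1100 2500 36  8 42 14  0
"""

lines = sample.splitlines()
header = lines[1].split()                 # column names line
wa_idx = header.index("wa")               # position of the I/O-wait column
samples = [int(line.split()[wa_idx]) for line in lines[2:]]
avg_wa = sum(samples) / len(samples)
print(f"average I/O wait: {avg_wa:.1f}%")
```

A sustained I/O-wait percentage in the double digits, like the 16% reported in ST06 here, is usually a hint that the storage subsystem is worth a closer look.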
Which memory dumps are you getting?
Check ST02 as well.
Thanks.
Hello
Having a work process go into PRIV mode is not always a problem; the downside is that the work process is then dedicated to the current user.
It can get worse if the user keeps their transaction open for a very long time and keeps the work process stuck...
With your configuration this means the process has already allocated 6 GB of memory, which is quite a lot but not insane.
It really becomes a problem when many processes are stuck in PRIV and all the other processes get busy.
You can detect this by checking the process queue in SM51 -> Goto -> Server Name -> Information -> Queue Info.
=> Given the low level of the process queue, there is no major impact here.
But it would be good to investigate and identify the program / transaction that causes this behavior.
Check whether the user selections are appropriate (not too wide), and in the case of a custom program have a look at the ABAP.
What is your OS?
Regards
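The PRIV-mode fallback described above can be sketched as a toy model. The quota values mirror the profile from the question; this is only an illustration of the extended-memory-to-heap fallback, not SAP's actual implementation:

```python
# Toy model (illustration only): a dialog work process first draws on the
# shared Extended Memory pool up to its per-user quota (ztta/roll_extension);
# once the quota or the pool is exhausted, the remainder is taken from local
# heap memory and the work process enters PRIV mode.

EM_POOL_MB = 4 * 1024    # em/initial_size_MB = 4 GB (value from the question)
EM_QUOTA_MB = 2 * 1024   # ztta/roll_extension = 2 GB per user

def allocate(request_mb, em_used_mb, pool_left_mb):
    """Return (em_taken, heap_taken, priv_mode) for one allocation request."""
    em_room = min(EM_QUOTA_MB - em_used_mb, pool_left_mb)
    em_taken = min(request_mb, max(em_room, 0))
    heap_taken = request_mb - em_taken
    return em_taken, heap_taken, heap_taken > 0

pool_left = EM_POOL_MB
for user, request in [("A", 2048), ("B", 2048), ("C", 512)]:
    em, heap, priv = allocate(request, 0, pool_left)
    pool_left -= em
    print(f"user {user}: EM={em} MB  heap={heap} MB  PRIV={priv}")
```

With a 4 GB pool and 2 GB per-user quotas, two heavy users already exhaust the pool, so even a small third request spills into heap and that work process goes PRIV, which is why increasing em/initial_size_MB helps.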
Hello,
OS : Linux x86_64
I see the following under Queue Info:
Process Type | Requests Waiting | Max. No. of Requests Waited | Max. Queue Length | Requests Written | Requests Read
NOWP | 0 | 21 | 2000 | 2220880 | 2220880 |
DIA | 0 | 29 | 2000 | 1022368 | 1022368 |
UPD | 0 | 3 | 2000 | 306 | 306 |
ENQ | 0 | 1 | 2000 | 1 | 1 |
BTC | 0 | 3 | 2000 | 121633 | 121633 |
SPO | 0 | 1 | 2000 | 56826 | 56826 |
UP2 | 0 | 1 | 2000 | 151 | 151 |
Do I need to increase em/initial_size_MB to 16 GB?
And do I need to decrease the number of DIA and BGD work processes to free some memory?
Regards,
Jigar
Hi
Decreasing the number of work processes should not really help: if they are not used they do not consume much memory, and if they are used, removing some will degrade the performance of the whole instance...
First try increasing em/initial_size_MB to 8 GB, and more importantly, try to find out which programs make the processes go into PRIV mode.
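A back-of-the-envelope check of the pool sizing makes the recommendation concrete (the figures come from the profile in the question; "users at full quota" is a simplifying assumption for illustration):

```python
# Rough sizing check: how many users can reach their full Extended Memory
# quota before the pool is exhausted and further allocations spill into
# heap (PRIV mode)?
quota_gb = 2                  # ztta/roll_extension per user

em_pool_gb = 4                # current em/initial_size_MB
print(em_pool_gb // quota_gb, "users at full quota exhaust a 4 GB pool")

em_pool_gb = 8                # proposed value
print(em_pool_gb // quota_gb, "users at full quota exhaust an 8 GB pool")
```

With 20 dialog work processes configured, a pool that only two or four heavy users can exhaust is clearly on the small side, which is why finding the offending programs matters as much as raising the pool.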
Regards
Hi Jigar,
Please update the configuration parameters below:
em/initial_size_MB = 8096 MB
em/blocksize_KB = 4096 kB
ztta/roll_area = 3000000 bytes
ztta/roll_first = 1 byte
ztta/short_area = 3200000 bytes
em/global_area_MB = 250 MB
Compare your parameter values with the ones listed above and, where possible, apply these recommendations in your system. Then execute sappfpar check pf=<path of instance profile> to check the shared memory segments.
Hope this helps.
Regards,
Deepak Kori