
Used vs Peak memory and unload process

patrickbachmann
Active Contributor
0 Kudos

Hi folks,

I'm trying to troubleshoot why some tables are unloading from memory.  When I look at Used Memory/Peak Used, I see that used is 29% and peak is 100.8%.  So my understanding (correct me if I'm wrong) is that at some point used memory reached 100%, the system ran its secret unload algorithm, and it unloaded tables until usage dropped to the current 29%.  So it looks like I have plenty of memory for the moment.  But then I go ahead and run a fairly intensive view, and immediately I see automatic unloading occurring in M_CS_UNLOADS.  Used memory is only 29%, and I know for a fact that my view does not need all of the remaining 71%, so I'm not clear on why tables are being unloaded.
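For reference, this is roughly how I'm checking the unloads - a sketch; on newer revisions there is also a REASON column that shows why each unload happened, but I'm not sure it exists on Rev 47:

```sql
-- Most recent column unloads, newest first
SELECT table_name, column_name, unload_time
FROM m_cs_unloads
ORDER BY unload_time DESC
LIMIT 50;
```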

NOTE:  I notice that resident memory is about 75%.  Used memory is about 132 GB and resident is 367 GB.  Could the tables be unloading because resident memory is so high?  If so, my next question is: should resident memory be consuming almost three times as much memory as used memory?
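These numbers come from comparing the instance figures against physical memory, along these lines - a sketch, assuming M_HOST_RESOURCE_UTILIZATION and these columns are available on this revision:

```sql
-- Instance used vs. peak vs. resident (physical) memory, per host; values in bytes
SELECT host,
       instance_total_memory_used_size      AS used_bytes,
       instance_total_memory_peak_used_size AS peak_used_bytes,
       used_physical_memory                 AS resident_bytes,
       free_physical_memory                 AS free_physical_bytes
FROM m_host_resource_utilization;
```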

Thanks,

-Patrick

PS: This particular server is REV 47

Accepted Solutions (1)


former_member93896
Active Contributor
0 Kudos

Hello Patrick,

Check out "SAP HANA Memory Usage Explained" http://www.saphana.com/docs/DOC-2299. It should help you (and others) understand the different types of memory better. "Used memory" and the corresponding peak/maximum of used memory are the essential measures. They will fluctuate during runtime, as you have experienced. This is normal.

Many unloads due to a shortage of free memory are, however, not normal. Although you say your query does not consume that much memory, I would not be so sure about it; your numbers tell us something else. There could be very big intermediate result sets, for example due to joins. A more detailed analysis of the execution plan is required. I suggest opening a customer message so SAP support can take a closer look.
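As a starting point for that execution plan analysis, you can use EXPLAIN PLAN - a sketch; the view name here is a placeholder for your own, and the EXPLAIN_PLAN_TABLE columns may differ slightly by revision:

```sql
-- Capture the estimated plan for the expensive view (replace MY_SCHEMA.MY_VIEW)
EXPLAIN PLAN SET STATEMENT_NAME = 'unload_check' FOR
  SELECT * FROM my_schema.my_view;

-- Inspect the operators and their estimated output sizes
SELECT operator_name, operator_details, output_size
FROM explain_plan_table
WHERE statement_name = 'unload_check';
```

Large estimated output sizes on join operators would point to the big intermediate result sets mentioned above.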

I have to say that your system is way behind on revisions. Rev 47 was released last year in December, so you are missing a good five months of continuous improvements in memory management and performance tuning. I highly recommend updating to a current revision.


Regards,
Marc

SAP Customer Solution Adoption (CSA)

patrickbachmann
Active Contributor
0 Kudos

Thanks to both of you for your feedback. Especially helpful was the PDF from Marc; that really filled in some of the gaps I was not clear on.  Also very useful inside that PDF was the information on correctly sizing memory utilization.  I could not get the link to the Quick Sizer tool to work, but I'm going to tinker with that some more.

Thanks again.

-Patrick

Answers (1)


Former Member
0 Kudos

Hi Patrick,

Let me share my knowledge on this issue.

As far as I know, tables are unloaded due to a memory deficiency.

Please note that this "deficiency" doesn't mean that 100% of memory must be in use.

I am not sure of the exact threshold and logic behind table unloads.

But my experience is that when total memory usage reaches around 75%, table unloads will start.

For your second question, I am sorry that I am not familiar with the phrase "resident memory".

If possible, could you share the results of m_memory and m_heap_memory?

Maybe I could help explain that.

I am not sure how you observed memory usage, but if you really reached 100% memory usage, there will be an OOM dump in the trace directory. That will help you understand the memory usage of your view better.
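For the heap side, something like this would show the biggest consumers - a sketch, assuming the M_HEAP_MEMORY view and these column names are available on your revision:

```sql
-- Top 20 heap allocators by exclusive size in use (bytes)
SELECT host, category, exclusive_size_in_use
FROM m_heap_memory
ORDER BY exclusive_size_in_use DESC
LIMIT 20;
```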

Best Wishes,

Di