
Automatic process unloading tables from memory

patrickbachmann
Active Contributor
0 Kudos

Hi folks,

I'm looking for specific information on any process that automatically unloads data from memory.  I've seen HANA memory become almost fully used (around 90%) and then magically unload until nearly empty, and I'm trying to understand which process is responsible for this.  In the HANA Administration Guide, section 9.2.1, there is mention of unloading:

"The database may also actively unload tables or individual columns from memory, for example, if a query or other processes in the database require more memory than is currently available. It does this based on a least recently used algorithm."

When memory is full, rather than automatically unloading only one or two of the least recently used tables, it seems to be completely unloading everything.  I'd like to find more information on this 'least recently used' algorithm or any related process that could be unloading data from memory.

Thanks,

-Patrick

Accepted Solutions (1)


lbreddemann
Active Contributor
0 Kudos

Hi Patrick,

unfortunately there is no more information disclosed on this topic.

However, given the memory usage pattern you mentioned, it looks to me as if not only column unloading is the major driver for the memory freeing, but that some other (usually unaccounted) memory consumers have been removed from memory as well.

What kind of information did you use to monitor the memory usage/table unloading?

Lars

patrickbachmann
Active Contributor
0 Kudos

Hi Lars,

That is unfortunate indeed.  As for how I am viewing memory: for an overall snapshot I monitor using the administration console, overview tab.  Then to look at individual tables I open the table definition and look at the runtime information tab.  I also run a SQL query that I wrote to see the tables in memory, ordered from largest consumption to smallest.

SELECT * FROM M_CS_TABLES ORDER BY MEMORY_SIZE_IN_TOTAL DESC
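For anyone following along, a slightly more targeted variant of that query might look like this (column names as in the M_CS_TABLES system view on our revision; adjust to yours):

```sql
-- Top 20 column tables by total memory footprint, in MB
SELECT schema_name,
       table_name,
       loaded,                                         -- TRUE / FALSE / PARTIALLY
       ROUND(memory_size_in_total / 1024 / 1024) AS memory_mb
FROM m_cs_tables
ORDER BY memory_size_in_total DESC
LIMIT 20;
```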

Thanks for your reply,

-Patrick

lbreddemann
Active Contributor
0 Kudos

Hmm.. ok, I think you could get some better information (depending on the HANA revision you're using).

You might want to switch on unload_trace for the indexserver to INFO.

Once this has been activated, unload activities will be written to trace files called indexserver_<hostname>.<port>.unloads.<file_id>.trc.
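For reference, this can be done in the Configuration tab of HANA Studio or via SQL; the section/parameter names below are my understanding and may differ by revision:

```sql
-- Switch the indexserver unload trace to INFO, system-wide
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('trace', 'unload') = 'info' WITH RECONFIGURE;
```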

You can either look into the trace files directly (boring/not very enlightening) or you run a query like this:

SELECT * FROM M_CS_UNLOADS

regards,

Lars

patrickbachmann
Active Contributor
0 Kudos

Ok thanks, very interesting idea, I'm going to look into experimenting with this and then update this thread soon.  Thanks!

patrickbachmann
Active Contributor
0 Kudos

Lars, I have experimented with this and M_CS_UNLOADS is definitely helpful.  I only wish it contained the name of the user that unloaded the table.  The good news is I see a REASON of 'EXPLICIT' or 'LOW MEMORY' for each unload.  After I did a manual unload I can see it's tagged as EXPLICIT, so I can reasonably assume anything done by a user is flagged as EXPLICIT and anything done automatically by the server is flagged as 'LOW MEMORY'.  I'm assuming there's no way to track/find the actual user names that performed the EXPLICIT unloads?  Thanks
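Roughly what I ran, in case it helps others (REASON values as observed on our revision):

```sql
-- Count unloads per reason and table to spot the hardest-hit tables
SELECT reason,
       schema_name,
       table_name,
       COUNT(*) AS unload_count
FROM m_cs_unloads
GROUP BY reason, schema_name, table_name
ORDER BY unload_count DESC;
```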

lbreddemann
Active Contributor
0 Kudos

Since the load/unload mechanism is mainly fully automatic (explicit loading and unloading is a developer/DBA task, definitely not a day-to-day activity!) it doesn't make too much sense to carry around this piece of information.

What you can always do of course is to activate the SQL trace on the HANA server.

Trace level NORMAL and statement type DML (don't ask... ) will do the trick.
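If it helps, activating the SQL trace via SQL looks roughly like this; the [sqltrace] parameter names are from memory and may vary by revision:

```sql
-- Enable SQL trace at level NORMAL on the indexserver
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('sqltrace', 'trace') = 'on',
      ('sqltrace', 'level') = 'normal'
  WITH RECONFIGURE;

-- Remember to switch it off again afterwards; SQL trace is not free
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('sqltrace', 'trace') = 'off' WITH RECONFIGURE;
```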

Cheers,

Lars

patrickbachmann
Active Contributor
0 Kudos

Thanks for your insight Lars, I will now close this post.  The M_CS_UNLOADS has already been a tremendous eye-opener as I had no idea until now just how many tables were automatically being unloaded.  It highlights just how important it's going to be to monitor this stuff as we move forward with more and more content/models being implemented.

Thanks again.

-Patrick

patrickbachmann
Active Contributor
0 Kudos

PS: I think not only the DBA but also developers must understand this almost from day two, because otherwise they will consume all the memory immediately and could think everything is just fine, not being aware that memory only appears available because data is automatically being unloaded.

lbreddemann
Active Contributor
0 Kudos

That's why I wrote that this is a developer/DBA task.

Anyhow, the application logic shouldn't be built upon this.

Think of how manual "management" of the loading/unloading would work once you're on a scale out system...

The traces and the system views are there to provide insight and understanding of how the system works. It's clearly not meant to have "LOAD .../UNLOAD .." commands spread over a client application.

In a classic DBMS you usually wouldn't go and "pre-warm" the cache yourself (except for benchmarks) by loading the data into cache before you actually use it...
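For completeness, the explicit commands referred to above look like this (useful for testing and diagnostics only; the table name is a placeholder):

```sql
LOAD mytable ALL;     -- load all columns of the table into memory
UNLOAD mytable;       -- evict the table from memory

-- Verify the load state afterwards
SELECT table_name, loaded
FROM m_cs_tables
WHERE table_name = 'MYTABLE';
```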

patrickbachmann
Active Contributor
0 Kudos

Good points.  Your last paragraph reminds me of when I worked in BW environment.  Pre-warming the cache is exactly what we would do so that users throughout the day would have fast reports. 

When we first run our views in HANA (assuming the tables were not preloaded into memory ahead of time) they are slow while the data is loaded into memory (let's say 5 minutes).  The next time we run the view it's very fast (let's say 5 seconds).  That's the experience I want all of our users to have: 5 seconds.  If hypothetically we have 100 huge views run each day (each run repeatedly throughout the day) and then, early the next morning, the 10 least recently used are removed from memory, I'm imagining that first user coming into work and having to wait 5 minutes for the data to load into memory.  Not necessarily a horrendous thing, since the same report in SAP may take HOURS, but perhaps not ideal.

As a developer you might create a new view, look at memory consumption, see 90% and think: great, everything is OK.  Then you create another new view, check memory, and it's still 90%.  Still great.  What you don't realize is that a bunch of older views were automatically unloaded, and that you have affected some user who now has to wait each morning for his report to load into memory.  To me this just underscores the importance of developers and DBAs working tightly together and always knowing how memory is being affected.

Just food for thought.

henrique_pinto
Active Contributor
0 Kudos

I wonder if using the preload clause for table definition would keep them from being unloaded automatically.

http://help.sap.com/hana/html/sql_alter_table.html#alter_table_preload_clause

<preload_clause>

 <preload_clause> ::= PRELOAD ALL | PRELOAD ( <column_name> ) | PRELOAD NONE 

Sets or removes the preload flag of the given tables or columns.

When the preload flag is set tables are automatically loaded into memory after an index server start. The current status of the preload flag is visible in the system table TABLES in the PRELOAD column. Possible values are 'FULL', 'PARTIALLY' and 'NO'. Also in system table TABLE_COLUMNS in column PRELOAD with possible values being 'TRUE' or 'FALSE'.
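For example (the table name is a placeholder):

```sql
-- Set the preload flag so the table is loaded after an indexserver restart
ALTER TABLE mytable PRELOAD ALL;

-- Check the flag
SELECT table_name, preload FROM tables WHERE table_name = 'MYTABLE';

-- Remove it again
ALTER TABLE mytable PRELOAD NONE;
```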

@Lars, could you comment here?

lbreddemann
Active Contributor
0 Kudos

Preload provides a way to have tables/columns automatically loaded into RAM, true.

Things to consider here are:

  • do you really (like really, really) accept the additional time required to load the data to RAM during your instance recovery? Is pre-warming that important to the very first execution of a query?
  • is the worse query response time for the executions that require loading data from disk really that bad that it outweighs the additional development effort?

In most cases the first point already ends  the discussion.

For the kind of queries where you absolutely need the pre-warmed system, just run the query once before you need it (similar to what you would do in SAP BW...).

my two cents on this.

Lars

lbreddemann
Active Contributor
0 Kudos

Ok - memory consumption in HANA is a completely different (though related) topic.

It's not that straightforward to understand, and I would highly suggest not trying to work around the memory management by manually triggering unloads/loads of columns.

If you're facing performance problems with your HANA views due to heavy unloading of columns due to memory shortage, this could have a lot of causes.

One of them is (and we've seen this a couple of times already): the system is not properly sized for the amount of data to be processed.

Unlike classic DBMS where you have your rather fixed cache size and just add disk space as more data gets into the system, there is a tight coupling between DISK+CPU+MEMORY in HANA.

When you start with your 128 GB box and load/process more and more data, you eventually end up with more than 64 GB of data to be processed.

There, you crossed the limit for the system sizing.

Another reason for memory shortage may also be the intermediate results created and used in the models you build. Happens a lot and could lead to very bad performance.

I'm not saying you shouldn't try to understand that these effects can happen in HANA.

But don't try to work around stuff that is probably not even the cause of the problems.

cheers, Lars

henrique_pinto
Active Contributor
0 Kudos

So you wouldn't recommend using this preload clause in the table definition?

lbreddemann
Active Contributor
0 Kudos

Nope, I don't recommend doing that.

Besides the fact that recovery time (as in "time to be able to use the system after a crash again") is critical for most applications, there are more fine-grained options for HANA data management available or in the pipeline.

With SPS5 you can assign priorities to tables that control how quickly (relatively) they should be unloaded.

This can be used for e.g. PSA tables in BW on HANA.
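In SQL that looks something like this, if I'm not mistaken (higher priority values meaning, as I understand it, the table is unloaded earlier; the table name is a placeholder):

```sql
-- Give a staging (PSA-like) table a higher unload priority
ALTER TABLE psa_table UNLOAD PRIORITY 7;

-- Current priorities are visible in the TABLES system view
SELECT table_name, unload_priority
FROM tables
WHERE table_name = 'PSA_TABLE';
```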

As with many other features that seem to promise better performance, preload is not a silver bullet.

It's just as good as any other "++go_faster++" hint around 😉

henrique_pinto
Active Contributor
0 Kudos

Cool, thanks for your opinion.

patrickbachmann
Active Contributor
0 Kudos

Excellent comments and discussion guys.  Appreciate all your thoughts and will be sharing them with my HANA team mates here.  Thanks.

Answers (1)


Former Member
0 Kudos

Hi Patrick,

Did you resolve this issue?  We are facing a similar problem in our test environment where database tables are automatically unloaded and then start loading again every 15 minutes.  I forgot to mention we are on a lower version of HANA, SP06.

Please let us know if you have any additional insights into this.

Thanks,

-Hari

patrickbachmann
Active Contributor
0 Kudos

Hi Hari,

It's been a couple years so I don't entirely remember but we are at rev 82 now and no longer have this issue.

-Patrick