
Increasing the memory allocation limit of HANA

Former Member
0 Kudos

Hi,

I need to increase the memory allocation limit of my HANA DB, which is hosted on AWS. The reason is that I have some huge tables and the database frequently runs out of memory. Kindly help.

Thanks

Moaz

Accepted Solutions (1)

Former Member
0 Kudos

Hi,

HANA on AWS is the developer edition and is not meant for production usage, so its limits are fixed.

Former Member
0 Kudos

Thanks Erhan,

I think it is fair to assume that on a production-grade HANA server we would have the option of increasing the memory?

Apart from loading a table into memory, what else causes memory usage to increase? In my HANA instance, even after restarting it, I can see it using around 9 GB out of the 15 GB allocated. Ideally I would like to bring that 9 GB down. Is that somehow possible?

Former Member
0 Kudos

Hi,

Of course you can! If you have the budget, you can buy an IBM HANA machine: a 100-terabyte monster that is upgradable to 250 terabytes.

On HANA, memory is not used only by the database; it is also used by the operating system and the other HANA components. You can view this in the Administration perspective, on the System Information tab, under "Component memory usage".

You can also view memory consumption on the System Information tab under "Memory of all row tables" and "Loaded memory of column tables".

henrique_pinto
Active Contributor
0 Kudos

Hi Moaz,

Another thing you can do to "optimize" your memory consumption in a test environment is to unload the tables you won't be using and load into memory only the tables you'll effectively be using in your test queries.

To unload/load a table, you can right-click the table in HANA Studio and do it manually.

You can also use ALTER TABLE to control whether the table is loaded automatically during HANA startup; a minimal sketch of both options follows.
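For reference (assuming a hypothetical column table "MYSCHEMA"."MY_BIG_TABLE"):

    -- Unload a column table from memory, and load it back explicitly:
    UNLOAD "MYSCHEMA"."MY_BIG_TABLE";
    LOAD "MYSCHEMA"."MY_BIG_TABLE" ALL;

    -- Control whether the table is preloaded automatically at startup:
    ALTER TABLE "MYSCHEMA"."MY_BIG_TABLE" PRELOAD NONE;
    ALTER TABLE "MYSCHEMA"."MY_BIG_TABLE" PRELOAD ALL;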

Former Member
0 Kudos

Hi Pinto,

I tried the unload command, but I see no decrease in memory consumption. In fact, earlier it used to start around 9 GB and then go up, but now it starts from 13 GB. I have restarted the HANA DB and even my AWS instance, but the consumption remains the same. I have also checked which tables are loaded in memory, and none of them are from my development schema; the tables in memory are all from _SYS_STATISTICS and _SYS_REPO. Now, due to lack of memory, my queries are throwing exceptions.

henrique_pinto
Active Contributor
0 Kudos

I find these values particularly high.

Did you try to unload them?

Or see what's happening to fill them so much?

Former Member
0 Kudos

Hi Pinto,

I know these values are very high. I have unloaded each of my development tables, but these values remain as they are. Everything that is in memory at the moment belongs to the system, and I cannot understand why the system is consuming so much memory.

"Or see what's happening to fill them so much?"

Can you guide me how to check this?

former_member184768
Active Contributor
0 Kudos

Hi,

Can you please check the memory utilization using the following queries:

  1. Heap memory used size for the index server: select * from M_memory where name = 'HEAP_MEMORY_USED_SIZE' --- please check for your indexserver
  2. Shared memory used size for the index server: select * from M_memory where name = 'SHARED_MEMORY_USED_SIZE'
  3. Column store memory used size: SELECT round(sum(MEMORY_SIZE_IN_TOTAL)/1024/1024) AS "Column Tables MB" FROM M_CS_TABLES;
  4. Row store memory used size: SELECT round(sum(USED_FIXED_PART_SIZE + USED_VARIABLE_PART_SIZE)/1024/1024) AS "Row Tables MB" FROM M_RS_TABLES;
  5. Memory in code and stack: SELECT round(sum(CODE_SIZE+STACK_SIZE)/1024/1024) AS "Code+stack MB" FROM SYS.M_SERVICE_MEMORY;
  6. Total used memory: SELECT round(sum(TOTAL_MEMORY_USED_SIZE/1024/1024)) AS "Total Used MB" FROM SYS.M_SERVICE_MEMORY;

Please check whether these numbers are consistent with each other.

Regards,

Ravi

Former Member
0 Kudos

Hi Ravindra,

Following are the results:

  1. Heap memory used size for the index server: select * from M_memory where name = 'HEAP_MEMORY_USED_SIZE'
     (result posted as a screenshot)
  2. Shared memory used size for the index server: select * from M_memory where name = 'SHARED_MEMORY_USED_SIZE'
     (result posted as a screenshot)
  3. Column store memory used size: SELECT round(sum(MEMORY_SIZE_IN_TOTAL)/1024/1024) AS "Column Tables MB" FROM M_CS_TABLES
     (I modified this query to give me more details; result posted as a screenshot)
  4. Row store memory used size: SELECT round(sum(USED_FIXED_PART_SIZE + USED_VARIABLE_PART_SIZE)/1024/1024) AS "Row Tables MB" FROM M_RS_TABLES;
     (result posted as a screenshot)
  5. Memory in code and stack: SELECT round(sum(CODE_SIZE+STACK_SIZE)/1024/1024) AS "Code+stack MB" FROM SYS.M_SERVICE_MEMORY;
     Result: 3,060 MB
  6. Total used memory: SELECT round(sum(TOTAL_MEMORY_USED_SIZE/1024/1024)) AS "Total Used MB" FROM SYS.M_SERVICE_MEMORY;
     Result: 13,658 MB

former_member184768
Active Contributor
0 Kudos

Hi,

I assume your indexserver port is 30003. Based on the screenshots you provided and my understanding of memory allocation:


                                 MB
HEAP_MEMORY_USED_SIZE      1,676.69
SHARED_MEMORY_USED_SIZE    3,474.62
Code and Stack             3,060.00
Column Tables                     -
Row Tables                        -
Total used memory         13,658.00

I could not get the memory used in column tables and row tables, but the figures above account for roughly 8.2 GB of the 13.6 GB total; of the remaining ~5.4 GB, the tables should be around 3 to 4 GB (the rest being temporary memory allocated for runtime operations).

In a nutshell, your current overall utilization is 13 GB of memory, which comprises heap memory, object memory, and memory used in runtime calculations / query execution.

If your column and row tables are not occupying 3 to 4 GB, then it seems the memory pool is not being released. We faced a similar situation and had to restart the server to bring down the memory that was not released by HANA. You can try the same.

Regards,

Ravi

Former Member
0 Kudos

Hi Ravindra,

The column tables' memory is 28 MB and the row tables' is 2,881 MB. My HANA DB is on AWS, and I have restarted my AWS instance and my HANA database several times, but the memory is not being released. Secondly, how do we unload a row-based table from memory?

former_member184768
Active Contributor
0 Kudos

Hi,

Generally the application tables should be column tables, but from your screenshot, I think you have some application tables as row tables in schema HONDA...

Although I have not tried unloading row-based tables (since I don't have any), the operation should be the same as for column-based tables: right-click the table and select Unload, or use the SQL statement UNLOAD <table_name>.

Regards,

Ravi

Former Member
0 Kudos

Hi,

I tried the unload command with row tables, and HANA says it is not supported. Secondly, I realised (I am not sure) that tables in column storage can be loaded and unloaded, whereas row tables apparently cannot be unloaded. I converted my row tables to column tables (a sketch of the conversion follows), and my current memory consumption is down to 10 GB. I need to see how to bring it down further.
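For anyone trying the same thing, a minimal sketch of such a conversion (assuming a hypothetical row table "MYSCHEMA"."MY_ROW_TABLE"):

    -- Convert a row-store table to column store so it can be loaded/unloaded:
    ALTER TABLE "MYSCHEMA"."MY_ROW_TABLE" ALTER TYPE COLUMN;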

former_member184768
Active Contributor
0 Kudos

Hi,

You are right. I created a dummy row table, and the load and unload operations are indeed not allowed on it.

Can you please check if you have Delta memory for COLUMN tables using:

select * from m_cs_tables where memory_size_in_delta > 1000;

If you see a lot of memory occupied in delta memory, then you can perform a DELTA MERGE operation. It will move the data from the uncompressed delta store to the compressed main store; a minimal example follows.
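For example, a minimal sketch (the schema and table names are placeholders):

    -- Find the column tables with the largest delta stores:
    SELECT schema_name, table_name,
           round(memory_size_in_delta/1024/1024) AS "Delta MB"
      FROM m_cs_tables
     ORDER BY memory_size_in_delta DESC;

    -- Merge the delta store of a specific table into its main store:
    MERGE DELTA OF "MYSCHEMA"."MY_BIG_TABLE";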

Regards,

Ravi

Former Member
0 Kudos

Hi Ravindra,

The above query helped, but the merge did not empty the delta memory. I ran the above query, then I ran MERGE DELTA OF "myschema"."table", then I ran the query again, but the table still appeared. Secondly, I see a lot of tables from _SYS_REPO and _SYS_STATISTICS; the MERGE statement did not work for them either. Can you explain what I am missing?

former_member184768
Active Contributor
0 Kudos

Hi,

The MERGE DELTA command just notifies the HANA system that the delta memory can be merged. The actual merge depends on the system setting for how much delta memory must accumulate before the merge is triggered, so you will not see an immediate impact. Secondly, you will usually only see an impact if the delta memory is large enough.

_SYS_REPO and _SYS_STATISTICS are system objects. I have never tried to merge them, as I do not have access to perform any operations on them; I think these objects are managed by the system itself.

Regards,

Ravi

Former Member
0 Kudos

Hi Ravindra,

In that case, I would like to know how to set the delta merge memory threshold, and what the consequences of increasing or decreasing it are.

Regards

Moaz

former_member184768
Active Contributor
0 Kudos

Hi Moaz,

The following parameter defines the delta merge behavior:

indexserver.ini -> mergedog -> max_delta_memsize. The default value is 1000 MB. As far as I know, if your delta is larger than this size, the merge will not happen automatically. You might want to increase this parameter if your data loads lead to a larger delta.
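If you prefer SQL over editing the parameter in HANA Studio, it can be changed with a statement like the following (the value 2000 is only an illustrative example):

    ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
      SET ('mergedog', 'max_delta_memsize') = '2000'
      WITH RECONFIGURE;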

Regards,

Ravi

Former Member
0 Kudos

Hi,

Is this a row table?

Unload / Load / Merge simply does not work on row tables, because they are not put into the in-memory column engine ("TREX"). They are kept in the database's shared memory only.

Best

Martin

Answers (1)

rama_shankar3
Active Contributor
0 Kudos

Muhammad:

How big are the files that you are trying to load? Based on my experience with AWS, the ideal file size is 5 GB. If you have bigger files, break them up logically, either by month or by year, as separate files.

Also, make sure auto merge is OFF while you are loading; a minimal sketch of that pattern follows.
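For example ("MYSCHEMA"."MY_BIG_TABLE" is a placeholder name):

    -- Turn off automatic delta merge before the bulk load:
    ALTER TABLE "MYSCHEMA"."MY_BIG_TABLE" DISABLE AUTOMERGE;
    -- ... perform the bulk load (e.g. IMPORT or INSERTs) ...
    -- Merge once at the end, then re-enable automatic merges:
    MERGE DELTA OF "MYSCHEMA"."MY_BIG_TABLE";
    ALTER TABLE "MYSCHEMA"."MY_BIG_TABLE" ENABLE AUTOMERGE;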

Hope this helps.

Regards,

Rama