on 10-02-2012 7:38 PM
Hi,
HANA on AWS is the developer edition and is not intended for production usage, so its limits are fixed.
Thanks Erhan,
I think it is fair to assume that with a production-grade HANA server we would have the option of increasing the memory?
Apart from loading a table into memory, what else causes memory usage to increase? In my HANA instance, even after restarting it, I can see it using around 9 GB out of the 15 GB allocated. Ideally I would like to bring that 9 GB down; is that somehow possible?
Hi,
Of course you can! If you have the budget, you can buy a 100-terabyte IBM HANA monster machine that is upgradeable to 250 terabytes.
On HANA, memory is not used only by the database; it is also used by the operating system and the HANA components. You can view that in the Administration perspective, System Information tab, under "Component memory usage".
You can view table memory consumption in the same System Information tab under "Memory of all row tables" and "Loaded memory of column tables".
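If you prefer SQL over the Studio tabs, the same figures can be read from the monitoring views. A minimal sketch (standard HANA catalog views; run under a user with monitoring privileges):

```sql
-- Memory usage per service (indexserver, nameserver, ...), in MB
SELECT SERVICE_NAME,
       ROUND(TOTAL_MEMORY_USED_SIZE  / 1024 / 1024) AS TOTAL_MB,
       ROUND(HEAP_MEMORY_USED_SIZE   / 1024 / 1024) AS HEAP_MB,
       ROUND(SHARED_MEMORY_USED_SIZE / 1024 / 1024) AS SHARED_MB
FROM M_SERVICE_MEMORY;
```

The indexserver row is normally the one worth watching, since it holds the table data.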
Hi Moaz,
another thing you can do to "optimize" your memory consumption in a test environment is to unload the tables you won't be using and just load to memory the tables you'll be effectively using in your test queries.
To unload/load a table, you can right click on the table in HANA Studio and do it manually.
You can also use ALTER TABLE ... PRELOAD to control whether a table is loaded automatically during HANA startup.
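The same operations in SQL might look like this (hypothetical schema and table names):

```sql
-- Load a column table fully into memory / drop it from memory
LOAD "MYSCHEMA"."MYTABLE" ALL;
UNLOAD "MYSCHEMA"."MYTABLE";

-- Control whether the table is loaded automatically after a restart
ALTER TABLE "MYSCHEMA"."MYTABLE" PRELOAD ALL;   -- load on startup
ALTER TABLE "MYSCHEMA"."MYTABLE" PRELOAD NONE;  -- do not load on startup
```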
Hi Pinto,
I tried the unload command, but I see no decrease in memory consumption. In fact, earlier it used to start around 9 GB and then go up, but now it starts from 13 GB. I have restarted the HANA database and even my AWS instance, but the consumption remains the same. I have also checked which tables are loaded in memory, and none of them are from my development schema; they are all from _SYS_STATISTICS and _SYS_REPO. Now, due to lack of memory, my queries are throwing exceptions.
Hi Pinto,
I know these values are very high. I have unloaded each of my development tables, but these values remain as they are. Everything that is in memory at the moment belongs to the system, and I cannot understand why the system is consuming so much memory.
"Or see what's happening to fill them so much?"
Can you guide me how to check this?
Hi,
Can you please check the memory utilization using the following queries:
Please check whether the figures are consistent with each other.
Regards,
Ravi
Hi Ravindra,
Following are the results.
Heap memory used size for the index server: select * from M_memory where name = 'HEAP_MEMORY_USED_SIZE' -- checked for my indexserver
Shared memory used size for the index server: select * from M_memory where name = 'SHARED_MEMORY_USED_SIZE'
3,060 MB
13,658 MB
Hi,
I assume your indexserver port is 30003. Based on the screenshots you provided and my understanding of memory allocation:
| | MB |
|---|---|
| HEAP_MEMORY_USED_SIZE | 1,676.69 |
| SHARED_MEMORY_USED_SIZE | 3,474.62 |
| Code and Stack | 3,060.00 |
| Column Tables | - |
| Row Tables | - |
| Total used memory | 13,658.00 |
I could not get the memory used in Column tables and row tables, but it should be around 3 - 4 GB of memory (without considering any temporary memory allocation used in run time operations).
In a nutshell, your current overall utilization is 13 GB of memory, which comprises heap memory, object memory, and memory used in runtime calculations / query execution.
If your column and row tables are not occupying 3 - 4 GB, then it seems the memory pool is not getting released. We faced a similar situation and had to restart the server to bring down the memory which was not released by HANA. You can also try the same.
Regards,
Ravi
Hi,
Generally, the application tables should be column tables, but from your screenshot I think you have some application tables as row tables in schema HONDA...
Although I have not tried unloading row-based tables (since I don't have any), the operation should be the same as for column-based tables: right-click on the table and select Unload, or use the SQL statement UNLOAD <table_name>.
Regards,
Ravi
Hi,
I tried the unload command with row tables and HANA says it is not supported. Secondly, I realised (though I am not sure) that tables in column storage can be loaded and unloaded, whereas with row tables unloading might not be possible. I converted my row tables to column storage and my current memory consumption is down to 10 GB. I need to see how to bring it down further.
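For reference, the row-to-column conversion described above can be done in SQL. A sketch with hypothetical names (the conversion rewrites the table, so do it in a quiet window):

```sql
-- Convert a row-store table to column store
ALTER TABLE "MYSCHEMA"."MYTABLE" COLUMN;

-- Convert back to row store, if ever needed
-- ALTER TABLE "MYSCHEMA"."MYTABLE" ROW;
```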
Hi,
You are right. I created a dummy ROW table and loading and unloading operations are not allowed.
Can you please check if you have Delta memory for COLUMN tables using:
select * from m_cs_tables where memory_size_in_delta > 1000;
If you see lot of memory being occupied in Delta memory, then you can perform DELTA MERGE operation. It will move the data from uncompressed DELTA memory to compressed MAIN memory.
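The check-then-merge sequence might look like this (hypothetical schema/table names; sizes in M_CS_TABLES are in bytes):

```sql
-- Find column tables with a sizeable delta part
SELECT SCHEMA_NAME, TABLE_NAME,
       MEMORY_SIZE_IN_DELTA, MEMORY_SIZE_IN_MAIN
FROM M_CS_TABLES
WHERE MEMORY_SIZE_IN_DELTA > 1000
ORDER BY MEMORY_SIZE_IN_DELTA DESC;

-- Request a delta merge for one of the offending tables
MERGE DELTA OF "MYSCHEMA"."MYTABLE";
```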
Regards,
Ravi
Hi Ravindra,
The above query is of help, but the merge did not empty the delta memory. I ran the above query, then ran MERGE DELTA OF "myschema"."table", then ran the query again, but the table still appeared. Secondly, I see a lot of tables from _SYS_REPO and _SYS_STATISTICS, and the MERGE statement did not work for them either. Can you explain what I am missing?
Hi,
The MERGE DELTA command just notifies the HANA system that the delta storage can be merged. Whether the merge actually runs depends on the system settings for how large the delta storage must be before a merge is triggered, so you will not see an immediate impact. Usually you will only see an impact when the delta storage is quite large.
_SYS_REPO and _SYS_STATISTICS are system objects. I never tried to merge these objects as I do not have access to perform any operations on these. I think these objects are managed by the system itself.
Regards,
Ravi
Hi Moaz,
The following parameter defines the delta merge behavior:
indexserver.ini -> mergedog -> max_delta_memsize. The default value is 1000 MB. As far as I know, if your delta merge is more than this size, then it will not happen automatically. You might want to increase this parameter if your data loads lead to larger delta.
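If you do decide to change it, the parameter can be set via SQL rather than editing the .ini file by hand. A sketch (the value 2000 is just an example; the parameter path is the one named above):

```sql
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('mergedog', 'max_delta_memsize') = '2000'  -- MB, example value
  WITH RECONFIGURE;
```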
Regards,
Ravi
Muhammad:
How big are the files that you are trying to load? Based on my experience with AWS, the ideal file size is 5 GB. If you have bigger files, break them up logically, either by month or by year, into separate files.
Also, make sure that auto merge is turned OFF while you are loading.
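Wrapping a bulk load with auto merge disabled might look like this (hypothetical table name):

```sql
ALTER TABLE "MYSCHEMA"."MYTABLE" DISABLE AUTOMERGE;

-- ... run the bulk load / IMPORT here ...

MERGE DELTA OF "MYSCHEMA"."MYTABLE";  -- merge once at the end
ALTER TABLE "MYSCHEMA"."MYTABLE" ENABLE AUTOMERGE;
```

Doing one merge at the end avoids repeated intermediate merges during the load.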
Hope this helps.
Regards,
Rama