HANA Indexserver memory calculations


Hi Guys,

We have a standalone HANA system implemented in our company. I've found the indexserver memory consumption to be 230 GB.

Out of the 230 GB, I can see 180 GB consumed by the items below, but I'm not sure what consumes the other 50 GB. How can I verify this 50 GB? Also, each time I look at the indexserver memory it goes up and down. Can someone please throw some light on this?

100 GB column table size

50 GB column table size

30 GB stack & code size

50 GB ???

Thanks & Regards

Raj

Accepted Solutions (0)

Answers (3)


Former Member

Hi Rajendar,

There are several possible reasons for that 50 GB of consumption: intermediate results of SQL query processing, caches, and other heap allocations.

The fluctuation in indexserver memory is due to the loading and unloading of tables between memory and disk.
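If the unexplained 50 GB is indeed heap memory, the M_HEAP_MEMORY system view can show which allocators hold it. A sketch (column names are the documented ones, but availability can vary by HANA revision):

```sql
-- Top 10 heap allocators by memory currently in use, in MB
SELECT TOP 10
       HOST,
       PORT,
       CATEGORY,  -- allocator name, e.g. Pool/RowEngine/...
       round(EXCLUSIVE_SIZE_IN_USE/1024/1024) AS "Used MB"
FROM SYS.M_HEAP_MEMORY
ORDER BY EXCLUSIVE_SIZE_IN_USE DESC;
```

Filtering on the indexserver's PORT narrows the result to that service alone.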

Former Member

Hi,

You can also check the link below:

Query to check HANA Memory Usage ? - SAP HANA TUTORIALS FREE

Add rep if you found this helpful.

Regards,

Mahaveer Jain

Former Member

Source: SAP HANA Memory Usage Explained

Used memory serves several purposes:

    • Program code and stack
    • Working space and data tables (heap and shared memory)

The program code area contains the SAP HANA database itself while it is running. Different parts of SAP HANA can share the same program code.

The stack is needed to do actual computations.

The heap and shared memory are the most important parts of used memory. They are used for working space, temporary data, and for storing all data tables.

SAP HANA Memory Usage

You can use the M_SERVICE_MEMORY view to explore the amount of SAP HANA Used Memory as follows:

Total Memory Used:

SELECT round(sum(TOTAL_MEMORY_USED_SIZE/1024/1024)) AS "Total Used MB"
FROM SYS.M_SERVICE_MEMORY;


Code and Stack Size:

SELECT round(sum(CODE_SIZE+STACK_SIZE)/1024/1024) AS "Code+stack MB"
FROM SYS.M_SERVICE_MEMORY;


Total Memory Consumption of All Columnar Tables:

SELECT round(sum(MEMORY_SIZE_IN_TOTAL)/1024/1024) AS "Column Tables MB"
FROM M_CS_TABLES;


Total Memory Consumption of All Row Tables

SELECT round(sum(USED_FIXED_PART_SIZE +
USED_VARIABLE_PART_SIZE)/1024/1024) AS "Row Tables MB"
FROM M_RS_TABLES;


Total Memory Consumption of All Columnar Tables by Schema:

SELECT SCHEMA_NAME AS "Schema",
round(sum(MEMORY_SIZE_IN_TOTAL) /1024/1024) AS "MB"
FROM M_CS_TABLES GROUP BY SCHEMA_NAME ORDER BY "MB" DESC;

Regards,

Mahaveer Jain


Thanks Mahaveer.

But how do we calculate heap and shared memory usage, or at least make it visible?

The heap and shared memory are the most important parts of used memory. They are used for working space, temporary data, and for storing all data tables.
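Heap and shared memory are visible per service in the same M_SERVICE_MEMORY view used in the earlier answer; the HEAP_MEMORY_USED_SIZE and SHARED_MEMORY_USED_SIZE columns are documented there (a sketch; check the columns in your revision):

```sql
-- Heap and shared memory per service, in MB
SELECT SERVICE_NAME,
       round(HEAP_MEMORY_USED_SIZE/1024/1024)   AS "Heap Used MB",
       round(SHARED_MEMORY_USED_SIZE/1024/1024) AS "Shared Used MB",
       round(TOTAL_MEMORY_USED_SIZE/1024/1024)  AS "Total Used MB"
FROM SYS.M_SERVICE_MEMORY;
```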

Former Member

Hi Rajendra,

Calculating the required memory is called memory sizing.

This depends on the size of the tables, the compression applied to the stored tables, and the extra working memory (you need to know the table sizes first; if we have large tables, the result of an operation between these large tables can also be large).

Dynamically allocated memory consists of Heap and Shared Memory.

But it is not easy to calculate shared memory, as it is displayed inaccurately for the following reason.

Shared memory is memory shared between two processes, which exchange information by writing to the same memory location. It is not easy to account for: does it belong to one of the processes, both, or neither? If we naively sum the memory belonging to multiple processes, we grossly overcount.

Linux reports shared memory inconsistently. When you ask it for the resident memory map of a single process, it reports shared memory as part of that process's footprint. However, when you ask how much total memory is resident on the host, it does not account for shared memory.

This is usually not significant, because very few programs use large blocks of shared memory. However, SAP HANA is different. An early SAP HANA design decision was to use shared memory for row-store tables. As a result, when you use large row-store tables, the shared-memory footprint of SAP HANA can become very large.

Thus, if you ask Linux to report the resident size of SAP HANA and the total resident size on the host, the size of SAP HANA may appear to exceed the total, which of course makes no sense.

To compensate, SAP HANA adds the resident size of the shared-memory part of the SAP HANA processes to the resident size reported by Linux. This is another intentional reason why SAP HANA may report different values than Linux.
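The adjusted figure SAP HANA reports can be compared with what Linux sees through the M_HOST_RESOURCE_UTILIZATION view; the column names below are the documented ones, but treat this as a sketch:

```sql
-- HANA's own used-memory figure vs. the OS-level resident memory, in MB
SELECT HOST,
       round(INSTANCE_TOTAL_MEMORY_USED_SIZE/1024/1024) AS "HANA Used MB",
       round(USED_PHYSICAL_MEMORY/1024/1024)            AS "OS Resident MB"
FROM SYS.M_HOST_RESOURCE_UTILIZATION;
```

On a host with large row-store tables, the two numbers can differ noticeably for exactly the shared-memory reason described above.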

Regards,

Mahaveer Jain