
cleanup of columnstore table did not reduce memory usage

Former Member

Hi

We have a HANA table (VBFA) that we have drastically reduced, as it was taking up most of the memory. The row count was reduced from 540240655 to 40958378, but the table is still consuming 128 GB of RAM; the cleanup did not reduce its memory usage.

See the output of the script HANA_Tables_LargestTables (thanks to SAP Note 1969700):


OWNER;TABLE_NAME;S;L;HOST;B;U;POS;COLS;RECORDS;DISK_GB;MAX_MEM_GB;CUR_MEM_GB;MEM_%;CUM_%;PART.;TAB_MEM_GB;IND.;IND_MEM_GB;LOBS;LOB_GB

SAB_SLT;VBFA;C;P;hanaprd-mgmt; ;X;  1;  41;   540240655; 142.26;    155.39;    116.83;93.13;93.13;    1;     51.99;   1;     64.83;   0;  0.00 -- 1 pm

SAB_SLT;VBFA;C;P;hanaprd-mgmt; ;X;  1;  41;   142123695; 145.25;    154.87;    100.99;85.42;85.42;    1;     41.43;   1;     59.56;   0;  0.00 -- 3 pm

SAB_SLT;VBFA;C;P;hanaprd-mgmt; ;X;  1;  41;    40958378; 146.01;    158.12;    134.60;88.76;88.76;    1;     44.71;   1;     89.89;   0;  0.00 -- 5 pm

and also the output of the script HANA_Memory_TopConsumers from the same SAP Note:


TOP MEMORY CONSUMERS:

HOST;PORT;AREA;DETAIL;SIZE_GB;SIZE_PCT;CUM_SIZE_PCT;DESCRIPTION;MEMORY_REDUCTION_APPROACHES

hanaprd-mgmt;30003;Column store;VBFA;    128.28;   57.14;       57.14;Size of table VBFA in column store;Application data management and archiving -- 5PM

Please advise on what we need to do to reduce the memory usage of this table.

Thanks

Accepted Solutions (1)


Former Member

Hi

Even after a database bounce, the used memory of this table was not released:

Top 1 largest table:


SAB_SLT;VBFA;C;P;hanaprd-mgmt; ;X;  1;  41;    41770852; 146.15;    148.59;    119.95;83.74;83.74;    1;     35.55;   1;     84.40;   0;  0.00 -- 6:20 pm

top 2 memory consumers:


HOST;PORT;AREA;DETAIL;SIZE_GB;SIZE_PCT;CUM_SIZE_PCT;DESCRIPTION;MEMORY_REDUCTION_APPROACHES

hanaprd-mgmt;30003;Column store;VBFA;     92.27;   39.76;       39.76;Size of table VBFA in column store;Application data management and archiving

hanaprd-mgmt;30003;Heap area;Pool/JoinEvaluator/JERequestedAttributes/Results;     62.63;   26.99;       66.75;Join results;Reduce processed SQL data, avoid "SELECT *"

lucas_oliveira
Advisor

Hi Willem,

What's the delta vs. main size of the table now?

You can use query HANA_Tables_ColumnStore_DeltaStorage from the same note to check that.

Also, check if the delta merge kicked in by using query HANA_ColumnStore_Merges_Rev* for that table.

If the automatic merge is not enabled and you deleted the data manually, you probably have a bunch of rows pending removal in the next delta merge. If so, you can also trigger the delta merge manually.
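For reference, a minimal sketch of both checks, using the schema and table names from your output (standard monitoring views, adjust as needed):

-- compare main vs. delta size directly in the monitoring view
SELECT SCHEMA_NAME, TABLE_NAME, RAW_RECORD_COUNT_IN_DELTA,
       MEMORY_SIZE_IN_MAIN, MEMORY_SIZE_IN_DELTA
  FROM M_CS_TABLES
 WHERE SCHEMA_NAME = 'SAB_SLT' AND TABLE_NAME = 'VBFA';

-- trigger the delta merge by hand
MERGE DELTA OF "SAB_SLT"."VBFA";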

EDIT: looking at your results again, it seems that the index part of your table is increasing while the table itself is decreasing. Maybe you have lots of concat attributes as well. Take a look at the column sizes of your table with query HANA_Tables_ColumnStore_Columns and at the indexes with HANA_Indexes_LargestIndexes. That should point you to the biggest indexes in there. Let's see how it looks.
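If it helps, the per-column sizes (including internal concat attributes, which show up with $...$ names) can also be read straight from the monitoring view; a minimal sketch assuming your schema and table:

SELECT COLUMN_NAME, MEMORY_SIZE_IN_TOTAL
  FROM M_CS_ALL_COLUMNS
 WHERE SCHEMA_NAME = 'SAB_SLT' AND TABLE_NAME = 'VBFA'
 ORDER BY MEMORY_SIZE_IN_TOTAL DESC;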

BRs,

Lucas de Oliveira

Former Member

Hi Lucas

Thanks for your response; I think you are spot on with the delta merge issue. I also noticed this in the alerts section of the database (alert ID 29):

Alert Size of delta storage of column-store tables - "SAB_SLT.VBFA partition 0 on hanaprd-mgmt:30003 has a delta storage size of 103161MB. This exceeds the threshold value and may indicate a problem with delta merge processing."

I will respond soon with the output of the queries.

Regards

Willem

Former Member

Hi Lucas

The HANA_Tables_ColumnStore_DeltaStorage script shows that 101 GB of this table is in MEM_DELTA_GB:


HOST;PORT;SCHEMA_NAME;TABLE_NAME;RECORDS;RECORDS_MAIN;RECORDS_DELTA;DELTA_REC_PCT;MEM_GB;MEM_MAIN_GB;MEM_DELTA_GB;DELTA_MEM_PCT;LAST_MERGE_TIME

hanaprd-mgmt;30003;SAB_SLT;VBFA;  42795063;   687360912;    473621671;        40.79;  101.79;       0.17;      101.62;        99.82;2015/09/09 00:26:20

This is HANA revision 82. The script HANA_ColumnStore_Merges_Rev70+ lists 561 delta merge entries for this table alone; many of them contain error 2048:


START_TIME;HOST;PORT;NUM;TYPE;MOTIVATION;SCHEMA_NAME;TABLE_NAME;DURATION_S;ROWS_MERGED;TIME_PER_ROW_MS;LAST_ERROR

2015/09/10 15:06:15.516;hanaprd-mgmt;30003;   1;RECLAIM;AUTO;SAB_SLT;VBFA;    112.51;  252513232;           0.00;2048

2015/09/09 00:28:07.922;hanaprd-mgmt;30003;   1;MERGE;AUTO;SAB_SLT;VBFA;    536.76;          0;           0.00;2048

2015/09/09 00:15:40.170;hanaprd-mgmt;30003;   1;MERGE;AUTO;SAB_SLT;VBFA;    663.59;   32147484;           0.02;2048

2015/09/09 00:02:38.292;hanaprd-mgmt;30003;   1;MERGE;AUTO;SAB_SLT;VBFA;    705.84;   99418680;           0.00;2048

2015/09/08 23:59:43.212;hanaprd-mgmt;30003;   1;MERGE;AUTO;SAB_SLT;VBFA;    114.82;          0;           0.00;2048

2015/09/08 23:56:36.024;hanaprd-mgmt;30003;   1;MERGE;AUTO;SAB_SLT;VBFA;    126.79;          0;           0.00;2048

2015/09/08 23:46:05.571;hanaprd-mgmt;30003;   1;MERGE;AUTO;SAB_SLT;VBFA;    568.74;          0;           0.00;2048

2015/09/08 23:39:37.759;hanaprd-mgmt;30003;   1;MERGE;AUTO;SAB_SLT;VBFA;    323.20;   20284788;           0.01;0

2015/09/08 23:33:32.882;hanaprd-mgmt;30003;   1;MERGE;AUTO;SAB_SLT;VBFA;    302.00;   17477424;           0.01;0

2015/09/08 23:26:51.958;hanaprd-mgmt;30003;   1;MERGE;AUTO;SAB_SLT;VBFA;    338.24;   16253160;           0.02;0

HANA_Tables_ColumnStore_Columns on this table produced 100 rows; for most columns, MEM_TOTAL_MB is 0 or close to 0.

HANA_Indexes_LargestIndexes script produced one row:


SCHEMA_NAME;TABLE_NAME;S;INDEX_NAME;C;INDEX_TYPE;COLS;INDEXES;MEM_SIZE_GB

any;VBFA; ;any;any;INVERTED VALUE UNIQUE;    ;      1;      66.69

Now a resolution to reduce the delta storage of this table would be very handy. I am working through SAP Note 1977314.

Former Member

also:

Could not execute 'MERGE HISTORY DELTA OF "SAB_SLT"."VBFA"' in 6 ms 98 µs.

SAP DBTech JDBC: [2048]: column store error: merge history index error: [2450] error during merge of delta index occurred

Former Member

UPDATE:

It seems this issue is not resolving. A manual trigger of the delta merge fails:

SAP DBTech JDBC: [2048]: column store error: merge history index error: [2450] error during merge of delta index occurred

Also, this query is failing:

UPDATE "SAB_SLT"."VBFA" WITH PARAMETERS('OPTIMIZE_COMPRESSION'='YES');

SAP DBTech JDBC: [2048]: column store error: process parameters error: [6952] exception 6952:
TRexUtils/ParallelDispatcher.cpp:260
AttributeEngine: not enough memory

Could not execute 'MERGE HISTORY DELTA OF "SAB_SLT"."VBFA"' in 6 ms 98 µs.

Logged an incident with SAP support.

lucas_oliveira
Advisor

Hello Willem,

Yep, it seems you don't have enough memory to perform a delta merge (with history part) on your appliance. An OOM trace file will tell you how much memory was requested at the step where it failed. That helps in understanding the state of the system and what was needed to continue. However, it does not necessarily mean that the step you see in the trace file is the last one... other steps might require more memory.

Anyhow, I can think of a few workarounds for the scenario:

1 - Release as much memory as possible (restarting / unloading) and try a new delta merge;

2 - A table with session history definitely adds to the memory consumption. I'm not sure why your table is using that feature. If it's not mandatory for your application, it's worth investigating the possibility of using a simple session type instead. Of course that depends on the application. Ruling those out, you could create a clone of VBFA (VBFA_CLONE) with a simple session type (*no* history) and copy the data from VBFA to VBFA_CLONE gradually, merging the clone on each batch (see the sketch after this list). Afterwards, you could drop the original VBFA and rename the clone. Again, this needs deeper investigation due to the application's usage of the history part.

3 - If you have another system on a bigger appliance and you can spare a few hours of it, you can export/import the table into the other system, try the merge there, and finally bring the table back to the original system. For this option it's important to evaluate the impact on applications and whether this can really be carried out.

4 - Reload the table from the source system (evaluating the usage of a simple session type for the newly created table).
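As a rough, hypothetical sketch of the clone-and-copy idea in option 2 (the key range in the WHERE clause is only a placeholder, and the exact clone definition, in particular dropping the history part, needs to be worked out for your system):

-- create the clone (adjust the definition so it has no history part)
CREATE COLUMN TABLE "SAB_SLT"."VBFA_CLONE" LIKE "SAB_SLT"."VBFA" WITH NO DATA;

-- copy the data over in batches and merge after each batch
INSERT INTO "SAB_SLT"."VBFA_CLONE"
  SELECT * FROM "SAB_SLT"."VBFA" WHERE "VBELV" < '1000000000';
MERGE DELTA OF "SAB_SLT"."VBFA_CLONE";

-- ...repeat INSERT/MERGE for the remaining key ranges, then swap the tables
RENAME TABLE "SAB_SLT"."VBFA" TO "VBFA_OLD";
RENAME TABLE "SAB_SLT"."VBFA_CLONE" TO "VBFA";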


I'm sure there are other options but that's what I can think of as of now.


I believe these workarounds need another pair of eyes, unless you are sure about their impacts. If you're not, I recommend you create an SAP support incident to help evaluate the impacts given your scenario.

Lastly, delta merges are partition oriented. Thus, partitioning your table (after solving the issue, of course) will help avoid this scenario in the future.
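For illustration only, a partitioning statement could look like the one below (hypothetical: the HASH column has to be part of the primary key, and the number of partitions should be chosen to fit your data volume and hardware):

ALTER TABLE "SAB_SLT"."VBFA" PARTITION BY HASH ("VBELV") PARTITIONS 4;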

BRs,

Lucas de Oliveira

Answers (0)