
Out of memory dump for SELECT query in SAP HANA

Former Member
0 Kudos

Hi,

I am running a SELECT query with inner joins across 3 tables.

This SELECT will return 8,555,550 records.

When I execute the SELECT statement, I get an out of memory dump.

 

transaction rolled back by an internal error: exception 1000013: exception 1: no.1000002 (ltt/impl/memory.cpp:125)

Out of memory ; $size$=2048; $name$=Pool/RowEngine/QueryExecution; $type$=pool; $inuse_count$=102758216; $allocated_size$=106879587472

Please help me resolve this issue.

Thanks.

Accepted Solutions (1)

former_member184768
Active Contributor
0 Kudos

Hi Ramya,

A couple of things from my point of view:

1) You mentioned the query will return 8.5 million records, but you did not mention how many records are in the underlying 3 tables.

2) The bottleneck is the large result set generated by the join condition, which could be much more than 8.5 million records.

3) The large result set could be due to high data volume, filters not being pushed down or not being defined properly, or an incorrect join condition producing N times the data set before aggregation.

Can you please try the following:

1) Put some test filters on the query to reduce the data set so that you can at least see the result (a rough example follows below).

2) Check the Visualize Plan option for the query and identify which table / join condition generates the most records and which operation consumes the most resources.

3) Check whether the filters are actually pushed down before the result set is generated.

4) See if the join conditions are defined appropriately.
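
A minimal sketch of points 1) and 2); the table and column names (orders, items, customers) are invented purely for illustration, and EXPLAIN PLAN is used here as the SQL counterpart of Visualize Plan:

-- 1) Add a narrow test filter so the join produces a small, inspectable result set
SELECT t1.order_id, t2.item_id, t3.customer_name
  FROM orders t1
  INNER JOIN items     t2 ON t2.order_id    = t1.order_id
  INNER JOIN customers t3 ON t3.customer_id = t1.customer_id
 WHERE t1.order_date >= '2013-09-01'      -- test filter to shrink the data set
   AND t1.order_date <  '2013-09-08';

-- 2) Ask HANA for the plan to see where the row counts explode
EXPLAIN PLAN SET STATEMENT_NAME = 'big_join' FOR
SELECT t1.order_id, t2.item_id, t3.customer_name
  FROM orders t1
  INNER JOIN items     t2 ON t2.order_id    = t1.order_id
  INNER JOIN customers t3 ON t3.customer_id = t1.customer_id;

SELECT operator_name, table_name, output_size
  FROM explain_plan_table
 WHERE statement_name = 'big_join';       -- the generated plan is written to this table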

Regards,

Ravi

Former Member
0 Kudos

Hi All,

Thanks for your suggestions.

I have split the join into temp tables and then combined the data.

It is working fine now.
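
Roughly, the split was along these lines (a minimal sketch; the table and column names below are placeholders, not the actual tables):

-- Build the 3-table join in two smaller steps via local temporary tables
CREATE LOCAL TEMPORARY TABLE #step1 AS (
    SELECT t1.order_id, t1.customer_id, t2.item_id
      FROM orders t1
      INNER JOIN items t2 ON t2.order_id = t1.order_id
);

CREATE LOCAL TEMPORARY TABLE #step2 AS (
    SELECT s.order_id, s.item_id, c.customer_name
      FROM #step1 s
      INNER JOIN customers c ON c.customer_id = s.customer_id
);

SELECT * FROM #step2;   -- combined result, built in smaller pieces

DROP TABLE #step1;
DROP TABLE #step2;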

Thanks.

Answers (4)

Former Member
0 Kudos

What you should do depends on the design of your joins, because that determines which engine the code runs in. Either way, doing the joins in temp tables will be very inefficient.

If you are doing a plain SELECT X FROM Y INNER JOIN X ON Y.A = X.A, then HANA will usually be pretty inefficient with very large data volumes.


If you are aggregating, then make sure you build an Analytic View and build your SELECT on top of that.

If you are joining, then use CE functions in a Scripted Calc View. This will almost certainly return 8.5m rows efficiently (a sketch follows below).
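
A rough sketch of the CE-function approach inside a scripted calculation view; the schema, table, and column names are placeholders, and the final aggregation is only an example of keeping the materialized result small:

/********* Begin Procedure Script ************/
BEGIN
    -- Read only the needed columns from the column store
    lt_orders    = CE_COLUMN_TABLE("MYSCHEMA"."ORDERS",    ["ORDER_ID", "CUSTOMER_ID", "AMOUNT"]);
    lt_customers = CE_COLUMN_TABLE("MYSCHEMA"."CUSTOMERS", ["CUSTOMER_ID", "CUSTOMER_NAME"]);

    -- Join in the calculation engine instead of the row engine
    lt_joined = CE_JOIN(:lt_orders, :lt_customers,
                        ["CUSTOMER_ID"],
                        ["ORDER_ID", "CUSTOMER_ID", "CUSTOMER_NAME", "AMOUNT"]);

    -- Aggregate before returning so the result set stays small
    var_out = CE_AGGREGATION(:lt_joined,
                             [SUM("AMOUNT") AS "AMOUNT"],
                             ["CUSTOMER_ID", "CUSTOMER_NAME"]);
END
/********* End Procedure Script ************/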


John

Former Member
0 Kudos

Hi John

Actually, the SP that does the join is inefficient, as the main chunk of time goes into data inserts (doing delta merges). I had taken tips from your performance guide, like partitioning the table, turning off auto delta merge, and then doing a manual merge after the data inserts (sketched below). But the performance still needs improvement.
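
For reference, a minimal sketch of that merge handling, with an invented target table and partition column:

-- Partition the target table so inserts and merges are spread out
ALTER TABLE "MYSCHEMA"."JOIN_RESULT" PARTITION BY HASH ("ORDER_ID") PARTITIONS 4;

-- Turn off automatic delta merges while loading
ALTER TABLE "MYSCHEMA"."JOIN_RESULT" DISABLE AUTOMERGE;

-- ... chunked INSERT ... SELECT statements of the SP run here ...

-- Trigger a single manual delta merge once all inserts are done
MERGE DELTA OF "MYSCHEMA"."JOIN_RESULT";

-- Switch automatic merges back on
ALTER TABLE "MYSCHEMA"."JOIN_RESULT" ENABLE AUTOMERGE;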

However, since the SP dumps ready-made joined data into a physical table in HANA, the analytic/calc view built on it executes super fast.

Interestingly, I had another use case where, luckily, the join fields were the only fields needed in the output, so I was able to simulate a left outer join by just doing a union / aggregation and filtering (sketched below).
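
A sketch of that trick, under the assumption that only the join key is needed and one row per key is acceptable (table and column names are placeholders):

-- Emulate "left_tab LEFT OUTER JOIN right_tab ON key_col" when only key_col matters
SELECT key_col,
       MAX(from_right) AS matched           -- 1 if the key also exists in right_tab
  FROM (
        SELECT key_col, 1 AS from_left, 0 AS from_right FROM left_tab
        UNION ALL
        SELECT key_col, 0 AS from_left, 1 AS from_right FROM right_tab
       ) u
 GROUP BY key_col
HAVING MAX(from_left) = 1;                  -- keep only keys present in left_tab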

rishi

Former Member
0 Kudos

If you paste the code and model, we can look at what a CE function would look like. I have worked with local temp tables of 300m rows or more and the OLAP engine can typically optimize this on the fly.

rajarshi_muhuri
Active Participant
0 Kudos

Hi John,

I opened a new thread, as I did not want to hijack this one. I pasted the current code of the SP that does the join in chunks and then inserts into a physical table. Unfortunately, when the data set is large, the SP takes 45 minutes to 1 hour to complete.

http://scn.sap.com/thread/3447164

rindia
Active Contributor
0 Kudos

Hi Ramya,

I too faced this problem on HANA revision 58, but luckily it was solved when I upgraded to revision 62.

You can check the thread here.

Also, Hugo Amo suggested SAP Note 1862506, which I am pasting below for quick reference.

Symptom
After upgrade of HANA database software to SPS05, the statisticsserver process may run out of memory,
resulting in at least one of the following trace files:
  • statisticsserver_<hostname>.3<instance_id>05.rtedump.<timestamp>.<pid>.oom.trc
  • statisticsserver_<hostname>.3<instance_id>05.rtedump.<timestamp>.<pid>.oom_memory_release.trc
Environment
  • HANA 1.0 SPS05 and newer
Cause
The database software release HANA 1.0 SPS05 needs more memory for the statisticsserver process than older versions.
Under certain conditions, the default memory allocation limit is too small. The lack of memory can also cause a termination of statisticsserver backup jobs.
Resolution
  1. In the HANA Studio navigation pane, select Open Default Administration of the affected database system.
  2. In the Administration pane, open the Configuration view and expand the tree nodes statisticsserver.ini -> memorymanager.
  3. Right-click the property allocationlimit, which has a default value of 5%, and select Change. Enter a higher value, e.g. 10%, and click Save (see also the SQL sketch after these steps).
  4. Open the Landscape view, and right-click the service statisticsserver. Select Reconfigure Service.
  5. Repeat the failed operations and test whether the above statisticsserver OOM trace files reoccur.
  6. If yes, increase the allocationlimit to 15% or 20%.
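
The same change can also be made from the SQL console; a minimal sketch, assuming the change is made on the SYSTEM layer:

-- Raise the statisticsserver allocation limit to 10% and apply it immediately
ALTER SYSTEM ALTER CONFIGURATION ('statisticsserver.ini', 'SYSTEM')
  SET ('memorymanager', 'allocationlimit') = '10'
  WITH RECONFIGURE;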

Regards

Raj

Former Member
0 Kudos

Hi,

I am using HANA SPS6, Rev 62.

The allocation limit is 10%.

I am getting the OOM issue in the DEV box itself; in production I will have a much larger volume of data.

Sorry to ask this question,

but how do I see the trace files under /usr/sap/<sid>/HDBxx/<hostname>/trace?

In an ECC or BW system, we would get this from tcode AL11.

How do I check this in HANA, and how do I resolve this issue?

Thanks.

rindia
Active Contributor
0 Kudos

Hi Ramya,

Right-click on the HANA node of your dev box and choose Administration.

In the Diagnosis Files tab, check the Modified column for the timestamp of when you got the OOM.

Once you have identified your trace file, double-click on it to open it. This is how I do it.
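
If you prefer SQL, the trace files can also be listed from the monitoring view M_TRACEFILES; a small sketch (the LIKE filter is just an example):

-- List recent trace files; OOM dumps have 'oom' in the file name
SELECT host, file_name, file_size, file_mtime
  FROM m_tracefiles
 WHERE file_name LIKE '%oom%'
 ORDER BY file_mtime DESC;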

Regards

Raj

Former Member
0 Kudos

A join like yours on such a big volume WILL DEFINITELY CRASH HANA. HANA simply cannot handle such a huge join (even if the cardinality is n:1). There is no way around this.

We have a similar situation where we have to do the join, so we run an SP that chunks out the join and writes it to a temp table; the view just picks up the joined data.

This is very much a Basis issue; there is not much you can do to prevent it. They can only turn on traces to do root cause analysis.

Our HANA box ran into many memory issues on Rev 56. Now we are on Rev 64; it has gotten better, but it still occurs.

Former Member
0 Kudos

Go to the latest release first (Rev 68 as of today).

AtulKumarJain
Active Contributor
0 Kudos

Hi Ramya,

Could you please share your HANA revision number?

If it is not the latest, please upgrade it.

BR

Atul

SK-EA
Active Participant
0 Kudos

Ramya,

Please check the RTE dump file created when you faced this OOM issue. It will be in the trace folder (/usr/sap/<sid>/HDBxx/<hostname>/trace) in the following format:

<hanaprocess>_<hostname>_<internal_comm_port>.rtedump.<timestamp>.nnnnn.oom.trc

E.g., statisticsserver_XXXXXXXXXX.30005.rtedump.20131002-121505.14269.oom.trc (XXXXXXXXX is the host name of the server)

This will help identify the process which is running out of memory. You may need to increase the memory allocation temporarily for this process.

For example, in the Administration Console, go to the Configuration tab -> statisticsserver.ini -> memorymanager -> allocationlimit, double-click on it, and increase the memory allocation.

Hope this helps. If you have any queries, please let me know.

Regards,

Srinivas K.