
Table crosses 2 billion rows and gives an error during execution.


Hi,

We have a table with more than 2 billion records. On the advice of some experts we have partitioned that table. We have three active nodes on different servers.

Even after partitioning, it is still giving an out-of-memory (OOM) error, so the experts are now suggesting that we move one partition to another node (server).

1) My question is: if we divide the table into partitions, will it allow more records to be stored? Is this the right resolution?

2) If we move one partition to another node, will the problem be resolved? Are there any other precautions I need to take?

Accepted Solutions (0)

Answers (2)


Former Member

Hi Sohil,

This alone should not be a limit or a reason for OOM. Can you check the last delta merge time for the table?

What is the delta memory?

What is the main memory of the table?

If you see a high delta memory, then try performing a delta merge on the table.
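For reference, the delta/main memory sizes and the last merge time suggested above can be read from the M_CS_TABLES monitoring view, and a merge can be forced manually. This is a sketch; the schema and table names are placeholders you would replace with your own:

```sql
-- Delta vs. main memory and last merge time for one column-store table
SELECT table_name,
       memory_size_in_main,
       memory_size_in_delta,
       raw_record_count_in_delta,
       last_merge_time
FROM   m_cs_tables
WHERE  schema_name = 'MY_SCHEMA'
  AND  table_name  = 'MY_TABLE';

-- If the delta store is large, trigger a merge explicitly
MERGE DELTA OF "MY_SCHEMA"."MY_TABLE";
```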

Regards,

Pavan Gunda


Hi Pavan,

We have recently performed a delta merge on that table, so that should not be the issue.

The SAP dump indicated that the table had crossed 2 bn rows, which is why we partitioned it.

Now the issue is related to memory.

So my concern is: if we move one partition to another node, will it work?

The partitions would then be on physically different servers.

Former Member

Hello Sohil,

The short and simple answer is yes, it will work. Partitioning can be used in both single-host and multiple-host systems.

Table Partitioning - SAP HANA Administration Guide - SAP Library

Redistribution of Tables in a Distributed SAP HANA System - SAP HANA Administration Guide - SAP Libr...
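To illustrate the move itself: relocating a single partition to another host in a scale-out landscape is a plain SQL statement. This is a sketch under the assumption of a multi-host system; the partition number and the host:port location are placeholders for your own landscape:

```sql
-- Move partition 2 of the table to the indexserver on another host
ALTER TABLE "MY_SCHEMA"."MY_TABLE"
  MOVE PARTITION 2 TO 'hanahost02:30003';
```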

Q) What type of partitioning did you use?

A)

Now, to better understand why you are getting the OOM, it would be helpful if you uploaded the indexserver trace file and also the rte dump.

In parallel, I really would suggest you open a message with SAP, since they can at least log on to your system and have a look, as opposed to SCN members, who can only try to answer questions based on the information you give us.

KR,

Amerjit

Former Member

Hello Sohil,

For the future, it would really help if you specified which revision you are running.

1) Once you reach the 2 bn row limit, you do have to partition.

2) Once you have partitioned a table, each partition can hold up to 2 bn records.

3) Your OOM could be due to merge operations, but without proper information that's just a stab in the dark.
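As an illustration of point 2, a hash partitioning that splits a table into four parts, each with its own 2 bn row limit, looks like this. The schema, table, and key column names are placeholders, and hash on a primary-key column is just one of several partitioning types available:

```sql
-- Split the table into 4 hash partitions on a key column;
-- each partition can then hold up to 2 billion rows
ALTER TABLE "MY_SCHEMA"."MY_TABLE"
  PARTITION BY HASH ("DOC_ID") PARTITIONS 4;
```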

If you are really interested in a deeper look at the subject, there is a great article put together by .

Even though the following doc refers to SPS3, it will still give you an idea of partitioning:

Checking your compression, which impacts memory consumption, can be done with the SQL in the following note: 2105761 - High memory consumption by RANGE-partitioned column store tables due to missing optimize c...

The above is just some starting-point information to nudge you towards a better understanding of what you have.

KR,

Amerjit