on 05-04-2015 7:58 PM
Hi All,
Any idea what is causing the issue below? We are running some complex update statements on a large table.
Could not execute 'UPDATE <TABLE NAME> AS A SET A.SYS_DXXXX = ...' in 8:19.878 minutes.
SAP DBTech JDBC: [2048]: column store error: search table error: [2999] Docid sequence with more than 4 billion elements not supported.
Regards,
Suman
Such a short post and so many problems...
Why haven't you searched before you posted (as you agreed to do by accepting the Rules of Engagement here on SCN)?
You surely would have figured out that SAP HANA doesn't support more than 2 billion rows per table/partition.
But maybe this isn't the problem at all, since the error message looks a bit strange (4 billion ... never saw that one before, but hey, what do I know?).
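Incidentally, you can check how close your tables are to that limit yourself. A minimal sketch, assuming the standard M_CS_TABLES monitoring view (exact columns can vary between revisions):

```sql
-- List column-store table partitions approaching the 2 billion row limit
SELECT schema_name, table_name, part_id, record_count
  FROM M_CS_TABLES
 WHERE record_count > 1000000000
 ORDER BY record_count DESC;
```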
So, maybe it's really something else, but you managed to not show your code and data.
You also leave it to us to guess your SAP HANA revision, which makes any thought about possible bugs a complete joke.
Suman, I suspect you don't actually want to get help... if it was attention you were looking for, I hope these lines fixed you up.
- Lars
Hi Lars,
I am sorry for not providing more details. I am new to HANA. I searched for this error and could not find any details on the internet or the SCN forums.
Yes, I am aware of the 2 billion row limit. The biggest table we have in the query is 160 million rows.
It is an update query with multiple joins. We are running on SPS 8.
"Docid sequence" is a term I had never heard before.
I have rewritten the query in multiple ways and always end up with the same error.
Any pointers on this issue would be really helpful.
And No, I didn't post it for any attention.
Let me know if you need more details.
Regards.
Suman.
Could it be that the total number of records returned from your subquery exceeds ~4 billion records (the maximum of an unsigned 32-bit integer)?
SAP HANA does have a limit of 4 billion records for intermediate results; so even though you can process sets beyond the 2 billion per-table limit, the 32-bit internal ID limits the number of rows to 4,294,967,296 (2^32).
In order to get the update through, you should split/partition the records to be updated as well as the result of the sub-select.
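A minimal sketch of what such a split might look like inside a SQLScript procedure; the slicing on MOD of a key column and all table/column names here are illustrative assumptions, not your actual model:

```sql
-- Run the update in 10 slices so no single statement has to
-- enumerate an intermediate result anywhere near the 32-bit limit.
DECLARE i INTEGER;
FOR i IN 0 .. 9 DO
    UPDATE target_tab AS t
       SET t.f3 = s.f3
      FROM source_tab AS s
     WHERE t.key1 = s.key1
       AND MOD(t.key1, 10) = :i;   -- one slice per loop pass
END FOR;
```

Each pass touches only a fraction of the rows, which also keeps the intermediate join result of every single statement small.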
- Lars
Thank you. That helped.
Inside the procedure I split the query up logically and ran the update as below.
Note: I didn't split the number of records; I only split the query.
lt_temp = SELECT KEY1, F2, F3, ... ;

UPDATE TABLE_B AS B
   SET B.F3 = A.F3
  FROM :lt_temp AS A
 INNER JOIN TABLE_C ...
  LEFT OUTER JOIN ... ;
It worked fine.
If I write the same update logic in a single query, it fails.
I checked the explain plan and I don't see a 4 billion row count on any operation.
Is it something to do with late materialisation?