
TSV_TNEW_PAGE_ALLOC_FAILED - internal table

Former Member

Hello,

We are running SAP R/3 4.6C with Oracle 9.2 on HP-UX.

Some important jobs have been cancelled with the error

TSV_TNEW_PAGE_ALLOC_FAILED

The internal table "IT_11911" could not be enlarged further. To extend the internal table, 11936 bytes of storage space was needed, but none was available. At this point, the table "IT_11911" has 0 entries.

The variant and the program have been reviewed to check for tighter data selection.

The same job with the same variant and selection criteria runs fine if we re-run it.

If the same amount of data is collected by the program, how does the system allocate more memory for a different instance of the same job at a different time?

Here are the heap parameters:

abap/heap_area_dia 2000000000

abap/heap_area_nondia 2000000000

abap/heap_area_total 6000000000

abap/heaplimit 41943040
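
For reference, the standard meaning of these parameters (worth confirming in RZ11 for this release):

abap/heap_area_dia      heap quota per dialog work process
abap/heap_area_nondia   heap quota per background (non-dialog) work process
abap/heap_area_total    cap on the summed heap of all work processes on the instance
abap/heaplimit          threshold above which a work process is restarted once its current step ends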

The source extract from the dump (">" marks the statement where the dump occurred):

IF BUFFER_SUBRC NE 0.
>  INSERT MATX INDEX MATX_TABIX.
ELSE.
   MODIFY MATX INDEX MATX_TABIX.
ENDIF.

Please advise.

Thanks,

Prasanna

Accepted Solutions (0)

Answers (2)


markus_doehr2
Active Contributor

> If the same amount of data is collected by the program, how does the system allocate more memory for a different instance of the same job at a different time?

This may happen if the shared memory is filled with "other data" at that time (e.g., another user's job). You may check the shared memory (extended memory) usage in ST02.

If this is a custom program, I suggest you split the processing up: instead of reading everything into memory and then looping over the internal tables, process the data in sections (e.g., 1000 entries at a time). This will significantly decrease memory usage and will most likely also be faster.
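
A minimal sketch of that approach in ABAP, assuming a hypothetical read of table MARA (the table and variable names are illustrative, not taken from the failing program):

REPORT zpackage_demo.

DATA: lt_rows TYPE STANDARD TABLE OF mara,
      ls_row  TYPE mara.

* Fetch and process the data in packages of 1000 rows instead of
* loading the whole result set into one internal table at once.
SELECT * FROM mara
       INTO TABLE lt_rows
       PACKAGE SIZE 1000.

  LOOP AT lt_rows INTO ls_row.
*   ... process one row of the current package ...
  ENDLOOP.

* lt_rows is overwritten with the next package on each pass.
ENDSELECT.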

Markus

Former Member

Hi Mark / Markus,

If some common tables are involved in all the failed jobs, will a change in the archiving strategy help overcome this error, so that less data is extracted? The DB size is really huge (20 TB).

Thanks,

Prasanna

markus_doehr2
Active Contributor

> If some common tables are involved in all the failed jobs, will a change in the archiving strategy help overcome this error, so that less data is extracted? The DB size is really huge (20 TB).

We can't tell if that would help by just knowing that "a memory error occurs".

Archiving sometimes helps, yes, but if the program still selects too much data it won't help.

Markus

Former Member

Hi,

The problem is not with a single report but with about 3-4 of them, so this makes me think about the parameters.

I have a doubt, before we check the available swap space and the SAP notes on the heap parameters for HP-UX:

If a background work process has crossed abap/heaplimit, will that dialog step then use the rest of the abap/heap_area_nondia quota, which is shared by all the work processes (1:n), leaving less heap available for the remaining background processes on that instance?

Thanks,

Prasanna

markus_doehr2
Active Contributor

> The problem is not with a single report but with about 3-4 of them

Are those standard reports created by SAP?

> If a background work process has crossed abap/heaplimit, will that dialog step then use the rest of the abap/heap_area_nondia quota, which is shared by all the work processes (1:n), leaving less heap available for the remaining background processes on that instance?

Heap memory, no matter whether in background or dialog, is process-local memory which is exclusively allocated for the work process. It's not part of the extended/shared memory but allocated on top of it.
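
To illustrate with the parameter values posted above (a sketch of the quota logic; the exact behavior is worth verifying against the SAP memory management documentation for 4.6): abap/heap_area_nondia = 2 000 000 000 is a per-process quota, so each background work process may allocate up to ~2 GB of heap for itself, while abap/heap_area_total = 6 000 000 000 caps the sum of heap across all work processes on the instance. Three background jobs that each approach their 2 GB quota together exhaust the 6 GB total, and a fourth job can then dump with TSV_TNEW_PAGE_ALLOC_FAILED even though it stays within its own quota. That would also explain why re-running the same job at a quieter time succeeds.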

Markus

Former Member

There is one standard report and the rest are Z reports. The Z reports and their variants are being checked for tighter selection.

But there is very little scope for splitting up the variants.

Yes, the heap is privately owned. But my major doubt was:

Do all the non-dialog work processes on an application server share the abap/heap_area_nondia quota?

So if one background work process crosses abap/heaplimit and uses more heap from within the abap/heap_area_nondia quota, will it leave less memory for the other background work processes? That would explain why the same cancelled job works when re-run.

I've opened an OSS message for SAP to check the dumps/traces.

Thanks,

Prasanna


Former Member

The selections are getting too big, probably due to database growth.

You could increase your heap sizes, but that is just a workaround. It's better to change the selections of the jobs, perhaps by using multiple variants in multiple job steps, as sketched below.
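
A minimal sketch of that idea as a wrapper program (ZREPORT and the variant names are placeholders, not from this thread); the same split can also be set up directly as multiple steps in SM36:

REPORT zrun_in_slices.

* Run the heavy report once per variant so that each run only
* selects a slice of the data and needs far less memory.
SUBMIT zreport USING SELECTION-SET 'SLICE_01' AND RETURN.
SUBMIT zreport USING SELECTION-SET 'SLICE_02' AND RETURN.
SUBMIT zreport USING SELECTION-SET 'SLICE_03' AND RETURN.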

Kind regards,

Mark