
RTSINPUT_CUBE performance?

Former Member

Dear Experts,

I have been rebuilding a system. RTSINPUT (/SAPAPO/TSCUBE), with 30-odd key figures, 700K+ CVCs, and 30-odd storage buckets, has been running for close to 17 hours now.
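For a rough sense of the data volume in a load like this, here is a back-of-envelope sketch (assuming one time-series value per CVC × key figure × bucket combination; this is purely illustrative and not how liveCache actually stores the data):

```python
# Back-of-envelope sizing for the load described above.
# Assumption: one time-series cell per CVC x key figure x storage bucket.
cvcs = 700_000        # "700K+ CVCs"
key_figures = 30      # "30-odd key figures"
buckets = 30          # "30-odd storage buckets"

cells = cvcs * key_figures * buckets
print(f"Time-series cells to load: {cells:,}")  # 630,000,000
```

Even under this simplified counting, the job is writing on the order of hundreds of millions of cells, which puts the run time in a different light than "a simple report program".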

What possible explanation exists for a standard SAP program like this running so long?



Parallel processing? Yes, it is there.

Locking setting in the planning area? It is set to liveCache lock.

System resources? Perfectly fine, per the 24x7 administrator.

HANA? No.

SAP SCM 7.0 EhP1.

SM50? Waiting, waiting, running, waiting, reading.

Trace? I do not want to kill a production system.

liveCache work processes: set up per an SAP Note advisory, long ago.

DB02? Not my job.


Can I break this up into smaller chunks? Yes, but I have already wasted 17 hours. I do understand that the sum of the parts may take less run time than the whole.


Why should a 'simple' program like RTSINPUT take so long?


Is this a computing problem or a bad implementation of RTSINPUT's code? I am no technical expert, so I can only pretend to understand the nitty-gritty here.


Thanks

L

Accepted Solutions (1)


Former Member

If anyone is faced with the question "How long will this program take with such inputs?"

Answer = Slightly less than a day.

Then add whatever toppings you need to manufacture some grievances on behalf of SAP.

The job just finished, in 20-something hours. But now my data is stale, so I need to do something about it; I think that can be managed by the demand planners, who think and work faster than these programs. Anyway, this is a one-time load, so I don't care.

Next is TSCOPY, which I believe will take a couple of days.

Former Member

Hi Lok

We also faced a similar issue in one of my earlier projects.

We were loading data from a cube whose indexes had not been built, which is why it was taking a huge amount of time.

Please check the indexes for the cube. If they are not built, stop the job, build the indexes, and execute the job again. I am sure this will run faster. Please let us know if this information helps you.

Thanks

Amol

Former Member

Thanks Amol,

That is the first thing I did: delete and rebuild the indexes using process chains, though I was struggling to locate the executable programs to run in the foreground.


Maybe the results log is taking a lot of time. I am running another variant of the same job with the results log unchecked. Background spooling and log writing take time in most programs that must write sequentially to large tables, and this one is large because it is a first-time load.


Whatever it is, 20 hours for a fully loaded report program, even with all the toppings and checks, sounds ridiculous unless there is a reason backed by documented empirical or analytical research on ABAP program run times under ideal test conditions. The problem is that there is simply no estimate, however approximate, of run times for these programs based on assumptions about system activity and resources. Someone should develop a utility here.


Another theory says that interpreted programs, no matter how they are implemented, will always be slower than compiled programs. That was just to show that I am reading what they are hiding 🙂


L

Answers (1)


Former Member

Hi Loknath,

We're facing a similar issue and have tried various things, but have not been able to resolve it.

We have around 1M CVCs and 7 key figures, with a time horizon of 4 years (in weekly buckets: 230 iterations).
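Under the same illustrative one-cell-per-combination counting used for sizing above (an assumption for scale only, not actual liveCache storage), this load is considerably larger than the original poster's:

```python
# Illustrative sizing (assumption: one cell per CVC x KF x weekly bucket).
cvcs = 1_000_000
key_figures = 7
weekly_buckets = 230  # 4-year horizon in weekly buckets

cells = cvcs * key_figures * weekly_buckets
print(f"Cells to load: {cells:,}")  # 1,610,000,000
```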

In Q, we ran the standard /SAPAPO/TSCUBE job without any parallel processing and with 'Results log' unchecked. It took almost 3 days to load the data.

In P, we were expecting it to complete a lot faster. However, with the same settings as in Q, the job failed after 20,000 seconds with the following error:

Could not allocate space for object 'dbo.SORT temporary run storage:  140774612336640' in database 'tempdb' because the 'PRIMARY' filegrou

SQL Error: 1105

Internal session terminated with a runtime error MESSAGE_TYPE_X (see ST22)

Job cancelled

We then tried a bunch of different settings:

a) Tried to run only for one Sales Org (about 10K CVCs)

b) For one Sales Org, ran with parallel processing: 15 parallel processes with a block size of 10,000. No initial values to be read.

However, it took more than 10 hours and never completed. Strangely, SM51 would show only one process doing the work (we expected more than one to be running).
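One hedged explanation for seeing a single busy process: if the block size means "CVCs per parallel work package" (an assumption about how /SAPAPO/TSCUBE splits the selection, not confirmed here), then a selection of roughly 10K CVCs with a block size of 10,000 yields exactly one work package, so only one of the 15 processes ever gets work. A quick sketch:

```python
import math

def work_packages(num_cvcs: int, block_size: int) -> int:
    """Number of blocks a selection splits into, assuming the
    block size is the number of CVCs per parallel work package."""
    return math.ceil(num_cvcs / block_size)

print(work_packages(10_000, 10_000))  # 1  -> only one process has work
print(work_packages(10_000, 1_000))   # 10 -> enough blocks for many processes
```

If this assumption holds, trying a block size well below the CVC count in the selection would be worth testing before adding more parallel processes.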

Any ideas/suggestions would be really helpful.

Thanks a lot!

Best Regards,

Hiren

Former Member

Buy HANA. Maybe it helps 🙂

Sorry, no idea; I have stopped caring about SAP. You need to engage SAP support to investigate the memory parameters. Do not let them call it a consulting service; they will at the first instance. They must be accountable for system performance at all times, just as Honda is responsible for safety-related features long after the warranty. They recently replaced my SRS system for free.

Thanks

L