
Macro down..factory down

Former Member

A macro is running in the background (a DP background job).

It adds two key figures, so there is only one way the macro could be written: C = A + B.

Out of the 10 characteristics available, only 2 are selected for grouping in the background job.

No navigational characteristics are used.

The macro executes over 28 buckets/iterations (the lowest level at which the time series is stored).
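(An illustration only, with made-up numbers; the real calculation runs inside the planning area, not in Python.) Per CVC, the macro amounts to a bucket-wise addition over the 28 buckets:

```python
# Hypothetical stand-in for the DP macro C = A + B:
# key figures A and B are time series over 28 buckets.
BUCKETS = 28
A = [100.0] * BUCKETS  # key figure A, one value per bucket (made-up data)
B = [50.0] * BUCKETS   # key figure B

# The macro evaluates C = A + B independently in every bucket.
C = [a + b for a, b in zip(A, B)]
```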

There are 40K CVCs at the selected grouping level. This is 1/7th of what it would be with all characteristics selected.

The data view has 8 key figures. The horizon is 48 x 4 buckets, but the macro horizon is only 28 buckets.

RAM is 16 GB.

Everything is consistent.

Brand new system. Release SCM 71.

This macro has been running for more than 24 hours, and a third of the CVCs are yet to be processed.

The factory is shut down because this macro is still running.

Questions:

1. Why is this macro so slow? It takes 5 seconds per CVC on average, as seen in the spool. Logs are not activated for the DP job; only the spool is.

2. What should be done to make this as fast as the 1980s-make HP calculator I use?

3. Is this a NORMAL execution time for macros, in SAP speak?

4. Is there a hidden advisory, note, check box, or program that I can pay for to have this macro execute in less than a minute?

Thanks

BORAT

Accepted Solutions (1)


Former Member

Hi,

In general, for such jobs you should build a data view that is as small as possible, with only the key figures and buckets that are needed; the rest will slow down your job.

I sometimes see horrible performance the first time data in a "new" key figure is written to liveCache; after that it works much better. Is this the first time? For a macro that does some steps and writes some results, I would say usual performance is about 1 minute per 1000-2000 CVCs. As Dogboy says, there is a lot of variability in these things.

I would define a much smaller selection (1000-2000 CVCs) and run the same macro; maybe 40K CVCs is too much for your liveCache settings (heap, cache, etc.). Having the data in smaller chunks would surely improve matters; performance is not linear.
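The chunking idea, sketched in plain Python (the CVC identifiers and chunk size are hypothetical; in APO the split would be done with selection profiles, not code):

```python
def split_into_selections(cvcs, size=2000):
    """Split a CVC list into selections of at most `size`, to be run
    as separate background jobs instead of one 40K monolith."""
    return [cvcs[i:i + size] for i in range(0, len(cvcs), size)]

all_cvcs = list(range(40_000))          # stand-in identifiers
selections = split_into_selections(all_cvcs)
print(len(selections))                  # 20 jobs of 2000 CVCs each
```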


regards,

J.

Former Member

Thank you, DB49 and James.

I highly appreciate your timely responses.

James, you are right - this is a first-time execution.

DB49, all notes are checked, DB parameters were reviewed, and sufficient memory exists for runtime. The only thing that is not defined is a parallel processing profile for DP background jobs.

My data view has limited buckets, limited key figures, and only the needed characteristics. I can't make it any leaner.

We are doing a system restart. An OSS message is not yet raised; we are raising it soon.

No other processes were running when this macro ran. No locking of data of any kind, at least not in SM12. The spool is very clear: all green and yellow (no change). Initial data (0) is not processed by the macro. All settings are optimal, based on SAP notes.

BASIS is checking a few things at the DB/OS level.

Does SAP provide a free audit service after a new system goes live? It would be nice to have a third pair of eyes.


I will keep you posted on what eventually fixes such an issue.

Many Thanks

BORAT

Former Member

DB49,

When this macro was "designed" (that took less than 25 seconds to write), I had only 5K CVCs and very little data in the planning area. I generally trust the macro test workbench to see whether the results are OK, though not everything can be tested using it.

I can't possibly test each and every macro in the background in my test system before I move it to Prod, because a result based on the data in dev systems is not going to be any indicator of its performance in the production system.

Thanks

BORAT

Former Member

Here's one clue:

The speed of execution is decreasing exponentially: the first 1000 CVCs took 60 seconds, the next 1000 were done by 200 seconds, the next 1000 by 600 seconds, and now 10000 CVCs have taken 10500 seconds. At this rate I am probably looking at 15 hours for this run.
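A rough extrapolation from the spool figures above (linear from the last point, so only a lower bound while the per-CVC cost keeps growing):

```python
# Cumulative runtimes observed in the spool:
#   1000 CVCs ->    60 s
#   2000 CVCs ->   200 s
#   3000 CVCs ->   600 s
#  10000 CVCs -> 10500 s
# Even a *linear* extrapolation from the last point is sobering, and
# since the per-CVC cost is still growing, it is only a lower bound.
done_cvcs, elapsed_s = 10_000, 10_500
total_cvcs = 40_000

linear_estimate_h = elapsed_s / done_cvcs * total_cvcs / 3600
print(round(linear_estimate_h, 1))  # ~11.7 hours, if the slowdown stopped now
```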

This was executed a second time just for the heck of it. The only thing I changed was the number of characteristics in the planning book, down to only those required for grouping, viz. just 3 characteristics including the APO version.

SM50 doesn't tell me much.

Former Member

liveCache is all green. Work processes (LC sessions) are set as advised by SAP notes. No critical logging events.

So the only remaining thing that can help is parallel processing. I can tell whether that's so only after a re-run of the macro. I will let the current one run and save the spool for eternity, for the reference of my past achievements 🙂

Here's how I think: NO MATTER HOW BAD the macro, aggregation level, selection level, or data view clutter is, ANY SYSTEM that is produced, screenplayed, and directed to make such petty calculations MUST WORK like a horse, each time, every time, for the next 100 years.

Thanks

BORAT

Former Member

Hi,

You can certainly use parallel processing, but still divide the run into smaller chunks; parallel processing has its own problems, and its performance is also nonlinear.

Good luck,

J.

Former Member

Thanks, James. Smaller chunks would mean creating 20 activities and 20 jobs, and an all-nighter. I will do this later, with selections along some visibly clear basis.

I hope the macro completes before the cows come home tomorrow.

Thanks

BORAT

Former Member

Borat,

I can't possibly test each and every macro in the background in my test system before I move it to Prod.

!!!!!  I can see you are indeed a brave man.

Best Regards & Good Luck,

DB49

Former Member

Thanks for the compliments Dogboy.

I don't wish to spend a year testing macros in the background, of all the things on earth:

a) because no one is paying me for it

b) I would be ashamed to tell my grandchildren what I did for a living, and

c) I am not in the business of proving SAP wrong. I trust SAP because it's very expensive software made by the most perfectionist people in the world.

Former Member

🙂

What I meant was that if you divide it into three sequential jobs, each of which is in turn parallelized, it will finish faster (and in case of problems, it will be easier to stop). But of course, one way or the other, there will be a lot of waiting involved.
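That scheme, sketched in plain Python (threads as a stand-in for APO parallel processing profiles; the selection sizes and worker function are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def run_macro_on(selection):
    """Stand-in for one parallel work process handling one CVC selection."""
    return len(selection)  # pretend result: number of CVCs processed

all_cvcs = list(range(12_000))
# Three sequential jobs; each job's CVCs are split across parallel workers.
waves = [all_cvcs[0:4000], all_cvcs[4000:8000], all_cvcs[8000:12000]]

processed = 0
for wave in waves:  # one wave must finish before the next starts
    chunks = [wave[i:i + 1000] for i in range(0, len(wave), 1000)]
    with ThreadPoolExecutor(max_workers=4) as pool:  # parallel inside a wave
        processed += sum(pool.map(run_macro_on, chunks))
```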

Good luck!

J.

Former Member

Jokes apart, here's what it is:

Error COM ROUTINE 40010 for the last 10% of the CVCs.

The job in SM37 shows "Active" on screen, though there has been no further progress in the spool for several minutes now.

TSCONS and TSLCREORG are all good and green.

And the liveCache version is 7.7.07.40, in case this has any bearing. The OM17 DP time series check is also good.

Interestingly, this error is ONLY in the background spool, not in SDP94, which I just checked for a record that had the COM routine error.

Thanks, James: I will adopt your suggestions once I come out of this mess.

Former Member

Borat,

A short search of notes yields:

http://service.sap.com/~form/handler?_APP=01100107900000000342&_EVENT=REDIR&_NNUM=1645187&_NLANG=E

Jokes apart,

a) because no one is paying me for it

IT doesn't do testing; the business users do testing.

b) I would be ashamed to tell my grandchildren what I did for a living, and

Would you be ashamed if you told them you were fired for shutting down a factory?

c) I am not in the business of proving SAP wrong. I trust SAP because it's very expensive software made by the most perfectionist people in the world.

OK, this part really is a joke. I just couldn't resist....

Best Regards,

DB49

Former Member

Thank you, DB49, for staying on top of things and digging out this useful note.

I won't rebut further, because you are sincerely struggling to help me. I admire the gesture. As regards my chicken factory, I assure you it will be back to basics tomorrow, with pen, paper, and Excel, and I will be the first to advocate thus.

This note talks about large values in liveCache that cannot be handled by the application. I wonder why "values", of all things, is now a problem, on top of a gazillion don'ts, warnings, and intimidations.

So it is possible that a value in the liveCache grows with every write process

Is this abnormal? So be it. Why should this have to be the cause of pain in all the wrong places?

This note is nothing but SAP's design flaw, and I don't need to be SAP blueblood to conclude thusly. It's my company. I can plan demand of 1 billion eggs (yes, 1 followed by 9 zeros), and I don't give a damn where and how this value is stored and causes pain.

Dankeschön

BORAT

Former Member

By the way, this was COM error 40010 and not COM unknown. I could find only a couple of conditions for COM 40010, one of them being during the release of DP.

Collectively, it seems there are 1000+ notes on COM errors. Some of them should probably be valid for all releases, but it appears these notes were never reviewed against each release's restrictions. Note search is a real torture.

Former Member

This is really funny... from note 1397188:

Find CVC which has high value and delete the entire CVC

Some more jokes. This seems like the creation of a mischievous developer at SAP who probably had differences with his boss 🙂

you can use report /SAPAPO/TS_LCM_QUERY to find out the combinations with the tremendous high values. Then you can load the corresponding CVC in SDP94 and correct the value to a smaller number. BUT it might be that you cannot load the CVC in SDP94 as well because even there the values are too high. In this case you can use transaction /sapapo/tskeyfmain to set the high value bucket to zero.

If you are still unable to find the combination and time bucket which has the high value in livecache, please raise a SAP message with authorization to activate the livecache trace.

Former Member

Borat,

The note to which my link referred was specifically about your COM error. I didn't see anything in that note about 'large values'. Look again.

And yes, OSS search technology does leave a bit to be desired. You should bring this up at the next SAP user group meeting in your area.

Best regards,

DB49

Former Member

Apologies. I was muddled, with 25 notes open. Yes, THIS NOTE EXPLAINS IT ALL.

I GOT THE ISSUE NOW. Why didn't this pop up in my search? Anyway, a million thanks. I have the reason now. This was probably a 2000-dollar consulting service.

Thanks

BORAT.

Former Member

To alleviate the suffering of others, here I post the gist of it:

Make sure transaction /SAPAPO/OM17 and report /SAPAPO/TS_LCM_REORG are used at a time when no processes are running on the system. Then the return code 40.010 error can be avoided.

Please check in transaction STAD whether you ran /SAPAPO/OM17 during the same time as the job.

Refer to the note posted above by DB49 for the why.

Lesson: do not run consistency check jobs while DP background jobs are running. This is probably true for all other background jobs that access planning areas.

Former Member

Borat,

Your messages have become more colorful, with larger fonts, as the thread has progressed.

With respect to the $2000, you should consider donating something to a local charity that provides care for SAP analysts who have been driven crazy by their jobs.

Best Regards,

DB49

Former Member

Very true.

Maybe I need not look beyond the local psychiatric ward 🙂 to find a few of them. I know many who have become practically defunct for any other existential purpose, and money can't help them. God can.

Answers (1)


Former Member

Borat,

I will assume you have already done your job of searching through OSS, and that you have already investigated all the issues mentioned in

https://service.sap.com/sap/support/notes/546079 ,

https://service.sap.com/sap/support/notes/539848 ,

https://service.sap.com/sap/support/notes/864950 .

and all related notes. I will further assume that you have already consulted with your Basis colleagues, and that they have assured you there is no bottleneck at the LC, database, or OS level. I will further assume that you have already opened your emergency message with SAP (I believe I heard you mention 'factory down').

The answers to your questions depend in part on how this macro behaved when it was originally developed and tested, and how it behaves today in your test system. Another key bit of information would be its behavior when it was last run in production.

There is no single answer to your question three; there is no 'normal'. Each installation is different; each design is different. I can say that if I had a simple macro that ONLY added two key figures, with no other macros executing concurrently and no other processes accessing the planning area concurrently, I would expect better performance than you have described. I have seen month-end process chains (multiple steps) that take more than a day to run, but that is about the limit I have ever seen in a productive system.

It will be interesting to hear what SAP tells you when they answer your message.  Please keep us informed.

Best Regards,

DB49