
SXMS_PF_AGGREGATE report taking long time to execute

former_member228109
Participant

Hello Experts ,

I scheduled the SXMS_PF_AGGREGATE report as a background job a long time ago (around 20 days), but it has not finished yet. When I check the job log, it says that n aggregates have been created so far, so it is working. But I want to know when it will finish.

Thanks in advance.

Regards ,

Nikhil Save

Accepted Solutions (1)


AnilDandi
Active Participant

Hi Nikhil

There are probably a lot of records to process. Try reorganizing the SXMSPFRAWH table and deleting data older than, say, 15 days. The reorganization can be done with the SXMS_PF_REORG report. After the cleanup is done, run the aggregation and schedule the reorg as a periodic job. Please see this blog post: Deleting SXMSPFRAWH Data in SAP XI | SAP NW Newbie

former_member228109
Participant

Hi Anil ,

I have followed the link up to point 4. After that, the link says to schedule the jobs hourly. But tell me one thing: if the single job I scheduled is not finishing, what difference does it make to schedule the same job hourly?

Regards ,

Nikhil Save

AnilDandi
Active Participant

Hi Nikhil,

The expectation after completing point 4 is that these reports will now be working on a smaller amount of data, so subsequent runs will take less time.

Do you still see the report taking long (after the cleanup)? If so, you may have to run a trace, or observe whether the work process is executing an SQL statement that needs tuning.

The blog also mentions that if these steps do not work, you may have to implement SAP Note 1082836, after consulting SAP.

regards,

Anil

former_member228109
Participant

Hi Anil ,

After completing everything up to step 4, I scheduled SAP_XMB_PERF_AGGREGATE in the background yesterday, as a single job rather than hourly. It is still running, and the job log shows it has not yet started aggregating anything.

Should I now schedule both jobs together on an hourly basis, as suggested in the link? Will that make any difference? I don't think it will, because if a single job does not finish within an hour, how will the hourly jobs finish?

Please advise.

Regards ,

Nikhil Save

AnilDandi
Active Participant

Hi Nikhil,

Do you see what this job is doing (from SM50 or SM66)? Any table access or update?

If the job runs for more than an hour, scheduling it hourly will not help. The job runtime should first be brought down, and then it can be scheduled to run every hour.

regards

Anil

former_member228109
Participant

Hi Anil ,

I have checked in SM50. It is updating the SXMSPFRAWH table.

Regards ,

Nikhil Save

AnilDandi
Active Participant

Hi Nikhil,

What is the SQL command and its explain plan?

Please check whether the number of records in the table SXMSPFRAWH decreased after the reorg. If it did not reduce much, consider checking SAP Note 1677786 - Reorganization of component data records in XI/PI.


Also check if SAP note 2133218 is applicable.
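A rough way to verify the reorg's effect is a plain record count on the database (a sketch only; in practice you would check via SE16 or DB02, and the `:cutoff` bind variable is a hypothetical timestamp in the table's LASTTS format):

```sql
-- Count the remaining raw header records; compare this figure before
-- and after running SXMS_PF_REORG to see how much the reorg removed.
SELECT COUNT(*) FROM SXMSPFRAWH;

-- Hypothetical age check: how many records are older than the retention
-- cutoff (:cutoff must be supplied in the LASTTS timestamp format).
SELECT COUNT(*) FROM SXMSPFRAWH WHERE LASTTS < :cutoff;
```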


regards

Anil

former_member228109
Participant

Hi Anil ,

>>What is the SQL command and its explain plan?

I did not get your question.

The number of records in the table SXMSPFRAWH did decrease after the reorg: around 8 million (80 lakh) records were deleted out of roughly 50 million (5 crore).

So should I continue this job until it finishes?

Regards  ,

Nikhil Save

AnilDandi
Active Participant

Hi Nikhil,

I am assuming that the database is Oracle. While the job is running, see whether any SELECT query is executing: double-click the work process to get the SQL query that is running, and copy it. Then call ST04, go to Diagnostics > Explain, paste the SQL, and click Explain.
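The same plan can also be produced directly in sqlplus; the WHERE clause below is only a placeholder, since the real statement has to be copied from the work process:

```sql
-- Ask Oracle for the execution plan of the captured statement.
-- Replace the SELECT below with the actual SQL from the work process.
EXPLAIN PLAN FOR
  SELECT * FROM SXMSPFRAWH WHERE SOURCE = :src AND LASTTS < :ts;

-- Display the plan that was just generated.
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```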

Also, if it is an Oracle DB, update the statistics for the table SXMSPFRAWH using the following command:

brconnect -c -u / -f stats -t SXMSPFRAWH -f collect -p 3

Yes, please let the job complete.

regards

Anil

former_member228109
Participant

Hi Anil ,

Thanks .

Yes, the database is Oracle.

I double-clicked the work process in SM50 but did not see any SQL query.

Regards ,

Nikhil Save

AnilDandi
Active Participant

Hi Nikhil,

From the screenshot, I see that the report has taken a lot of time to read the records. I suspect the issue is with the SQL query or with slow disks.

Check ST06 > Snapshot Current Data > Disks. See if there is high Resp (ms) against any disk.

Please call ST04 > Performance > SQL Statement Analysis > Shared Cursor Cache. Click the tick mark, search for the SELECT statement on SXMSPFRAWH, and click Explain. Unless we have the explain plan, we can't know why it is taking so long.

regards

Anil

former_member228109
Participant

Hi Anil ,

Thanks for the explanation.

Please find the screenshots below.

>>Check ST06 > Snapshot Current Data > Disks. See if there is high Resp (ms) against any disk

>>Please call ST04 > Performance > SQL Statement  Analysis > Shared Cursor Cache. Click on tick mark. Search for select statement on SXMSPFRAWH  and click in Explain.

In total, four SELECT queries are running.

Regards ,

Nikhil Save

AnilDandi
Active Participant

Hi Nikhil,

Utilization on the disks sda10 and sda is high. If one of these holds the swap or paging file, your system may be running low on RAM. If the disk is related to the database, then the physical I/O is very high. You may want to create an index on the table SXMSPFRAWH against the columns SOURCE, LASTTS, STATUS and COMPONENTID. But before you create this index, update the table's statistics and see whether the job run improves. Also check whether these columns (or most of them) are already indexed, and look for missing indexes using transaction DB02.
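As a sketch, the suggested index could look like the statement below. The index name `SXMSPFRAWH~Z01` and the column order are assumptions; in an SAP system a secondary index should be created through SE11 so it is registered in the ABAP Dictionary, and the column order should be checked against the actual WHERE clause from the explain plan:

```sql
-- Hypothetical secondary index on the aggregation report's filter columns.
-- Create via SE11 in practice; verify column order against the explain plan.
CREATE INDEX "SXMSPFRAWH~Z01"
  ON SXMSPFRAWH (SOURCE, LASTTS, STATUS, COMPONENTID);
```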

Answers (1)


iaki_vila
Active Contributor
former_member228109
Participant

Hi Inaki ,

The symptoms mentioned in the note (in transaction SM50, you notice that the system executes a large number of SELECT statements on the database table SXMSPFAGG) are not present in my case. Also, let me tell you one thing: it is creating aggregates properly. The only issue is that it has been running for a long time. The tables SXMSPFRAWH and SXMSPFRAWD have around 50 million (5 crore) and 310 million (31 crore) records respectively. Could that be the reason?

Regards ,

Nikhil Save