
Export_too_much_data

Former Member

Hello everyone,

The SAP_COLLECTOR_FOR_PERFMONITOR job fails with a dump Export_Too_Much_Data. I followed Note 713211 for this problem.

The solution to the problem consists of 2 steps.

1. Implement the attached correction instructions.

2. If the above doesn't work (which it did not, for me), run RSUOAUDO and permanently delete the daily, weekly, and monthly segment history.

The note says: "Please consider seriously the effect of this lack of information for your administrative tasks."

What are the implications of doing Step 2? Is there some kind of reorg job that would allow me to circumvent this issue? I am getting this error on our production server, so experimenting is not really an option.

Thank you all for your help.

Kunal

Accepted Solutions (0)

Answers (1)


Former Member

Kunal,

That report is a reset button. It will indeed remove the data from the mentioned segments and give you a clean start. The intention of the 30-day trial run is to allow the SAP system to run its normal reorg jobs with the new default storage periods of 10 days, 6 weeks, and 4 months respectively.
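To picture how residence-based reorg works, here is a minimal sketch in Python. This is purely illustrative (the function name, data shape, and month approximation are my own, not SAP's reorg code); it just shows the idea of keeping history rows only for their segment's residence period of 10 days, 6 weeks, and 4 months:

```python
from datetime import datetime, timedelta

# Hypothetical illustration of residence-period pruning, not SAP's actual
# reorg logic. Periods are the defaults mentioned in the post; months are
# approximated as 30 days for simplicity.
RESIDENCE = {
    "daily": timedelta(days=10),
    "weekly": timedelta(weeks=6),
    "monthly": timedelta(days=4 * 30),
}

def prune_history(rows, now):
    """Keep only (segment, timestamp) rows still within their residence period."""
    return [
        (segment, stamp) for segment, stamp in rows
        if now - stamp <= RESIDENCE[segment]
    ]

now = datetime(2024, 1, 31)
rows = [
    ("daily", now - timedelta(days=3)),      # kept: within 10 days
    ("daily", now - timedelta(days=15)),     # pruned: older than 10 days
    ("weekly", now - timedelta(weeks=2)),    # kept: within 6 weeks
    ("monthly", now - timedelta(days=200)),  # pruned: older than ~4 months
]
print(prune_history(rows, now))
```

The point is that once the reorg runs regularly with these periods, the history tables can never grow without bound, which is what the 30-day trial is meant to establish.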

Has it been 30 days yet? It's a catch-22, I assume: running the report will remove all historical data, but continuing with the short dumps probably means you are not getting new data anyway. If you are running EWAs on production, you should be able to piece together your KPIs as needed, although it's not as comprehensive as ST03...

Cheers,

Tim

Former Member

Thanks Tim.

Well it hasn't really been 30 days yet, but the job did start running without the error almost a week after I had implemented the correction.

So basically the stages were

1. Job starts failing

2. Note implemented

3. Job keeps failing.

4. Job starts running successfully.

5. Job starts failing again.

I'm on Stage 5 and it hasn't been 30 days yet. So I'm looking at running the report as the next option.

We are running CCMS alerts from Solman (although no EWA). What surprises me is that the job fails only intermittently; there is no pattern to its failure. If the data being read by the collector job is bigger than 2 GB, why does it not fail the other times? The data being read does not change to that degree within a day or an hour.
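The intermittent behavior is consistent with a hard size limit: the same job succeeds or dumps depending on how much data that particular run happens to read. A minimal sketch of that mechanism (not SAP code; the function and exception names are my own, and the 2 GB figure is the limit discussed in this thread):

```python
# Hypothetical sketch of why a fixed size limit produces only
# intermittent failures: the run aborts only when this run's data
# crosses the threshold.
LIMIT_BYTES = 2 * 1024**3  # the 2 GB limit discussed in the thread

class ExportTooMuchData(Exception):
    """Stand-in for the EXPORT_TOO_MUCH_DATA short dump."""

def run_collector(bytes_to_export: int) -> str:
    if bytes_to_export > LIMIT_BYTES:
        raise ExportTooMuchData(
            f"{bytes_to_export} bytes exceeds the {LIMIT_BYTES}-byte limit"
        )
    return "collector run OK"

# A run just under the limit succeeds; a peak-load run just over it
# dumps, even though the data volume differs only slightly.
for size in (LIMIT_BYTES - 1, LIMIT_BYTES + 1):
    try:
        print(size, run_collector(size))
    except ExportTooMuchData as exc:
        print(size, "DUMP:", exc)
```

So the job failing "only sometimes" would simply mean the data volume hovers near the limit and occasionally crosses it.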

Former Member

Kunal,

Agreed that in the absence of any pattern it's difficult to troubleshoot. But the intermittency of the issue may still be associated with a pattern. What we do understand is that the job fails when the data to be read exceeds 2 GB. (Is this a Windows platform?) Many factors can contribute to this situation, including system type and usage patterns. For instance, this could happen if this were a BI system and the jobs fail around the time of data loads, or in an SCM or even ERP system when shift changes occur, when specific departments start their day running reports, or when the HR department does a payroll run.

You need to determine how much time to spend on this particular issue versus the other issues you have. Historically, however, there is a correlation here somewhere. I'm guessing that there is in fact some system activity out there that's pushing the record count over the top.
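A quick way to test for such a correlation is to bucket the job-failure timestamps by hour of the day and see whether they cluster. A small sketch of that troubleshooting step (the timestamps below are made up for illustration):

```python
from collections import Counter
from datetime import datetime

# Hypothetical failure timestamps, e.g. pulled from the job log.
failures = [
    datetime(2024, 1, 1, 2, 10),
    datetime(2024, 1, 2, 14, 5),
    datetime(2024, 1, 3, 2, 40),
    datetime(2024, 1, 5, 14, 55),
    datetime(2024, 1, 6, 9, 20),
]

# Count failures per hour of day; a spike in one or two buckets points
# at a recurring system activity (data loads, payroll runs, etc.).
by_hour = Counter(ts.hour for ts in failures)
for hour, count in by_hour.most_common():
    print(f"{hour:02d}:00-{hour:02d}:59  {count} failure(s)")
```

If the counts pile up in particular hours, the next step is to ask what runs on the system at those times.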

Alternatively, you could address this through system tuning. I'm not sure whether the EXPORT_TOO_MUCH_DATA error is caused by a buffer overflow or a memory-segment overflow (the 2 GB figure sounds to me like it's hitting the memory limit of a single work process). But again, tuning the system for this specific issue will affect the whole system, and you need to determine (1) whether you have the system resources to do it, and (2) whether you want every process in the system to have access to more resources; you may end up making overall performance worse just to tune one job.

I'd appreciate knowing how you make out in the end. I've seen this issue before myself, but have not yet seen it continue after the fix, and I'm sure I will run into this situation again.

Cheers,

Tim

Former Member

Tim,

Not sure if you recall this one. Apologies for the late reply, but I've been a little caught up and didn't get time to update the thread.

I took your advice, especially since every word of it made sense to me. I guess I was a little lazy and also a little short of time to look into this. But there was a pattern: the job failed only during certain times of the day and night, between 2 and 3, both AM and PM.

Basically, I started by working with ST06 to see if I could fix it via performance tuning, and I increased the EXPORT/IMPORT buffer a little, especially because I was seeing some swaps in it.

The good news was that the swaps disappeared; the bad news was that this did not fix the problem.

So finally I opened a message with SAP, and they came back with the same solution of running the report mentioned in the Note. I threw a few questions back at them, like how bigger installations deal with this problem since the data collected is much larger, to no avail.

So I guess the answer is: if Step 1 mentioned in the Note doesn't work, you HAVE to do Step 2.

I'm also posting the reply to my message from the developer at SAP:

"

the job SAP_COLLECTOR_FOR_PERFORMANCE_MONITOR is controlled by table

TCOLL. This table contains the reports which are regularly

scheduled. The report which collects DB02 data runs 2 times a day

(see report RSDB_TDB). Therefore, you are getting the dump twice a day.

The problem can only be solved by following the steps mentioned in

SAP-note 713211 and delete the data.

The problem can not be avoided by changing the settings for the

EXPORT/IMPORT parameter."

Kunal

Former Member

Hey Kunal.

No worries. Thanks for the feedback. I'll note the fix and be a bit more patient myself.

Cheers,

Tim