Application Development Discussions

Experience with SU24 synchronization report - performance tricks?

Former Member
0 Kudos

Dear gurus,

I am currently using the DOWNLOAD and UPLOAD functions in SU24 to synchronize my USOB* data throughout an ECC 6.0 landscape.

Unfortunately some things were maintained in PROD and others in DEV, and some in SU22 and others in SU24. So I manually corrected DEV so that it holds all the settings, taking the most current (or, where they differed, the better) values.

Here comes the problem: the upload report uses lock entries to enqueue all the chosen objects (select-options FROM -> TO) and then replaces all the entries found in the downloaded .txt file. The system has a default enque/table_size of 4096 entries, but SAP delivers (in my system) 500k entries in USOBX and 250k in USOBT, plus we have many additions of our own in USOBX_C and USOBT_C... so you cannot do a mass synchronization with the report in one go.

The main bugger is that each locked object may have just one entry or many entries in the tables, so I cannot efficiently use the select-options in batches of 4000 and call the whole issue a stress-free afternoon...

My current strategy is to estimate them into batches of about 50k entries each, and then replace the FROM field of the select-options with the first entry shown in the lock popup. With 5 sessions open this gives me about 3 hours of work, with 10-minute intervals in between while the system dequeues the entries that were not processed.
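For what it's worth, the batch-estimation logic can be sketched outside the SAP system entirely, e.g. in plain Python. Everything here is an illustrative assumption (the object names, the counts, and treating the 4096 limit as a per-batch entry budget), assuming you have already exported the object names with their USOB* entry counts:

```python
# Sketch: split a sorted list of (object_name, entry_count) pairs into
# FROM -> TO ranges whose total entry count stays under the lock-table
# limit, with a safety margin. Purely illustrative - not an SAP API.

def make_batches(objects, lock_limit=4096, safety=0.9):
    """Group sorted (name, count) pairs into ranges under the lock limit."""
    budget = int(lock_limit * safety)
    batches, current, used = [], [], 0
    for name, count in objects:
        # Flush the current range before it would exceed the budget.
        if current and used + count > budget:
            batches.append((current[0], current[-1]))  # FROM -> TO
            current, used = [], 0
        current.append(name)
        used += count
    if current:
        batches.append((current[0], current[-1]))
    return batches

# Hypothetical counts, just to show the shape of the output:
demo = [("S_TCODE", 3000), ("S_USER_GRP", 1500), ("Z_CUSTOM", 800)]
print(make_batches(demo))
# -> [('S_TCODE', 'S_TCODE'), ('S_USER_GRP', 'Z_CUSTOM')]
```

Each tuple would then become the FROM/TO pair for one run of the upload report.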

Has anyone ever done or encountered this before? Any other performance tricks or tuning which can be done with the currently available tools?

Cheers,

Julius

1 ACCEPTED SOLUTION

Private_Member_119218
Active Participant
0 Kudos

Julius,

I'll go ahead and ask the obvious question, so don't take this the wrong way.

Can't you just abuse the transport system and haul the tables around? (I've never done it, but it seems like the obvious solution.)

11 REPLIES

Former Member
0 Kudos

Okay, I found a better way to optimize it further - but still not perfect:

The select-options in the upload function let you import from the clip-board.

Via SE16N you can select all the records from USOBT_C which have not been modified yet, and then filter on the object NAME field for those not yet uploaded. The F4 search help within the filter shows each entry only once!

The only snag is that the enqueue process locks both USOBX_C and USOBT_C, so there might be entries in X but not in C. If you divide the value of enque/table_size by 2, you should be on the safe side and through in about 2 hours.
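The halving rule is just arithmetic, but it can be written down explicitly (a sketch, assuming exactly the two customer tables are locked per object):

```python
# Sketch: each object locks rows in both USOBX_C and USOBT_C, so the
# usable budget per batch is roughly enque/table_size divided by the
# number of tables locked. Values are the ones from the thread.

def safe_batch_entries(enque_table_size=4096, tables_locked=2):
    """Entries you can safely enqueue per batch across both tables."""
    return enque_table_size // tables_locked

print(safe_batch_entries())        # -> 2048 with the old default
print(safe_batch_entries(16384))   # -> 8192 after raising the parameter
```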

Cheers,

Julius

0 Kudos

I have this down to under an hour now for the 750 thousand entries using 5 sessions open in parallel.

Be careful not to lose your place in the SE16N search help. It is restricted to the first 5000 entries displayed. Unfortunately you cannot influence that... but you can choose about 2300 transactions quite safely without hitting the default lock limits.

In PROD the limit was set to 14 thousand, so 10 runs in total do the trick (the limiting factor is the SAP GUI runtime and not the enqueue server, as obviously this is not going to run in the background...)

I deleted all the profiles of a copied large role for a test in PROD and opened it again in read old / merge new mode. Bingo... -> all lights still green and "old" status.

I will leave this open for a little while anyway, in case anyone who has performed a USOB* sync found a better / faster way. I seem to have reached my limit here.

Cheers,

Julius

ps: Most of the work was actually to fix SU24 in DEV so that nothing would be lost or broken. Only transports and no changes in PROD from now on.

mvoros
Active Contributor
0 Kudos

Hi,

> Here comes the problem: The upload report uses lock entries to enque all the objects chosen (select-options FROM -> TO) and does a replace of all the entries found in the downloaded .txt file. The system has a default enque/table_size of 4096 entries

This value is ridiculously small for current systems. SAP changed the default to 32 MB as of 7.01 (Note 1386841). I would suggest raising it if your system is below 7.01.

Cheers

Former Member
0 Kudos

I am on 7.00... Thank you for the note! The basis guys here were happy to see it.

I still have the SAP GUI runtime constraints (you cannot upload in the background...), and the option of parallel processing using the SE16N-based F4 search help is useful to split the work into sessions. We have already changed enque/table_size to 16384 for good measure, and if I use 10 sessions then it should only take about 20 minutes or so.

Keeping the time down to a minimum is still a requirement, as we need to open the system and pull an emergency user with developer rights for the upload function to work...

Cheers,

Julius

mvoros
Active Contributor
0 Kudos

> I still have the SAPGui runtime constraints (you cannot upload in the background..

Yes, you can, if you have a skillful developer. SU22 uses the reports RSU22DOWN and RSU22UPLD. It's pretty straightforward to modify them (make a Z-copy) to allow downloading/uploading from the application server. You just need to replace the FMs GUI_DOWNLOAD/GUI_UPLOAD in the routines DOWNLOAD_DATA and UPLOAD_DATA_FROM_FRONTEND.

Cheers

Former Member
0 Kudos

Yes, I have read Rob's blogs and also talked to a developer here - but if I can get it down to under an hour, then custom development won't be any faster.

It is a once-off sync.

Another constraint and tweak I noticed is that multiple sessions don't help the enqueue server, as there is only one using main memory, and you have to first use wildcards (A, B, /* etc...) and watch the timestamps before using the SE16N search help. USOBT_C and USOBX_C will not necessarily contain all the entries in PROD - so I have all the local changes recorded in DEV and will transport them through afterwards. Ideally this should not be required, but it is a good measure anyway.

S* in particular is tricky. Do that one using the SE16N search help, because if you hit the limit even once you can take a long lunch while the system dequeues and rolls back, and all other updates from users relying on locks are blocked.

Using multiple entries to prepare the clipboard data is still a big help and saves a lot of time.

If you want to sync your production system, then you have to consider this (locks held by other users).

If you need to do it, then you have to carefully correct the DEV system before hand!

It also makes a lot of sense to correct the SU25 Step 2 data as well, if it has not been maintained consistently and you plan to use it in future. If you plan to sync your environments, then fixing SU25 at the same time is an opportunity not to be missed!

Cheers,

Julius

Edited by: Julius Bussche on Jan 29, 2010 12:04 AM

Former Member
0 Kudos

Okay, I am going to close this. I have it down to 40 minutes for 850k entries. I import the transports for the SU24 changes and SU22 corrections afterwards!

Do S* and all remaining non-alphabet ranges using the SE16N F4 search help trick. Run the alphabet through 25 times - once for each other letter first. Depending on your Z* and Y* entries, you might want to split those as well. Unlikely...

There is another memory-related "gotcha" for releases lower than 6.40 (or higher ones where a new install was not subsequently done). This is likely to affect a PROD system! The system does some nifty footwork before the upload to verify that your own roll area is at least double the flat-file size. This can be capped via the RZ10 parameter ztta/max_memreq_MB, which might still have an older default. As the file for a full sync can be expected to be in the range of 70 MB, the default of 125 MB is not enough, and the upload function dumps upfront to prevent the sort order from causing inconsistencies. This is particularly critical if you choose a large range or * and first hit the above-mentioned overflow of the lock table, which it also needs.
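The pre-check described here boils down to simple arithmetic. This is my paraphrase of the behaviour as described in the post, not an SAP-documented formula; the numbers are the ones mentioned above:

```python
# Sketch of the memory pre-check: the upload needs roughly twice the
# flat-file size in roll area, capped by ztta/max_memreq_MB.
# Semantics paraphrased from the thread - illustrative only.

def upload_fits(file_size_mb, max_memreq_mb=125):
    """True if the ~2x file-size memory requirement fits under the cap."""
    return 2 * file_size_mb <= max_memreq_mb

print(upload_fits(70))        # a ~70 MB full-sync file dumps with the old 125 MB default
print(upload_fits(70, 256))   # fits after raising ztta/max_memreq_MB
```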

How many skillful developers will spot that in their copy of the reports (it is a kernel function...), I cannot tell.

Tricky stuff! If you want to or discover the need to do this, then show your basis guys this thread first.

Cheers,

Julius


0 Kudos

It's okay. I have a thick skin

Yes, we will always transport in future, but those are individual changes made in SU24 only.

I have not used the option of uploading all the data (including SU22 data!) into a transport request either. I looked at it, but we are a bit behind on support packages here, and the upload with SU2X_SAVE_DATA_TO_DB seemed more reliable than a transport recording.

I will do a test run with the transport option in my sandbox and let you know what the differences are.

Cheers,

Julius

Bernhard_SAP
Employee
0 Kudos

Hi Julius,

please have a look at note 1429716, which provides the possibility of 'synchronizing' SU24 data from several systems by means of transport. This transport will take considerably more time than the normal transport (SU25 -> 3.). (For instance, in a test system the export of about 120,000 logical transport objects took about 1 hour...)

b.rgds, Bernhard

0 Kudos

Thanks for the info, Bernhard!

I have 824 thousand entries. It has been 2 hours of watching the egg-timer and this thread to see which one moves first...

We plan to do this on a weekday evening after the users have gone home, so I need to get it down to under an hour.

I also noticed that the "Add to transport request" option in the upload function also locks the objects first, and even a 16MB setting is not enough to do it in one go. It stops halfway through C and I need to get to Z.

The combination of SE16N search-help ranges to reduce the locks, the adjusted enque/table_size, and 5 open sessions still seems to be the fastest and easiest.

Cheers,

Julius