
Client copy of a database with some large tables

steven_glennie
Explorer
0 Kudos

Hello Group,

I am currently performing a client copy as part of an R/3 4.6C test system refresh.

We used parallel processes, which does speed the client copy up, but the largest tables will always take the longest to copy. It appears to be still processing our last table, which is 90 GB in size. It is currently performing a sequential read on this table, and the only way I can see that it is doing anything is that it is using CPU and I/O in WRKACTJOB. The client copy has now been running for 2.5 days, and we have a fast machine (a 570 with 5 CPUs, 70 GB RAM and 4 TB DASD with 50% free).

Is anyone familiar with the process, or does anyone know of any other way to work out how much longer this process will take, or to check whether it is actually doing something? Any advice would be much appreciated.

Any other experiences / tips would be welcomed. We have been looking at third-party software like Gold Client, as this process will get longer and longer as the database grows and funding for archiving projects is being withdrawn.

Best regards,

Steven

Accepted Solutions (0)

Answers (4)


Former Member
0 Kudos

Hi Steven,

You could use a client transport instead of the homogeneous copy if you have problems with that. That is FAR faster than a remote copy.

Unfortunately, the 4.6D kernel has a 2 GB file limit and therefore needs the following parameter:

filesplit = 2000000000

(in tpparam or tp_domain.pfl; see SAP Note 86535)

... and with filesplit, the compression ratio shrinks from roughly a factor of 10 down to about 2 ... but with only 50% of the 4 TB used, that is no issue if the trans directory is on that host.

Then the export should not take too long - one day, I would guess. The import can be done separately later, and it no longer needs any "clean time" for a consistent state :-)

Regards

Volker Gueldenpfennig, consolut.gmbh

http://www.consolut.de - http://www.4soi.de - http://www.easymarketplace.de

steven_glennie
Explorer
0 Kudos

Thanks to all for your responses; I will certainly research the suggestions.

The local client copy is on the last and largest table, but I believe that there may be some sort of limit that is stopping the copy from completing. From SM50 I could see that it was performing sequential reads and then updates. Now, though, it is stuck permanently on sequential read and has been like this for 12 hours.

Are any of you aware of any limits on table size / number of records, or any other reason that may cause this?

With regards

Steven

Former Member
0 Kudos

Hi Steven,

A constant sequential read is totally normal here!

That it takes pretty long is normal as well. There should be no limits. I would suggest a DSPFD of that table - you should see the number of rows change every hour or so (not every second).
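For example, something like this (the library name is only an example - use the data library of your own system):

DSPFD FILE(R3PRDDATA/GLPCA) TYPE(*MBR)

The "Current number of records" value in the member description should keep growing while the client copy is working on the table.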

Regards

Volker Gueldenpfennig, consolut.gmbh

http://www.consolut.de - http://www.4soi.de - http://www.easymarketplace.de

steven_glennie
Explorer
0 Kudos

Hi Volker,

I have been using something similar to monitor. I was using DSPFM and going to the bottom (command B), which gives "Current number of records = 107051853", just as DSPFD does, but it has been like this for 12 hours. Can you confirm my understanding that rows and records are the same thing, as the DSPFD output did not mention rows?

Many thanks

Steven

steven_glennie
Explorer
0 Kudos

Hi Volker,

Apologies, Volker - I just noticed that the last change date has changed since the last time I looked. It took 1.5 hours before it changed, but I could actually see that the record count had changed, though only by 16,500 records. At this rate it may not complete before Christmas!! I guess I will just have to be extremely patient.

Many thanks

Steven

Former Member
0 Kudos

Hi Steven,

I noticed similar issues when running one "delete" statement to remove millions of records. It started by removing about 1,000 records/sec, then ran VERY slowly after a certain point. It also took a while to roll back when I killed the job. Eventually I split the data up with WHERE clauses and added a "commit work" in between.

But it may not be that easy to change the way SAP does a local client copy...

Back to data archiving, or other ways of trimming the data first?

Even if the client copy failed, you may still be able to log on to that client and test other functions.

Good luck,

Victor

steven_glennie
Explorer
0 Kudos

Thanks, Victor. I will definitely be looking at archiving in TST, as GLPCA is one of the easiest tables to archive since it does not have any related / dependent archiving objects.

I have noticed something really strange, though, regarding the record count from DSPFD r3prddata/glpca.

As I have been monitoring this closely, I spotted that the count is increasing by exactly 16,384 records every time the details are updated for GLPCA.

Records        Time    Change in records

107,051,853    03:00
107,068,237    04:30   16,384
107,084,621    06:20   16,384
107,101,005    08:11   16,384

Does anyone have any idea why this might be, as it seems strange to me?

If this table doesn't complete for several weeks, or if I cancel the client copy job, what issues would be caused by letting people log on to the system with one table not fully copied? What post-processing is done, if any, after the last table is copied? I need to determine whether this client can be used or not with an incomplete client copy.

With regards,

Steven

Former Member
0 Kudos

Hi Steve,

GLPCA is also why we started the data archiving project.

If this would be your first data archiving project, the following blog may help.

/people/sap.user72/blog/2006/07/20/technical-setting-for-data-archiving

If the client copy job is cancelled, the rollback stage might remove all the records added after the last "commit work". In the end, users can post documents without issues, but they may not be able to check historical documents from GLPCA. We used to run CLRPFM on R3TSTDATA/GLPCA after a database refresh and just tell the controlling group that the history data is not copied to TST - check PRD if needed.

Thank you,

Victor

steven_glennie
Explorer
0 Kudos

Hi Victor,

Thanks again for the info. We actually performed the CLRPFM on some other large tables before the client copy.

Does anyone know what would happen if we stopped the client copy, stopped the SAP system, stopped journalling on the GLPCA file, performed a CLRPFM on the GLPCA table, and then started the SAP system and the client copy again in restart mode?

My thinking is that the client copy would see that GLPCA still had to be processed, process it quickly since it is now empty, and then proceed with the post-client-copy steps and complete.

Any advice would be much appreciated, as letting it run for weeks is not really an option for us - we need users to access this system ASAP in preparation for integration testing.

With regards,

Steven

steven_glennie
Explorer
0 Kudos

Hello All,

Just to give you an update:

1. The client copy background job was stopped.

2. The SAP system was stopped.

3. Journalling was stopped for the GLPCA file.

4. A CLRPFM was performed on GLPCA (see the CL sketch after this list).

5. Journalling on the GLPCA file was started again.

6. The SAP system was started.

7. The client copy was started again in restart mode.

8. After 30 minutes the client copy completed successfully.
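For reference, steps 3 to 5 correspond roughly to the following CL commands. The library name is the one used earlier in this thread, while the journal name and the IMAGES value are assumptions - the file must be re-journaled to whatever journal GLPCA was attached to before (often QSQJRN in the data library):

/* Step 3: end journaling for the file */
ENDJRNPF FILE(R3PRDDATA/GLPCA)

/* Step 4: clear the physical file member */
CLRPFM FILE(R3PRDDATA/GLPCA)

/* Step 5: start journaling again (journal name and image option are assumptions) */
STRJRNPF FILE(R3PRDDATA/GLPCA) JRN(R3PRDDATA/QSQJRN) IMAGES(*BOTH)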

Many thanks to all of you who posted on this. I'm still not sure why the client copy was having a problem with the GLPCA table, but your help allowed me to decide on the above actions and overcome the roadblock I had.

Many thanks & regards,

Steven

Former Member
0 Kudos

Hi Steven,

By the way: steps 3 & 5 were not necessary, as CLRPFM doesn't write the cleared rows into the journal anyway ...

Regards

Volker Gueldenpfennig, consolut.gmbh

http://www.consolut.de - http://www.4soi.de - http://www.easymarketplace.de

Former Member
0 Kudos

Just for future reference: you can exclude tables from client copies using the program RSCCEXPT. For tables which are huge but mean nothing in the target client this works great. Just remember to clean the exclusion list up after the client copy so you don't accidentally mess up future client copies!

Andy

Former Member
0 Kudos

Hello Steven,

Then you can check with SCC3 which tables are currently being processed, and with DSPFD on the green screen you can see the total number of records.

Regards

Guido

Former Member
0 Kudos

Hello Steven,

You should rebuild the clients with local client copies: SCCL with the appropriate profile.

If you need specific client data from the original test system, you should have performed client exports.

A remote copy with such a DB size will not succeed.

Regards

Guido

steven_glennie
Explorer
0 Kudos

Hello Guido,

I am rebuilding the clients with SCCL. The first copy has been running for 2.5 days and is still apparently running. I am looking for ways to verify that the job is actually doing something and has not frozen.

The only indication I have is that the job is using CPU in WRKACTJOB, and if I look at the threads related to it I can see I/O increasing.

With regards

Steven

Former Member
0 Kudos

Hello Steven,

For the largest tables, I would run data archiving jobs to trim the data first. Obsolete data likely doesn't add much to the testing procedure anyway.

Another, more aggressive approach is to execute SQL statements to trim the data. It may result in DB2-consistent but not ABAP-consistent data, and special attention is needed for a BIG delete without frequent commits. The advantage is that you can do it without the functional people having to configure retention periods within SAP.

Thank you,

Victor

Former Member
0 Kudos

Hello Steven,

The fastest, and really the only feasible, way would be a homogeneous system copy with the SAVLIB/RSTLIB method.

But you have to take version management into account: it must be saved and later reimported - see note 130906.

After the system copy you can run local client copies to rebuild your client landscape, and/or run BDLS to correct logical systems.

Have a look at the guide for homogeneous system copy:

http://service.sap.com/~sapidb/011000358700002949502001E/R3HOM.PDF
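Very roughly, the database part of that method comes down to saving the SAP data library on the source system and restoring it on the target. This is only a sketch - library and device names are placeholders, and the full procedure (library renaming, version management per note 130906, and the post-steps) is described in the guide above:

/* On the source system: save the data library (device name is a placeholder) */
SAVLIB LIB(R3PRDDATA) DEV(TAP01)

/* On the target system: restore it, optionally into a renamed library */
RSTLIB SAVLIB(R3PRDDATA) DEV(TAP01) RSTLIB(R3TSTDATA)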

Regards

Guido

steven_glennie
Explorer
0 Kudos

Hello Guido,

Thanks for your response.

I did perform the homogeneous system copy, but I now need to recreate the same client that was in the test system before, and create two additional clients for training.

With regards,

Steven