
Reducing time required for ABAP-only copyback (system copy) process

Former Member

Our company is investigating how to reduce the amount of time it takes to perform a copyback (system copy) from a production ABAP system to a QA system. We use a similar process for all ABAP-only systems in our landscape, ranging from 3.1H systems to ECC 6.0 ABAP-only systems on both DB2 and Oracle database platforms, and the process takes approximately two weeks of effort end to end (this includes time required to resolve any issues encountered).

Here is an overview of the process we use:

• Create and release backup transports of key system tables and IDs (via client copy) in the QA system to be overwritten (including RFC-related tables, partner profile and IDoc setup-related tables, scheduled background jobs, archiving configuration, etc.).

• Reconfigure the landscape transport route to remove the QA system from the transport landscape.

• Create a virtual import queue attached to the development system to capture all transports released from development during the QA downtime.

• Take a backup of the target production database.

• Overwrite the QA destination database with the production copy.

• Localize the database (performed by the DBAs).

• Overview of Basis tasks (for smaller systems, this process can be completed in one or two days, but for larger systems it takes closer to 5 days because of the BDLS runtime and the time it takes to import the larger transport requests and the user ID client copy transports; a rough scripted sketch of this sequence follows the list):

o Import the SAP license.

o Execute SICK to check the system.

o Execute BDLS to localize the system.

o Clear out performance statistics and scheduled background jobs.

o Import the backup transports.

o Import the QA client copy of user IDs.

o Import/reschedule background jobs.

o Perform any system-specific localization (example: for a CRM system with TREX, delete the old indexes).

• Restore the previous transport route to include the QA system back into the landscape.

• Import all transports released from the development system during the QA system downtime.
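To make the ordering concrete, here is a rough orchestration sketch of the post-restore Basis tasks and transport imports described above. It is only an illustration: run_sap_step is a hypothetical stub for steps that are really performed as SAP transactions or background jobs, the transport request numbers and tp profile path are placeholders, and the tp invocation is simplified.

#!/usr/bin/env python3
"""Rough orchestration sketch of the post-restore Basis tasks listed above.
The SAP-side helpers are hypothetical stubs; in practice these steps run as
transactions or background jobs (license install, SICK, BDLS, job rescheduling)."""
import subprocess

TP_PROFILE = "/usr/sap/trans/bin/TP_DOMAIN_QAS.PFL"  # placeholder profile path
TARGET_SID = "QAS"      # placeholder QA system ID
TARGET_CLIENT = "100"   # placeholder client

def run_sap_step(description: str) -> None:
    # Hypothetical stub: record/trigger a step that is really performed in SAP.
    print(f"[SAP step] {description}")

def import_transport(request: str) -> None:
    # Simplified tp call to import one transport request into the QA system.
    subprocess.run(
        ["tp", "import", request, TARGET_SID,
         f"client={TARGET_CLIENT}", f"pf={TP_PROFILE}"],
        check=True,
    )

def main() -> None:
    run_sap_step("Install the SAP license")
    run_sap_step("Run the installation check (SICK)")
    run_sap_step("Run BDLS to convert logical system names")
    run_sap_step("Clear performance statistics and released background jobs")
    # Backup transports and the user ID client copy created before the overwrite
    for request in ["QASK900001", "QASK900002"]:   # placeholder request numbers
        import_transport(request)
    run_sap_step("Reschedule background jobs")
    run_sap_step("System-specific localization, e.g. delete stale TREX indexes")

if __name__ == "__main__":
    main()

In practice each stub would be replaced by whatever your team uses to drive these steps (manual checklists, scheduled jobs, or an automation tool).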

Our company's procedure is similar to the procedure demonstrated in this 2010 TechEd session:

http://www.sapteched.com/10/usa/edu_sessions/session.htm?id=825

Does anyone have experience with a more efficient process that minimizes the downtime of the QA system?

Also, has anyone had a positive experience with the system copy automation tools offered by various companies (e.g., UC4, Tidal)?

Thank you,

Matt

Accepted Solutions (1)

sunny_pahuja2
Active Contributor

Hi,

Your process is right. But why is one system taking 2 weeks of time? What is your database size?

Thanks

Sunny

Former Member

Hi, Sunny.

One system that immediately comes to mind has a database size of 2TB. While we have reduced the copyback time for this system by running multiple BDLS sessions in parallel, that process still takes a long time to complete. Also, for the same system, importing the client copy transports of user IDs takes about 8 hours (one full workday) to complete.

The 2 weeks also includes time to resolve any issues that are encountered, such as issues with the database restore/localization process or issues resulting from human error. An example of human error could be forgetting to request temporary IDs to be created in the production system for use in the QA system after it has been initially restored (our standard production Basis role does not contain all authorizations required for the QA localization effort).

Matt

sunny_pahuja2
Active Contributor

Hi,

> One system that immediately comes to mind has a database size of 2TB. While we have reduced the copyback time for this system by running multiple BDLS sessions in parallel, that process still takes a long time to complete. Also, for the same system, importing the client copy transports of user ID's takes about 8 hours (one full workday) to complete.


For BDLS run, I agree with Olivier.

> The 2 weeks time also factors in time to resolve any issues that are encountered, such as issues with the database restore/localization process or issues resulting from human error. An example of human error could be forgetting to request temporary ID's to be created in the production system for use in the QA system after it has been initially restored (our standard production Basis role does not contain all authorizations required for the QA localization effort).


For the issues that you encounter because of the system copy, you can minimize this time, since you will be doing it on a periodic basis: make a task list and note the issues you faced in previous runs. So, normally I don't count this as system copy time.

Thanks

Sunny

Answers (1)

Former Member

Hi,

In our experience, the part of the procedure consuming the longest time is the BDLS run.

It used to take up to 48 hours of runtime for an ECC6 database size of around 2.4 TB.

After an optimization project, we now run BDLS in 26 parallel sessions (one run for all table names beginning with a given letter).

BDLS runtime is now around 7 to 8 hours, which is much better (one night of runtime).

Our system copy duration is now 2 to 3 days.
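For anyone scripting the split described above, a minimal sketch of the 26-way partitioning follows. Only the one-run-per-leading-letter idea comes from this thread; how each per-letter run is actually submitted (for example as a background job with a table-name-range variant of the BDLS conversion report) is site-specific, so submit_bdls_range here is a hypothetical stub.

"""Sketch of splitting one BDLS conversion into 26 parallel runs,
one per leading letter of the table name, as described above."""
import string
from concurrent.futures import ThreadPoolExecutor

def submit_bdls_range(table_mask: str) -> None:
    # Hypothetical stub: schedule one BDLS background job restricted to the
    # given table-name range (e.g. via a variant of the conversion report).
    print(f"Submitting BDLS run for tables matching {table_mask}")

def run_parallel_bdls(max_parallel: int = 26) -> None:
    # One run per letter A..Z; each job touches only tables whose names start
    # with that letter, so the runs do not overlap. Tables outside A..Z
    # (e.g. namespace tables starting with "/") would need a separate run.
    masks = [f"{letter}*" for letter in string.ascii_uppercase]
    with ThreadPoolExecutor(max_workers=max_parallel) as pool:
        list(pool.map(submit_bdls_range, masks))

if __name__ == "__main__":
    run_parallel_bdls()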

Hope this helps.

Regards,

Olivier

Former Member

Thank you, Olivier. That is an impressive improvement, and I will pass it along to our Basis team.

Former Member

Hello Matt,

I too agree with Olivier on parallel BDLS. I followed the steps in the blog shown below and saw a significant difference in runtime with the parallel option.

It should also help you to reduce the overall system copy time (end to end).

Thanks,

Siva Kumar

Former Member

Hi Matt,

I was not a member of the team working directly on this, but I know, as we are using Oracle as the DB, that there were problems dealing with undo segments and archive logs. I think they have now decided to run BDLS with no log archiving.
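As a small illustration of the database side only, the sketch below reads the Oracle archive log mode and current undo extent usage before a long BDLS run. It assumes the cx_Oracle client and uses placeholder connection details; it deliberately does not switch the database to NOARCHIVELOG, since that requires a restart in mount state and remains a DBA decision.

"""Pre-BDLS check of the Oracle archive log mode and undo extent usage.
Connection details are placeholders; run with a suitably privileged user."""
import cx_Oracle

def check_before_bdls(dsn: str, user: str, password: str) -> None:
    conn = cx_Oracle.connect(user, password, dsn)
    try:
        cur = conn.cursor()
        # Current redo log mode: ARCHIVELOG or NOARCHIVELOG
        cur.execute("SELECT log_mode FROM v$database")
        print("log_mode:", cur.fetchone()[0])
        # Undo extents by status, to see how much undo a long run is holding
        cur.execute(
            "SELECT status, ROUND(SUM(bytes)/1024/1024) "
            "FROM dba_undo_extents GROUP BY status"
        )
        for status, mb in cur:
            print(f"undo {status.lower()}: {mb} MB")
    finally:
        conn.close()

if __name__ == "__main__":
    check_before_bdls("qashost:1521/QAS", "system", "********")  # placeholders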

Regards,

Olivier