
BDLS runtime/performance

Former Member

Hello - has anyone come up with ways to improve BDLS runtime/performance? We have a 4.7 system on Oracle with about 8 TB allocated. BDLS is taking a long time... over 1 day and still running.

Any idea how much changing the number of entries per commit will help? The default is 1 million records per commit... I'm wondering how high I can go, and how much, if anything, that would help.

thanks

Accepted Solutions (1)

Former Member

Hi,

> Any idea how much changing the number of entries per commit will help? The default is 1 million records per commit... I'm wondering how high I can go, and how much, if anything, that would help.

I'm afraid you are on your own on this subject.

This tuning is completely dependent on your specific configuration.

The bigger the number of entries per commit, the better the performance, right up until you exceed the undo segment limit (on Oracle, of course).
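
As a rough sanity check before raising the commit size, you can look at how much undo space is configured and how heavily it has been used. This is a sketch assuming automatic undo management; the tablespace name PSAPUNDO is an assumption, so check DBA_TABLESPACES for yours.

-- Configured size of the undo tablespace (name is site-specific)
SELECT tablespace_name,
       ROUND(SUM(bytes)/1024/1024/1024, 1) AS size_gb
  FROM dba_data_files
 WHERE tablespace_name = 'PSAPUNDO'
 GROUP BY tablespace_name;

-- Peak undo blocks consumed in any recent 10-minute interval
SELECT MAX(undoblks) AS max_undo_blocks
  FROM v$undostat;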

You have to experiment to find the best value for your setup.

BDLS on an 8 TB database can simply take a very long time...

Regards,

Olivier

Former Member

Thanks, yes... it's still running btw - 2+ days.

- Is running BDLS required if you do a system copy and are not changing the SID (i.e., the logical system name)?

Former Member

On our 1.2 TB database, it takes about 12 hours...

If you don't change the SID and you use the same client, you should not need to run BDLS.

But I think it is a dangerous idea to keep the same SID for a test system: too error-prone for my taste!

Regards,

Olivier

Former Member

Hello Ben,

How much time did it take for BDLS to complete in your system? We have the same setup, and our database is 8.5 TB.

Please let me know, because in my system it has been running for the past 35 hours.

Regards

Prerna

Answers (1)

anindya_bose
Active Contributor

I do two things to make BDLS conversion faster in our system.

1. Switch off the archive log mode of the database. A lot of archive logs are generated during BDLS, so switching to NOARCHIVELOG mode always makes it faster (see the sketch after this list).

2. Create an index on the large tables where I think the conversion could take a long time.
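
A sketch of the usual sequence for point 1, assuming SQL*Plus connected as sysdba. Note that running in NOARCHIVELOG mode breaks the recovery chain, so take a full backup after switching back.

-- switch to NOARCHIVELOG before the conversion
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE NOARCHIVELOG;
ALTER DATABASE OPEN;

-- ...run BDLS...

-- switch back afterwards, then take a full backup
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;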

It is possible to exclude some tables if you are sure that you do not need those tables in the test system.

Check SAP Note 932032: https://service.sap.com/sap/support/notes/932032

The conversion can take very long (a few days, etc.) if the relevant tables have many entries (e.g. COEP is always a problem). The bottleneck of the conversion process is always the database access, not the report!

Former Member

There is already good advice here. Two more things:

It may be helpful to create specific indexes to speed up large tables. But doing this requires database knowledge, for example on Oracle: CREATE INDEX with PARALLEL and NOLOGGING (maybe COMPRESS); see the sketch below.
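
A sketch of what that could look like; the table and column names here are illustrative, so check the BDLS log or SE11 for the actual tables and logical-system fields being converted.

-- temporary helper index, built in parallel without redo logging
CREATE INDEX zbdls01 ON coep (logsyso)
  PARALLEL 8 NOLOGGING COMPRESS;

-- drop it again once the conversion is done
DROP INDEX zbdls01;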

There is also a possibility of doing parallel BDLS; check this blog: [Execute conversion of logical system names (BDLS) in short time and in parallel - Intermediate|https://www.sdn.sap.com/irj/scn/weblogs?blog=/pub/wlg/4796] [original link is broken]

I have used both approaches so far; careful testing is needed. Good luck.

Regards, Michael

anindya_bose
Active Contributor

> It may be helpful to create specific indexes to speed up large tables. But doing this requires database knowledge, for example on Oracle: CREATE INDEX with PARALLEL and NOLOGGING (maybe COMPRESS).

Sorry, I should have explained the index creation procedure for a particular table. Here is the procedure.

Say the table is RSSELDONE; we generally create an index on the field LOGSYS, because RBDLSMAP searches on this field.

Go to SE11 -> enter the database table name -> click 'Display' -> click 'Indexes' -> click the 'Create' button.

Enter any name (say ZSS) -> choose 'maintain in logon language' -> enter 'LOGSYS' as the field name -> enter a short description -> click OK, then Save and Activate.
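
For completeness, the database-level equivalent is a one-liner (a sketch; note that an index created via SE11 is known to the ABAP dictionary, while one created directly on the database is not, and either way it should be dropped after the run):

-- direct Oracle equivalent of the SE11 steps above
CREATE INDEX zss ON rsseldone (logsys);

-- ...run BDLS...

DROP INDEX zss;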

Hope this will help.