
BDLS - How to monitor the process?

Former Member
0 Kudos

Dear All,

I am running the BDLS sessions in the background (excluding the known huge tables) in my testing system, just after the system refresh from Production to Testing.

The BDLS sessions have been running for more than 22 hours (still in progress now). May I know where (in which tcode) I can see the progress of BDLS?

I've tried checking in tcode SM50; the BGD work process has been performing a sequential read on one table for more than 8 hours... Is it possible it is hung somewhere? May I know how to determine whether BDLS has hung?

FYI, the table that BDLS is currently running on is around 140 GB.

Best Regards,

Ken

Accepted Solutions (0)

Answers (10)


bglobee
Active Participant
0 Kudos

Hi,

1. Run the report RBDLSMAP_RESET to clean up an old or stuck BDLS run.

2. Have temporary indexes created on the tables identified as huge that contain the relevant fields (AWSYS, LOGSYS, SNDSYSTEM, RCVSYSTEM, etc.). Such tables can be identified with a test run, or by writing a code snippet to find them (see the sketch after this list).

3. After the indexes are created, execute the report RBDLS2LS (through SE38).

4. The normal BDLS screen appears; enter the table range here.

5. When RBDLS2LS is executed for the first time in the system, it generates a new report RBDLS<client no.>.

6. Now use the report RBDLS<client no.> to schedule further BDLS runs (this also allows parallel runs).

We are trying this out now, and it has brought a significant reduction in BDLS run time. This approach is a must for systems that are in the TB range.
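For step 2, here is a minimal sketch of such a snippet - a hypothetical Z-report that lists active transparent tables with fields based on the LOGSYS domain (extend the domain list, e.g. with EDI-related domains, as needed for your release):

REPORT zbdls_find_logsys_tables.

* Hypothetical helper: list active transparent tables that contain at
* least one field based on domain LOGSYS - these are candidates for
* temporary indexes before the BDLS run.
TYPES: BEGIN OF ty_hit,
         tabname   TYPE dd03l-tabname,
         fieldname TYPE dd03l-fieldname,
       END OF ty_hit.
DATA: gt_hits TYPE STANDARD TABLE OF ty_hit,
      gs_hit  TYPE ty_hit.

SELECT d~tabname d~fieldname
  FROM dd03l AS d
  INNER JOIN dd02l AS t ON t~tabname = d~tabname
  INTO TABLE gt_hits
  WHERE d~domname  = 'LOGSYS'
    AND t~tabclass = 'TRANSP'
    AND t~as4local = 'A'.   " active versions only

LOOP AT gt_hits INTO gs_hit.
  WRITE: / gs_hit-tabname, gs_hit-fieldname.
ENDLOOP.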

johnk_smith
Discoverer
0 Kudos

If this thread is still active ... I have a little info to add, as well as a question for the group.

I used to make use of tcode BDLSC to exclude various tables from the BDLS run. BDLSC is a simple transaction that updates table BDLSEXZ. The only option I ever used in BDLSC was the "Excluded Table" option. We would enter tables that we knew there was no reason to update.

How did we know which tables did not need updating? Watch BDLS in SM50 and wait for it to 'hang' for a long time on a table. Then check that table via SE16: count the number of records where LOGSYS (or AWSYS, etc.) is not empty. If this comes back as zero records, there's nothing for BDLS to update. We'd kill the job, add the table to the exclusion list via BDLSC, and start over. Eventually, we added the known tables to BDLSC in the production system, let this copy back to Q during the refresh, and we were good to go.
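In case it's useful, that SE16 check can also be done with a tiny snippet like this sketch (the table and field are illustrative - use whichever table BDLS is hanging on):

* Hypothetical check: count rows where the logical system field is
* filled. Zero rows means BDLS has nothing to convert in this table.
DATA gv_count TYPE i.

SELECT COUNT( * ) FROM vbrk INTO gv_count
  WHERE logsys <> space.

WRITE: / 'VBRK rows with LOGSYS filled:', gv_count.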

The other way we sped things up was via a custom ABAP program that made direct updates to some large tables where we really did need to update LOGSYS. I'm sure folks out there will call this a big no-no. That's fine - understood... but I would guess there's plenty of SAP customers who have done this.
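For context only, the core of such a program is just a bulk update per table - a heavily hedged sketch (table and logical system names are purely illustrative, and this bypasses all of BDLS's consistency checks and logging):

* Hypothetical direct conversion of one table - NOT the supported
* BDLS route; it skips BDLS's checks, so use at your own risk.
UPDATE vbrk SET logsys = 'NEWSYSTEM'
  WHERE logsys = 'OLDSYSTEM'.

WRITE: / sy-dbcnt, 'rows converted in VBRK'.
COMMIT WORK.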

My question here regards the use of BDLSC. I've been told this is not necessary - that BDLS will only spend time updating tables that have data in LOGSYS, and if all entries are empty, it will skip the table. I'll try to verify this in the code, but can anyone out there comment - is this the case?

thanks,

-john

richard_howard
Active Participant
0 Kudos

John,

I like the possibilities of this post, and the timing is perfect. I have a 6TB refresh next week, and it's our slowest for BDLS conversion. There is a new BDLSS transaction that has two steps to it. The first step scans all of your tables that contain LOGSYS to see if any rows have data in the field. The second step updates only the tables that are relevant (the actual update part is fast). So under this process, I can see how some might say excluding tables is irrelevant, because the first scan phase identifies the only tables that are relevant.

However, the scan to see which tables need updating takes forever if you're like us and have millions of rows in some tables. The good news is that I can take the results from the last run, identify many of these monster tables that aren't using their LOGSYS in any way, and try to exclude them from the initial scanning step. That has the possibility of saving me lots of time.

Currently, my BDLS will run for several DAYS if I just let it scan everything. I'm going to try your approach on our ECC 6.0 (Basis 7.0 SP19) system and see just how much I can trim my times down. Last time, it ran for over 3 days just to find 9 tables that need a few thousand rows updated. It was a huge waste of time.

Thanks,

Richard

Georgia-Pacific

Former Member
0 Kudos

Hi All,

Note down the tables that take a long time or are huge in size, and build indexes on them prior to BDLS. You can drop these indexes after BDLS. This has helped me well; see the sketch below.
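A sketch of the idea (index and table names are illustrative; in practice you would create the index in SE11 or let the DBA do it with database tools, since DDL via native SQL depends on your database):

* Hypothetical temporary index on the LOGSYS column, created before
* and dropped after the BDLS run via native SQL.
EXEC SQL.
  CREATE INDEX "VBRK~ZBD" ON "VBRK" ("MANDT", "LOGSYS")
ENDEXEC.

* ... run BDLS ...

EXEC SQL.
  DROP INDEX "VBRK~ZBD"
ENDEXEC.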

Hope this helps.

Cheers

Gopal.

0 Kudos

Hi All,

I had a very bad experience running BDLS jobs in my last conversion project. We had to refresh the conversion systems from the production systems, and the BDLS jobs took more than 3 days to run.

So we found a different approach to running BDLS.

Here is what we did:

1. Create indexes on the big tables for field MANDT / LOGSYS (or whatever the relevant field is).

2. Use the report RBDLS2LS (via SE38) instead of the BDLS transaction (this allows multiple streams). Depending on the size of the tables, you can run multiple streams for the big tables and a single stream for all the rest (see the scheduling sketch after this list).

3. Use a different commit interval (Number of Entries per Commit). Values we have tried or plan to try:

- default is 100,000 - seems to work the best

- will try 5,000,000

- also tried 100,000, 10,000, and 1,000
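On point 2, a rough sketch of scheduling one stream as a background job (the generated report name RBDLS300 and the variant name are hypothetical - first create variants with disjoint table ranges, then repeat this for each parallel stream):

* Hypothetical: schedule one stream of the generated BDLS report
* (e.g. RBDLS300 for client 300) as a background job.
DATA: gv_jobcount TYPE tbtcjob-jobcount,
      gv_jobname  TYPE tbtcjob-jobname VALUE 'BDLS_STREAM_1',
      gv_report   TYPE sy-repid VALUE 'RBDLS300'.

CALL FUNCTION 'JOB_OPEN'
  EXPORTING
    jobname  = gv_jobname
  IMPORTING
    jobcount = gv_jobcount.

SUBMIT (gv_report) USING SELECTION-SET 'TABLES_A_M'
  VIA JOB gv_jobname NUMBER gv_jobcount AND RETURN.

CALL FUNCTION 'JOB_CLOSE'
  EXPORTING
    jobname   = gv_jobname
    jobcount  = gv_jobcount
    strtimmed = 'X'.   " start immediately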

Thanks & Regards,

Shashank

Former Member
0 Kudos

Hi,

When I run the program RBDLS2LS, it doesn't prompt me for input parameters (max commits, old logical system, new logical system, etc.).

Is that how it should be?

Best Regards,

Ken

0 Kudos

Hi Ken,

The RBDLS2LS report displays the same screen as BDLS. I am not aware of any release constraint for this report.

I will let you know if I find something on this.

Thanks,

Shashank

richard_howard
Active Participant
0 Kudos

Ken,

I started a BDLS run that took over 80 hours so I feel your pain. And like you, I couldn't find any adequate logging to tell me what was going on. As I searched, I found the new BDLSS transaction. Even though I started the run with BDLS, I could still see some limited logging in BDLSS [Display Log].

The next time I ran, I used BDLSS [Display Log] and got a bit more info. I could at least watch it go alphabetically through its list of tables, and when I saw the VBA* tables in the log, I knew it was at least getting to the end of the initial scan.

My next refresh for BI is in May. I have until then to figure out a better way to run this more efficiently.

Good Luck,

Richard

Former Member
0 Kudos

Our story was much worse than you can imagine.

We used to run BDLS continuously in our system for a 5-day stretch.

Then we tried multiple iterations and found the following helpful.

1. We kept Number of Entries per Commit at 5 million (raising or lowering this value slowed things down).

2. Our DBA used his database tools (ours is DB2 on OS/390) to run the LOGSYS conversion for the top 50 tables (which sweeps through in hours rather than the days traditional BDLS takes), and then we excluded those tables from the BDLS run.

Former Member
0 Kudos

Hi Richard,

Yes, the BDLSS log does show all the tables, but the updates are not immediate; the log is updated each time the conversion of one table completes, or at the end of one commit.

Thanks for the info.

Hi Vijay,

As for the maximum entries per commit, I'm actually not sure what it means, how it affects the system, or how to determine the maximum value I can key in for my system... because of all this, I just leave it at the default.

Normally I manually filter out the tables that are huge and do not contain a logical system (known from the previous BDLS run), because BDLS will still read a table row by row to check for logical system entries even when the table contains none. Keeping a record of these tables might help reduce the BDLS run time in the future...

Best Regards,

Ken

Former Member
0 Kudos

Ken,

Maximum entries per commit means a database commit happens after every 1,000,000 (for example) records.

The higher the value, the better the performance.

But in our experience, with any setting above 5 million, the speed was there initially and then worsened.

We usually set it to 5 million; that was optimal in our case, but it may not be true for you.

Try setting a higher value and see if it improves.

Link: [http://help.sap.com/saphelp_nw04/helpdata/EN/33/c823dbaea911d6b29500508b6b8a93/content.htm]
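To illustrate the mechanism, here is a minimal sketch - not BDLS's actual code, and the table, field, and system names are illustrative. The conversion runs in packages with one database commit per package, so a larger interval means fewer commits but longer-held locks and bigger rollback/undo usage:

* Sketch of package-wise conversion with periodic commits - the idea
* behind "Number of Entries per Commit".
CONSTANTS gc_package TYPE i VALUE 100000.
DATA: gt_keys  TYPE STANDARD TABLE OF vbrk-vbeln,
      gv_vbeln TYPE vbrk-vbeln.

DO.
  SELECT vbeln FROM vbrk UP TO gc_package ROWS
    INTO TABLE gt_keys
    WHERE logsys = 'OLDSYSTEM'.
  IF sy-subrc <> 0.
    EXIT.                      " nothing left to convert
  ENDIF.
  LOOP AT gt_keys INTO gv_vbeln.
    UPDATE vbrk SET logsys = 'NEWSYSTEM'
      WHERE vbeln = gv_vbeln.
  ENDLOOP.
  COMMIT WORK.                 " one commit per package
ENDDO.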

alex_bernaards
Explorer
0 Kudos

Thanks for naming "BDLSS [Display Log]"; I was searching for something like that.

Former Member
0 Kudos

Dear Utpal Patel,

Thanks for your reply.

However, the table BDLSPOS doesn't seem to show all the tables, right?

For example, I can't see common tables like VBRK, VBAK, etc.

Best Regards,

Ken

Former Member
0 Kudos

Hi there !!!

You can monitor the BDLS statistics with table BDLSPOS.

Hope you will get the answer from this.

Regards,

Utpal Patel

anindya_bose
Active Contributor
0 Kudos

Yes, sometimes BDLS hangs. You can check the table from SM50.

To make BDLS faster, you can create an index on the LOGSYS field of that table. You should also run BDLS with the database in no-archive mode.

Regards

Anindya

Former Member

Running statistics on the database may also help; you can run this while your other processes are running on the system.

Former Member
0 Kudos

Dear Raja,

the log I get from BDLSS shows the tables processed so far:

...

CATSHR EXTSYSTEM 0

LOGSYS 0

CATS_BW_TIME I_RLOGSYS 0

CATS_GUID_KEY* EXTSYSTEM 0

CBPR LOGSYSTEM 0

CC1ERP SRCSYS 0

CCMCTIADMIN LOGSYS 0

Dear sekhar,

Do you mean the job log from SM37? Like this:

 
22.02.2010 09:59:19 Job started                                                                         00
22.02.2010 09:59:19 Step 001 started (program RBDLSMAP, variant &0000000000000, user ID BASISADM7)      00
22.02.2010 09:59:19 The new logical system name T00CLNT300 is assigned to the current client 300        B1


Former Member
0 Kudos

Dear Raja,

There are no entries in table BDLSEXZ when I view it via SE16.

Best Regards,

Ken

Former Member
0 Kudos

Hi Wei,

I am extremely sorry; the table BDLSEXZ is for excluding tables during the BDLS conversion.

You need to check the log in transaction BDLSS.

Please check and let me know the status.

Regards,

Raja. G

Former Member
0 Kudos

Dear sekhar,

There is no log in SLG1.

By the way, I thought the logs would be generated only after BDLS is done?

Best Regards,

Ken

Former Member
0 Kudos

Hi Wei,

Please check the table BDLSEXZ, but I am not sure.

Regards,

Raja. G

Former Member
0 Kudos

Check for the job running report RBDLSMAP

and its corresponding log to know the current status

-Sekhar

Former Member
0 Kudos

You can check the log for progress

t-code SLG1, object CALE, subobject LOGSYSTEM*.

-Sekhar