Very long export time for table VBRP

yakcinar
Active Contributor

Hello,

I am trying to increase the number of data files in the SQL Server database of an ERP system.

It is about 10 TB.

I am using the heterogeneous system copy method and tried to export my system.

I used the attached split_input.txt file for table splitting. When preparing it, I divided the tables into pieces of about 4 GB each.
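For reference, each line of that file names a table and its number of split packages in the <TABLE>%<n> form that R3ta expects; the VBRP entry, for example, was:

VBRP%193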

Table splitting took about 8 hours.

The export took about 2.5 days.

I ran the Time Analyzer and got the attached "export_time.txt" result file.

When I examine the export times of the tables, I see that the VBRP table splits take about 23 hours to export.

The other tables were mostly exported in less than 2-3 hours, even the biggest one, MSEG.

Why could the VBRP table take so much time to export?

Is there a way to accelerate the VBRP table export?

Can you help, please?

Regards,

Yuksel AKCINAR

Accepted Solutions (0)

Answers (1)

Former Member

Hello Yuksel,

I don't quite follow why you would perform a heterogeneous copy to increase the number of data files, but I guess you have your reasons. I'm not a SQL Server person.

Could you provide info on a few things:

1. The R3ta split log files

2. Confirm you used the standard R3ta_hints file

3. Confirm there was no CPU or I/O contention during the export

4. Provide an explain plan of the SELECT statement from the VBRP WHR file

5. Check that you have an index that properly supports the SELECT statement from (4) (see the sketch below)
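For (4) and (5), something along these lines should do it on SQL Server (only a sketch, since as I said I'm not a SQL Server person; adjust the schema prefix in OBJECT_ID to your SAP schema if needed):

-- list the indexes on VBRP and the columns they cover, in key order
SELECT i.name AS index_name, ic.key_ordinal, c.name AS column_name
FROM sys.indexes AS i
JOIN sys.index_columns AS ic
  ON ic.object_id = i.object_id AND ic.index_id = i.index_id
JOIN sys.columns AS c
  ON c.object_id = ic.object_id AND c.column_id = ic.column_id
WHERE i.object_id = OBJECT_ID('VBRP')
ORDER BY i.name, ic.key_ordinal;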

When I'm splitting tables I work to two criteria: size and number of entries. I've found on quite a few occasions that splitting by size isn't always optimal, so if the table in question is huge in terms of entries (not size) I split it into packages of 5-10 million entries each.

Just taking Oracle as an example, having too many splits and split packages running in parallel actually ends up slowing the export down as you get read contention.

Will wait for your reply before we look into this deeper.

KR,

Amerjit

Former Member

Hi

In the meantime, please check the notes mentioned below:

1650246 - R3ta: new split method for MSSQL

1875778 - Performance Optimization for System Copy Procedures

Regards

Prithviraj

yakcinar
Active Contributor

Hello Amerjit,

Our new system has 48 cores. We have performance issues, so we thought we could match the number of data files to the number of cores. This is the reason.

1.

The top 2 tables are:

Table Name    Reserved Size      # of Rows
MSEG          1.262.664.784      3.295.461.564
VBRP          773.487.872        3.234.756.095

I tried to divide the tables into 4 GB pieces. MSEG did not take nearly as long as VBRP.

The R3ta log for VBRP is:

R3ta.exe: START OF LOG: Sat Aug 13 11:26:25 2016

R3ta.exe -o Split/VBRP.WHR -t VBRP%193 -check_utf8 -f Split/VBRP.STR -l R3ta_VBRP.log

INFO: connected to DB

Processing Split/VBRP.STR

INFO: Hint files are not supported on Microsoft SQL Server

Processing table VBRP

Rowcount by sys.dm_partition_stats for table 'VBRP': 3.23476e+09

INFO: disconnected from DB

R3ta.exe: END OF LOG: Sat Aug 13 13:49:22 2016

2. SQL Server does not support R3ta hint files.

3. CPUs were at 100% at the beginning. While VBRP was being processed there was no CPU bottleneck.

Our disks are all SSD. There is no I/O bottleneck.

4. The WHR file content for the longest split, VBRP-95, is:

tab: VBRP

WHERE

( "MANDT" = '200' AND "VBELN" > '6015720145' AND "VBELN" < '6015802626') OR

( "MANDT" = '200' AND "VBELN" = '6015720145' AND "POSNR" >= '000239') OR

( "MANDT" = '200' AND "VBELN" = '6015802626' AND "POSNR" < '000339')

and I attached the explain plan for this SELECT (a sketch to reproduce it follows item 5).

5. The VBRP~0 primary index contains all of these fields:

VBRP~0: MANDT, VBELN, POSNR (clustered, unique, primary key, located on PRIMARY)
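For reference, the attached plan can be reproduced with something along these lines against the source database (only a sketch; R3load selects the full column list, so SELECT * is just close enough to show the plan shape):

-- show the estimated plan for the VBRP-95 package without executing it
SET SHOWPLAN_XML ON;
GO
SELECT *
FROM VBRP
WHERE ( "MANDT" = '200' AND "VBELN" > '6015720145' AND "VBELN" < '6015802626' )
   OR ( "MANDT" = '200' AND "VBELN" = '6015720145' AND "POSNR" >= '000239' )
   OR ( "MANDT" = '200' AND "VBELN" = '6015802626' AND "POSNR" < '000339' );
GO
SET SHOWPLAN_XML OFF;
GO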

As I mentioned before, I split the top 100 tables according to their sizes and divided them into 4 GB pieces.

None of the other tables take much time except VBRP and VLPMA. As you can see from the Time Analyzer output, if I have more CPUs and distribute the R3load processes across several hosts, I will decrease the export time a lot. But VBRP's export time is still far too long.

Looking for your valuable comments.

Thanks and Regards,

Yuksel AKCINAR

JPReyes
Active Contributor

Hi Yuksel,

The first thing I would check is whether this is a sorted or unsorted export; unsorted exports are a lot faster.

Unsorted exports have their own disadvantages, so please read the documentation.

Regards, JP

yakcinar
Active Contributor

Hello Juan,

I see that as of SQL Server 2008 R2 unsorted export is supported.

Do you know whether it is the default for SQL Server exports, or does SWPM (R3load) use sorted export by default?

Thanks and Regards,

Yuksel AKCINAR

JPReyes
Active Contributor

As far as I'm aware, sorted exports are the default for MSSQL.

Regards, JP

yakcinar
Active Contributor

Hello Juan,

Thanks for the quick answer.

I could not find a note or document about the default, but as you also stated, the DDLMSS.TPL file is used in the R3load parameters.

We should change the line below in the cmd files of the tables that we want to export unsorted so that it points to DDLMSS_LRG.TPL.

dcf: "Y:\Export\ABAP\DB\DDLMSS.TPL"
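So for the tables in question, for example VBRP, the line would become (same path, only the template name changes; this is how I understand it, not yet tested):

dcf: "Y:\Export\ABAP\DB\DDLMSS_LRG.TPL"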

Thanks and Regards,

Yuksel AKCINAR

Former Member

Hello Yuksel,

3.2 billion entries is indeed a sizable table.

If I were in your shoes I'd be tempted to split even further, based on 10 million entries per split, and to create a temporary index on only MANDT-VBELN. As I said, I'm not an MSSQL person, but it's what I would consider on Oracle.

What I find interesting is the SELECT statement, which uses an 'OR' and POSNR, whereas I would have thought splitting on VBELN alone would have been optimal. Maybe SQL Server just does things differently, using the full primary key.

Can I ask that you create a secondary index on VBRP, try the splitting again, and see whether the split is on MANDT-VBELN after the creation of the new index?
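With roughly 3.23 billion rows, 10 million entries per package would mean somewhere around 320 packages for VBRP. On SQL Server the temporary index would be something along these lines (again only a sketch from my side; the index name Z_VBRP_SPLIT is a placeholder, and you would drop it again after the export):

-- temporary helper index for the split; drop it once the export is done
CREATE INDEX Z_VBRP_SPLIT ON VBRP (MANDT, VBELN);
-- afterwards:
-- DROP INDEX Z_VBRP_SPLIT ON VBRP;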

A '>' and '<' combined with an 'OR' is always going to be a killer on any database.

Prithviraj and Juan have provided good additional info on optimisation and unsorted unloading.

I know it might sound silly, but are you using the latest R3* tools and MS DB library?

KR,

Amerjit

Former Member

Hi Yuksel,

Out of curiosity, how did you get on and what approach did you finally take?

KR,

Amerjit

yakcinar
Active Contributor

Hello Amerjit,

We were using a virtual system for test purposes.

We moved the system to a more powerful physical system.

I also applied the notes below.

1054852 - Recommendations for migrations using Microsoft SQL Server

1241751 - SQL Server minimal logging extensions

On that system we exported all tables in 20 hours.

Regards,

Yuksel AKCINAR