Database migration to MAXDB and Performance problem during R3load import

0 Kudos

Hi All Experts,

We want to migrate our SAP landscape from Oracle to MaxDB (SAP DB). We exported the 1.2 TB database in 16 hours using the package- and table-level splitting method.

Now I am importing into MaxDB, but the import is running very slowly (more than 72 hours so far).

Details of the import process are below.

We are using the Distribution Monitor to import into the target system, which runs MaxDB release 7.7. Three application servers, each with 8 CPUs, run distributed R3load processes in parallel.

The database server is configured with 8 CPUs (single core) and 32 GB of physical RAM. The MaxDB cache for the DB instance is set to 24 GB. As per the SAP recommendation, we are running 16 parallel R3load processes. The import is still too slow at more than 72 hours, which is not acceptable.

We split 12 big tables into smaller units using table splitting, and we also split the packages so they can run in parallel. We maintained the load order in descending order of table and package size. Still, we are not able to improve import performance.

MAXDB parameters are set as per below.

CACHE_SIZE 3407872

MAXUSERTASKS 60

MAXCPU 8

MAXLOCKS 300000

CAT_CACHE_SUPPLY 262144

MaxTempFilesPerIndexCreation 131072
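As a quick sanity check of the sizing above (a sketch, assuming MaxDB's default 8 KB page size; CACHE_SIZE is specified in pages):

```shell
# CACHE_SIZE is given in pages (assumption: 8 KB page size).
# 3407872 pages * 8 KB = 26 GB of cache.
pages=3407872
echo "$(( pages * 8 / 1024 / 1024 )) GB"
```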

We are using recent releases of all required SAP kernel utilities (R3load etc.) for this process.

I would now ask all SAP and MaxDB experts to suggest any possible ways to improve the R3load import performance on the MaxDB database.

Every input will be highly appreciated.

Please let me know if I need to provide more details about import.

Regards

Santosh

Accepted Solutions (0)

Answers (6)


Former Member
0 Kudos

Just to add to what Markus has said.

Note 1016732 will help.

markus_doehr2
Active Contributor
0 Kudos

I would also recommend using the parameter -force_repeat. This will significantly reduce the import time, because the database will not perform a savepoint after each table import (including empty tables) is finished.

During the import you can check the database activity using

x_cons <SID> sh ac 1

This command shows you the current database activity, refreshed every second.

Markus

Former Member
0 Kudos

Hello again,

By the way, table splitting on MaxDB is not recommended at all:

[Note 1385089 - R3ta for table splitting with MaxDB|https://websmp130.sap-ag.de/sap(bD1lbiZjPTAwMQ==)/bc/bsp/spn/sapnotes/index2.htm?numm=1385089]

....At the moment, the tool does not consider the properties of the particular RDBMSs. For MaxDB, this can result in SQL statements that do not split the table in primary key order, which might lead to very long export and import times....

Regards Stanislav

0 Kudos

Thank you Stanislav.

I will check all your inputs and let you know the status of the migration process.

Meanwhile all further inputs are welcome.

Thanks and Regards

Santosh

0 Kudos

Hi stanislav,

In one of your replies you mentioned that "table splitting on MaxDB isn't recommended at all".

But in our landscape the source database is Oracle and the target DB is MaxDB. In this case, is it advisable to split very big tables in the source database and import them into MaxDB?

We split a few big tables in the source DB during export.

Could you please let me know if splitting big tables during export was wrong?

Regards

Santosh nikam

Former Member
0 Kudos

Hello,

description of parameter:

MaxTempFilesPerIndexCreation (from version 7.7.0.3)

Number of temporary result files in the case of parallel indexing

The database system indexes large tables using multiple server tasks. These server tasks write their results to temporary files. When the number of these files reaches the value of this parameter, the database system has to merge the files before it can generate the actual index. This results in a decline in performance.

As for the maximum, I would not exceed it; for a 26 GB cache, the value 131072 should be sufficient. I used the same value with a 36 GB CACHE_SIZE.

On the other hand, do you know which phase is time-consuming? Is it the table import, or the index creation?

Maybe you can run MIGTIME on the import directory to find out.
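For example, once per-package runtimes are extracted (MIGTIME produces such a summary from the import logs; the package names and minutes below are hypothetical, not from this migration), sorting them shows which packages dominate the total runtime:

```shell
# Hypothetical per-package import times in minutes (not real data).
cat > /tmp/import_times.txt <<'EOF'
GLPCA 1420
CDCLS 2210
SAPAPPL1 310
EDI40 1980
EOF
# Sort descending by the second column to find the long-running packages.
sort -k2,2 -rn /tmp/import_times.txt | head -3
```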

Stanislav

Former Member
0 Kudos

Hi,

Did you check this note?

Note 1327874 - FAQ: SAP MaxDB Read Ahead/Prefetch

consider also parameter:

MaxTempFilesPerIndexCreation -> 131072 ???

(Possible values include 1024, 2048, 4096, 8192, 16384, 32768, 65536, 131072 up to 1/3 of cache memory size).
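To illustrate the stated 1/3-of-cache ceiling (a sketch using the CACHE_SIZE from this thread, 3407872 pages):

```shell
cache_pages=3407872
# Upper bound for MaxTempFilesPerIndexCreation: one third of the cache
# size, in pages.
echo $(( cache_pages / 3 ))            # 1135957
# 131072 is the largest listed value and stays well below that bound
# (prints 1, i.e. true).
echo $(( 131072 <= cache_pages / 3 ))
```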

regards

Stanislav

0 Kudos

Hi,

Thanks, Stanislav, for your helpful reply.

We have assigned 26 GB (3407872 pages) as the cache memory size, and MaxTempFilesPerIndexCreation = 131072.

I am not sure about the value of the parameter MaxTempFilesPerIndexCreation. Per SAP, possible values include 1024, 2048, 4096, 8192, 16384, 32768, 65536 and 131072, up to 1/3 of the cache memory size, so I set the maximum listed value, 131072.

Can we increase it beyond its current value of 131072, and if so, what value would be appropriate for a 26 GB cache?

Please advise.

Regards

Santosh

0 Kudos

I would like to add more details here.

R3load options

-nolog -c 10000 -loadprocedure FAST -force_repeat -para_cnt 16

former_member188883
Active Contributor
0 Kudos

Hi,

Can you check the load on the servers being used by your Distribution Monitor?

If the load is less than 50%, increase the number of parallel jobs until the load is around 80%.

Hope this helps.

Regards,

Deepak Kori

0 Kudos

Hi,

We increased the number of R3load processes until the server load reached about 90%, but it did not help; performance is still just as slow.

Regards

Santosh Nikam

former_member189725
Active Contributor
0 Kudos

Did you export the database tables sorted?

0 Kudos

Hi,

As per SAP Note 954268, for a MaxDB target database you must export all of the tables sorted. We did export in sorted order.

Regards

Santosh nikam

former_member189725
Active Contributor
0 Kudos

Please check the I/O statistics, as the import is I/O-intensive.
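For instance, on Linux/Unix you could capture iostat -x samples during the import and flag saturated devices. The sketch below parses a pasted sample (hypothetical device names and values; the fifth column stands in for %util):

```shell
# Hypothetical iostat-style sample: device, r/s, w/s, await, %util.
# In a real run, capture with: iostat -x 5 3
cat > /tmp/iostat_sample.txt <<'EOF'
sda 120.0 340.0 45.2 98.7
sdb 10.0 20.0 2.1 12.3
EOF
# Flag devices above 90% utilization; a saturated data volume is a
# common cause of slow R3load imports.
awk '$5 > 90 { print $1, "saturated:", $5 "%" }' /tmp/iostat_sample.txt
```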