
Table splitting for a 500 MB table

Former Member
0 Kudos

Hello all,

I have exported an Oracle database several times (to optimize export time for a Unicode conversion). I split several tables larger than 100 GB, but then found one table of only 500 MB that takes around 10 hours to export. If I split this table, I should be able to save time through parallelization. The table is ZFI_SDOKCONT1, which is a copy of SDOKCONT1 used for DMS.

The question is: why does the export take so long for this table? (Is the table highly fragmented?)

Is there any way to speed up the export other than unsorted load and table splitting?

Thanks in advance

Best Regards

Muris

Accepted Solutions (1)

former_member188883
Active Contributor
0 Kudos

Hi Muris,

Exporting a single 500 MB block as one package will certainly take more time. If you want to speed up the operation, the only alternative is to split such a table into smaller chunks and increase the number of R3load processes based on the available CPU and RAM.
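To illustrate the reasoning above, here is a minimal back-of-the-envelope sketch (plain arithmetic, not an SAP tool) of how splitting a slow table into chunks that run on parallel R3load processes shortens the wall-clock time. The 50 MB/hour rate and the chunk counts are assumptions taken from the numbers in this thread.

```python
import math

def export_hours(table_mb, rate_mb_per_hour, chunks, parallel_procs):
    """Wall-clock hours if `chunks` equal pieces run on `parallel_procs` workers."""
    chunk_hours = (table_mb / chunks) / rate_mb_per_hour
    waves = math.ceil(chunks / parallel_procs)  # chunks are processed in waves
    return waves * chunk_hours

# Unsplit: one 500 MB package at 50 MB/hour
print(export_hours(500, 50, chunks=1, parallel_procs=1))    # 10.0 hours
# Split into 10 chunks on 10 parallel R3load processes
print(export_hours(500, 50, chunks=10, parallel_procs=10))  # 1.0 hour
```

Note this is an idealized model: it assumes the chunks export at the same per-chunk rate, which holds only if the bottleneck is per-package throughput rather than overall I/O.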

Regards,

Deepak Kori

Answers (2)

nicholas_chang
Active Contributor
0 Kudos

Hi,

I've faced this kind of issue before, where a small table takes a long time to export. Perhaps you can try reorganizing the table and ensure the statistics for that particular table are up to date.

In my case, the export time didn't reduce much even after I split the table...

Cheers,

Nicholas Chang

Former Member
0 Kudos

Hello together,

@Venkatesh: I tested different options and found that the optimum is 15 processes on 16 cores, using the standard method (R3load and migmon).

@Deepak: Based on the current runtime, I export around 50 MB/hour from the mentioned table, while the other tables run at around 500 MB/hour (10 times faster!). So Nicholas got the point.
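As a quick sanity check, the 50 MB/hour figure follows directly from the numbers in the original post (500 MB taking about 10 hours); this is pure arithmetic with no SAP specifics:

```python
# Throughput figures quoted in the thread.
table_mb = 500       # size of ZFI_SDOKCONT1 from the original post
export_hours = 10    # observed export duration

slow_rate = table_mb / export_hours  # MB/hour for the problem table
normal_rate = 500                    # MB/hour reported for other tables

print(slow_rate)                # 50.0
print(normal_rate / slow_rate)  # 10.0 -> "10 times faster"
```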

@Nicholas: I'll certainly consider those options. But as you also said, I'm not sure it's worth it, since the reduction wasn't really significant in your case, and I'd have to consider the impact of a reorg and statistics update on the production system.

I thought there might be a trick behind that particular table SDOKCONT1, since it holds documents for DMS. Perhaps a regularly scheduled cleanup job or something similar.

I've just had another idea: I'll rebuild the primary index shortly before the export and give you an update about it...

But I'm still hoping to hear some good experience with this particular table ;).

Regards

Muris

former_member188883
Active Contributor
0 Kudos

Hi Muris,

You can also consider dropping the existing indexes on your table SDOKCONT1. This will save considerable time during both the export and the later import. The indexes can be rebuilt once the activity is completed.

Regards,

Deepak Kori

Former Member
0 Kudos

Hi,

What parallelism factor did you use?

What method did you follow for the export?

Br,

Venky