
Long compression time of an InfoCube

Former Member

Hi Gurus!

I have a big problem. One of our InfoCubes has too many uncompressed requests (approx. 40,000). When I compress this InfoCube everything works, but the compression takes far too long. Checking the job in SM37, I can see the time goes into the subprocess ALTER TABLE &lt;table name - the F fact table of the InfoCube&gt; DROP PARTITION &lt;partition name - /BIC/F&lt;cubename&gt;000000000x&gt;. Dropping a single partition takes approx. 4 minutes, and that is the real problem, because I believe the number of partitions of the F fact table equals the number of requests (in my case 40,000 requests x 4 min each...).
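To confirm that, I can count the partitions of the F fact table directly in the Oracle dictionary; a query along these lines (the cube name is only an example) should roughly match the number of uncompressed requests:

    SELECT COUNT(*)
      FROM dba_tab_partitions
     WHERE table_name = '/BIC/FMYCUBE';  -- F fact table of the cube (example name)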

What can I do to improve the performance of the compression?

Thank you for all answers!

KOX

Edited by: Michal Pikus on Oct 1, 2008 4:35 PM

Accepted Solutions (1)

Former Member

This should not be a problem; select only a moderate number of requests at a time for compression, not all of them at once.

You can also maintain system parameters for compression job.

Former Member

Hi, thank you for the answer, but I have already tried compressing in parts. The duration of the ALTER is still too long.

And please tell me what you meant by maintaining the system parameters for the compression job, because I have no idea what you are talking about. Thank you in advance for your answer.

pizzaman:

we are using Oracle 10g

Regards,

kox

Edited by: Michal Pikus on Oct 1, 2008 8:28 PM

Former Member

My guess is that it is related to the number of partitions.

From my Oracle guru buddy Jonathan Lewis -

If you look in the data dictionary at the dba_segments view, you will find that each partition of a table (and same for indexes) has a position number that sorts the partitions into the correct order. When you drop the oldest partition (part# zero in the dictionary) every other partition has to be renumbered.

Inevitably, because every partition definition has changed, the entire dictionary cache relating to this object is flushed and all dependent SQL in the library cache is invalidated and has to be reparsed.

Now, 10g should have improved some of the dictionary flushing algorithms, but you start to get a feel for the overhead actually occurring.
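You can see that ordering in the dictionary yourself: on 10g the position number shows up as partition_position in dba_tab_partitions. A quick sketch, with an example fact table name:

    SELECT partition_name, partition_position
      FROM dba_tab_partitions
     WHERE table_name = '/BIC/FMYCUBE'   -- example F fact table name
     ORDER BY partition_position;

Drop the partition at position 1 and every row behind it gets renumbered, which is the cost Jonathan describes.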

You could try tracing the session to see where the time is actually spent. The best I can offer is to review the Note I mentioned and see whether any of the options it lists for improving data dictionary access might help. Otherwise, run your compressions off hours, and hopefully you'll see some improvement as the number of partitions is reduced. Queries on the cube should benefit, too.
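For the trace, one option on 10g is an extended SQL trace via DBMS_MONITOR; a sketch, where the SID and SERIAL# values are placeholders you'd look up in V$SESSION for the work process running the compression:

    -- identify SID and SERIAL# of the compression work process in V$SESSION, then:
    EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 45678, waits => TRUE);
    -- let a few DROP PARTITION statements run, then:
    EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 45678);
    -- format the trace file from user_dump_dest with tkprof and check the wait events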

Answers (1)

Former Member

Reducing the number of requests being compressed in one run is probably a good idea just for management purposes, but it sounds like your problem may be DB-dictionary related, due to having so many segments (partitions).

You don't say which DB and version you're using, so you may need to check whether there are similar Notes for other DBs.

Oracle 10g has improved performance for partitioned objects over 9i.

Review Note 871455 - Poor performance when accessing DBA and V$ views. You might also search OSS Notes for "data dictionary performance".
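I don't have the Note in front of me, but one commonly suggested step for slow dictionary access on 10g (verify against the Note and test outside production first) is refreshing the dictionary and fixed-object statistics:

    EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;     -- stats on the SYS/SYSTEM dictionary tables
    EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;  -- stats on the X$ fixed tables behind the V$ views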

As the number of segments decreases, the drop-partition time should decrease, if the cause is indeed the dictionary access.