Compression takes a very long time

Former Member

Hi All

I have started compression of a request into my cube, which has 84 million records. The job has now been running for more than 2 days.

Is it normal to take this long for this number of records?

I am not sure whether I should cancel the job. Let me know if there is somewhere I can check how much more time it will take, or whether the job has stopped working.

Thanks

DD

Accepted Solutions (0)

Answers (6)

Former Member

na

FCI
Active Contributor

Hello,

According to the following blog, you can speed up the compression time by deleting only the indexes on the E fact table.

Regards,

Fred

Former Member

Did you delete the index before compressing the Infocube?

Normally it takes more time if you compress the request without deleting the index from the Infocube.

--- Thanks...
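To make the index advice in the two answers above concrete, here is a rough sketch of the general drop-indexes-then-rebuild pattern in plain Python over a generic DB-API connection. It is purely illustrative and not how BW itself performs compression: the table and index names are made up, and in practice the index handling is done through the Infocube's manage/performance screens or a process chain step.

def compress_with_index_rebuild(conn):
    # Purely illustrative; not SAP's mechanism. Assumes a generic DB-API
    # connection and hypothetical table/index names.
    cur = conn.cursor()
    # 1. Drop the secondary index on the target (E) fact table so each
    #    inserted row does not also trigger an index update.
    cur.execute("DROP INDEX e_fact_dim_idx")
    # 2. Do the expensive bulk aggregation while no index exists.
    cur.execute(
        "INSERT INTO e_fact (dim_key, amount, quantity) "
        "SELECT dim_key, SUM(amount), SUM(quantity) FROM f_fact GROUP BY dim_key"
    )
    # 3. Rebuild the index once, after all rows are in place.
    cur.execute("CREATE INDEX e_fact_dim_idx ON e_fact (dim_key)")
    conn.commit()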

Former Member

Hi,

Compressing such a large number of records will take time, and it will put an additional burden on the system. Keep track of the job log and contact the Basis team about any discrepancy.

Also, it is better to limit each compression request to fewer than 2 million records; however, even that would take a considerable amount of time depending on the system load.

Regards,

Rahul

former_member182470
Active Contributor

Hi,

It doesn't look abnormal. During compression, the data in the F fact table is compressed into the E fact table. This process writes a huge amount of entries to the transaction log, so contact your Basis team to clear the log regularly; otherwise, the CPU load will be too high.
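As a toy illustration of that F-to-E move (plain Python, not SAP code; the rows and key figures are invented): compression drops the request ID and aggregates rows that share the same characteristic combination.

from collections import defaultdict

# Hypothetical F fact table rows: (request_id, characteristic key, key figure).
f_fact = [
    (1001, ("2024", "PROD_A"), 10.0),
    (1002, ("2024", "PROD_A"), 5.0),   # same characteristics, different request
    (1002, ("2024", "PROD_B"), 7.0),
]

e_fact = defaultdict(float)
for request_id, chars, amount in f_fact:
    e_fact[chars] += amount            # the request dimension disappears

print(dict(e_fact))  # {('2024', 'PROD_A'): 15.0, ('2024', 'PROD_B'): 7.0}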

Your Infocube will also be locked during the compression.

Hope it helps.

Regards,

Suman

Former Member

We had the same issue. There seems to be a threshold number of rows; once you pass it, the job goes from taking a short period of time to taking days to complete. I read somewhere that the recommended size of each compression run is fewer than 2 million records. We keep it under that and the runs complete very fast. You could set up a recursive process chain to compress a limited number of requests at a time.
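As a rough sketch of that slicing idea (illustrative Python only; in BW this would be a process chain with a compression step, and the request IDs and sizes below are invented), you would group the open requests into runs that each stay under the 2 million row mark:

MAX_ROWS_PER_RUN = 2_000_000  # suggested upper bound per compression run

def plan_compression_runs(request_sizes):
    # Group uncompressed requests into runs that stay under the threshold.
    # request_sizes is a list of (request_id, row_count) pairs.
    runs, current, current_rows = [], [], 0
    for req_id, rows in request_sizes:
        if current and current_rows + rows > MAX_ROWS_PER_RUN:
            runs.append(current)
            current, current_rows = [], 0
        current.append(req_id)
        current_rows += rows
    if current:
        runs.append(current)
    return runs

# plan_compression_runs([("REQ1", 1_500_000), ("REQ2", 900_000), ("REQ3", 400_000)])
# -> [["REQ1"], ["REQ2", "REQ3"]]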

I have killed the job before without noticing any corruption of data. I assume it does a rollback, but I cannot confirm that for certain.