on 05-16-2011 2:09 PM
Hi All
I have started compression for a request in my cube, which has 84 million records. The job has now been running for more than 2 days.
Is it normal for it to take this long for this many records?
I am not sure whether I should cancel the job. Is there somewhere I can check how much longer it will take, or whether the job has stopped working?
Thanks
DD
Did you delete the indexes before compressing the InfoCube?
Compression normally takes much longer if you compress a request without first deleting the indexes from the InfoCube. A sketch of triggering the index drop follows.
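For illustration, a minimal ABAP sketch using the standard function module RSDU_INFOCUBE_INDEXES_DROP (the cube name ZSALES and the report name are placeholders; verify the function module's interface in SE37 on your release before using it):

```abap
REPORT z_drop_cube_indexes.

" Hypothetical InfoCube name - replace with your own cube.
CONSTANTS gc_cube TYPE rsinfocube VALUE 'ZSALES'.

" Drop the secondary indexes on the fact table before compression;
" recreate them afterwards with RSDU_INFOCUBE_INDEXES_CREATE.
CALL FUNCTION 'RSDU_INFOCUBE_INDEXES_DROP'
  EXPORTING
    i_infocube = gc_cube.

WRITE: / 'Index drop triggered for InfoCube', gc_cube.
```

The same steps can also be done without coding, via the Performance tab of the cube's Manage screen in RSA1 or via index processes in a process chain.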
--- Thanks...
Hi,
Compressing such a large number of records will take time, and it puts an additional load on the system. Keep track of the job log and contact the Basis team about any discrepancy; a sketch for checking the job status is below.
Also, it is better to limit each compression request to fewer than 2 million records. Even then, a run can take a considerable amount of time depending on the system load.
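Besides watching the job log in SM37, you can read the job status from the background job overview table TBTCO. A minimal sketch, assuming a job name like the one below (the name is hypothetical; look up the real name of your compression job in SM37 first):

```abap
REPORT z_check_compression_job.

" Hypothetical job name - check SM37 for the real name of your
" compression job; this value is only an illustration.
CONSTANTS gc_jobname TYPE btcjob VALUE 'BI_COMPRESS_ZSALES'.

DATA lv_status TYPE btcstatus.

" TBTCO is the background job overview table:
" R = running, F = finished, A = aborted.
SELECT SINGLE status FROM tbtco
  INTO lv_status
  WHERE jobname = gc_jobname.

IF sy-subrc = 0.
  WRITE: / 'Job', gc_jobname, 'status:', lv_status.
ELSE.
  WRITE: / 'No job found with name', gc_jobname.
ENDIF.
```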
Regards,
Rahul
Hi,
It doesn't look abnormal. During compression, the data in the F fact table is compressed into the E fact table, and this process writes a huge volume of entries to the transaction log. Ask your Basis team to clear the log regularly; otherwise the CPU load will become too high.
Your InfoCube will also be locked during the compression. One rough way to see whether the job is still making progress is to compare the F and E fact table row counts over time, as in the sketch below.
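A minimal sketch, assuming a custom cube named ZSALES with fact tables following the standard /BIC/F and /BIC/E naming convention (content cubes use /BI0/ instead). Note that counting rows on tables of this size is itself expensive, so run it sparingly:

```abap
REPORT z_compression_progress.

" Hypothetical InfoCube name - replace with your own cube.
CONSTANTS gc_cube TYPE string VALUE 'ZSALES'.

DATA: lv_ftab TYPE string,
      lv_etab TYPE string,
      lv_fcnt TYPE i,
      lv_ecnt TYPE i.

" Standard naming convention for custom cube fact tables.
CONCATENATE '/BIC/F' gc_cube INTO lv_ftab.
CONCATENATE '/BIC/E' gc_cube INTO lv_etab.

" Counting 84 million rows is expensive - run sparingly.
SELECT COUNT( * ) FROM (lv_ftab) INTO lv_fcnt.
SELECT COUNT( * ) FROM (lv_etab) INTO lv_ecnt.

" If the E count grows between runs, the compression
" is still moving; if nothing changes, check SM37/SM50.
WRITE: / 'F fact table rows:', lv_fcnt,
       / 'E fact table rows:', lv_ecnt.
```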
Hope it helps.....
Regards,
Suman
We had the same issue. There seems to be a threshold number of rows: once you pass it, the job goes from taking a short period of time to taking days to complete. I read somewhere that the recommended size for each compression run is under 2 million records. We keep our runs below that and they complete very quickly. You could set up a recursive process chain that compresses a limited number of requests at a time; see the sketch below.
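Once such a chain exists, it can be triggered from ABAP with the standard chain-start API RSPC_API_CHAIN_START. A minimal sketch, assuming a chain named ZPC_COMPRESS_ZSALES that compresses a limited number of requests per run (the chain name is hypothetical; the recursion itself is built inside the chain):

```abap
REPORT z_start_compress_chain.

" Hypothetical process chain that compresses a limited number
" of requests per run - build the recursion inside the chain.
CONSTANTS gc_chain TYPE rspc_chain VALUE 'ZPC_COMPRESS_ZSALES'.

DATA lv_logid TYPE rspc_logid.

" Standard API to trigger a process chain; monitor the run
" afterwards in transaction RSPC using the returned log ID.
CALL FUNCTION 'RSPC_API_CHAIN_START'
  EXPORTING
    i_chain = gc_chain
  IMPORTING
    e_logid = lv_logid.

WRITE: / 'Chain', gc_chain, 'started with log ID', lv_logid.
```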
I have killed the job before without noticing any data corruption. I assume it performs a rollback, but I cannot confirm that for certain.