on 07-23-2014 10:52 AM
Dear expert,
We are planning Oracle OLTP compression on an IS-U system. Could you tell me
why we should avoid OLTP compression on tables with massive update/insert activity?
What kind of performance impact could we see in the worst case?
Best regards,
Kate
Hi Kate,
For additional reading, refer to the SAP note and Oracle support document below:
1847870 - Oracle 11g: Advanced Compression - Known issues and solution
Master Note for OLTP Compression (Oracle DB/EM Support)
Regards,
Deepak Kori
Hi
When updating compressed data, Oracle has to read the block, uncompress it and apply the update.
The compression is then performed again later, asynchronously. This requires considerably more CPU than a simple update.
Another drawback is that compression on highly modified tables generates a major increase in redo/undo log volume. I experienced this on a DB where RFC tables had been compressed by mistake; the redo increase was over 15%.
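As a rough sketch of what this looks like in practice (owner and table names here are hypothetical, not from the thread), OLTP compression in 11g is enabled per table with COMPRESS FOR OLTP, and the dictionary shows which tables carry it:

```sql
-- Enable OLTP compression on an (assumed) existing table; with this
-- setting Oracle compresses a block in batch when it fills up, rather
-- than compressing each row as it is inserted or updated.
ALTER TABLE sapsr3.zmy_table COMPRESS FOR OLTP;

-- Check which tables in the (assumed) SAP schema are compressed, and how
SELECT table_name, compression, compress_for
  FROM dba_tables
 WHERE owner = 'SAPSR3'
   AND compression = 'ENABLED';
```

Note that in 12c the same option was renamed ROW STORE COMPRESS ADVANCED; COMPRESS FOR OLTP is the 11g syntax.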
Check the remark at the end of Jonathan Lewis post.
Regards
http://allthingsoracle.com/compression-in-oracle-part-3-oltp-compression/
Possibly this is all part of the trade-off that helps to explain why Oracle doesn't end up compressing the last few rows that get inserted into the block.
The effect can be investigated fairly easily by inserting about 250 rows into the empty table - we see Oracle inserting 90 rows, then generating a lot of undo and redo as it compresses those 90 rows; then we insert another 40 rows, then generate a lot of undo and redo compressing the 130 rows. Ultimately, by the time the block is full we have processed the first 90 rows into the undo and redo four or five times.
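The experiment described above can be reproduced with a small sketch along these lines (table name, padding width and row count are illustrative assumptions; run the same script against a NOCOMPRESS copy of the table to see the difference in redo):

```sql
-- Hypothetical test table with OLTP compression enabled
CREATE TABLE t_oltp (id NUMBER, pad VARCHAR2(100)) COMPRESS FOR OLTP;

-- Note the session's redo volume before the inserts
SELECT s.value AS redo_before
  FROM v$mystat s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';

-- Insert ~250 rows; each time the block fills, Oracle recompresses it,
-- regenerating undo and redo for rows that were already written
INSERT INTO t_oltp
SELECT level, RPAD('x', 100, 'x')
  FROM dual
CONNECT BY level <= 250;
COMMIT;

-- Re-run the 'redo size' query; the delta should be noticeably larger
-- than for an identical uncompressed table receiving the same inserts
```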
Hi Kate,
You wrote: "If you don't know how to answer my question, please leave it to others."
Everyone contributing here is a volunteer; they spend their own time helping people with their experience, guidance and solutions.
You can wait for a better solution if you are not satisfied with the responses you are getting.
I suggest you read and follow the RoE.
Coming to your question - Refer to Yves response.
Massive insert/update means heavy DML activity, so inserting into or updating a compressed table takes longer. The ACO-related notes advise against compressing tables with massive insert/update activity, but ultimately it is up to you whether your system can bear the extra CPU load.
We compressed all tables (other than the recommended exclusions such as cluster, pool and RFC tables) and our system is able to handle the additional 5-10% CPU load. We had issues with tables having more than 255 columns, for which we disabled OLTP compression.
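A hedged sketch of how such tables could be located and excluded (the table name in the ALTER is hypothetical; reorganization commands depend on your landscape):

```sql
-- Find compressed tables with more than 255 columns; rows in such
-- tables are split into multiple row pieces, which makes them poor
-- candidates for OLTP compression
SELECT c.owner, c.table_name, COUNT(*) AS col_cnt
  FROM dba_tab_columns c
  JOIN dba_tables t
    ON t.owner = c.owner
   AND t.table_name = c.table_name
 WHERE t.compression = 'ENABLED'
 GROUP BY c.owner, c.table_name
HAVING COUNT(*) > 255;

-- Disable compression for such a table (hypothetical name). Existing
-- blocks stay compressed until the table is reorganized, e.g. with
-- ALTER TABLE ... MOVE, or with BRSPACE in an SAP system.
ALTER TABLE sapsr3.zwide_table NOCOMPRESS;
```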
Regards,
Nick Loy
Hi Kate ,
CPU utilization can be considerably higher with OLTP compression activated.
RAM utilization may also increase, which can make transactions very slow or cause ABAP dumps.
Regards,
Deepak Kori