Why should OLTP compression be avoided on tables with massive update/insert activity?

Former Member
0 Kudos

Dear expert,

We are planning Oracle OLTP compression on an IS-U system. Could you tell me:

Why should OLTP compression be avoided on tables with massive update/insert activity?

What kind of impact on performance can be expected in the worst case?

Best regards,

Kate

Accepted Solutions (1)


former_member188883
Active Contributor
0 Kudos

Hi Kate,

For additional reading, refer to the SAP Note and Oracle link below:

1847870 - Oracle 11g: Advanced Compression - Known issues and solution

Master Note for OLTP Compression (Oracle DB/EM Support)

Regards,

Deepak Kori



ACE-SAP
Active Contributor
0 Kudos

Hi

When updating compressed data, Oracle has to read it, uncompress it, and update it.

The compression is then performed again later, asynchronously. This requires considerably more CPU than a simple update.

Another drawback is that compression on heavily modified tables generates a major increase in redo/undo log generation. I experienced this on a DB where the RFC tables were compressed by mistake; the redo increase was over 15%.
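The extra redo described above can be observed directly on a sandbox system. A minimal sketch, assuming Oracle 11g, a test schema, and SELECT privileges on V$MYSTAT / V$STATNAME (the table and column names are made up for illustration):

```sql
-- Two identical test tables, one with OLTP compression (11g syntax)
CREATE TABLE t_plain (id NUMBER, pad VARCHAR2(100));
CREATE TABLE t_oltp  (id NUMBER, pad VARCHAR2(100)) COMPRESS FOR OLTP;

-- Snapshot this session's cumulative redo before each load
SELECT s.value AS redo_bytes
  FROM v$mystat s
  JOIN v$statname n ON n.statistic# = s.statistic#
 WHERE n.name = 'redo size';

INSERT INTO t_oltp
SELECT level, RPAD('x', 100, 'x')
  FROM dual CONNECT BY level <= 100000;
COMMIT;

-- Re-run the v$mystat query and compare the delta against the same
-- load into t_plain; the compressed table typically generates
-- noticeably more redo because each block is re-packed as it fills.
```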

Check the remark at the end of Jonathan Lewis's post:

http://allthingsoracle.com/compression-in-oracle-part-3-oltp-compression/

Possibly this is all part of the trade-off that helps to explain why Oracle doesn't end up compressing the last few rows that get inserted into the block.

The effect can be investigated fairly easily by inserting about 250 rows into the empty table - we see Oracle inserting 90 rows, then generating a lot of undo and redo as it compresses those 90 rows; then we insert another 40 rows, then generate a lot of undo and redo compressing the 130 rows. Ultimately, by the time the block is full we have processed the first 90 rows into the undo and redo four or five times.

Regards

Former Member
0 Kudos

Thank you Deepak,

Though I'm still curious about what "additional work" is performed when writing data.

But as you know, the largest tables in an OLTP system are usually DML-intensive, so ruling out all tables with massive update/insert activity is really hard to judge.

Kate


Former Member
0 Kudos

Hi Yves K.,

This seems to be the problem I'm now experiencing...

I also found that blog a few minutes ago.

Thank you very much

Kate

Former Member
0 Kudos

Hi Kate,


"If you don't know how to answer my question, please leave it to others."

Everyone who is contributing here is a volunteer; they spend their own time to help people with their experience, guidance, and solutions.

You can wait for a proper solution if you are not satisfied with the responses you are getting.

I suggest you read and follow the RoE.

Coming to your question - refer to Yves' response.

Massive insert/update means heavy DML activity, so inserting into and updating a table in compressed format takes time. The ACO-related notes suggest not compressing tables with massive inserts/updates, but again it's up to you whether your system is ready to bear the extra CPU load.

We compressed all tables (other than the recommended exclusions like cluster, pool, and RFC tables) and our system is able to handle the additional 5-10% of CPU load. We had some issues with tables having more than 255 columns, where we disabled OLTP compression.
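For the >255-column case, the affected tables can be listed from the data dictionary. A sketch assuming DBA view access (the table name in the ALTER statements is only a placeholder):

```sql
-- Find compressed tables that exceed 255 columns (intra-block row
-- chaining makes OLTP compression problematic for these)
SELECT t.owner, t.table_name, COUNT(c.column_name) AS col_count
  FROM dba_tables t
  JOIN dba_tab_columns c
    ON c.owner = t.owner AND c.table_name = t.table_name
 WHERE t.compression = 'ENABLED'
 GROUP BY t.owner, t.table_name
HAVING COUNT(c.column_name) > 255;

-- Disabling compression only affects future blocks; existing blocks
-- must be rebuilt (e.g. with ALTER TABLE ... MOVE) to decompress them
ALTER TABLE sapsr3.some_wide_table NOCOMPRESS;
ALTER TABLE sapsr3.some_wide_table MOVE;
```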

Regards,

Nick Loy

Former Member
0 Kudos

Hi Nick,

Thanks for pointing it out.

I also feel the reply was really not proper... but I can't find a way to modify or delete it.

And thanks for the case sharing.

Best regards,

Kate

Former Member
0 Kudos

Hi Deepak,

Thank you for the additional information.

Best regards,

Kate

ACE-SAP
Active Contributor
0 Kudos

Hi

Two graphics showed the impact of compression when activated on tables with heavy update activity (in that case: TST03, CDCLS, APQD, ARFCSDATA):

Redo log: +35%

CPU: +42%

Regards

Answers (1)


former_member188883
Active Contributor
0 Kudos

Hi Kate ,

There can be significant additional CPU utilization with OLTP compression activated.

Additionally, RAM utilization may also increase, resulting in very slow transactions or in ABAP dumps.

Regards,

Deepak Kori

Former Member
0 Kudos

Hi Deepak,

But why would huge CPU utilization occur with OLTP compression?

That must be linked to how OLTP compression handles UPDATEs, and this is the point I want someone to clarify.

Best regards

Kate