
OLTP compression process

Former Member
0 Kudos

Hi guys,

I have just upgraded the Oracle 9i database under my SAP R/3 4.7 system to Oracle 11g.

I have tried the new OLTP compression feature a couple of times. Is there any rule of thumb on how to do this table compression (OLTP compression)?

I use BRTOOLS 7.20 patch 13 to do this step.

Thank you.

ardhian

http://sapbasis.wordpress.com

http://ardhian.wordpress.com

Accepted Solutions (1)


Former Member
0 Kudos

Hello

- focus on large tables with a lot of redundant data

- also compress the indexes when compressing the table

- in some special cases it may be worth using the sort option to achieve a better compression ratio

- if possible, test it first on the QAS system, and make sure you document which tables/indexes are compressed

- monitor the compression ratio (careful: a simple reorg might also shrink the table a bit) and the system performance

Also be aware that the OLTP compression feature needs to be properly licensed from Oracle (it is part of the Advanced Compression option).
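For reference, the basic DDL looks roughly like this (a sketch only; the object names are made up, and in an SAP system you would normally let BRSPACE drive the reorganization rather than issuing the statements manually):

```sql
-- Sketch: enabling OLTP compression on an existing table.
-- Object names are hypothetical; 11gR2 syntax shown
-- (on 11gR1 the clause is COMPRESS FOR ALL OPERATIONS).
ALTER TABLE sapsr3.zbigtab MOVE COMPRESS FOR OLTP;

-- The MOVE leaves the indexes UNUSABLE, so they must be rebuilt;
-- index (prefix) compression can be added at the same time:
ALTER INDEX sapsr3."ZBIGTAB~0" REBUILD COMPRESS;
```

See the SAP Notes on Advanced Compression for the exact BRTOOLS/BRSPACE procedure.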

Good luck, Michael

Bonus question: Should we change pctfree with compressed tables?

lbreddemann
Active Contributor
0 Kudos

>

> Bonus question: Should we change pctfree with compressed tables?

Why do you think this could be necessary?

How do you assume PCTFREE works differently with compressed tables?

regards,

Lars

Former Member
0 Kudos

Hey Lars

Because you do it when compressing indexes? -> No, but probably for the same reasons.

Because any update potentially needs much less space, you can reserve less free space.

Cheers Michael

stefan_koehler
Active Contributor
0 Kudos

Hello Michael,

Because any update potentially needs much less space, you can reserve less free space.

Yes, but consider that the compression itself is an asynchronous process, which kicks in roughly when the block fill reaches PCTFREE.

So you will still need the space for any updated rows until the compression process sets in.

Regards

Stefan

Former Member
0 Kudos

@Stefan/Lars:

Do you have a test case where pct_free=10 is better than pct_free=1?

Because I wasn't able to create one. I tested with EDIDC data and updates on status/upddat/updtime. When I, for example, update roughly 12,000 rows in a 2.5 million row table, I get:

SQL> select table_name, pct_free, num_rows, chain_cnt, avg_row_len
     from dba_tables where table_name like 'EDIDC_%';

TABLE_NAME                       PCT_FREE   NUM_ROWS  CHAIN_CNT AVG_ROW_LEN
------------------------------ ---------- ---------- ---------- -----------
EDIDC_COMP_10                          10    2500000      10496         199
EDIDC_COMP_1                            1    2500000      10818         199
EDIDC_BASE                             10    2500000          0         199

So both the pct_free=10 and the pct_free=1 compressed tables are totally screwed. I can supply all test statements if you want.
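If it helps, the shape of the test was roughly this (a simplified sketch, not the exact statements; the table names are illustrative, with EDIDC_BASE standing for a copy of the EDIDC data):

```sql
-- Simplified sketch of the test setup (names illustrative):
CREATE TABLE edidc_comp_1 PCTFREE 1 COMPRESS FOR OLTP
  AS SELECT * FROM edidc_base;

-- Update a subset of rows (roughly 12,000 of 2.5 million):
UPDATE edidc_comp_1 SET status = '30'
 WHERE MOD(docnum, 200) = 0;
COMMIT;

-- CHAIN_CNT in DBA_TABLES is only populated by ANALYZE,
-- not by DBMS_STATS:
ANALYZE TABLE edidc_comp_1 COMPUTE STATISTICS;
```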

@Ardhian:

Sorry for getting off track with the PCTFREE discussion, but I would like to seize the opportunity now that two top-class Oracle experts have jumped in here. Please insist if we are going too far; leave PCTFREE at 10 as everybody suggests. But let's not forget the initial question.

You should definitely read SAP Note [1436352 - Oracle 11g Advanced Compression for SAP Systems|https://service.sap.com/sap/support/notes/1436352]. It contains the basic advice needed to get started; also keep my input from the first post in mind.

Cheers Michael

lbreddemann
Active Contributor
0 Kudos

Hi Michael,

as I've currently really no time to do any tests, just a few thoughts:

- the row chaining is something I would expect to happen when we compress the block. Since the average row length stored in the block is reduced, we can pack the rows more densely, thereby moving rows within the block.

I would expect that this kind of row chaining/migration does not have a bad influence on performance, since it's an intra-block reference.

- to judge the compression effect, it would be interesting to see the number of used and free blocks for the tables.

regards,

Lars

Former Member
0 Kudos

You save pretty much the additional 9% of space. The segments did not grow when I updated the 12,000 rows (they did when I updated all rows, but were still 50% smaller than the uncompressed table).

SQL> select segment_name, blocks from dba_segments where segment_name like 'EDIDC_%';

SEGMENT_NAME                       BLOCKS
------------------------------ ----------
EDIDC_BASE                          71936
EDIDC_COMP_1                        14336
EDIDC_COMP_10                       15744

SQL> select table_name, avg_space, num_freelist_blocks
     from dba_tables where table_name like 'EDIDC_%';

TABLE_NAME                      AVG_SPACE NUM_FREELIST_BLOCKS
------------------------------ ---------- -------------------
EDIDC_COMP_10                         935                   0
EDIDC_COMP_1                          139                   0
EDIDC_BASE                            965                   0

stefan_koehler
Active Contributor
0 Kudos

Hello Michael,

Well, let's see if we can get a little deeper into this.

First I have to say that I have no real-life example, because we are still on 11.2.0.1 with most of our SAP databases.

The second thing I wonder about: you execute the query on NUM_FREELIST_BLOCKS, but I assume you are using ASSM, so I would recommend the PL/SQL procedure DBMS_SPACE.SPACE_USAGE to get a detailed view. The free statistics are a little different with ASSM (a block only gets marked as "free" once it drops into the next 25-percent bucket below PCTFREE).

http://download.oracle.com/docs/cd/E11882_01/appdev.112/e16760/d_space.htm#CACBDJIH
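A minimal call could look like this (a sketch; owner and segment name are placeholders):

```sql
-- Sketch: DBMS_SPACE.SPACE_USAGE for an ASSM-managed table.
-- FS1..FS4 count blocks with 0-25%, 25-50%, 50-75% and
-- 75-100% free space respectively.
SET SERVEROUTPUT ON
DECLARE
  l_unf  NUMBER; l_unfb  NUMBER;
  l_fs1  NUMBER; l_fs1b  NUMBER;
  l_fs2  NUMBER; l_fs2b  NUMBER;
  l_fs3  NUMBER; l_fs3b  NUMBER;
  l_fs4  NUMBER; l_fs4b  NUMBER;
  l_full NUMBER; l_fullb NUMBER;
BEGIN
  DBMS_SPACE.SPACE_USAGE(
    segment_owner      => 'SAPSR3',          -- placeholder
    segment_name       => 'EDIDC_COMP_1',    -- placeholder
    segment_type       => 'TABLE',
    unformatted_blocks => l_unf,  unformatted_bytes => l_unfb,
    fs1_blocks => l_fs1, fs1_bytes => l_fs1b,
    fs2_blocks => l_fs2, fs2_bytes => l_fs2b,
    fs3_blocks => l_fs3, fs3_bytes => l_fs3b,
    fs4_blocks => l_fs4, fs4_bytes => l_fs4b,
    full_blocks => l_full, full_bytes => l_fullb);
  DBMS_OUTPUT.PUT_LINE('full: '||l_full||'  fs1: '||l_fs1||
    '  fs2: '||l_fs2||'  fs3: '||l_fs3||'  fs4: '||l_fs4);
END;
/
```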

Your observation about the chained rows is pretty interesting, but I can imagine it depends on how you update the rows (status/upddat/updtime) and on when the asynchronous compression takes place. Imagine a lot of updates on status/upddat/updtime for rows located in the same block: many rows get decompressed and updated, they no longer all fit in the same block, and so they get chained before the next asynchronous compression takes place. As the updated columns in your example are pretty "large", it doesn't matter whether you use PCTFREE 1 or 10. I would assume that if you rerun the test with a column that contains only one letter (a really extreme example), you will see better results with PCTFREE 1. And if you update many rows in random blocks, I would also expect a different result.

Regards

Stefan

Answers (1)


lbreddemann
Active Contributor
0 Kudos

>

> I have couple times try the newest feature OLTP compression. Is there any rule of thumb on how to do this table compression (OLTP compression).

>

> I use BRTOOLS 7.20 patch 13 to do this step.

> ardhian

Hi Ardhian,

Not really sure what you're looking for... a rule of thumb? You mean which tables should be compressed and which not?

Have you read the SAP Notes about Advanced Compression in Oracle 11g? You should find this in there.

Or just tell us what exactly you want to know.

regards,

Lars