
Support for "Advanced Compression Option (ACO)"

Hi everyone,

As "SAP on Oracle Development Update 02/2008" (https://www.sdn.sap.com/irj/sdn/go/portal/prtroot/docs/library/uuid/a07754d4-edd4-2910-0683-c9473aaeca51) describes, SAP plans to support Oracle 11.2 in Q3/2009.

I'm missing a statement on whether SAP plans a separate certification of ACO, or whether support for the "Advanced Compression Option (ACO)" is included in the support for 11.2.

Any ideas about that topic?

regards

Jörg

Former Member replied

Hello Jörg

Obviously, 11g support for SAP is not up for discussion in the near future.

ACO: Any ideas about that topic?

I assume SAP will not support ACO from the beginning, but might do so after a while, as they did with index compression; see this thread for more info: . But is it useful at all? I did a quick test on an 11g system (version 11.1.0.6). Here is what I did:

- rebuild table MARC (this was to eliminate any "fragmentation" due to deleted rows)

- create table MYMARC compress for all operations as select * from sapr3.marc;

- new stats for MARC and MYMARC
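In SQL, the three steps above could look like this. This is a sketch, not the exact commands from the test: the schema name sapr3 is taken from the CREATE TABLE statement, and COMPRESS FOR ALL OPERATIONS is the 11.1 syntax (11.2 renames it to COMPRESS FOR OLTP):

```sql
-- 1) Rebuild MARC to eliminate fragmentation from deleted rows.
--    Note: ALTER TABLE ... MOVE leaves the table's indexes UNUSABLE,
--    so they would need an ALTER INDEX ... REBUILD afterwards.
ALTER TABLE sapr3.marc MOVE;

-- 2) Create an OLTP-compressed copy of MARC.
CREATE TABLE sapr3.mymarc COMPRESS FOR ALL OPERATIONS
  AS SELECT * FROM sapr3.marc;

-- 3) Gather fresh optimizer statistics for both tables.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPR3', tabname => 'MARC');
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPR3', tabname => 'MYMARC');
END;
/
```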

Findings:

TABLE_NAME                         BLOCKS   NUM_ROWS AVG_ROW_LEN
------------------------------ ---------- ---------- -----------
MYMARC                              13150    5866886         551
MARC                               452257    5866886         551

So with compression, the table size decreased by a factor of about 34 (452257 / 13150 blocks)!
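The BLOCKS, NUM_ROWS, and AVG_ROW_LEN figures come from the optimizer statistics; assuming the tables live in schema SAPR3, they can be queried like this:

```sql
-- Read the statistics gathered by DBMS_STATS for both tables
SELECT table_name, blocks, num_rows, avg_row_len
  FROM dba_tables
 WHERE owner = 'SAPR3'
   AND table_name IN ('MARC', 'MYMARC')
 ORDER BY blocks;
```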

Further comments / questions are welcome - Best regards, Michael



Edit: I was wondering why the compression was so efficient, and did another test. In the version above, both tables were sorted by their primary keys. I wanted a comparison with completely randomly ordered table data, since this might hurt compression: there will be fewer similar values in the same block. To disorder the data I used dbms_random.value:

SQL> create table MYMARC_unsorted compress for all operations
      as select * from sapr3.marc order by dbms_random.value;

The result shows a less efficient compression factor of about 12, so in the real world we can expect a compression factor for this table somewhere between these two cases. But even 12 is still amazing.

TABLE_NAME                         BLOCKS   NUM_ROWS AVG_ROW_LEN
------------------------------ ---------- ---------- -----------
MARC                               452257    5866886         551
MYMARC_unsorted                     37626    5866886         551
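The compression factors can also be computed directly from the statistics; a sketch (schema SAPR3 assumed) that divides the block count of the original table by that of each compressed copy:

```sql
-- Compression factor = blocks of uncompressed MARC / blocks of the compressed copy
SELECT c.table_name,
       ROUND(m.blocks / c.blocks, 1) AS compression_factor
  FROM dba_tables m
  JOIN dba_tables c
    ON c.owner = m.owner
 WHERE m.owner = 'SAPR3'
   AND m.table_name = 'MARC'
   AND c.table_name IN ('MYMARC', 'MYMARC_UNSORTED');
```

With the block counts listed above, this gives about 34.4 for the sorted copy and 12.0 for the random-order copy.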

Edited by: mho on May 27, 2008 12:54 PM
