
Bad idea: compressing table TST03

Former Member

To make a long story short: It is obviously a bad idea to activate compression for table TST03. It wasn't my idea and I wonder how/why sapinst decided to compress table TST03 during import, because in a similar migration the table TST03 ended up uncompressed (just as one would expect).

Anyway, the symptoms were quite confusing. Performance was really horrible, and most SAP work processes were blocked by INSERTs on table TST03. When analyzing the problematic SQL statement (SQL_ID 9y1g9p15fmtja)

INSERT INTO "TST03" VALUES( :A0 , :A1 , :A2 , :A3 , :A4 , :A5 )

we saw that most of the time was spent on CPU and db file sequential read:


PLAN_HASH  ACTION                                                 SAMPLES  PLAN_PCT  TOTAL_PCT
0          CPU                                                      35654     57.00      57.00
0          db file sequential read                                  20787     33.23      33.23
0          read by other session                                     3270      5.23       5.23
0          latch: cache buffers chains                               2505      4.00       4.00
0          resmgr:cpu quantum                                         253      0.40       0.40
0          db file scattered read                                      63      0.10       0.10
0          enq: TX - contention (4 / Share)                            11      0.02       0.02
0          latch free                                                   7      0.01       0.01
0          buffer busy waits                                            4      0.01       0.01
0          enq: HW - contention (6 / Exclusive)                         1      0.00       0.00
0          wait list latch free                                         1      0.00       0.00
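A wait profile like the one above can be approximated directly from ASH. This is a minimal sketch, not the exact query we used; it assumes access to V$ACTIVE_SESSION_HISTORY (Diagnostics Pack) and uses the SQL_ID from above:

```sql
-- Sketch: approximate the wait profile of one SQL_ID from ASH samples.
-- Rows with EVENT = NULL are sampled on CPU, so map them to 'CPU'.
SELECT NVL(event, 'CPU')                                   AS action,
       COUNT(*)                                            AS samples,
       ROUND(100 * RATIO_TO_REPORT(COUNT(*)) OVER (), 2)   AS total_pct
  FROM v$active_session_history
 WHERE sql_id = '9y1g9p15fmtja'
 GROUP BY NVL(event, 'CPU')
 ORDER BY samples DESC;
```

For longer time ranges, DBA_HIST_ACTIVE_SESS_HISTORY can be queried the same way, keeping in mind that it only retains every tenth sample.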

The execution plan was as simple as possible:


0 INSERT STATEMENT (Plan Hash: 0, Child: 0, Optimizer: "ALL_ROWS")        Total costs: 1
    1   LOAD TABLE CONVENTIONAL

The segment statistics showed that there were huge amounts of logical reads on table TST03:


SEGMENT_NAME    LOG_READS    PHYS_READS   DIR_READS  PHYS_WRITES  DIR_WRITES  ITL_WAITS  ROW_LOCKS  BUFFER_BUSY  BLK_CHANGES  SEGMENT_SCANS

TST03         18973176032       2105204           0       716868           0          0          0         3008       990208              1
TST03~0           2722720          9392           0         3652           0          0         74           12       503856              0

SEGMENT STATISTICS (DBA_HIST_SEG_STAT):

SEGMENT_NAME    LOG_READS    PHYS_READS   DIR_READS  PHYS_WRITES  DIR_WRITES  ITL_WAITS  ROW_LOCKS  BUFFER_BUSY  BLK_CHANGES  SEGMENT_SCANS

TST03         63116518272     132236983    10540550      5416009           0          0          0       131637     12342992              3
TST03~0           3158912         93258           0        18469        9571          0         68           76       575088              0

During a downtime we exported table TST03 and reimported it as an uncompressed table. This fixed the performance problem. What I don't understand:

Why were there performance problems in the first place? I understood Advanced Compression to work as a background task: when an SAP transaction inserts a row into a compressed table, the row is inserted UNCOMPRESSED, and only after some time does a background process check uncompressed blocks of compressed tables and compress them. So there should not have been any difference for the inserts on table TST03.

Regards,

Mark

Accepted Solutions (1)


stefan_koehler
Active Contributor

Hello Mark,

Klaus Reimers also wrote a nice article about that topic (in German): http://www.ordix.de/ORDIXNews/2_2009/Datenbanken/oracle_11g_teil_vi.html

It is about Oracle 11g R1, but that doesn't matter in this case. As Michael already mentioned, TST03 has a LOB column (DCONTENT), and I would also guess that the issue here involves SecureFiles and compression (check the article for details).

If you are able to reproduce this case, a wait event trace of an INSERT would be nice; Oracle has improved its tracing technology for LOB operations, so we could be sure where the time is spent.

http://blog.tanelpoder.com/2011/03/20/lobread-sql-trace-entry-in-oracle-11-2/

Regards

Stefan

Former Member

Sorry, I forgot to include a very important fact in my description:

This is the import of an R/3 4.70 Enterprise 1.10 system, so there are no LOB columns. Table TST03 has a LONG RAW column.

I tried to perform an SQL trace of the operation, but didn't succeed in turning on tracing before the INSERTs on TST03 started.
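One way to catch the statement next time would be to arm the trace on the affected work process session before the INSERTs begin. This is a sketch using DBMS_MONITOR; the SID and SERIAL# values are placeholders to be looked up in V$SESSION first:

```sql
-- Sketch: enable a wait-event (10046-style) trace for one session.
-- 123 / 456 are placeholder SID / SERIAL# values from V$SESSION.
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);

-- ... let the INSERTs on TST03 run for a while ...

EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 123, serial_num => 456);
```

The resulting trace file lands in the directory shown by the USER_DUMP_DEST parameter and can be formatted with tkprof.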

stefan_koehler
Active Contributor

Hello Mark,

very interesting topic.

I have performed a first test case, and the issue is reproducible. The test was run on Oracle 11.2.0.2 on Linux x86_64.

Unfortunately, I don't have the time right now for a deeper analysis ...

Here is the testcase:


-- Drop all tables
DROP TABLE "SAPSR3"."TEST_NOCOMP";
DROP TABLE "SAPSR3"."TEST_COMP";

-- Create all tables
CREATE TABLE "SAPSR3"."TEST_NOCOMP" (
   "DCLIENT"  VARCHAR2(3)  DEFAULT '000' NOT NULL ENABLE,
   "DNAME"    VARCHAR2(20) DEFAULT ' '   NOT NULL ENABLE,
   "DPART"    NUMBER(5,0)  DEFAULT 0     NOT NULL ENABLE,
   "DROWNO"   NUMBER(10,0) DEFAULT 0     NOT NULL ENABLE,
   "DDATALEN" NUMBER(5,0)  DEFAULT 0     NOT NULL ENABLE,
   "DCONTENT" LONG RAW                   NOT NULL ENABLE
)
TABLESPACE "PSAPSR3";


CREATE TABLE "SAPSR3"."TEST_COMP" (
   "DCLIENT"  VARCHAR2(3)  DEFAULT '000' NOT NULL ENABLE,
   "DNAME"    VARCHAR2(20) DEFAULT ' '   NOT NULL ENABLE,
   "DPART"    NUMBER(5,0)  DEFAULT 0     NOT NULL ENABLE,
   "DROWNO"   NUMBER(10,0) DEFAULT 0     NOT NULL ENABLE,
   "DDATALEN" NUMBER(5,0)  DEFAULT 0     NOT NULL ENABLE,
   "DCONTENT" LONG RAW                   NOT NULL ENABLE
)
TABLESPACE "PSAPSR3"
COMPRESS FOR ALL OPERATIONS;


-- Check SELECTs
SELECT OWNER, TABLE_NAME, TABLESPACE_NAME, COMPRESSION, COMPRESS_FOR FROM DBA_TABLES WHERE TABLE_NAME IN ('TEST_NOCOMP', 'TEST_COMP');

-- Perform INSERTs
EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

BEGIN
   FOR count IN 1 .. 100000 LOOP
      INSERT INTO "SAPSR3"."TEST_NOCOMP" VALUES ('010', 'DTA030723121120_0065', 1, 1, 572, '3C0248443A31312030383032323720202020202030303030303120202020200D0A4B433A3230303030302030303030303020435A4B0D0A55443A203939393939393920435A4B204B4F4E544F0D0A41443A303030303030303030300D0A44493A42524F534520545255544E4F56204155544F4D4F0D0A202020564F4C414E4F56534B41203531382020202020202020202020202020202020202020200D0A20202035343120303120545255544E4F562020202020202020202020202020202020202020200D0A554B3A313920323135353139303230370D0A414B3A303030303030303030300D0A4B493A50524F4D4120435A2C20532E522E4F2E202020200D0A2020204D454C43414E59203338202020202020202020202020202020202020202020202020200D0A202020444F425255534B412020202020202020202020202020202020202020202020202020200D0A45433A202020200D0A5A443A303030303030303030300D0A5A4B3A303030303030303030300D0A41563A444F432E203030373331353030303030303034323030392020202020202020202020200D0A2020202E202020202020202020202020202020202020202020202020202020202020202020200D0A20202030303039303732303037202020202020202020202020202020202020202020202020200D0A20202050524F4D4120435A2C20532E522E4F2E202020202020202020202020202020202020200D0A53313A303030303030303031203230303030300D0A53333A303030303030303030203030300D0A');
      INSERT INTO "SAPSR3"."TEST_COMP" VALUES ('010', 'DTA030723121120_0065', 1, 1, 572, '3C0248443A31312030383032323720202020202030303030303120202020200D0A4B433A3230303030302030303030303020435A4B0D0A55443A203939393939393920435A4B204B4F4E544F0D0A41443A303030303030303030300D0A44493A42524F534520545255544E4F56204155544F4D4F0D0A202020564F4C414E4F56534B41203531382020202020202020202020202020202020202020200D0A20202035343120303120545255544E4F562020202020202020202020202020202020202020200D0A554B3A313920323135353139303230370D0A414B3A303030303030303030300D0A4B493A50524F4D4120435A2C20532E522E4F2E202020200D0A2020204D454C43414E59203338202020202020202020202020202020202020202020202020200D0A202020444F425255534B412020202020202020202020202020202020202020202020202020200D0A45433A202020200D0A5A443A303030303030303030300D0A5A4B3A303030303030303030300D0A41563A444F432E203030373331353030303030303034323030392020202020202020202020200D0A2020202E202020202020202020202020202020202020202020202020202020202020202020200D0A20202030303039303732303037202020202020202020202020202020202020202020202020200D0A20202050524F4D4120435A2C20532E522E4F2E202020202020202020202020202020202020200D0A53313A303030303030303031203230303030300D0A53333A303030303030303030203030300D0A');
      COMMIT;
   END LOOP;
END;
/

EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

SELECT SQL_ID, SQL_TEXT FROM V$SQL WHERE SQL_TEXT LIKE '%INSERT%TEST_%';

It seems the issue really depends on the combination of the LONG RAW column and OLTP compression. The CPU load increased by a factor of 4 and the runtime by a factor of 3. But as you can see, I performed a huge number of INSERTs (100,000); I cannot imagine that you have that load on your system, so maybe it also depends on the length of the LONG RAW value.
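The two cursors can also be compared directly from the shared pool, without waiting for the AWR report. A sketch extending the V$SQL query from the test case above:

```sql
-- Sketch: compare CPU time, elapsed time and buffer gets of the two
-- test INSERT statements directly from V$SQL (times are in microseconds).
SELECT sql_id,
       executions,
       ROUND(cpu_time     / 1e6, 1) AS cpu_sec,
       ROUND(elapsed_time / 1e6, 1) AS elapsed_sec,
       buffer_gets
  FROM v$sql
 WHERE sql_text LIKE 'INSERT INTO "SAPSR3"."TEST_%';
```

The BUFFER_GETS column in particular should mirror the excessive logical reads seen in the segment statistics of the compressed table.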

Regards

Stefan

Former Member

I think the bad idea is to still have the LONG data type with 11g (I have to add the LONG-to-LOB migration to the 11g upgrade checklist...).

Cheers Michael

Former Member

Hello Stefan,

thanks a lot for the test case! I ran it on my system as well and analyzed the results with the SAP tool "SQL_SQL_ID_DataCollector_11g" from note 1438410. The compressed table also compresses the LONG RAW column, so TEST_NOCOMP is 80 MB in size while TEST_COMP is only 2 MB and resides fully in the buffer cache. Accesses to TEST_COMP are therefore largely logical reads, while TEST_NOCOMP incurs many more physical reads. This alone can explain the performance difference between compressed and uncompressed LONG RAW.

I will keep in mind that a compressed TST03 with a LOB column should be OK; I couldn't find performance problems with that combination. A compressed TST03 with a LONG RAW column should be avoided, but I don't know what causes the excessive logical reads there.
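The size difference between the two test tables can be checked with a simple query against DBA_SEGMENTS; a sketch, using the segment names from the test case:

```sql
-- Sketch: compare the allocated size of the compressed and
-- uncompressed test tables (segment names from the test case above).
SELECT segment_name,
       ROUND(bytes / 1024 / 1024, 1) AS size_mb,
       blocks
  FROM dba_segments
 WHERE owner = 'SAPSR3'
   AND segment_name IN ('TEST_NOCOMP', 'TEST_COMP');
```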

Regards,

Mark

Answers (1)


Former Member

and only after some time some background process checks uncompressed blocks of compressed tables and compresses them.

Nope, the block is compressed on the fly every time PCTFREE is reached ([here|http://www.rittmanmead.com/2008/09/testing-advanced-oltp-compression-in-oracle-11g/] with a nice picture in the middle). More importantly, TST03 has a LOB column, so SecureFiles compression might be the thing to look at. Do you know the SecureFiles storage clause, if there was one? It is possible to compress with different degrees (e.g. COMPRESS HIGH), and it is also possible to use the deduplication feature.

If you have still access to a compressed example, then you can use the dbms_metadata.get_ddl procedure to check it:

SQL> set long 5000
SQL> select dbms_metadata.get_ddl('TABLE','TST03','SAPSR3') from dual; 

Cheers Michael