
Oracle Advanced Compression - Performance

Former Member
0 Kudos

Hi all,

Three Performance-related points in this thread:

1. Index Compression:

I have read in several threads and in SAP Note 1436352 (Oracle Database 11g Advanced Compression for SAP Systems) that the indexes should be compressed before the table.

My question is: can this impact performance after compression?

I mean, if I follow this sequence, would I have any performance problems:

- Table online reorg with brspace and option -c ctab:

brspace -u / -f tbreorg -o <Owner> -t <Table> -c ctab -p 4

- Index rebuild with brspace and option -c cind:

brspace -u / -c force -f idrebuild -o <Owner> -i <Index> -c cind -p 4
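
For reference, a quick check I would run afterwards (owner and table name are placeholders) to see which compression attributes the table and its indexes ended up with:

SELECT table_name, compression, compress_for
  FROM dba_tables
 WHERE owner = '&owner' AND table_name = '&table_name';

SELECT index_name, compression
  FROM dba_indexes
 WHERE owner = '&owner' AND table_name = '&table_name';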

2. Number of columns:

I noticed that there is a limitation of 255 columns for OLTP compression, but my question is whether the number of columns could negatively impact performance when accessing compressed tables (read operations). In my case the table has 225 columns.
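
(For reference, the column count can be confirmed quickly from the dictionary; owner and table name are placeholders:)

SELECT COUNT(*) AS column_count
  FROM dba_tab_columns
 WHERE owner = '&owner' AND table_name = '&table_name';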

3. Poor performance in Read Operations:

I have compressed several GL tables in our test environment, and the runtime of the same SELECT statement is around 30% slower after compression. The compression ratio was very good.

As you can imagine, we applied the compression following point 1 of this thread, and the table has 225 columns.

Is there any factor that can negatively impact read operations on compressed tables?

My Oracle version is 11.2.0.3.0 and the SAP release is 7.01.

Thanks in advance.

Accepted Solutions (1)

fidel_vales
Employee
0 Kudos

Hello,

If you are going to compress a table, you can compress the indexes at the same time and save a step:

brspace -u / -f tbreorg -o <Owner> -t <Table> -c ctabind -p 4

If your table had 255 columns you should NOT compress it. It is a limitation of how Oracle handles such tables: the likelihood of performance problems due to chained/migrated rows is much higher. It is recommended not to compress tables with more than 250 columns (or to pick an even more conservative threshold) in order to avoid problems if more columns are added to the table in the future.
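
If you want a rough indication of chained/migrated row access, a simple sketch is to watch the "table fetch continued row" statistic before and after compression (CHAIN_CNT in DBA_TABLES is only filled by ANALYZE, not by DBMS_STATS, so it is usually empty on SAP systems):

SELECT name, value
  FROM v$sysstat
 WHERE name = 'table fetch continued row';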

For the 3rd point I would have to guess, and I do not like guessing. There could be several reasons, and they might be related to the reorganization itself (e.g. the clustering factor is different) rather than to the compression.

It is also important, in order to avoid known bugs in compression, to have the latest SBP (February 2014) installed, but those bugs should not affect SELECT much; they mostly affect INSERT (in general).

Answers (2)

Former Member
0 Kudos

Hi,

On point #3 - I trust it is not a clustered table?

Compression impacts updates/inserts badly if you compress tables which have >255 columns or if you compress clustered tables. I don't think compression is the direct cause of the impact on the SELECT statement in your case (as said above, it needs to be checked in detail).

Regards,

Nick Loy

Former Member
0 Kudos

Hi Nick,

No, it's not a clustered table; it's a transparent table.

Kind Regards

Former Member
0 Kudos

Hi,

On transparent tables it's a known issue; take a look at the note below, which says not to compress such transparent tables.

We observed delayed inserts/updates on a few of the transparent tables after compression.

The solution we applied was disabling table compression for newly allocated blocks. You can mark the table as uncompressed without losing the compression benefits: only newly allocated blocks will be in uncompressed format, and you will get your performance back.

1847870 - Oracle 11g: Advanced Compression - Known issues and solution
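
For illustration, the workaround typically boils down to something like this sketch (owner/table are placeholders; check the note above for the exact recommendation before changing anything):

-- existing blocks keep their compressed format, only newly allocated blocks are uncompressed
ALTER TABLE &owner..&table_name NOCOMPRESS;

-- a full decompression would additionally need a reorganization, e.g.
-- ALTER TABLE &owner..&table_name MOVE NOCOMPRESS;
-- (a MOVE marks the indexes UNUSABLE, so they must be rebuilt afterwards)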

Regards,

Nick Loy

Former Member
0 Kudos

Hi,

But this case is neither an insert nor an update operation; it's a SELECT statement.

For SELECT statements it makes no sense to see a different behavior just from marking the table as uncompressed without a table reorg, right?

Kind Regards

Former Member
0 Kudos

Yes,

It shouldn't behave like this!

Can you just "uncompress" the table and check whether the SELECT is faster or not?

Regards,

Nick Loy

Former Member
0 Kudos

Hi Nick,

I have deactivated compression and run the query again, but there is no difference in CPU time.

Kind Regards

Former Member
0 Kudos

Hi all,

Some more information. Here are the plan statistics before and after compression for the statement:

Before Compression:

Stat Name                    Statement Total   Per Execution   % Snap Total
Elapsed Time (ms)                     16,175       16,174.77           1.19
CPU Time (ms)                         16,160       16,160.00           5.36
Executions                                 1
Buffer Gets                          757,549      757,549.00           9.84
Disk Reads                                 0            0.00           0.00
Parse Calls                                0            0.00           0.00
Rows                                 393,062      393,062.00
User I/O Wait Time (ms)                    0
Cluster Wait Time (ms)                     0
Application Wait Time (ms)                 0
Concurrency Wait Time (ms)                 0
Invalidations                              0
Version Count                              1
Sharable Mem (KB)                        180

After Compression:

Stat Name                    Statement Total   Per Execution   % Snap Total
Elapsed Time (ms)                     38,532       38,532.47           3.33
CPU Time (ms)                         38,460       38,460.00           7.25
Executions                                 1
Buffer Gets                          263,628      263,628.00           0.53
Disk Reads                                 0            0.00           0.00
Parse Calls                                0            0.00           0.00
Rows                                 393,062      393,062.00
User I/O Wait Time (ms)                    0
Cluster Wait Time (ms)                     0
Application Wait Time (ms)                 0
Concurrency Wait Time (ms)                 0
Invalidations                              0
Version Count                              1
Sharable Mem (KB)                        168

As you can see, after compression the number of buffer gets is much lower, as expected (757,549 vs. 263,628), but the CPU time is much higher after compression (16,160 ms vs. 38,460 ms).

The DB host has enough CPU capacity; in fact, no other transaction was running at the time.

I have checked the Oracle bugs 13641076 and 14762511, but they are for INSERT and UPDATE operations, not for SELECT.

Any idea?

Regards

ACE-SAP
Active Contributor
0 Kudos

Hello

On a query like the one for which you provided data, compression can only have a negative impact. It seems that all the data is in the cache (Disk Reads = 0) and Elapsed Time = CPU Time.

For a compressed table the CPU time will always increase, as the data must be decompressed to be processed... Nothing to do about that... except perhaps uncompressing the table.

If the query had to read the data from disk, compression might have had a positive impact, as the data to be read would be about two times smaller, which might have offset the time lost decompressing the data.

Judging the impact of compression by focusing on one SELECT statement might not be very representative.

Best regards

stefan_koehler
Active Contributor
0 Kudos

Hi Jose,

in addition to Fidel's reply about tables with more than 255 columns, you may want to check out one of my old blog posts about this () - it includes the effect of multiple row pieces.

Indeed the statistics seem a bit strange (more than double the CPU time for fewer logical I/Os with OLTP compression). Unfortunately the SQL and execution plan are missing, and some "code path optimizations" can be hidden and not accounted as logical reads (e.g. pinned buffers or fastpath consistent gets). The easiest way to check this would be to run Snapper ("Finding the reasons for excessive logical IOs" by Tanel Poder) with the stats option on the particular session for the whole SQL execution.

Depending on your OS you may also want to profile the Oracle / OS kernel functions to get an idea of the CPU usage - an example with perf on Linux ().

Regards

Stefan

Former Member
0 Kudos

Hi Fidel and Stefan,

Thank you so much for your feedback.

Regarding the number of columns: in my case the table doesn't reach the 255-column limitation; it actually has 225 columns, and per the Oracle statistics there is no row chaining on this table.

I also find it a bit strange to see double the CPU time with half the logical I/Os. It's true that the statistics of the first execution (empty buffer cache) are better with the table compressed, because the number of disk reads is much lower, but this only improves the first execution.

Oracle always says that OLTP compression doesn't have any negative impact on read operations, but if it can only improve the first run and subsequent runs are worse because of higher CPU time... I'd doubt that statement. It means that OLTP compression can have a negative performance impact when there are a lot of logical I/Os.

I'm going to check the links about "code optimization" and CPU usage.

I'm also thinking of applying the Oracle SBP 11.2.0.3.9 - 201402V3. There are a couple of reported bugs about compression there:

- 13641076 - HIGH AMOUNT OF BUFFER GETS FOR INSERT STATEMENT - REJECTION LIST DOES NOT FIRE

- 14762511 - UPDATE ON A OLTP COMPRESSED TABLE WITH MILLIONS OF BUFFERGETS TAKES LONG TIME


Here is the execution plan, in case it gives any new clue:

Execution Plan

Id   Operation                       Name          Rows   Bytes   Cost (%CPU)   Time
 0   SELECT STATEMENT                                             12438 (100)
 1     FILTER
 2       TABLE ACCESS BY INDEX ROWID  CE1GG00       4242     73K  12437 (1)     00:02:21
 3         INDEX RANGE SCAN           CE1GG00~Z1    333K           416 (1)      00:00:05

Many thanks

stefan_koehler
Active Contributor
0 Kudos

Hi Jose,

> Oracle always says that OLTP compression doesn't have any negative impact on read operations, but if it can only improve the first run and subsequent runs are worse because of higher CPU time

Can you please provide the source of that? I am aware of statements like "Overall system throughput is not negatively impacted and may improve." and this is true for (most) I/O driven databases. There also could be a great benefit for subsequent executions (e.g. direct path reads, etc.).

However your specific CPU case is a pretty strange one and should be investigated in detail. The response time is one aspect of your issue, but the used CPU time can be a big (scalability) issue in high load environments as well.

> I'm going to check the links about "code optimization" and CPU usage

Yes. Please provide the Snapper output (or stack trace profiles) for your issue.

> There are a couple of reported bugs about compression there

I explained and troubleshot one of the mentioned bugs (#13641076) in detail here: ... it also includes the procedure for using Snapper (and stack traces) to drill down to the root cause. As you can see, the described scenario and issue are completely different from yours.

Please provide the requested Snapper output (or the CPU / stack trace profiling) for one whole SQL execution. I am pretty sure that we can spot a hint to the root cause in your case.

Regards

Stefan

Former Member
0 Kudos

> Can you please provide the source of that? I am aware of statements like "Overall system throughput is not negatively impacted and may improve." and this is true for (most) I/O driven databases. There also could be a great benefit for subsequent executions (e.g. direct path reads, etc.).

Here is an excerpt from the Oracle Advanced Compression white paper:

I will try to get the Snapper output and send it over.

Kind Regards

Former Member
0 Kudos

Hi Stefan,

Enclosed you can find the Snapper outputs.

The file called "snapper1_2504.txt" contains the first execution (empty buffer cache) with the table compressed. The SID is 2504.

The file called "snapper3_257.txt" contains the second execution (buffered - no disk reads) with the table compressed. The SID is 257.

The SQL statement ID is 3dvg63q65fmsz  

Many thanks

stefan_koehler
Active Contributor
0 Kudos

Hi Jose,

which of the snapshots covers the whole, single run of SQL 3dvg63q65fmsz in session 257? You snapped a 20-second interval 150 times, but your SQL runs for 38 seconds, as previously mentioned.

So you have not covered the whole SQL execution in one snap, and SID 257 was snapped several times while running various SQL statements. Unfortunately it is not possible to tell which statistics relate to a specific SQL execution.

Please capture only that one whole SQL (3dvg63q65fmsz) execution for a specific SID, as I mentioned in my previous posts; otherwise the data is useless.

Regards

Stefan

Former Member
0 Kudos

Hi Stefan,

Enclosed you can find the snapper output that should cover the whole execution in one snapshot.

The session ID is 2008 and the SQL ID is 3dvg63q65fmsz.

I have noticed this:

2008  @1, SAPRGG    , STAT, buffer is pinned count                                    ,       8062956,     19.85k,         ,             ,          ,           ,      96.32 % buffer gets avoided thanks to buffer pin caching

But I'm not sure whether this could be the reason for the issue.
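
For reference, a sketch of how that statistic could be cross-checked directly against the session (SID 2008 as above; the statistic names are standard V$STATNAME entries), to compare the compressed and uncompressed runs:

SELECT sn.name, ss.value
  FROM v$sesstat ss
  JOIN v$statname sn ON sn.statistic# = ss.statistic#
 WHERE ss.sid = 2008
   AND sn.name IN ('buffer is pinned count', 'session logical reads', 'consistent gets');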

Kind Regards

stefan_koehler
Active Contributor
0 Kudos

Hi Jose,

now you have captured the whole SQL execution in one snapshot, but you have still captured various SQLs (and not isolated 3dvg63q65fmsz) in that session snap, so we still cannot focus on the corresponding stats.

For example, you have captured 307,927 logical reads in total vs. the 263,628 logical reads previously reported for that particular SQL (keep in mind that the optimizations are not included in that statistic). It also looks like you have done some DML in that session, which introduces yet another set of stats. You are also using Oracle RAC (e.g. "gc cr block 2-way" or the "LMS1" process), which can make it even trickier.

Please isolate that particular SQL "3dvg63q65fmsz" (the easiest way is to reconstruct it in SQL*Plus) and snap only that. I assume that you are running the system on Linux, as you are using RAC - please collect the CPU profile for that particular process as well (). That CPU profile gets down to the root of it.
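
As an illustration, the statement text can be pulled by its SQL_ID so it can be re-run in isolation (bind values still have to be supplied manually):

SELECT sql_fulltext
  FROM v$sql
 WHERE sql_id = '3dvg63q65fmsz' AND rownum = 1;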

Regards

Stefan

Former Member
0 Kudos

Hi Stefan,

Here is another trace (snapper_2756_1.txt); in this case another SQL was executed by the session in the meantime, but that SQL statement (ccjupy2cq70rj) only performed 200 logical reads, so it should not affect the results.

I have enclosed an extra trace (snapper_2756_2.txt) that covers only the SQL ID (3dvg63q65fmsz), but per the data it does not cover the whole execution.

Bear in mind that I don't have access to SQL*Plus, so I have to ask my Basis supplier to activate the trace for every execution; that's why it is very tricky to get the proper trace.

On the other hand, we are running on Solaris not Linux.

Please let me know if this trace gives you any idea.

Many thanks for your help.

stefan_koehler
Active Contributor
0 Kudos

Hi Jose,

well, that is the issue with the division of work... we need to go low-level for such issues, but since you are running on Solaris it is much easier to capture the necessary low-level information with the help of DTrace.

There is a DTrace toolkit script called "hotuser" (stored in /usr/dtrace/DTT/ on my Solaris 11.1 box) which samples the on-CPU (user-land) functions at 1001 Hz and provides a summary afterwards. The report shows the sampled on-CPU functions that are burning up your CPU (if the issue is in user-land space, which I currently assume).

The summary report looks something like this - I just sampled a 4-table join on dba_source for demonstration purposes.

$ /usr/dtrace/DTT/hotuser -p 1458
Sampling... Hit Ctrl-C to end.
^C
FUNCTION                                                COUNT   PCNT
oracle`kdilmfilt                                            1   1.0%
oracle`krta2bz                                              1   1.0%
ld.so.1`callable                                            1   1.0%
...
oracle`evadcd                                               3   3.1%
oracle`qksbgGetVal                                          3   3.1%
oracle`expepr                                               4   4.1%
libc.so.1`memset                                            4   4.1%
oracle`smbput                                               9   9.2%
libc.so.1`memcpy                                           36  36.7%

You don't necessarily need SQL*Plus access to isolate the issue. Follow this procedure to isolate the SELECT statement and capture the necessary information:

  1. Lock SAP WP with help of ABAP report RSTRC000
  2. Identify the Oracle shadow server process PID for that locked SAP WP (see the SQL sketch after this list)
  3. Go to DBACOCKPIT / SQL editor and reconstruct the SELECT
  4. Contact your SAP basis supplier and ask him to profile that PID (from step 2) with DTrace script "hotuser"
  5. Execute SELECT in SQL editor
  6. Ask your SAP basis supplier to end "hotuser" and provide the output
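
For step 2, a sketch of how the locked work process can be mapped to its Oracle shadow process PID (the placeholder takes the client OS PID of the work process, e.g. from SM50):

SELECT s.sid, s.serial#, p.spid AS shadow_process_pid, s.program
  FROM v$session s
  JOIN v$process p ON p.addr = s.paddr
 WHERE s.process = '&wp_os_pid';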

Regards

Stefan