latch: cache buffers chains

Former Member

Hello,

One of the queries that we are using is causing DB CPU usage to max out.

On analysis, the following was identified:

1. Waiting for event "latch: cache buffers chains" in wait class "Concurrency" accounted for 12% of the database time spent in processing the SQL statement with SQL_ID

2. Waiting for event "latch free" in wait class "Other" accounted for 3% of the database time spent in processing the SQL statement with SQL_ID

We are using Oracle 10.2.0.4.0 as our DB system, and the query is:

    select field1 field2
      from ever
      into table lt_ever
      for all entries in lt_tmp
     where anlage  eq lt_tmp-anlage
       and einzdat le lt_tmp-adatsoll
       and auszdat ge lt_tmp-adatsoll.

There is a check for an initial (empty) table on lt_ever before executing the query. The execution plan for the query is as below:

 

-----------------------------------------------------------------------------------------
| Id  | Operation                     | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |          |     2 |   100 |     3  (34)| 00:00:01 |
|   1 |  CONCATENATION                |          |       |       |            |          |
|*  2 |   TABLE ACCESS BY INDEX ROWID | EVER     |     1 |    50 |     1   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN           | EVER~ANL |     1 |       |     1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS BY INDEX ROWID | EVER     |     1 |    50 |     1   (0)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN           | EVER~ANL |     1 |       |     1   (0)| 00:00:01 |
-----------------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):

-------------------------------------------------------------

1 - SEL$1

2 - SEL$1_1 / EVER@SEL$1

3 - SEL$1_1 / EVER@SEL$1

4 - SEL$1_2 / EVER@SEL$1_2

5 - SEL$1_2 / EVER@SEL$1_2

Predicate Information (identified by operation id):

---------------------------------------------------

2 - filter("EINZDAT"<=:A6 AND "AUSZDAT">=:A7)

3 - access("MANDT"=:A4 AND "ANLAGE"=:A5)

4 - filter("EINZDAT"<=:A2 AND "AUSZDAT">=:A3 AND (LNNVL("ANLAGE"=:A5) OR

LNNVL("EINZDAT"<=:A6) OR LNNVL("AUSZDAT">=:A7) OR LNNVL("MANDT"=:A4)))

5 - access("MANDT"=:A0 AND "ANLAGE"=:A1)

Column Projection Information (identified by operation id):

-----------------------------------------------------------

1 - "EVER".ROWID[ROWID,10], "MANDT"[VARCHAR2,9], "VERTRAG"[VARCHAR2,30],

"ANLAGE"[VARCHAR2,30], "EINZDAT"[VARCHAR2,24], "AUSZDAT"[VARCHAR2,24],

"ZZCONT_REASON"[VARCHAR2,6], "ZZCONTEND_REASON"[VARCHAR2,6]

2 - "EVER".ROWID[ROWID,10], "MANDT"[VARCHAR2,9], "VERTRAG"[VARCHAR2,30],

"ANLAGE"[VARCHAR2,30], "EINZDAT"[VARCHAR2,24], "AUSZDAT"[VARCHAR2,24],

"ZZCONT_REASON"[VARCHAR2,6], "ZZCONTEND_REASON"[VARCHAR2,6]

3 - "EVER".ROWID[ROWID,10], "MANDT"[VARCHAR2,9], "ANLAGE"[VARCHAR2,30]

4 - "EVER".ROWID[ROWID,10], "MANDT"[VARCHAR2,9], "VERTRAG"[VARCHAR2,30],

"ANLAGE"[VARCHAR2,30], "EINZDAT"[VARCHAR2,24], "AUSZDAT"[VARCHAR2,24],

"ZZCONT_REASON"[VARCHAR2,6], "ZZCONTEND_REASON"[VARCHAR2,6]

5 - "EVER".ROWID[ROWID,10], "MANDT"[VARCHAR2,9], "ANLAGE"[VARCHAR2,30]

Can you please help me understand what the Oracle analysis means,

and is it possible to further tune the query on the ABAP side so as not to cause the above issues?

Accepted Solutions (0)

Answers (2)


stefan_koehler
Active Contributor

Hi Shadab,

Well, at first sight Orkun's suggestion seems pretty strange, as SAP table buffering is only used if it is enabled in the technical table settings. I can't imagine that this is the case here at all (just think about the table size, the change frequency and the corresponding buffer size that would be needed), judging from the online table documentation of EVER (I don't have an IS-U system right here).

> Can you please help me understand what the Oracle analysis means

Sorry, but that is not an analysis at all - it is just the basic execution plan information. However, I am pretty sure that this performance issue is caused by massive logical I/O, given the mentioned wait events and the CPU usage. The question now is which step is causing that high amount of logical I/O. We have some information about effective index access here, but this is just a tiny part of the whole puzzle.
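As a first check - this is just a minimal sketch, assuming the SQL_ID from your wait analysis is known and that you can query V$SQL (for example from ST04 or SQL*Plus) - the logical I/O per execution could be looked at like this:

    -- sketch: logical I/O and CPU time of the statement; substitute the real SQL_ID
    select sql_id,
           executions,
           buffer_gets,                                     -- total logical I/O
           round(buffer_gets / nullif(executions, 0)) as gets_per_exec,
           round(cpu_time / 1000000)     as cpu_seconds,
           round(elapsed_time / 1000000) as elapsed_seconds
      from v$sql
     where sql_id = '&sql_id';

A gets_per_exec that is orders of magnitude above the handful of rows estimated in the plan would confirm the logical I/O suspicion.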

The SQL statement itself is missing as well (as you are using Open SQL - the ABAP SQL statement differs from the "real" SQL statement).

You can check my blog regarding further SQL analysis: http://scn.sap.com/community/oracle/blog/2012/08/06/oracle-db-optimizer-part-ii--extending-execution...

> is it possible to further tune the query on the ABAP side so as not to cause the above issues?

That depends on the root cause, the database objects and the business logic, of course.

Regards

Stefan

Former Member

Hi Stefan,

>> Well, at first sight Orkun's suggestion seems pretty strange, as SAP table buffering is only used if it is enabled in the technical table settings.

Please check the documentation about buffering below:

http://help.sap.com/abapdocu_702/en/abensap_puffering.htm

You'll find what I mean in the list of cases in which the buffer is bypassed, under "SELECT with FOR ALL ENTRIES".

Best regards,

Orkun Gedik

stefan_koehler
Active Contributor

Hi Orkun,

Yes, I got your key point about FOR ALL ENTRIES and table buffering the first time, but I cannot imagine that table EVER has buffering enabled, judging from the documentation of EVER (I don't have an IS-U system right here, so I cannot check its technical table settings).

FOR ALL ENTRIES has no impact at all (on SAP table buffering) if buffering is not enabled (by the way, this is also mentioned in the documentation you are referring to):

When accessing a database table using Open SQL, SAP buffering is usually active if it is defined for the database table in question. The definition of a database table in ABAP Dictionary determines whether and how it is buffered.

Regards

Stefan

Former Member

Hi,

It does not exactly bypass the buffer, but there are some cases. Instead of using FOR ALL ENTRIES with highly loaded SQL statements, I recommend to my customers to split the SQL statement into different statements and logic, maybe with parallel asynchronous calls. The developer shouldn't put all the work on the database engine.

As in this example, the DBSL can generate a heavy SQL statement, create "latch: cache buffers chains" waits and cause high CPU utilization.

Best regards,

Orkun Gedik

stefan_koehler
Active Contributor

Hi Orkun,

> I recommend to my customers to split the SQL statement into different statements and logic, maybe with parallel asynchronous calls. The developer shouldn't put all the work on the database engine.

It is highly recommended to off-load as much SELECT work as possible to the database engine (if it is done right, of course). You run into nasty issues (especially with highly parallel usage) if you split up SQL statements into very tiny SELECTs (over a huge amount of data). Just think about pinning the library cache object, client statement caching or SQL*Net packets. You can produce such test cases very easily on your own.
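Just to make that concrete with a sketch (the work area ls_tmp is a hypothetical name, the placeholder fields are taken from Shadab's post): splitting the FOR ALL ENTRIES into one SELECT per driver row would look roughly like this, and pays the cursor handling and network round trip once per row of lt_tmp:

    " illustrative anti-pattern only: one database round trip per row of lt_tmp
    loop at lt_tmp into ls_tmp.
      select field1 field2
        from ever
        appending table lt_ever
       where anlage  eq ls_tmp-anlage
         and einzdat le ls_tmp-adatsoll
         and auszdat ge ls_tmp-adatsoll.
    endloop.

With a large lt_tmp this scales much worse than the single concatenated statement, even though each individual SELECT looks cheap.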

> As in this example, the DBSL can generate a heavy SQL statement, create "latch: cache buffers chains" waits and cause high CPU utilization.

Especially in this case, this has nothing to do with a "heavy generated SQL" by the DBSL at all. If you look closely at the predicate information (and column projection), you see that the FOR ALL ENTRIES is translated into a simple (not a complex or large) SQL statement with one concatenation. The logical I/O work that is causing this issue has to be performed in any case.
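For illustration - this is only a sketch reconstructed from the predicate and column projection information above, not the literal DBSL output - the statement behind the posted plan presumably looks roughly like this for two rows of the driver table:

    SELECT "MANDT", "VERTRAG", "ANLAGE", "EINZDAT", "AUSZDAT",
           "ZZCONT_REASON", "ZZCONTEND_REASON"
      FROM "EVER"
     WHERE (     "MANDT" = :A0 AND "ANLAGE" = :A1
             AND "EINZDAT" <= :A2 AND "AUSZDAT" >= :A3 )
        OR (     "MANDT" = :A4 AND "ANLAGE" = :A5
             AND "EINZDAT" <= :A6 AND "AUSZDAT" >= :A7 )

The CBO expands the OR into the CONCATENATION of two index range scans (the LNNVL filters on the second branch only suppress duplicate rows), so the statement itself stays simple - the logical I/O behind it is the real work.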

Regards

Stefan

Former Member

>> It is highly recommended to off-load as much SELECT work as possible to the database engine (if it is done right, of course). You run into nasty issues (especially with highly parallel usage) if you split up SQL statements into very tiny SELECTs (over a huge amount of data). Just think about pinning the library cache object, client statement caching or SQL*Net packets. You can produce such test cases very easily on your own.

I don't share this idea, because it does not mean that the statement should simply be split into small parts. It can be optimized by changing the program logic, for example with parallel calls and so on, as I noted above.

Additionally, I have checked the statements translated by the DBSL many times and changed the program logic. After the optimization work, I saw better response times than with the FOR ALL ENTRIES method.

For this reason, the processing must be balanced between database and application server logic, instead of putting all the load on the database.

Best regards,

Orkun Gedik

Former Member

I fully agree with Stefan. The Oracle DBMS must do all the work that it can do, and only after that should application server code come into play. All arguments are provided in Stefan's previous answer. For that you must allocate the required resources to both Oracle and the application server. If you have resource bottlenecks on the Oracle DBMS side, as a workaround you can perform some process balancing and offload some work to the application server if it has free resources.

Shadab: you need to investigate this SQL statement further if it has such an impact on Oracle. Check its execution plan first.
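For example, something like this could be used (a sketch only; substitute the real SQL_ID, and note that actual row counts only appear if STATISTICS_LEVEL is set to ALL or the statement was executed with the gather_plan_statistics hint):

    -- sketch: show the runtime plan of the cursor already in the shared pool
    select * from table(
      dbms_xplan.display_cursor('&sql_id', null, 'ALLSTATS LAST'));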

Former Member

From your perspective, should all the calculations and processing be performed on the database side? In theory, you can execute all the processing in the SQL statement. No, I don't agree with this. If the program requires a huge amount of data, it needs to be loaded into application server memory and calculated there.

Additionally, the process should be split into parallel processes in order to increase performance. Highly loaded queries shouldn't be executed entirely on the database side. Those kinds of processes eat up the database resources quickly and affect the other processes badly.

Best regards,

Orkun Gedik

Former Member

>> If the program requires a huge amount of data, it needs to be loaded into application server memory and calculated there.

Correction: not a huge amount of data, but highly loaded SQL statements.

stefan_koehler
Active Contributor

Hi Orkun,

> It can be optimized by changing the program logic, for example with parallel calls and so on, as I noted above.

OK, but changing the application logic has nothing to do with SQL tuning. If you talk about parallelism, you also have a lot of possibilities on the database level (assuming it is done right, of course) to parallelize the data processing. However, you would make it even worse if you split up the SQL (regarding Shadab's issue) into several pieces or logic units (look closely at the provided information section) and ran it in parallel or in tiny pieces.

> Additionally, I have checked the statements translated by the DBSL many times and changed the program logic. After the optimization work, I saw better response times than with the FOR ALL ENTRIES method.

There are just a very few cases where changing the "SQL logic" is really necessary (thinking about sorted data sets in some cases). In most (SAP) cases the database engine does not have enough information (e.g. correlated columns, duplicate values, etc.) with the SAP default settings, or the underlying data (object) structure is not sufficient for really great performance (if needed).

Especially in SAP environments, most of the usual (SAP) consultants don't know much about the database engine in use and its possibilities, as they try to support a lot of platforms like MaxDB, DB2, SQL Server, Oracle and so on. In such cases they use the technique that works best across all platforms (like off-loading work onto the application layer), but this does not mean that it is the best one regarding scalability and the future.

.. let's go back to Shadab's original topic.

Regards

Stefan

Former Member

Hi Stefan,

>> Especially in SAP environments, most of the usual (SAP) consultants don't know much about the database engine in use and its possibilities, as they try to support a lot of platforms like MaxDB, DB2, SQL Server, Oracle and so on.

Yes, that is correct at first sight, but in order to support platform independence, SAP does not support some of the database-specific features at the application server level. For example, even the partitioning engine has only been supported by SAP in the latest releases.

OK, it is better to return to the main issue.

Best regards,

Orkun Gedik

Former Member

Hi Shadab,

First of all, the FOR ALL ENTRIES keyword bypasses SAP table buffering, which can lead to a performance problem while reading the data. There's a misconception that using FOR ALL ENTRIES is always better than SELECT SINGLE in a loop. In practice, FOR ALL ENTRIES can create a heavy SELECT statement on the database, which can cause high CPU utilization during execution. I recommend you read SAP Note 48230.

Regarding the execution plan, the CBO concatenated two index range scan results while reading data from the table. Depending on your profile parameters, the DBSL creates a SELECT statement and executes it on the database.

At this stage I recommend that you do not use FOR ALL ENTRIES and instead split your SQL statement into parts.

Best regards,

Orkun Gedik