Copy from SAPonzSeries.de: 'SAP on z/DB2 Performance Notes - DSC' by Bohlsen

thomas_vogt

Dynamic Statement Cache (DSC) statistics

This can be one of the most valuable sources of information in DB2 tuning for many purposes. By now almost all customers will be using an EDM pool dataspace, which will contain the major part of the storage required for SQL statement caching. The first issue to be addressed is the size of the dataspace, and the simple answer to the question of how large it should be is: as big as it needs to be. With a statement such as this, I obviously need to point out that these notes are my own opinion based on real-world experience.

In the context of these tuning notes, what I would tend to recommend is sizing the EDM pool dataspace based on prepared statement residency time in peak load periods. With the measurement technique outlined below, my opinion is that you need to aim for keeping a minimum residency of 24 hours. The reasoning is that this allows SQL statements associated with daily jobs and tasks to continuously accumulate key DB2 statistics over an extended period of time. Put another way, if a statement is discarded from the cache by the LRU algorithm within a 24-hour period, and is associated with ABAP/Java code that is only executed daily, the next execution of the statement will result in a full prepare. DB2 statistics such as execution time, various wait times, rows examined and processed etc. will then be accumulated from zero. While a full prepare on a daily basis is not a performance concern, it is the continual resetting of statistics that is the challenge to overcome.

As we will see, longer-term residency then becomes extremely valuable in identifying the best targets for tuning effort. To put it into context, it usually takes an EDM pool dataspace size of 250-300 MB at most to achieve this goal, and in the present-day context of memory sizing and pricing, this is usually quite acceptable. The increased residency time will also result in a reduction in the number of full prepares of SQL statements, which is also good for overall performance.
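As an aside for those who prefer to work at the DB2 level rather than through the SAP GUI: DB2 V8 can externalise the contents of the DSC into the EXPLAIN table DSN_STATEMENT_CACHE_TABLE. The following is a minimal sketch, assuming the table has been created under the current SQLID (the DDL ships with the DB2 sample library) and that the IFCID 318 trace is active so that statistics are actually being collected:

-- Snapshot the dynamic statement cache, including the per-statement
-- statistics (executions, CPU, elapsed time, rows examined/processed)
-- that the SAP cached statement statistics screen displays.
EXPLAIN STMTCACHE ALL;

The SQL sketches later in these notes query this table and assume such a snapshot has just been taken.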

The residency time of the DSC can be measured as follows:

Execute transaction DB2 and click the Cached Statement Statistics button. In the popup window, simply select Execute without entering any values.

[This has the effect of retrieving all statements in the cache, with one caveat: if you receive the message "More data is available…", this indicates that the number of statements has been limited to 2,000 to minimize response time. For our analysis, it is critical that we get all statements. Also note that if you get a message indicating there is no data, the IFCID 318 trace may not be started.]

Click on the Retry button.

[While the response time will be something like a minute, this is a one-off, and once we have the data we can manipulate it. Needless to say, it does not make sense to do this too often!]

Once you have a list of statements, click on the Initial Prepare column heading, and click the sort descending button. This effectively gives us a track record of statement preparation by DB2 into the past. Note the date/time stamp of the first statement on the list.

Again click on the Initial Prepare column, and now click Find. Enter LRU into the search field and press Enter. Note the date/time stamp of the statement that is found. The time difference between the two statements is the residency time of the DSC at this point in time.

This procedure outlines the general approach for finding residency time, the logic of which you can hopefully follow. Now for the important part: when to do this task. Essentially, we need to find out whether we have 24-hour-plus residency. In this regard it does not matter when during the day the process is carried out; the result is a binary yes/no answer. If the answer is no, I would recommend that you increase EDMDSPAC dynamically (unless there is evidence of memory over-allocation, covered later in another section) and try again. If you perform the above process during your busiest time of day and the residency time is less than 5 minutes, you really need to consider increasing the DSC, both for outright performance reasons and because of the lack of further analysis possibilities otherwise.
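For those using the DB2-level snapshot described earlier, the residency calculation can also be approximated in SQL. This is a sketch rather than the exact ST04 logic (it takes the spread of initial-prepare timestamps instead of searching for LRU-discarded entries):

-- Oldest and newest initial-prepare timestamps in the snapshot;
-- their difference approximates the current DSC residency time.
-- TIMESTAMPDIFF interval code 8 = hours.
SELECT MIN(CACHED_TS) AS OLDEST_PREPARE,
       MAX(CACHED_TS) AS NEWEST_PREPARE,
       TIMESTAMPDIFF(8, CHAR(MAX(CACHED_TS) - MIN(CACHED_TS)))
              AS RESIDENCY_HOURS
FROM DSN_STATEMENT_CACHE_TABLE;

If RESIDENCY_HOURS comes back well under 24, the EDMDSPAC discussion above applies.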

Once you know that you have adequate residency time for SQL statements, we can take the full list of statements (i.e. not entering any input parameters, and hitting the Retry button to get all statements). This list can now be used in a variety of ways to discover potential performance tuning opportunities. The response time cost of gathering the list is non-trivial, but the potential results probably far outweigh the occasional paying of this price.

The first task is to click the heading of the CPU Time column, which is a couple of pages of scrolling right in the default ALV view, and then click sort descending. What this shows, in essence, is the opportunity to reduce z/OS cycles, by way of showing the statements that have accumulated the most CPU resource. Here you may start to appreciate the above comments on the value of this technique and the impact of residency time. If there are statements whose execution frequency results in them being periodically discarded through LRU invalidation and re-prepared, over time they will not appear anywhere near the top of the list relative to long-term residents. This is despite them being, in fact, the precise statements we want to find and address to proactively tune resource usage.

In the same manner, select the Elapsed Time column and also perform a sort descending. This now shows, in descending order, the statements whose tuning will deliver the most benefit to the response time of transactions or batch jobs (which particular application components are involved is unknown at this point of the analysis).
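For completeness, both sorts map directly onto queries against the DB2-level snapshot. A sketch, using the V8 column names as I know them (STAT_CPU and STAT_ELAP hold the accumulated CPU and elapsed times):

-- Top cached statements by accumulated CPU; change the ORDER BY to
-- STAT_ELAP DESC for the elapsed-time view described above.
SELECT STMT_ID,
       STAT_EXEC AS EXECUTIONS,
       STAT_CPU  AS ACCUM_CPU_SECS,
       STAT_ELAP AS ACCUM_ELAPSED_SECS,
       SUBSTR(STMT_TEXT, 1, 100) AS STMT_TEXT_PREFIX
FROM DSN_STATEMENT_CACHE_TABLE
ORDER BY STAT_CPU DESC
FETCH FIRST 50 ROWS ONLY;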

The analysis of individual statements and where to go next is the topic for another posting!

Answers (2)

Former Member

Thomas,

Thanks for the info.

Could you provide a clarification on the CPU time: does it represent CPU seconds consumed or wall-clock response time from DB2?

thomas_vogt

Rob,

CPU time does represent CPU seconds; wall-clock response time is displayed in the 'time elapsed' field.

Regards,

Thomas

thomas_vogt

Hi Tim,

Thanks for this article. Could it be that this forum is now starting to get valuable?

Just a question on the DSC: I have SAP 4.6C on DB2 V8 and would really like to use REOPT(ONCE) for my ICLI packages. But SAP says "better not", because you do not see the real access path with Explain in ST04 (DB2); wait until you are on NW 6.40 or further...

The question is: will ST04 then show the stored access path? Can I see the variables of the first occurrence? How can I do that within DB2?

Michael

thomas_vogt

Hi Michael,

Sorry for the delayed reply, I did not see your posting until today.

While REOPT(ONCE) may well function quite well with 4.6C, it is understandable that SAP focuses development on NW04 and above, and does not spend time and effort testing and developing the toolsets to allow it for versions that have a limited life left. Unfortunate for many sites, but (in my opinion) the cases where the optimiser gets it wrong are few, and almost always these are known and addressed with dstats etc.

The access path shown in ST04 on NW04/DB2 V8 is indeed the actual path of the original optimised statement (which, with REOPT(ONCE), will be the path chosen using the variables). As for whether the variables are visible, I believe not, as I have not seen them; but whether they are actually stored anywhere and could be externalised is a very good question that I'll have to follow up on.
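On the "within DB2" part of your question: in V8 the stored access path of a cached statement can be pulled out directly by explaining the cache entry itself, without going through ST04. A sketch, assuming a PLAN_TABLE exists under the current SQLID; 1234 is a placeholder for a statement ID taken from a DSN_STATEMENT_CACHE_TABLE snapshot:

-- Extract the access path of cached statement 1234 into PLAN_TABLE.
-- The path is taken from the cache entry, i.e. the one chosen at the
-- original prepare, not a re-optimisation.
EXPLAIN STMTCACHE STMTID 1234;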

Tim.