
Bad performance due to "db file parallel write"

Former Member

Hi everyone,

I have a case where I am somewhat stuck. The end users are complaining

that the performance is really bad. System data: ECC 6.0 / Oracle 10.2.0.4.

We can easily see that the wait event "db file parallel write" from the

8 DBWR processes is really high. Also the number of archived redo logs

written is much higher (20-30 per hour compared to normally 2 per hour).
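
A simple way to quantify the log switch rate, as a sketch counting archived log switches per hour from V$LOG_HISTORY:

-- Log switches per hour (V$LOG_HISTORY keeps one row per redo log switch).
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS log_switches
FROM   v$log_history
GROUP  BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
ORDER  BY hour;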

When analyzing the ADDM data for the periods experiencing bad performance

I can find 5 expensive SQL statements with roughly the same tuning

benefit estimation:

6tvk271cbac3t DELETE FROM "VAPMA" WHERE "MANDT"=:A0 AND "VBELN"=:A1
6c0pw8gqhbbbf UPDATE "INDX" SET "LOEKZ"=:A0 , "SPERR"=:A1 , "AEDAT"=:A2 , ...
49rc57zc9nc3x UPDATE "USR02" SET "BCODE"=:A0 , "GLTGV"=:A1 , "GLTGB"=:A2 , ...
3cmgznkdm332x INSERT INTO "VBMOD" VALUES(:A0 ,:A1 ,:A2 ,:A3 ,:A4 )
c33yvn1gan03y INSERT INTO "ARFCSDATA" VALUES(:A0 ,:A1 ,:A2 ,:A3 ,:A4 ,:A5 ,:A6 ,:A7...

Is there any simple way of extracting the redo log sizes for these statements

from AWR data? I already checked the resource kit from note 1438410, but

couldn't find anything which would help with this specific problem.

Do I have to use Oracle LogMiner to get this information? Or is this information buried

somewhere in some DBA_HIST_.... view?
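
For illustration, a sketch of the per-statement data available in DBA_HIST_SQLSTAT (executions, rows, buffer gets per snapshot); as far as I can see there is no redo-size column in it:

-- Per-statement deltas from AWR for the five SQL IDs above
-- (only executions/rows/buffer gets etc., no redo volume).
SELECT sn.begin_interval_time,
       st.sql_id,
       st.executions_delta,
       st.rows_processed_delta,
       st.buffer_gets_delta
FROM   dba_hist_sqlstat  st,
       dba_hist_snapshot sn
WHERE  sn.dbid            = st.dbid
AND    sn.instance_number = st.instance_number
AND    sn.snap_id         = st.snap_id
AND    st.sql_id IN ('6tvk271cbac3t', '6c0pw8gqhbbbf', '49rc57zc9nc3x',
                     '3cmgznkdm332x', 'c33yvn1gan03y')
ORDER  BY sn.begin_interval_time, st.sql_id;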

Regards,

Mark

Accepted Solutions (1)


Former Member

Hello Mark,

Some comments regarding SAP Note 1438410: in fact there is no command available that gives you exactly what you need. Personally, I would usually start with the following two simple overview scripts:

1. SegmentStatistics_TopSegmentsForStatisticPerAWRInterval.txt with STAT_NAME = 'DB Block Changes'

--> This returns the top segments in terms of changed blocks.

2. Segments_Tables_TablesWithTopDMLRate.txt

--> This shows the tables with the highest amount of DML operations (based on ALL_TAB_MODIFICATIONS).

I know that neither is the absolute truth and that many other factors also contribute, but for a first overview the results can be helpful.
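
For reference, the second file boils down to something like the following (a simplified sketch, not the actual script from the note):

-- Tables with the highest DML volume since the modification data was last
-- reset (typically when statistics were gathered); partitioned tables
-- appear once per partition.
SELECT table_owner,
       table_name,
       partition_name,
       inserts,
       updates,
       deletes,
       inserts + updates + deletes AS total_dml
FROM   all_tab_modifications
ORDER  BY inserts + updates + deletes DESC;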

Kind regards

Martin

Former Member

Hello Martin,

Sorry, but you are too modest! You may be right that technically the number of block changes isn't exactly the same as the redo log space used. Anyway, your tip was great: with the help of the script SegmentStatistics_TopSegmentsForStatisticPerAWRInterval.txt I could nail down that it was in fact ARFCSDATA and ARFCSSTATE that caused the high amount of redo logs being written.

Regards,

Mark

BEGIN TIME:            10.01.2011 11:00:55
END TIME:              10.01.2011 21:00:38
STAT NAME:             DB Block Changes
AGGREGATED BY:         SNAPSHOT
WEEKENDS EXCLUDED:     NO

BEGIN_TIME             TOTAL_VALUE       VALUE_PER_S SEGMENT_1      VALUE_1      %_1  SEGMENT_2      VALUE_2      %_2  SEGMENT_3      VALUE_3      %_3  SEGMENT_4      VALUE_4      %_4  SEGMENT_5      VALUE_5      %_5
---------------------- ----------------- ----------- -------------- ------------ ---- -------------- ------------ ---- -------------- ------------ ---- -------------- ------------ ---- -------------- ------------ ----
2011-01-10 20:00:50            21737808      6058.47 ARFCSDATA           6240000   29 ARFCSSTATE~04       3505216   16 ARFCSSTATE~01       3346672   15 ARFCSSTATE~05       2613712   12 ARFCSSTATE          2041456    9
2011-01-10 19:00:08            30306000      8321.25 ARFCSDATA           8799952   29 ARFCSSTATE~04       4765008   16 ARFCSSTATE~01       4621744   15 ARFCSSTATE~05       3665536   12 ARFCSSTATE          2833952    9
2011-01-10 18:00:07            27743216      7704.31 ARFCSDATA           8000832   29 ARFCSSTATE~04       4308880   16 ARFCSSTATE~01       4285104   15 ARFCSSTATE~05       3352512   12 ARFCSSTATE          2633552    9
2011-01-10 17:00:58            22066576      6217.69 ARFCSDATA           7480384   34 ARFCSSTATE~04       3878576   18 ARFCSSTATE~01       3839840   17 ARFCSSTATE~05       2999280   14 ARFCSDATA~0         1465408    7
2011-01-10 16:00:03            22779632      6232.46 ARFCSDATA           6965968   31 ARFCSSTATE~04       3683392   16 ARFCSSTATE~01       3566816   16 ARFCSSTATE~05       2820976   12 ARFCSSTATE          2210768   10
2011-01-10 15:00:50            20588144      5794.58 ARFCSDATA           5989728   29 ARFCSSTATE~04       3002352   15 ARFCSSTATE~01       2969840   14 ARFCSSTATE~05       2323216   11 ARFCSSTATE          1844000    9
2011-01-10 14:00:03            30775472      8438.57 ARFCSDATA           8027872   26 ARFCSSTATE~01       3997488   13 ARFCSSTATE~04       3978384   13 ARFCSSTATE~05       3105776   10 ARFCSSTATE          2454944    8
2011-01-10 13:00:35            25900096      7259.00 ARFCSDATA           8142288   31 ARFCSSTATE~04       4301232   17 ARFCSSTATE~01       4250560   16 ARFCSSTATE~05       3310032   13 ARFCSSTATE          2603808   10
2011-01-10 12:00:08            30982080      8542.07 ARFCSDATA           9674688   31 ARFCSSTATE~01       4916912   16 ARFCSSTATE~04       4913520   16 ARFCSSTATE~05       3809552   12 ARFCSSTATE          3011792   10
2011-01-10 11:00:55            24095312      6781.68 ARFCSDATA           7347152   30 ARFCSSTATE~04       3916496   16 ARFCSSTATE~01       3903168   16 ARFCSSTATE~05       3065872   13 ARFCSSTATE          2443824   10

Answers (1)


Former Member

A first simple approach could be to look these statements up in the cursor cache (ST04 -> SQL Statement Analysis). Look at the number of processed rows: the more rows, the more redo (for DML statements).
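
On the database level the same lookup can be done directly by SQL_ID against V$SQL, for example (just a sketch):

-- Executions and processed rows for the five statements from the cursor cache.
SELECT sql_id,
       executions,
       rows_processed,
       SUBSTR(sql_text, 1, 60) AS sql_text
FROM   v$sql
WHERE  sql_id IN ('6tvk271cbac3t', '6c0pw8gqhbbbf', '49rc57zc9nc3x',
                  '3cmgznkdm332x', 'c33yvn1gan03y');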

What is the size of your redo logs?

Kind regards, Michael

Edit: or you can dump the redo log to a trace file:

SQL> ALTER SYSTEM DUMP LOGFILE '/oracle/SID/origlogA/log_g11m1.dbf';

The size of the redo entries can be found in lines like: ktudb redo: siz: 144

You can dump only parts of a file, but the dump will still be large and you will have to group the entries somehow to find the cause of your increased redo.
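
For example, limited to an SCN range (this is the commonly used syntax for partial dumps; the SCN values are placeholders, so verify the exact options on your release):

-- Dump only the redo records between two SCNs into the session trace file.
ALTER SYSTEM DUMP LOGFILE '/oracle/SID/origlogA/log_g11m1.dbf'
  SCN MIN 1234567
  SCN MAX 1235000;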

Former Member

Hello Michael,

Thanks for your feedback! The online redo log files are 1 GB each.

ST04 -> Shared Cursor Cache doesn't allow searching for a SQL_ID, so it requires some manual work.

VAPMA: 345000 processed rows (avg row len. 174) -> estimated 57 MB changes

INDX: 80000 processed rows (avg row len. 680) -> estimated 51 MB changes

USR02: 25000 processed rows (avg row len. 263) -> estimated 6 MB changes

VBMOD: 815000 processed rows (avg row len. 63) -> estimated 50 MB changes

ARFCSDATA: 115000000 processed rows (avg row len 1850) -> estimated 200 GB changes

So the information clearly points to ARFCSDATA as the culprit.
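
(The same back-of-the-envelope arithmetic in SQL, as a sketch; the 345000 is the processed row count for the VAPMA delete taken from the cursor cache, and AVG_ROW_LEN requires current optimizer statistics:)

SELECT table_name,
       avg_row_len,
       ROUND(345000 * avg_row_len / 1024 / 1024) AS est_mb_changed
FROM   dba_tables
WHERE  table_name = 'VAPMA';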

I wouldn't want to dump the archived redo log files; it might be better to have a look at the V$LOGMNR_CONTENTS column REDO_LENGTH or the like.
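
Roughly what I have in mind, as a sketch (the DBMS_LOGMNR calls are standard; the archived log path is just a placeholder, and since I am not sure which size column is available I only count redo records per segment here):

BEGIN
  DBMS_LOGMNR.ADD_LOGFILE(
    logfilename => '/oracle/SID/oraarch/some_archived_log.dbf',  -- placeholder
    options     => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.START_LOGMNR(
    options => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG);
END;
/

-- Redo records per segment and operation in the mined log.
SELECT seg_owner, seg_name, operation, COUNT(*) AS redo_records
FROM   v$logmnr_contents
GROUP  BY seg_owner, seg_name, operation
ORDER  BY redo_records DESC;

EXECUTE DBMS_LOGMNR.END_LOGMNR;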

Regards,

Mark

Former Member

Yeah, I agree, dumping the redo is too complicated; I shouldn't have mentioned it.

ARFCSDATA could have a LOB segment, depending on your installation release. And there is a whole bunch of ARFC/TRFC tables that are changed together. There should be a similar amount of deletes, too. This can greatly influence your calculation.
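
A quick way to check the LOB point, as a sketch against the data dictionary:

-- Does ARFCSDATA have LOB columns / a separate LOB segment in this release?
SELECT owner, table_name, column_name, segment_name
FROM   dba_lobs
WHERE  table_name = 'ARFCSDATA';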

But I think you should check whether your RFC activity has increased significantly. You can also consult ST03 for increased RFC steps, and you can compare a few older AWR reports with the current situation.

The RFC transactions SM58/SMQ1/SMQ2 might be worth a look too.

Good luck!

Former Member

Hi Mark,

Can you please also check the alert log for "checkpoint not complete" warnings?

If yes, then you need to increase the size of the online redo log files.

Also check the expensive SQL statements based on disk reads and ask your ABAPer to optimize them; this will help you improve the performance.
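
For example, a quick look at the current online redo log groups, as a sketch:

-- Size, member count and status of each online redo log group.
SELECT group#,
       bytes / 1024 / 1024 AS size_mb,
       members,
       status
FROM   v$log
ORDER  BY group#;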

Thanks,

CHaitanya.