brconnect stats does not improve the performance.

Former Member

Hi Experts,

I am running BRCONNECT as an external program in one of my billing batch jobs. The BRCONNECT step finishes successfully; however, the performance of reads from the table does not improve.

The BRCONNECT code is as follows:

brconnect -u / -c -f stats -o SAPSR3 -t ZVDEVICE_HISTORY -e null -m EI -s P10 -f allsel,collect,method,sample,keep

The Z table has more than 15,000,000 entries and the recommended sample is 1%, i.e. I would have to change -s P10 to -s P1. I want to try this; however, the batch job is still active and is reading from the Z table. If I change the sample percentage now and rerun BRCONNECT, will it impact the batch job that is already running?
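For reference, here is a sketch of the check I intend to run on the Oracle side to confirm whether a statistics run actually refreshed the table and which sample size was used (owner and table name taken from the command above; the ratio of SAMPLE_SIZE to NUM_ROWS gives the effective sample percentage):

-- when were the table statistics last gathered, and with which sample size?
select table_name, num_rows, sample_size, last_analyzed
  from dba_tables
 where owner = 'SAPSR3'
   and table_name = 'ZVDEVICE_HISTORY';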

Accepted Solutions (0)

Answers (4)


Former Member

Kirti,

It looks like you're accessing a Z* table, which is custom code, so we need to know how Oracle is trying to get to the data.

1) If you have an SAP transaction running long, you can use transaction ST12 to trace it, which gives you both an SE30 (runtime analysis) and an ST05 (SQL trace); see SAP Note 755977. This is especially useful if you have a reproducible problem transaction in SAP.

Use SM51 to log on to the application server the user is logged on to, then go to ST12, enter a description in the "Comment" field of the trace, put the user ID of the affected user in "Username", and hit "Start Trace". After the transaction has completed, hit "End Trace". Then, under "Evaluate" at the bottom, the ABAP button opens the runtime analysis, which will quickly tell you whether the problem is in the ABAP or in the DB. If it is in the ABAP, the function or program with the most time should jump out at you. If it is in the DB, click on "Performance Traces" to see all the SQL statements. If there are thousands of SQL statements, use the top-left menu option "Trace List" and then "Summarize Trace by SQL Summary"; this summarizes the SQL statements and shows which statement is taking the most time. Play around with ST12 and you will get better at tracing.

Also, if you have authorization to run the transaction yourself, then in ST12 you can click on "Current Mode", enter the transaction/program, and click "Execute/Start Trace". Once the transaction is completed, green-arrow back to the ST12 screen, wait for SAP to gather the analysis, and then click on "ABAP Trace" or "Performance Traces" at the bottom as before.

2) If the ABAP trace shows long runtimes in the ABAP programs/functions, show this to the developers so they can debug why. If you see long runtimes in the DB, especially on certain SQL statements, then you need to look into SQL tuning (see the sketch after this list). This could mean:

a) Oracle needs to pick a better execution plan

b) We need a better SQL statement (if this is custom code)

c) We may need a better index for this query.
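For option a), a quick way to see which plan Oracle currently picks, outside of the SAP trace, is an EXPLAIN PLAN in sqlplus. This is only a sketch: run it for the exact statement from the trace, with the WHERE clause and bind placeholders as Oracle sees them.

-- sketch only: substitute the real statement (including its WHERE clause) from the trace
explain plan for
  select * from sapsr3.zvdevice_history;

-- show the plan Oracle would choose
select * from table(dbms_xplan.display);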

Good Luck,

Mike Kennedy

Former Member

Hi All,

The Z* table has only two indexes: the primary index and one secondary index. I ran the trace and found that one SELECT statement is taking a long time. What I am not able to understand is that the SELECT query is written against the primary key fields, so it should use the primary index to get the output, but it seems it is not using the primary index.

The select statement is like this:

select single * from zvdevice_history
  where kund1     eq i_shippers-kunnr
    and bzirk     eq itab_zones-bzirk
    and bill_from eq s_date-low
    and bill_to   eq s_date-high
    and equnr     in s_equnr
    and zzpald    eq p_pald.

And the key definition of the table looks like this:

key      MANDT
key      KUND1
key      BZIRK
key      EQUNR
key      BILL_FROM
key      BILL_TO
key      CHARGE_FROM
key      CHARGE_TO
non-key  ZZPALD
non-key  ZZYOI

I would assume that by default the query should pick the primary index, and if so it should not take this long to fetch a record. Also, it is a SELECT SINGLE.

This query had been working perfectly until two months ago, but for the last two months it has been taking much longer. The query runs once a month. The only difference we can see in the system is that BRCONNECT was not run in July 2011; we did run it in August 2011 before the query ran, but it didn't help. So the question comes back to how BRCONNECT creates the statistics. If it bases the statistics on the previous month's run (July 2011), then it would have created wrong statistics, because the query took a very long time when it ran in July, and the same would be carried forward to the next month. I have to resolve this issue within this month so that it doesn't happen again next month.
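As a side note, this is the kind of check I plan to run to see which column statistics Oracle currently holds for the fields used in the WHERE clause (just a sketch; owner, table and column names as above):

-- column-level statistics for the fields used in the WHERE clause
select column_name, num_distinct, density, histogram, last_analyzed
  from dba_tab_col_statistics
 where owner = 'SAPSR3'
   and table_name = 'ZVDEVICE_HISTORY'
   and column_name in ('KUND1', 'BZIRK', 'EQUNR', 'BILL_FROM', 'BILL_TO', 'ZZPALD');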

Former Member

Hi Kirti,

select single * from zvdevice_history where
kund1 eq i_shippers-kunnr
and bzirk eq itab_zones-bzirk
and bill_from eq s_date-low
and bill_to eq s_date-high
and equnr in s_equnr
and zzpald eq p_pald.

The query statement that you've provided will not necessarily use the primary key index, because you did not supply all the fields of the primary key as they appear in your index definition. Even if you execute a SELECT SINGLE, the CBO may decide to use the primary index, the secondary index, or a full table scan, depending on its statistics and your WHERE conditions. Additionally, I see that you used the IN operator in your WHERE condition, which may lead the CBO to use the secondary index if it matches it.

As an additional recommendation, check that the secondary index exists on the database by following the path SE11 -> <enter table name> -> Display -> Indexes... -> <double-click on the index>. Then check the "Status" field. It should show "Active" and "Index <index name> exists in database system ORACLE".
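If you want to cross-check the same thing from the database side (only a sketch; the SE11 check above is the authoritative one in an SAP system), DBA_INDEXES shows whether both indexes exist and are usable:

-- both indexes should be listed with STATUS = 'VALID'
select index_name, uniqueness, status, last_analyzed
  from dba_indexes
 where owner = 'SAPSR3'
   and table_name = 'ZVDEVICE_HISTORY';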

Best regards,

Orkun Gedik

Former Member

Kirti,

Can you show us the SQL statement and the execution plan from the trace? Also, how long did that SQL statement run for in the trace? Was it the main statement that took most of the runtime? I would be interested in seeing the SQL and the explain plan from the trace (not from the ABAP), like:

SELECT * FROM ZVDEVICE_HISTORY WHERE... <This is what I want to see>. The WHERE clause is what is important since we need to see which columns you are passing to Oracle.

And the explain plan should look like this (this is just an EXAMPLE):

SELECT * FROM ZVDEVICE_HISTORY WHERE...

Execution Plan
----------------------------------------------------------
SELECT STATEMENT ( Estimated Costs = 1,310 , Estimated #Rows = 1 )
    2 COUNT STOPKEY
        1 TABLE ACCESS FULL ZVDEVICE_HISTORY

So Oracle has three options: a full table scan, the unique (~0) index, or the secondary index, and we need to know how the indexes are defined. In the trace, when you do the explain, clicking on the table will show you all the indexes and which columns are in each one. Can you show the definition of both the ~0 index and the secondary index so we can see how they are defined? This information will tell us what the CBO is doing and why.

If that does not work, then send the output from:

sqlplus "/as sysdba"

set lines 132

col index_name format a30

col column_name format a30

break on index_name skip 1

-- list all indexes on the table and their columns, in column order
select index_name, column_name, column_position
  from dba_ind_columns
 where index_name in (select index_name from dba_indexes
                       where table_name = 'ZVDEVICE_HISTORY')
 order by index_name, column_position;

Good Luck,

Mike K

former_member204746
Active Contributor

You can create a new index with these fields (see the sketch after the list):

kund1
bzirk
bill_from
bill_to
equnr
zzpald
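In an SAP system you would create the index through SE11 so that the ABAP Dictionary and the transports stay consistent. Purely as an illustration of what would end up on the Oracle side, it would look roughly like this (the index name ZVDEVICE_HISTORY~Z01 is just an example, and MANDT is put in front as is usual for secondary indexes on client-dependent tables):

-- example only: in practice create the index via SE11, not directly in the database
create index "SAPSR3"."ZVDEVICE_HISTORY~Z01"
  on "SAPSR3"."ZVDEVICE_HISTORY"
     (mandt, kund1, bzirk, bill_from, bill_to, equnr, zzpald);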

good luck.

Former Member

Hi,

You mention performance.

As it's a bespoke table, I am assuming there is a bespoke piece of ABAP involved?

What are the select criteria against this table?

Is it using a proper index for the select criteria? Is your table index heavily fragmented?

Have you tried rebuilding the table and index online?
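Just to illustrate the rebuild idea (normally you would drive this through the SAP DBA tools rather than typing it by hand, and the secondary index name here is an assumption), an online rebuild at the Oracle level looks like this:

-- online rebuild of the primary index; repeat for the secondary index
-- after looking up its real name in DBA_INDEXES
alter index "SAPSR3"."ZVDEVICE_HISTORY~0" rebuild online;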

Mark

Former Member

Hi,

Statistics don't necessarily always improve performance; in fact, they might degrade it in some cases. What you need to do first is analyze the query: look at the execution plan and check the fields in the WHERE clause. Do you have the right indexes? How many distinct values are in each field? It depends on multiple factors. For all you know, the optimizer might decide that a full table scan is the best way to go, based on the query and the statistics on the table.

And I believe changing the statistics while the job is running shouldn't break the job; performance might improve, degrade, or stay the same.

Regards

Nainesh

volker_borowski2
Active Contributor

Well,

calculating statistics only improves performance if the new statistics result in a different and better execution plan.

You might need to do an SQL trace (ST05) of the statement which runs long and analyze the execution plan.

Most likely you will need to create an additional index to speed up your program.

Volker