
Performance issue in KONP or KONV table

Former Member

Hi,

We are preparing an SD rebate agreement report for which we need to fetch data from condition tables such as KONP, KONV, and KONH.

First, we need the key fields (KNUMH, KOPOS) from KONP, selecting by agreement number (KNUMA). It takes an absurdly long time to fetch the data, sometimes ending in a timeout error.

When we access the table normally via SE16, SE11, or SE16N, it also takes a long time to run.

I have also collected a trace (attached herewith).

Kindly help us reduce the time needed to fetch data from these tables.
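For reference, a minimal ABAP sketch of the kind of access described above (a sketch only; KNUMA_BO is the rebate-agreement field of KONP, and the agreement number is illustrative):

```abap
* KNUMA_BO (rebate agreement) is not part of the primary key of KONP,
* so without a suitable secondary index this SELECT typically forces
* a full table scan on a very large table.
DATA: BEGIN OF ls_konp,
        knumh TYPE konp-knumh,
        kopos TYPE konp-kopos,
      END OF ls_konp,
      lt_konp LIKE TABLE OF ls_konp.

SELECT knumh kopos
  FROM konp
  INTO TABLE lt_konp
  WHERE knuma_bo = '0000000602'.   "agreement number (example value)
```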

Accepted Solutions (0)

Answers (4)


fidel_vales
Employee

Hi,

as already mentioned, there is not enough information for a proper analysis, but I will try to guess a little.

98% of the trace time is spent accessing the KONP table, with only one fetch for 1 row.

The condition seems to be


WHERE "MANDT" = '450' AND "KNUMA_BO" = '0000000602' AND ROWNUM <= 200

but the complete SQL statement is not included.

I assume there is no index whose leading fields are MANDT and KNUMA_BO, is that right?

dmitry_kuznetsov1
Active Participant

My two cents here, from a not-so-techy standpoint: the problem with KONV is that it is:

a) big

b) clustered

What helped us was pre-filtering on one of the key fields, KNUMV, with a particular number range. If you still need the whole table, stop reading at this point.

The documents are numbered sequentially, so filtering on the number (greater than a particular value) will give you a slice of (recent) history.

To find the number range, you might want to look at the tables linked to KONV and see which numbers were used there recently. These tables include (at least):

VBRK

VBAK

BKPF

MKPF

If you find a time range (say, three years of history) in those tables and the corresponding KNUMV numbers, you can get an idea of the range to filter on in KONV.

In our case, dropping a number of years of history that we didn't really need had quite positive implications. I hope this adds value for you too.
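The pre-filtering idea above could be sketched roughly like this (a sketch under assumptions: VBRK-ERDAT is used as the creation-date field, the cutoff date is illustrative, and the smallest KNUMV in that period is taken as a usable lower bound):

```abap
* Derive a KNUMV lower bound from recent billing documents (VBRK),
* then restrict the KONV read to that number range.
DATA: lv_knumv_from TYPE vbrk-knumv,
      lt_konv       TYPE TABLE OF konv.

* Smallest condition-document number created since the cutoff date
SELECT MIN( knumv )
  FROM vbrk
  INTO lv_knumv_from
  WHERE erdat >= '20100101'.       "adjust the cutoff as needed

* Read only the recent slice of KONV
SELECT *
  FROM konv
  INTO TABLE lt_konv
  WHERE knumv >= lv_knumv_from.
```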

Former Member

Hi Ananya,

The best way to collect the data is to use a BAPI instead of accessing the table(s) directly. Only under specific conditions should you read the data from the tables without calling a standard function module.

Regarding the SQL statement you provided, it uses the KNUMA_BO field in the WHERE condition. To read the data efficiently, the WHERE condition should match the fields of an index. Make sure the optimizer has selected the correct index and prepared a suitable execution plan; you can check this in the shared cursor cache via transaction ST04.

Additionally, make sure the database statistics are collected regularly. You can check this in transaction DB13.

One more thing: check that the index used in the call actually exists in the database. You can verify this in transaction DB02.

In short, you can create a secondary index on the MANDT and KNUMA_BO fields and see the result. As a rule of thumb, a table shouldn't have more than five indexes.
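In an SAP system such an index should be defined in the ABAP Dictionary (transaction SE11, Indexes) rather than created directly on the database, so it is transported and known to the system; the resulting database object would correspond roughly to the following (Oracle syntax, hypothetical index name):

```sql
-- Hypothetical secondary index matching the WHERE clause from the trace;
-- in practice, define it in SE11 so the ABAP Dictionary knows about it.
CREATE INDEX "KONP~Z01" ON "KONP" ("MANDT", "KNUMA_BO");
```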

Your question is very generic, so first check whether a suitable BAPI exists. If it does not, apply the steps above.

Best regards,

Orkun Gedik

stefan_koehler
Active Contributor

Hi Ananya,

these traces are fine from an SAP perspective, but they are not sufficient for query performance troubleshooting.

Please check my blog for a "drill down approach" of that particular query:

http://scn.sap.com/community/oracle/blog/2013/01/10/oracle-advanced-performance-troubleshooting-with...

Hand it over to your DBA team ... this is a typical Oracle DBA / Engineer troubleshooting task.

Regards

Stefan