on 06-01-2010 12:20 PM
Hi,
I'd like to change the array size the SAP system is using for a specific statement
and then turn it back to normal. Is there a way to read the array size of the
current session from somewhere, change it, and change it back to normal later?
I'd like to try this to speed up a specific selection.
Volker
I'll never understand when this happens to my typed text...
Hi Stefan,
interesting things you tried to check out.
From my point of view, the performance difference does not come
from the DB layer because, as you confirmed, the DB times are the same.
So I guess that in case one it is doing a single "memcpy" on kernel level
to move all the rows from one fetch to the application storage,
whereas in the other option it has to deal with calculating
offsets for single rows to address them one by one.
Volker
Thanks Eric,
it never occurred to me that there was a limit.
I just concluded that the input gets garbled when I paste something in from the clipboard.
Hey mods, this should be noted in the "further markup possibilities" to avoid confusion.
Lars, looking into the notes mentioned, it seems that this is a system-wide setting.
No way to change this for me, because this is a BW (Bank Analyzer) system, which in general
needs a huge array size for speedy inserts.
Currently this parameter is set to 1.000.000 for us, according to a special BW note.
I was merely looking for something like an "alter session" equivalent.
Looks like there is no solution to this.
Volker
For frequent readers: sorry for fooling you with a customer user ID instead of my usual one;
I did not intend to. It happened due to certificate login at a customer site. Sorry for that.
Hello Volker,
> Looks like there is no solution to this.
If you are using OPEN SQL it seems like there is no solution, but if you use NATIVE SQL and the PL/SQL package DBMS_SQL there could be a solution.
Please check the procedure DEFINE_ARRAY (parameter cnt) of the DBMS_SQL package:
http://download.oracle.com/docs/cd/B19306_01/appdev.102/b14258/d_sql.htm#i1025685
To be honest I have never tried and tested it, because I used SQL*Plus and changed the array fetch size there, but it may be a solution. You can cross-check this by checking the "consistent gets" after changing the number of rows.
Regards
Stefan
Hi Stefan,
no, unfortunately this would be no solution either, because I'd need to
modify the code for this as well; as I understood the link,
you would need to rebuild the entire thing with fetches of your own.
I would never intend to do this.
As for code of our own, we can achieve the wanted result
as described above by selecting the keys first.
But we are also calling an SAP BAPI from our own coding, which
is doing the same. And of course I'd never intend to change a BAPI.
I was just considering:
set array small,
call BAPI
set array normal
Volker
Hi Volker,
the arraysize used by the SAP application server is determined by the DBSL parameter dbs/ora/array_buf_size.
Notes like
618868 FAQ: Oracle Performance
1164043 DBSL (Oracle): Maximum buffer size for array operations
provide some recommendations about it.
Anyhow, usually the default arraysize is pretty large.
Can you provide some more information about why you think that the statement performance would increase with a different array size?
regards,
Lars
Hi Lars,
the problem is that the array size is too large in my case.
I have a piece of code (which is not mine btw) which is doing
Select ...
where something < DATE
order by DATE desc
exit
endselect.
Basically it is searching for the last version for a given date.
The problem is that there are a whole lot of versions available, but we
only need the first row to get the latest version.
The sequence goes like this:
open cursor
prepare a result set as large as fits into the array (mostly it fits completely)
execute the fetch
... but the bloody application just needs the first row of it.
Although Oracle spent so much work to get, e.g., 53 rows to fulfill the request.
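The sequence above can be sketched in a few lines. This is only an analogy, not SAP or Oracle code: Python's sqlite3 stands in for the database, `cursor.fetchmany(size)` mimics the array fetch, and the table and column names are made up. The point is visible in the result: the whole array is materialized even though the caller only touches row #1.

```python
import sqlite3

# Stand-in schema: one object with one version row per day (names invented).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE versions (id INTEGER, valid_from TEXT)")
conn.executemany(
    "INSERT INTO versions VALUES (?, ?)",
    [(1, f"2010-01-{d:02d}") for d in range(1, 29)],
)

# Open cursor + execute: the WHERE/ORDER BY mirror the SELECT in the post.
cur = conn.execute(
    "SELECT id, valid_from FROM versions "
    "WHERE valid_from < '2010-01-28' ORDER BY valid_from DESC"
)
array_size = 100                   # analogous to a large fetch array
batch = cur.fetchmany(array_size)  # all 27 matching rows are fetched ...
latest = batch[0]                  # ... but only the first one is needed
print(len(batch), latest)          # 27 (1, '2010-01-27')
```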
The overall cost is bad, because the statement is doing, let's say, 60 block inspections:
Stepping down a level-3 index -> 4 blocks
Range-scanning the index -> 3 blocks to get 53 keys (the index supports WHERE and ORDER BY)
But due to a bad clustering factor, it needs to get 53 different data blocks.
(And there is no way to do a reorg on that table.)
Luckily, the index is the primary key. And the WHERE is fully supported by the PK.
So in code that belongs to us, we did bend the rules of simple programming by
doing a select of the key fields first to get the key of the row we'd like to get,
and then getting the data with a fully-key-specified select. The cost is:
Stepping down a level-3 index -> 4 blocks
Range-scanning the index -> 3 blocks to get 53 keys (the index supports WHERE and ORDER BY)
No need to fetch data blocks here
store the key
select with the key, going down the index again (4 blocks) and getting a single data block.
Now in the code it looks like somebody is doing foolish things, but effectively we are cutting
60 block inspections down to ~10-15. This is about 5.5 times faster on average (confirmed by measurement).
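The two-step access above can be sketched as follows. Again a hedged analogy only: sqlite3 stands in for Oracle, and the table, key, and column names are invented. Step 1 reads only the key columns the index can serve; step 2 fetches exactly one data row by full key.

```python
import sqlite3

# Stand-in table: composite primary key (obj, valid_from), names invented.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (obj INTEGER, valid_from TEXT, payload TEXT, "
    "PRIMARY KEY (obj, valid_from))"
)
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [(1, f"2010-01-{d:02d}", f"v{d}") for d in range(1, 11)],
)

# Step 1: key-only select -- can be satisfied from the index alone.
key = conn.execute(
    "SELECT obj, valid_from FROM t "
    "WHERE obj = 1 AND valid_from < '2010-01-09' "
    "ORDER BY valid_from DESC LIMIT 1"
).fetchone()

# Step 2: fully-key-specified select -- one index descent, one data block.
row = conn.execute(
    "SELECT payload FROM t WHERE obj = ? AND valid_from = ?", key
).fetchone()
print(row[0])  # v8, the latest version strictly before the given date
```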
We even have data that has some 700 versions, because it gets a new version each day.
And although it has 700 versions, it fits into one array set (so that is the worst case).
Now there are some function modules that SAP uses in the very same way, and of course
we do not like to change the code in those function modules. But if I could reduce the
array size dynamically before the call, the goal would be to make the function module not select
all 53 rows before it returns, but to make the buffer so small that it returns after 2 rows,
thus doing less I/O. It only makes sense because I know it will only need to go for row #1
and discard the rest.
Hope this clarifies what I am trying to achieve.
I have not completely checked the notes yet; I will be back on this later.
Volker
>
> Select ...
>
> where something < DATE
> order by DATE desc
>
> exit
>
> endselect.
>
> Basically it is searching for the last version for a given date.
> The problem is that there are a whole lot of versions available, but we
> only need the first row to get the latest version.
> ... but the bloody application just needs the first row of it.
Ok - but, really, this sounds like a coding bug.
In fact, to obey the ORDER BY clause, an intermediate result set needs to be created before any row is delivered at all.
So even with a small array size much work would be wasted if this query were used for your use case.
> Luckily, the index is the primary key. And the WHERE is fully supported by the PK.
> So in code that belongs to us, we did bend the rules of simple programming by
> Stepping down a level-3 index -> 4 blocks
> Range-scanning the index -> 3 blocks to get 53 keys (the index supports WHERE and ORDER BY)
> No need to fetch data blocks here
> store the key
> select with the key, going down the index again (4 blocks) and getting a single data block.
And why not just write a subselect for that?
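A subselect along those lines could look like this. A hedged sketch only: sqlite3 and all names are stand-ins for the real Oracle schema. The latest version date is resolved in a scalar subquery, so a single statement delivers exactly one row instead of filling a fetch array.

```python
import sqlite3

# Stand-in table (names invented): one version row per day.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (obj INTEGER, valid_from TEXT, payload TEXT, "
    "PRIMARY KEY (obj, valid_from))"
)
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [(1, f"2010-01-{d:02d}", f"v{d}") for d in range(1, 11)],
)

# Scalar subquery picks the newest valid_from below the given date,
# so the outer select is fully key-specified and returns one row.
row = conn.execute(
    "SELECT payload FROM t "
    "WHERE obj = 1 AND valid_from = "
    "  (SELECT MAX(valid_from) FROM t WHERE obj = 1 AND valid_from < ?)",
    ("2010-01-09",),
).fetchone()
print(row[0])  # v8
```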
> Now there are some function modules that SAP uses in the very same way, and of course
> we do not like to change the code in those function modules. But if I could reduce the
> array size dynamically before the call, the goal would be to make the function module not select
> all 53 rows before it returns, but to make the buffer so small that it returns after 2 rows,
> thus doing less I/O. It only makes sense because I know it will only need to go for row #1
> and discard the rest.
If there is such standard coding, then I would say: it is buggy and needs to be reviewed.
regards,
Lars
Hi Lars,
no, the statement itself and the execution plan are rather optimal.
It is doing an INDEX RANGE SCAN DESCENDING, so no intermediate set is built
in advance, because the index supports the ORDER BY. It simply scans down the index.
I am absolutely sure about this, because the descending access showed
up after some patch set / bugfix in 9.2.something some two years ago;
before that, the old plan used a SORT ORDER BY, which was about 15
times slower than the descending range scan. That was funny, because
we suddenly thought something was wrong with our data.
It is really only the problem that we only need the first row, and Oracle proceeds
to fill the entire fetch array before it returns.
Meanwhile I think the best access for this type of select would be utilizing
native Oracle SQL with an analytic function. But I do not think that there is
so much to gain against what we have in place right now.
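The analytic-function variant mentioned above might look like this. Once more a hedged sketch with invented names, using sqlite3 (which supports window functions since 3.25) in place of native Oracle SQL: ROW_NUMBER() ranks the versions newest-first and the outer query keeps only rank 1.

```python
import sqlite3  # bundled sqlite3 >= 3.25 is assumed for window functions

# Stand-in table (names invented): one version row per day.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (obj INTEGER, valid_from TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO t VALUES (?, ?, ?)",
    [(1, f"2010-01-{d:02d}", f"v{d}") for d in range(1, 11)],
)

# ROW_NUMBER() orders versions newest-first; only rank 1 survives,
# so the statement returns exactly the single row the caller needs.
row = conn.execute(
    "SELECT payload FROM ("
    "  SELECT payload,"
    "         ROW_NUMBER() OVER (ORDER BY valid_from DESC) AS rn"
    "  FROM t WHERE obj = 1 AND valid_from < ?"
    ") WHERE rn = 1",
    ("2010-01-09",),
).fetchone()
print(row[0])  # v8
```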
So right now I am just trying to optimize the parts where I cannot change the code,
without changing the code.
Volker
> no, the statement itself and the execution plan are rather optimal.
> It is doing an INDEX RANGE SCAN DESCENDING, so no intermediate set is built
> in advance, because the index supports the ORDER BY. It simply scans down the index.
Ok - in that case there's no need for an intermediate result set.
Difficult to say without knowing the full statement and execution plan...
> It is really only the problem that we only need the first row, and Oracle proceeds
> to fill the entire fetch array before it returns.
And still: only one row is required, so just that one should be queried correctly from the database.
Analytics may help with this, that's right,
regards,
Lars