on 04-28-2010 1:53 PM
Hi,
I am considering doing a long2lob conversion for table BALDAT
to be capable of online reorgs after archiving/delete runs for BALDAT.
I am a bit scared about the possible performance impacts that note 835552
warns about.
Has anyone already done this for table BALDAT and can tell a bit about
the impacts that had to be dealt with?
Thanks
Volker
Hello Volker,
At first, let me point you to a nice performance white paper about LOBs from Oracle:
With this white paper you can understand in which cases caching LOBs is efficient and in which cases caching happens automatically (in-line LOBs).
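For illustration, the storage options that control these two behaviors look roughly like this — a sketch with a made-up table name, not the actual BALDAT DDL:

```sql
-- Hypothetical table: the LOB clauses decide in-line storage and caching.
CREATE TABLE demo_lob (
  id   NUMBER PRIMARY KEY,
  data BLOB
)
LOB (data) STORE AS (
  ENABLE STORAGE IN ROW   -- values below ~4000 bytes stay in the row (in-line)
  CACHE                   -- route LOB reads/writes through the buffer cache
);
```

With `ENABLE STORAGE IN ROW` (the default), small values never touch the LOB segment at all, which is why in-line LOBs are effectively cached like normal row data.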
> Has anyone done this already for table BALDAT and can tell a bit about the impacts that had to be dealt with.
Everybody who is running a 7.00 system is already using LOBs for BALDAT (and for every other table that previously used LONG RAW). This conversion is done implicitly by a Unicode migration, and in the case of a new installation you are using LOBs from the start.
We never faced any performance issues after migrating from LONG RAW to LOBs.
Regards
Stefan
Hello Stefan,
thanks for the document, it is quite interesting.
Meanwhile I did a conversion on a small sandbox with a stunning result:
- BALDAT export (exp, no PQ), 470k rows: 14 sec
- BALDAT long2lob conversion: 27 sec
- BALDAT export (exp, no PQ), 470k rows: 96 sec
- BALDAT online reorg: 29 sec
- BALDAT export (exp, no PQ), 470k rows: 94 sec
Now, going from 14 sec up to 95 sec is extremely bad, although this might not be
a representative access pattern anyway. The LOB itself appears to be stored in-line,
as the LOB segment got a 64k size and did not grow.
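This in-line observation can be cross-checked against the data dictionary — a sketch; run it as a user with access to the DBA views:

```sql
-- Is the converted LOB stored in row, and how big is its segment?
SELECT l.column_name, l.in_row, s.bytes
FROM   dba_lobs     l
JOIN   dba_segments s ON s.segment_name = l.segment_name
WHERE  l.table_name = 'BALDAT';
```

`IN_ROW = 'YES'` plus a segment stuck at its initial extent size would match the 64k observation above.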
I am still not convinced.
Additional comments welcome.
Volker
Hello Volker,
> I am still not convinced.
Great :) ... I like guys who don't just believe and want proof.
> now 14 up to 95 is extremely bad
That's right ... but there are many possible reasons why it is slower:
1) How did you export the table (with exp or expdp)?
2) After the reorg, the table (and the LOB segment) can be located on "slower disks".
3) How much of BALDAT's data was in the buffer cache during the first, second, and third export?
4) What is the average I/O time?
If you really want to dig into that, you have to do a deeper analysis than just running a "simple export" and comparing the times.
Regards
Stefan
Yes, I tend to be tedious sometimes.
The sandbox is my own toy (no other users on it).
BALDAT there has some 400 MB, and after a couple of counts right before,
after, and in between, I am pretty confident that every action already ran
completely from the buffer cache with nearly no I/O involved
(this is why I did the reorg right after the long2lob again, just to be sure).
Now, an 80-second delta does not seem to do much harm, but the beast I need to apply
this to has quite a couple of TB; its BALDAT has 100 GB, and I have processings
that write to BALDAT with 120-200 processes, all finishing
at the same time.
I do not want to find these serializing on a BLOB segment header later.
And as this step is only revertible with considerable effort, I would like to be sure beforehand.
Volker
Hello Volker,
> after a couple of counts right before and after and around I am pretty confident that every action was already completely in the buffercache
I hope you don't mean by "counts" a SQL like "SELECT COUNT(*) FROM BALDAT", because in most cases (disregarding the fact of possible NULLs) such a count uses an index and not the base table for counting the rows.
> I do not like to find these to be serializing on a BLOB Segment header later.
As you said, most data will be in-line (if I don't misunderstand your earlier posts), so there is no need to worry about that.
If you really want to think about LOB headers, etc., you may find these presentations very interesting:
http://www.juliandyke.com/Presentations/LOBInternals.ppt
http://integrid.info/Poder_LOB_Internals.pdf
Regards
Stefan
Hi Stefan,
No, when doing such an analysis I try to avoid I/O effects
by running a count(*) with FULL and CACHE hints several times
before the measurement starts.
In general this works OK if one has enough RAM: when using
sqlplus with autotrace you'll barely see any "reads", only "gets",
and then one should be fine. I cross-checked this in the early days, but I have
to admit, not in this case (it worked OK many times in the past).
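The warm-up described above would look roughly like this in sqlplus (a sketch; the alias and repetition count are illustrative):

```sql
-- Warm the buffer cache before measuring, then watch the statistics.
SET AUTOTRACE ON STATISTICS

SELECT /*+ FULL(b) CACHE(b) */ COUNT(*) FROM baldat b;
-- Repeat the query until "physical reads" in the autotrace
-- statistics stays near 0 and only "consistent gets" remain;
-- at that point the table is fully cached and timing starts.
```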
As far as the in-line stuff is concerned:
I did not know that before I checked, and unfortunately my sandbox
just has some entries from the standard BC application logs, not the ones
that my application is writing; those entries might be bigger(?).
Maybe I need to move the big BALDAT over (exp/imp) to the sandbox
and repeat the conversion with real data to be sure.
Volker
Hello Volker,
the CLUSTD field of BALDAT usually has a length of 512 bytes. That is significantly below the 4 KB in-row limit, so all values are stored "in row" and not in a LOB segment. This means that you don't have to worry about LOB segment / LOB index contention at all.
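That assumption is easy to verify on one's own data after the conversion — a sketch using the standard DBMS_LOB package:

```sql
-- Average and maximum stored length of CLUSTD; values well under
-- ~4000 bytes confirm that everything stays in row.
SELECT AVG(DBMS_LOB.GETLENGTH(clustd)) AS avg_len,
       MAX(DBMS_LOB.GETLENGTH(clustd)) AS max_len
FROM   baldat;
```

If max_len were to approach the in-row limit, the worry about LOB segment contention would become relevant again.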
Regards
Martin