on 08-14-2008 4:48 AM
Dears,
Some days back we got a requirement from our client: they want to divide the datafiles into smaller files. They currently have datafiles of 10 GB each and want to convert them into 2 GB files, because according to them, backing up one large file is slower than backing up a larger number of small files. Until now, my approach when adding a datafile was to give it 1 GB initially and let it autoextend up to 10 GB.
Now I am wondering whether I am following the right procedure. Which is better: adding 2 GB datafiles to the tablespace with autoextend off, or the approach I have been using?
Besides that, is it even possible to split an existing 10 GB file into five 2 GB files? I think it is not, since the datafile might get corrupted in that case.
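Just to make clear what I mean, the two approaches would look roughly like this (the tablespace name and file paths are only examples from my side, not our real ones):

```sql
-- Approach 1 (what I do now): one file, small initial size,
-- autoextend in 100 MB steps up to a 10 GB maximum
ALTER TABLESPACE psapdata
  ADD DATAFILE '/oracle/SID/sapdata1/data_01.dbf'
  SIZE 1G AUTOEXTEND ON NEXT 100M MAXSIZE 10G;

-- Approach 2 (what the client wants): fixed 2 GB file, autoextend off
ALTER TABLESPACE psapdata
  ADD DATAFILE '/oracle/SID/sapdata1/data_02.dbf'
  SIZE 2G AUTOEXTEND OFF;
```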
Please suggest.
Deepak
Hi Deepak,
the only way to accomplish this "conversion" is a tablespace reorganisation.
Create a target tablespace with datafiles that suit your needs and move all segments from the original tablespace into it, e.g. using BRSPACE (which relies on DBMS_REDEFINITION).
The performance benefit, well, it might be there - just by having more I/O handles (aka files) open in parallel when reading/writing data from that tablespace. It might also be that the performance remains unchanged, or that it gets worse.
I highly recommend setting up a smaller test case first to make sure that you don't waste your time.
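A minimal sketch of what I mean (tablespace, schema and segment names are only placeholders; in a SAP system you would normally let BRSPACE generate and run these statements for you, e.g. via "brspace -f tbreorg"):

```sql
-- Target tablespace with fixed-size 2 GB files, autoextend off
CREATE TABLESPACE psapdata_new
  DATAFILE '/oracle/SID/sapdata2/new_01.dbf' SIZE 2G AUTOEXTEND OFF,
           '/oracle/SID/sapdata2/new_02.dbf' SIZE 2G AUTOEXTEND OFF;

-- Move one table segment (repeat for every segment in the tablespace)
ALTER TABLE sapr3.some_table MOVE TABLESPACE psapdata_new;

-- Moving a table invalidates its indexes, so rebuild them afterwards
ALTER INDEX sapr3.some_table_idx REBUILD TABLESPACE psapdata_new;
```

Once everything has been moved, you can drop the old tablespace with its 10 GB files.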
regards,
Lars