on 04-29-2010 3:16 AM
I'm currently contemplating how to design my datafile layout. I'm running ERP 6.0 on AIX 6.1 with Oracle 10g. Total DB size is 1300 GB.
The default is 10 GB datafiles, which leads to 130 data files; I think that is too much of a hassle to keep track of. Is it OK to have datafiles of 100 GB? What are the pros and cons?
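For scale, the file-count arithmetic behind the question can be sketched like this (the 10 GB default and 100 GB candidate are from the post; 32 GB is included for comparison as an assumption):

```python
import math

db_size_gb = 1300  # total database size from the post

# How many datafiles are needed at different per-file sizes?
for file_size_gb in (10, 32, 100):
    n_files = math.ceil(db_size_gb / file_size_gb)
    print(f"{file_size_gb:3d} GB files -> {n_files:3d} datafiles")
# 10 GB -> 130 files, 32 GB -> 41 files, 100 GB -> 13 files
```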
Thanks, guys, for the tips. I guess I'll have to settle for 32 GB as the max size.
I misunderstood the Oracle docs. The 32 TB maximum applies only when the BIGFILE TABLESPACE option is used when creating the tablespace. FYI, this option is not supported by SAP (SAP Note 105047).
Hello Mohammad,
> With 8KB block size we can go up to 32TB, not 32GB.
That's not correct ...
According to Metalink note #271482.1 (Database and File Size Limits in 10G), each data file has the following limit:
4,194,303 multiplied by the value of the DB_BLOCK_SIZE parameter
So in your case it is 4,194,303 x 8 KB = 33,554,424 KB ≈ 32 GB.
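The same calculation for other common block sizes, as a quick sketch (the 4,194,303-block ceiling applies to ordinary, i.e. smallfile, datafiles per the note above):

```python
MAX_BLOCKS = 4_194_303  # max blocks per smallfile datafile (Metalink #271482.1)

def max_datafile_gb(block_size_kb: int) -> float:
    """Maximum datafile size in GB for a given DB_BLOCK_SIZE in KB."""
    return MAX_BLOCKS * block_size_kb / (1024 * 1024)

for bs_kb in (2, 4, 8, 16, 32):
    print(f"{bs_kb:2d} KB blocks -> {max_datafile_gb(bs_kb):6.1f} GB max per datafile")
# 8 KB blocks come out at ~32 GB, matching the 33,554,424 KB figure above
```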
We are running a 3.6 TB database and most of the data files are around 10 GB, so there is no need to worry.
The only things you have to think about when designing your database file-system structure are backup & recovery and spreading the workload. Larger data files take much longer to recover (if the whole file is lost), and spreading the workload can also be an issue if you are not using a centralized storage system.
But this is environment-dependent; nobody can say anything general about it.
Regards
Stefan
Hi Mohammad,
It basically depends on which Oracle block size you are using. If it is 8 KB, then the maximum datafile size is 32 GB.
Again, you should think about backup/recovery and performance issues.
You also need to check your backup tool's limitations in this regard.
There can also be OS limitations on larger datafiles.
A checkpoint may take less time with larger datafiles, since it needs to update the headers of a few large datafiles rather than a large number of small ones.
Restore time should be the same in both cases: restoring five 4 GB datafiles and one 20 GB datafile takes roughly the same amount of time.
It would be best to consult a core DBA about your doubts.
Regards.
Rajesh Narkhede
Hi Jeffrey,
the maximum datafile size on AIX (with an 8 KB block size) is 32 GB, i.e. you can create a datafile of at most 32 GB. There is also a limit on the total number of datafiles in a tablespace: 1,022 files maximum (hardcoded in Oracle).
It is always better when the work is distributed. Spreading many files across many mount points will give you good I/O performance; for example, with 10 datafiles the I/O is distributed better than with 5, so database performance should improve.
Hope this clarifies your doubt.
Regards,
Kishore Soma