
big datafile vs many datafiles

Former Member
0 Kudos

I'm currently contemplating how to design my datafile layout. I'm running ERP6 on AIX 6.1 with Oracle 10g. Total DB size is 1300 GB.

The default is to have 10 GB datafiles, which would lead to 130 data files; I think it is too much of a hassle to keep track of all of those files. Is it OK to have datafiles of 100 GB? What are the pros and cons?

Accepted Solutions (0)

Answers (3)


Former Member
0 Kudos

Thanks guys for the tips. I guess I'll have to settle for 32 GB as the maximum size.

I misunderstood the Oracle docs. The 32 TB maximum only applies when the BIGFILE TABLESPACE option is used when creating the tablespace. FYI, this option is not supported by SAP (see SAP Note 105047).
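For illustration only, a minimal sketch of the two variants (tablespace names and file paths are hypothetical; the BIGFILE form is shown purely to contrast the syntax, since SAP does not support it per SAP Note 105047):

-- Default smallfile tablespace: each datafile is capped at
-- 4194303 * DB_BLOCK_SIZE, i.e. 32 GB with an 8 KB block size.
CREATE TABLESPACE psapdemo
  DATAFILE '/oracle/SID/sapdata1/demo_1/demo.data1' SIZE 10G;

-- Bigfile tablespace: one single datafile of up to 32 TB with
-- 8 KB blocks -- not supported for SAP systems (SAP Note 105047).
CREATE BIGFILE TABLESPACE psapdemo_big
  DATAFILE '/oracle/SID/sapdata1/demo_big/demo_big.data1' SIZE 100G;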

stefan_koehler
Active Contributor
0 Kudos

Hello Mohammad,

> With an 8 KB block size we can go up to 32 TB, not 32 GB.

That's not correct ...

According to Metalink note #271482.1 (Database and File Size Limits in 10G), you get the following limit for each data file:

4,194,303 multiplied by the value of the DB_BLOCK_SIZE parameter

So in your case it is 4,194,303 x 8 KB = 33,554,424 KB ≈ 32 GB
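For reference, a minimal sketch of how to derive that limit from the instance's own block size (runnable in SQL*Plus by any user who can read V$PARAMETER):

-- DB_BLOCK_SIZE is stored in bytes, so the result is the
-- per-datafile ceiling in GB (~32 GB for an 8 KB block size).
SELECT value AS db_block_size,
       4194303 * TO_NUMBER(value) / 1024 / 1024 / 1024 AS max_file_gb
  FROM v$parameter
 WHERE name = 'db_block_size';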

We are running a 3.6 TB database - most of the data files are around 10 GB. No need to worry.

The only things you have to think about are "Backup & Recovery" and "Spreading the workload" when you design your database file system structure. Larger data files take much longer to recover (if the whole file is lost), and spreading the workload can also be a topic if you are not using a centralized storage system.

But this stuff is environment-dependent - nobody can give a general answer here.
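If "spreading the workload" is a topic in your environment, one hedged way to look at the current I/O distribution across datafiles is via the standard V$FILESTAT and V$DATAFILE views:

-- Physical reads/writes per datafile since instance startup;
-- heavily skewed numbers suggest the workload is not well spread.
SELECT d.name,
       f.phyrds  AS physical_reads,
       f.phywrts AS physical_writes
  FROM v$filestat f
  JOIN v$datafile d ON d.file# = f.file#
 ORDER BY f.phyrds + f.phywrts DESC;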

Regards

Stefan

Former Member
0 Kudos

Hi Mohammad,

It basically depends on which Oracle block size you are using. If it is 8 KB, then the maximum datafile size is 32 GB.

Again, you should think about backup/recovery and performance issues.

You also need to check your backup tool's limitations in this regard.

There can also be OS limitations for larger datafiles.

A checkpoint may take less time with larger datafiles, as it needs to update the headers of fewer datafiles compared to a large number of small ones.

Restore time would be the same in both cases, because restoring five 4 GB datafiles and one 20 GB datafile takes roughly the same amount of time.
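To illustrate the header-update point: the checkpoint information Oracle maintains in each datafile header can be seen in the standard V$DATAFILE_HEADER view, one row per file:

-- Every checkpoint has to touch each of these headers, so fewer,
-- larger files mean fewer header updates per checkpoint.
SELECT file#, checkpoint_change#, checkpoint_time
  FROM v$datafile_header
 ORDER BY file#;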

It would be better if you consult a core DBA about your doubts.

Regards.

Rajesh Narkhede

Former Member
0 Kudos

Dear Rajesh,

With an 8 KB block size we can go up to 32 TB, not 32 GB.

kishore_soma
Active Participant
0 Kudos

Hi Jeffrey,

The max file size in AIX is 32 GB; that is, you can create a datafile with a maximum size of 32 GB. Also, there is a limit on the total number of datafiles in a tablespace, i.e., 1022 files max (hardcoded in Oracle).
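As a quick sanity check against that per-tablespace ceiling, a minimal sketch using the standard DBA_DATA_FILES view:

-- Datafile count and total size per tablespace; the count is what
-- the hardcoded smallfile limit of 1022 files applies to.
SELECT tablespace_name,
       COUNT(*) AS file_count,
       ROUND(SUM(bytes) / 1024 / 1024 / 1024) AS total_gb
  FROM dba_data_files
 GROUP BY tablespace_name
 ORDER BY file_count DESC;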

It is always better if the work is distributed: many files across many mount points will give you good performance. So it is better to have 10 datafiles; then your DB performance will be good. It makes a difference whether you have 5 datafiles or 10: the I/O across 10 files will be better than across 5.

Hope this clarified your doubt.

Regards,

Kishore Soma

Former Member
0 Kudos

Dear Kishore,

Thanks for your input. I'm using AIX 6.1 with the JFS2 filesystem. The maximum file size for this setup is 32 TB.

kishore_soma
Active Participant
0 Kudos

I'm not sure about this setup, but it will be difficult for the OS too if the file size is very big, and even backup/restore will be a bit difficult.

I mean maintenance, by the OS as well as by you, will be difficult, and it also affects performance.

Regards,

Kishore Soma