on 10-02-2014 3:22 PM
Hello All,
We have an ECC system running on MS SQL Server. The database has 4 data files of 900 GB each, and because of these huge data files we are facing problems with backup and restore.
To avoid this and other issues we have planned to increase the number of data files from 4 to 12. Can anyone suggest an approach for performing such a task?
Regards,
Mohit
Hi Mohit,
If you have a downtime window, the best way is to export the system with SWPM and R3load and import it again, this time using a higher number of data files. This ensures that the data is evenly distributed over all files.
If you just add files now, all current data remains in the first 4 files while most of the new data will go into the newly added ones. Usually one tries to avoid that, as it can impact performance.
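The uneven distribution Beate describes follows from SQL Server's proportional-fill behaviour: new allocations favour files with more free space, so freshly added empty files absorb almost all new writes while existing data stays put. A toy sketch (not SQL Server's actual allocator; all sizes are made-up page counts) illustrates the effect:

```python
# Toy sketch of proportional-fill allocation: each new page goes to the
# file with the most free space. Sizes are illustrative page counts,
# not real SQL Server accounting.

def allocate(files, new_pages):
    """files: list of dicts with 'used' and 'size' (in pages)."""
    for _ in range(new_pages):
        target = max(files, key=lambda f: f["size"] - f["used"])
        target["used"] += 1
    return files

# 4 old files nearly full, 8 freshly added empty files of the same size
old = [{"size": 900, "used": 750} for _ in range(4)]
new = [{"size": 900, "used": 0} for _ in range(8)]
files = allocate(old + new, new_pages=1200)

old_growth = sum(f["used"] for f in files[:4]) - 4 * 750
new_growth = sum(f["used"] for f in files[4:])
print(old_growth, new_growth)  # the new, emptier files receive the new pages
```

In this simplified model every one of the 1200 new pages lands in the 8 new files, leaving the old data exactly where it was - which is the skew an export/import avoids.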
Regards,
Beate
Hello Beate,
Thank you for your answer. The size of the database is around 3 TB, spread across 4 data files.
Is it a good idea to export such a huge database and then import it again with the new data file structure? Can you please provide a link or documentation for my better understanding?
Many thanks
Regards,
Mohit
Mohit,
The procedure to do this with SWPM is described in the System Copy Guide for your NetWeaver release. Instead of the Database-Specific procedure, follow the instructions for the Database-Independent procedure. That basically has you use SWPM to export your database and then re-import it (i.e., as if copying it to a different system, but in this case you are importing it back to the same system).
It will likely take quite a bit of time, though, so you might want to try it on a sandbox or test system that is a copy of your production so as to get an idea of how much downtime is required.
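To turn Matt's advice into a number, a back-of-the-envelope calculation helps frame the downtime. The 3 TB size comes from the thread; the throughput figures below are purely illustrative assumptions - replace them with rates measured on your sandbox copy:

```python
# Rough downtime estimate for an R3load export/import.
# Throughput figures are invented placeholders, not benchmarks.

db_size_gb = 3 * 1024            # ~3 TB database, as stated in the thread

export_rate_gb_h = 150           # assumed sustained export rate (GB/hour)
import_rate_gb_h = 120           # assumed sustained import rate (GB/hour)

export_h = db_size_gb / export_rate_gb_h
import_h = db_size_gb / import_rate_gb_h
total_h = export_h + import_h

print(f"export ~{export_h:.1f} h, import ~{import_h:.1f} h, total ~{total_h:.1f} h")
```

Even with optimistic rates the sequential export-then-import easily runs into a full weekend, which is why a trial run on a production-sized copy is essential.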
Regards,
Matt
Hi Mohit,
as usual - it depends.
Mainly on:
- How fast your I/O is
- How much CPU power you have
- How experienced you are with defining helpful table splits and export/import orders
These factors will influence the export and import time.
I am not saying this is the only option - only that it is the best option for evenly distributing all data over all data files after adding new ones.
You can as well just add four files but then I'd have a very close look at how the I/O access times change in transaction ST04 after doing this. It's also a very good idea not to do this in the production system directly.
Another option to proactively make the DB distribute the data into the newly added files is:
- Add four files or as many as you want
- Use report MSSCOMPRESS to reorganize the DB objects
This way you don't need a downtime but it's still I/O intensive. At least this gives you the option to not touch the whole DB at once but to do it bit by bit.
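Beate's object-by-object approach works because each rebuild frees a table's pages in the old files and reallocates them under proportional fill, which favours the emptier new files. A toy simulation (made-up page counts, greatly simplified allocation - real SQL Server page accounting is far more involved) shows the fill levels converging as tables are rebuilt one at a time:

```python
# Toy illustration of why rebuilding objects one by one (e.g. via
# MSSCOMPRESS) gradually evens out file fill levels.

def rebuild(files, table_pages):
    """Drop a table's pages from the fullest file, then reallocate them
    one page at a time to whichever file has the most free space."""
    source = max(files, key=lambda f: f["used"])
    source["used"] -= table_pages
    for _ in range(table_pages):
        target = max(files, key=lambda f: f["size"] - f["used"])
        target["used"] += 1
    return files

# 4 well-filled old files plus 8 freshly added empty ones
files = [{"size": 900, "used": 800} for _ in range(4)]
files += [{"size": 900, "used": 0} for _ in range(8)]

for _ in range(10):               # rebuild ten 100-page tables
    rebuild(files, table_pages=100)

spread = max(f["used"] for f in files) - min(f["used"] for f in files)
print(spread)  # gap between fullest and emptiest file shrinks from 800
```

The gap narrows with every rebuild, so you can spread the I/O load over several maintenance windows instead of one long outage - at the cost of never reaching the perfectly even distribution an export/import gives.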
Still, the best distribution will be achieved by export/import.
Regards,
Beate
Hi Mohit
You can refer to the SAP Note and SCN blog about SQL Server tuning:
1238993 - Proportional File Auto-Growth with SQL Server 2008
and also refer to the Microsoft article Add Data or Log Files to a Database.
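If you go the route from the Microsoft article, each new file is added with ALTER DATABASE ... ADD FILE. A hypothetical helper that generates those statements is sketched below; the database name, logical file names, sizes, and paths are invented placeholders - adjust everything to your landscape and review the T-SQL before running it:

```python
# Generate hypothetical ALTER DATABASE ... ADD FILE statements for the
# planned jump from 4 to 12 data files. All names/paths are placeholders.

def add_file_statements(db, start_no, count, size_mb, path_template):
    stmts = []
    for i in range(start_no, start_no + count):
        stmts.append(
            f"ALTER DATABASE [{db}] ADD FILE (\n"
            f"    NAME = {db}DATA{i},\n"
            f"    FILENAME = '{path_template.format(i)}',\n"
            f"    SIZE = {size_mb}MB, FILEGROWTH = 512MB\n"
            f");"
        )
    return stmts

# files 5..12, matching the thread's plan of going from 4 to 12 files
stmts = add_file_statements(
    "ECC", start_no=5, count=8, size_mb=512000,
    path_template=r"E:\MSSQL\DATA\ECCDATA{}.ndf",
)
print(stmts[0])
```

Creating the new files at equal sizes matters, because SQL Server's proportional fill (and the auto-growth behaviour from Note 1238993) works best when all files in the filegroup have the same size and growth settings.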
Regards
Sriram