on 11-14-2007 4:02 PM
Greetings
Long-standing problem: 100 inconsistencies ranging across COSP, COEP & GLPCA. (Data changes too fast in our system to consider a restore.)
We need to identify and remove these errors before we change collations.
The only option offered by DBCC is "repair_allow_data_loss" which I definitely don't want to do.
Is there a way out of this?
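For context, the inconsistencies were found with a plain consistency check (no repair option); the database name below is just a placeholder for ours:

```sql
-- Report every error without attempting any repair.
-- 'PRD' is a placeholder database name.
DBCC CHECKDB ('PRD') WITH NO_INFOMSGS, ALL_ERRORMSGS;

-- Or check the affected tables individually:
DBCC CHECKTABLE ('COSP') WITH NO_INFOMSGS;
DBCC CHECKTABLE ('COEP') WITH NO_INFOMSGS;
DBCC CHECKTABLE ('GLPCA') WITH NO_INFOMSGS;
```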
Thanks in great anticipation!
Hello,
with these prerequisites (no restore and no repair) you can only pray for a no-data-loss solution.
As you worked with the database while it was corrupt (not recommended), you have only two ways out of it (maybe):
1) Run DBCC with repair_allow_data_loss and live with the logical inconsistencies.
If DBCC cannot repair the database, you have to do a homogeneous system
copy with R3load to export as much data as possible.
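A minimal sketch of option 1), assuming a database named PRD (any repair option requires single-user mode, so plan downtime):

```sql
-- Put the database into single-user mode; repair options require it.
ALTER DATABASE PRD SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Attempt the repair; this may deallocate corrupt pages, i.e. lose rows.
DBCC CHECKDB ('PRD', REPAIR_ALLOW_DATA_LOSS);

-- Back to normal operation; re-run DBCC CHECKDB afterwards to verify.
ALTER DATABASE PRD SET MULTI_USER;
```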
2) Restore the database from a known-good backup and apply transaction logs up to
now. If you use an old DB backup that is not corrupt and all your transaction log
backups are accessible and not corrupt, you will end up with a current,
non-corrupt database. If you don't have a non-corrupt DB backup, or even one of
the transaction log backups is damaged, see option 1).
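A sketch of option 2), with placeholder file names; every log backup taken since the full backup must be applied in order:

```sql
-- Restore the known-good full backup, leaving the database
-- in a restoring state so log backups can still be applied.
RESTORE DATABASE PRD FROM DISK = N'E:\backup\PRD_full.bak' WITH NORECOVERY, REPLACE;

-- Apply every transaction log backup in sequence.
RESTORE LOG PRD FROM DISK = N'E:\backup\PRD_log_01.trn' WITH NORECOVERY;
RESTORE LOG PRD FROM DISK = N'E:\backup\PRD_log_02.trn' WITH NORECOVERY;
-- ...one RESTORE LOG per backup, up to the most recent...

-- Bring the database online.
RESTORE DATABASE PRD WITH RECOVERY;
```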
As database corruptions are caused by faulty hardware, you should do an extensive check of your hardware. Ask your hardware vendor to assist you during these checks. You have to do this check before you attempt to restore, repair, or even use the database on this hardware. If you keep using faulty hardware, you will encounter more and more corruptions in more and more tables over time.
Regards
Clas
Thanks Clas,
The problem unfortunately predates me.
Since then I have moved all the instances to new server hardware and SAN storage.
What would happen if I just ran instcoll.exe against it?
I might try it with a copy of the DB on a test instance to see what happens.
Given that, at most, this appears to be only a few rows of data, is there any way of knowing how much data will actually be trashed without "suck it and see"?
Would rebuilding all indexes help?
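(If the errors turn out to be confined to nonclustered indexes, I was thinking of something like the following; the table name is just an example:)

```sql
-- Only helps if the corruption sits in nonclustered index pages,
-- not in the underlying data pages. On SQL Server 2005:
ALTER INDEX ALL ON COSP REBUILD;

-- On SQL Server 2000 the equivalent would be:
-- DBCC DBREINDEX ('COSP');
```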
Tim