Minimizing Downtime during large database Migration with Oracle Data Guard

hannover_sap-basis
Participant

Hello community,

we have to migrate a large database to a new datacenter. My approach to avoid downtime was to set up a complete Oracle Data Guard system and do log shipping from the current live production system to this target system.


We are aware that SAP provides the following approaches (which are not an option for us):

Note 1508271 - Oracle to Oracle Online Migration - Triple-O

Note 693168 - Minimized Downtime Service and Incremental Migration (IMIG)

Would you recommend this method, and is there documentation for this (Data Guard) method?

Many thanks for your help!

Lutz

Accepted Solutions (1)

stefan_koehler
Active Contributor

Hi Lutz,

> Would you recommend this method and there are documentation for this (DataGuard)- method?

Absolutely. I have used this method several times (at client sites) for moving critical / large Oracle databases (> 7 TB) from one site to another or for OS migrations. You can also use heterogeneous Data Guard for various OS combinations (e.g. from Windows to Linux). The downtime itself depends on the DG configuration, but usually the business allows a downtime of just a few minutes, which is no problem at all with DG.

Long distances (with minimal performance impact) can be realized with cascading standby databases or Far Sync in 12c (12c is currently not supported for SAP, but you can use the "manual" cascading approach). In combination with redo compression (ACO option) it becomes even smoother.
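Just as an illustration (this is not part of the original answer): redo compression is switched on per redo transport destination via the COMPRESSION attribute and requires the Advanced Compression Option; 'dc2stby' below is a hypothetical DB_UNIQUE_NAME / TNS alias. A minimal sketch on the cascading (intermediate) standby that forwards the received redo to the remote site:

-- on the intermediate standby: forward the received redo, compressed, to DC2
ALTER SYSTEM SET LOG_ARCHIVE_DEST_3 =
  'SERVICE=dc2stby ASYNC COMPRESSION=ENABLE
   VALID_FOR=(STANDBY_LOGFILES,STANDBY_ROLE) DB_UNIQUE_NAME=dc2stby'
  SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_3 = 'ENABLE' SCOPE=BOTH;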

Another benefit of using a physical standby database is that you do not change any "physical" data structures (as you would by applying the changes logically). All the statistics and "physical" database objects stay the same, so the risk of Oracle-related performance issues after the migration is also minimized. The standby site (including SAP) can also be tested and verified before Go-Live with a feature called "snapshot standby database", without rebuilding the environment at all.
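For illustration only (not in the original answer): converting a physical standby into a writable test system and back is essentially a two-statement operation, assuming a flash recovery area is configured on the standby (a guaranteed restore point is created implicitly):

-- on the standby: stop redo apply, then make it a writable snapshot standby
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;
ALTER DATABASE OPEN;   -- run the SAP / application tests against it here

-- after testing: discard the test changes and resume the standby role
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;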

> Where to find documentation about Oracle DG?

Oracle Database Online Documentation 11g Release 2 (11.2)

I recommend using the Data Guard Broker (except when using a cascading standby solution, as that is only possible with the DG Broker from 12c onwards).
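A minimal broker setup sketch, purely illustrative ('prim' and 'stby' are hypothetical DB_UNIQUE_NAMEs and TNS aliases), assuming DG_BROKER_START=TRUE is already set on both databases:

DGMGRL> CONNECT sys/<sys_password>@prim
DGMGRL> CREATE CONFIGURATION 'dc_move' AS
          PRIMARY DATABASE IS 'prim' CONNECT IDENTIFIER IS prim;
DGMGRL> ADD DATABASE 'stby' AS CONNECT IDENTIFIER IS stby
          MAINTAINED AS PHYSICAL;
DGMGRL> ENABLE CONFIGURATION;
DGMGRL> SHOW CONFIGURATION;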

Go for it

Regards

Stefan

Answers (5)

former_member602456
Discoverer

Lutz, 
just my two cents: it seems like a lot of unnecessary work to set up Data Guard, and even more so to configure the broker, for a one-time copy of a database.


Regardless of whether you choose Data Guard or not, I have a suggestion regarding the method used for duplicating the current production database:

If you're running 11gR2, consider foregoing a full backup/restore.

Save some time and run an RMAN duplicate using the "FROM ACTIVE DATABASE" clause.

For example:

CONNECT TARGET SYS/<sys_password>@(Source Database)
CONNECT AUXILIARY SYS/<sys_password>@(Target Database)
CONNECT CATALOG rman/<rman_password>@(Rman Catalog Database)

run{
allocate channel p1 type disk;
allocate channel p2 type disk;
allocate channel p3 type disk;
allocate channel p4 type disk;
allocate auxiliary channel a1 type disk;
DUPLICATE TARGET DATABASE
    FROM ACTIVE DATABASE    # allows the primary to remain open and running
    DORECOVER               # automatically apply archived redo logs
    NOFILENAMECHECK;        # permits DUPLICATE to use identical /path/file names on the target
}

If you do decide to go the Data Guard route, this RMAN example would need the "DUPLICATE" statement changed to DUPLICATE TARGET DATABASE FOR STANDBY in order to give you a standby controlfile on your target, rather than a copy of the primary controlfile.
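As a purely illustrative sketch (not from the original post), the standby variant of the same run block could look like this; DORECOVER and NOFILENAMECHECK keep the meaning explained above:

run{
allocate channel p1 type disk;
allocate auxiliary channel a1 type disk;
DUPLICATE TARGET DATABASE
    FOR STANDBY             # creates a standby controlfile on the target
    FROM ACTIVE DATABASE    # copy directly from the open primary
    DORECOVER               # apply archived redo after the copy
    NOFILENAMECHECK;        # identical /path/file names allowed on the target
}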

Good luck to you,

Dave Hays

Former Member

I agree with Stefan,

If manual archive log shipping/copying is your option, then just do a full backup restore and transfer all the archive logs to DC2. During cut-over you can stop the DC1 instance, apply all pending archive logs on DC2 and then do the post-recovery steps; this is simpler for small databases.

DG is not at all required for this.

If your database/environment is large, then a smooth cutover can be done using DG (as mentioned by RB in his update). It needs very minimal downtime of the primary and you need to do (almost) nothing other than the post-recovery steps during cutover.

Regards,

Nick Loy

hannover_sap-basis
Participant

Hi Benjamin and Stefan,

many, many thanks for reply!

I want to transfer the archived redo logs from the primary site to the standby site manually and plan to perform the following steps:

  • "alter database force logging;" on source database
  • "alter database create standby controlfile as '..../stbyctl.dbf';"  on source database
  • copy the source database to the target database
  • "startup nomount pfile=initstandby.ora" with "*.control_files='.../stbyctl.dbf';" on target database
  • "alter database mount standby database;" on target database
  • "alter database recover standby database cancel;" on target database
  • "alter system archive log all;" on source database
  • "shutdown immediate" on source database
  • "recover standby database" on target database

Have I forgotten something?

kind regards

Lutz

stefan_koehler
Active Contributor

Hi Lutz,

> I want to transfer the archived redo logs from the primary site to the standby site manually

I am sorry, but you may not have mentioned an important fact, or I just do not get it right.

What's the purpose of a physical standby database in your scenario, if you want to do all of that stuff manually / by hand? You can just restore the database (in the new data center) and perform a "recover database using backup controlfile until cancel" several times in a row, if you want to do all of it manually. There is no need for a standby database (DG) at all.
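Purely to illustrate that manual alternative (assuming the restored copy is mounted in the new data center and the archived logs have been copied over):

-- on the mounted copy in the new data center, repeat as new archived logs arrive:
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
-- supply the copied archived log file names when prompted, type CANCEL to stop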

However, manual tinkering solutions are not suitable for enterprise systems, if this really is a large database (and "large" starts at several TBs in my usage) with a correspondingly high amount of redo and the need for verification before Go-Live.

Regards

Stefan

Reagan
Advisor

If you want to create a standby database and use Data Guard, then that is also possible.

This will take care of the automatic log shipping.
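For illustration only (not from the original reply): automatic redo shipping is driven by the archive destination parameters on the primary plus managed recovery on the standby. A minimal sketch, with 'dc2stby' and 'dc1prim' as hypothetical DB_UNIQUE_NAMEs / TNS aliases:

-- on the primary: ship redo to the standby in the new data center
ALTER SYSTEM SET LOG_ARCHIVE_DEST_2 =
  'SERVICE=dc2stby ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)
   DB_UNIQUE_NAME=dc2stby' SCOPE=BOTH;
ALTER SYSTEM SET LOG_ARCHIVE_DEST_STATE_2 = 'ENABLE' SCOPE=BOTH;

-- on the standby: resolve gaps from the primary and start redo apply
ALTER SYSTEM SET FAL_SERVER = 'dc1prim' SCOPE=BOTH;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;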

Regards

RB

Reagan
Advisor

Hello

This is what I have done to move a database from DC 1 to DC 2

On the target system install SAP and Oracle with the same SID.

Take a full backup of the primary database (live production system) and restore it to the target system.

Start the target database in mount status.

Manually ship the archived logs from the primary system to the target system.

Apply the transferred archived logs on the target system.

On the final day of the move, shut down SAP (CI and application servers), do a log switch, and shut down the database and listener of the primary system.

Transfer all the archived logs generated by the system to the target system.

Apply all the logs on the target system.

Open the database.

Start SAP and apply the license (this can also be done beforehand).

Reconfigure operation modes, Logon Groups (if required), RZ12, Profiles.

Give system back to production.
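Purely as an illustrative sketch of the final cutover steps above (restored copy rolled forward with a backup controlfile; the exact recovery commands depend on how the target was built):

-- DC1 primary: force out the last redo, then stop
ALTER SYSTEM ARCHIVE LOG CURRENT;
SHUTDOWN IMMEDIATE
-- copy the remaining archived logs to DC2, stop the listener

-- DC2 target (mounted): apply everything that was transferred, then open
RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
ALTER DATABASE OPEN RESETLOGS;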

Regards

RB