
Issue with Data Guard: Failed to request gap sequence

Former Member

Hello Gurus,

We are facing an issue with our Data Guard standby node.

Archive logs are getting shipped from the primary node to the Data Guard standby correctly.

But due to some issue, the logs are not getting applied on the standby.

Querying v$archive_gap showed that log sequences 69918 and 69919 were not applied.
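For reference, the gap was found with the standard query on the standby (column names as in the 10g v$archive_gap view):

SQL> SELECT thread#, low_sequence#, high_sequence# FROM v$archive_gap;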

Dataguard node alert_sid.log :

-


Media Recovery Waiting for thread 2 sequence 69918

Fetching gap sequence in thread 2, gap sequence 69918-69919

FAL[client]: Failed to request gap sequence

GAP - thread 2 sequence 69918-69919

DBID 1059364943 branch 756090934

FAL[client]: All defined FAL servers have been attempted.

-


Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization parameter is defined to a value that is sufficiently large to maintain adequate log switch information to resolve archivelog gaps.

-


Archive log gap is for 69918 and 69919.

CONTROL_FILE_RECORD_KEEP_TIME is 30 days on the standby (and these logs are not 30 days old, so I don't think the parameter value needs changing).

STANDBY_FILE_MANAGEMENT is set to AUTO on the standby.

The missing log files were shipped to the standby and are present there (but didn't get applied for some reason).

In the standby database the last applied log is 69917.

On the primary the last archived is 73419.

There are a lot of archive logs still to be applied on the standby (sequences 69918 through 73419).

MRP process status in Dataguard is: WAIT FOR GAP
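For completeness, the MRP status above was taken from the standard v$managed_standby view:

SQL> SELECT process, status, thread#, sequence# FROM v$managed_standby;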

Could you please let me know what needs to be done in order to restart log apply on the Data Guard standby?

Thanks in Advance,

Sam


Answers (3)


Former Member

Hello,

Thank you for your suggestion.

But the archive log files are present on the primary, as they will not get deleted until applied on the standby (configured like that).

I manually copied 69918 and 69919 and started automatic recovery with the command below:

ALTER DATABASE RECOVER AUTOMATIC STANDBY DATABASE

(http://docs.oracle.com/cd/B19306_01/server.102/b14239/scenarios.htm#i1032254)

There are many log files missing in the standby. I copied around 10 files from the primary and tried, but it kept asking for different log files.

Currently the archive gap is for 69930 and 69931.

I checked the standby database and found that many files are missing in between.

alert_SID.log of the standby:

-


RFS[1]: No standby redo logfiles of size 204800 blocks available

RFS[1]: Archived Log: '/oracle/P64/oraarch/P64arch2_76507_756090934.dbf'

Primary database is in MAXIMUM PERFORMANCE mode

RFS[1]: Successfully opened standby log 123: '/oracle/P64/orig_stby_logA/log_g123m1.dbf'

Thu Feb 23 10:29:08 2012

Primary database is in MAXIMUM PERFORMANCE mode

RFS[2]: No standby redo logfiles of size 204800 blocks available

Thu Feb 23 10:32:03 2012

RFS[2]: Archived Log: '/oracle/P64/oraarch/P64arch1_61630_756090934.dbf'

Primary database is in MAXIMUM PERFORMANCE mode

RFS[2]: Successfully opened standby log 113: '/oracle/P64/orig_stby_logA/log_g113m1.dbf'

Thu Feb 23 10:32:03 2012

Primary database is in MAXIMUM PERFORMANCE mode

RFS[1]: No standby redo logfiles of size 204800 blocks available

-


There are multiple entries of "RFS[1]: No standby redo logfiles of size 204800 blocks available".

Please let me know if this is why some archive logs are not getting copied.
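In case it is relevant, the standby redo logs that exist on the standby can be listed with the standard v$standby_log view (a sketch, run as sysdba):

SQL> SELECT group#, thread#, sequence#, bytes, status FROM v$standby_log;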

p64_mrp0_18145.trc log file:

-


Check that the CONTROL_FILE_RECORD_KEEP_TIME initialization parameter is defined to a value that is sufficiently large to maintain adequate log switch information to resolve archivelog gaps.

-


We are on Oracle version 10.2.0.4.

Thanks in advance,

Sam

stefan_koehler
Active Contributor

Hello Sam,

There are multiple entries for RFS[1]: No standby redo logfiles of size 204800 blocks available

It seems like you have configured LGWR SYNC or ASYNC as the redo transport mechanism, but you have not configured standby redo log files on your standby database.
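As a sketch only (group numbers and paths here are examples, not your actual configuration), standby redo logs can be added on the standby like this; the RFS message suggests a size of 204800 blocks, which is 100 MB with a 512-byte redo block size. Note that on a physical standby STANDBY_FILE_MANAGEMENT may need to be set to MANUAL temporarily before adding the files:

SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 1 GROUP 20
     ('/oracle/P64/orig_stby_logA/srl_g20m1.dbf') SIZE 100M;
SQL> ALTER DATABASE ADD STANDBY LOGFILE THREAD 2 GROUP 21
     ('/oracle/P64/orig_stby_logA/srl_g21m1.dbf') SIZE 100M;

The usual recommendation is one more standby redo log group per thread than there are online redo log groups on the primary.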

Without knowing your exact configuration, this is like guessing and looking into a crystal ball. There are two scripts in Metalink notes #241438.1 and #241374.1 that provide all the necessary information.

Regards

Stefan

Former Member

Hello,

Thank you for your reply.

We use the SYNC redo transport mechanism. Please find the parameter value below:

log_archive_dest_2='service=P64 LGWR SYNC NOAFFIRM valid_for=(ONLINE_LOGFILES,PRIMARY_ROLE) db_unique_name=P64'

I am unable to check the Metalink notes as I do not have Oracle Support logon credentials.

Thanks,

Sam

stefan_koehler
Active Contributor

Hello Sam,

The missing log files were shipped to Dataguard and are present there (But ddnt get applied for some reason).

... in this case it seems like the archive log files are no longer present on the primary site and are not registered on the standby site. We don't know anything about your configuration or how you back up and delete your archive logs.

A solution for your current situation is to manually register the two archive logs with sequence numbers 69918 and 69919. The root cause of this issue needs to be investigated on your own.

Register the missing archive log files with the following command on the standby site:

shell> sqlplus / as sysdba
SQL> alter database register logfile '<PATH_TO_ARC_FILE>';

Documentation: http://docs.oracle.com/cd/E11882_01/server.112/e26088/statements_1004.htm#i2186977
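After the missing files are registered, managed recovery can be restarted with the standard command (DISCONNECT returns control to your session while MRP runs in the background):

shell> sqlplus / as sysdba
SQL> alter database recover managed standby database disconnect from session;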

Regards

Stefan

Former Member

Hello,

Requesting you to provide suggestions, as the issue still persists.

Thank you,

Samaya