ORA-00742: Log read detects lost write on Data Guard standby RAC database

We had an issue whereby the standby of our 5-node RAC cluster reported the following errors:

ORA-00742: Log read detects lost write in thread 4 sequence 101460 block 6145
ORA-00312: online log 29 thread 4: '/oraredo/RACDB1_A/standby_redo29a.log'
Tue Feb 20 08:41:44 2018
MRP0: Background Media Recovery process shutdown (RACDB11)
Tue Feb 20 08:45:43 2018
Errors in file /u01/app/oracle/diag/rdbms/RACDB1dg/RACDB11/trace/RACDB11_arc3_18603.trc:
ORA-00742: Log read detects lost write in thread 4 sequence 101460 block 6145
ORA-00312: online log 29 thread 4: '/oraredo/RACDB1_A/standby_redo29a.log'
Tue Feb 20 08:45:43 2018
ARC3: All Archive destinations made inactive due to error 742
Tue Feb 20 08:45:43 2018
ARC3: Closing local archive destination LOG_ARCHIVE_DEST_1: '/oraarch/RACDB1/4_101460_813692162.arc' (error 742) (RACDB11)
Tue Feb 20 08:45:43 2018
ARC4: Archiving not possible: error count exceeded
Tue Feb 20 08:49:55 2018
Archived Log entry 444009 added for thread 4 sequence 101491 rlc 813692162 ID 0x974e092 dest 2:
RFS[30]: No standby redo logfiles available for T-4
RFS[30]: Opened log for thread 4 sequence 101492 dbid 4059507730 branch 813692162

This caused the following Cloud Control alert:

EM Event: Critical:RACDB1 - The Data Guard status of RACDB1DG is Error ORA-16766: Redo Apply is stopped.

On researching the error, it looked like logical log corruption on the standby redo log 29 thread 4.

MOS advised that there had been problems reaching the standby local archiving disk from the primary, and that this is how the log corruption occurred. OERR: ORA-7286 "sksagdi: cannot obtain device information." Reference Note (Doc ID 20342.1).

We checked with our Network Administrators to confirm that the following firewall features were disabled:

SQLNet fixup protocol 
Deep Packet Inspection (DPI) 
SQLNet packet inspection 
SQL Fixup 
SQL ALG (Juniper firewall)

We also checked with the Storage admins for any issues with the standby disks and made sure enough free space existed.
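The space check itself is trivial, something like the following on the standby host (the mount points are the ones from our environment, adjust to suit):

# Check free space on the standby redo and archive log destinations
df -h /oraredo /oraarch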

Then restart the standby archivers – identify all ARCn processes on each node of the standby and kill them:

ps -ef|grep arc 
kill -9 <arc PIDs>

Then, from the broker interface, dgmgrl, restart the transport and apply:

edit database <primary> set state=transport-off; 
edit database <primary> set state=transport-on; 
edit database <standby> set state=apply-off; 
edit database <standby> set state=apply-on;

This corrects the error on the primary, although redo apply is still stopped, so the next step is to roll forward the physical standby using an RMAN incremental backup (Doc ID 836986.1).

An important thing to note is that even if you do not use RMAN for standard backup and recovery in your environment, you can still use it to roll forward the standby. We use NetApp snapshots for our general backup and recovery, but this process uses the native RMAN method.

Following the steps in the RMAN rollforward document brings the database past the point of logical corruption and we can then resume apply on the standby and let it naturally catch up.

A couple of things to note:

I disabled our standard 30-minute backups by disabling our snapshot cron job.
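A minimal sketch of how to do that, assuming a hypothetical snapshot wrapper script name in the oracle user's crontab:

# Keep a copy of the current crontab, then comment out the 30-minute snapshot entry
# (netapp_snapshot.sh is a hypothetical name for the snapshot wrapper script)
crontab -l > /tmp/crontab.backup
crontab -l | sed 's|^\(.*netapp_snapshot\.sh.*\)$|#\1|' | crontab -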

We also wrote a little script to nohup the actual backup, because it would take a while (see rman.sh below).

We created sufficient space in our archive log area to hold the RMAN backup on both the primary and the standby database volumes.

On the Primary:

vi rman.cmd

connect target /
run {
  allocate channel t1 type disk;
  allocate channel t2 type disk;
  allocate channel t3 type disk;
  allocate channel t4 type disk;
  BACKUP INCREMENTAL FROM SCN 795871180305 DATABASE FORMAT '/oraarch/RACDB1/forstandby/ForStandby_%U' tag 'FORSTANDBY';
  # Standby controlfile backup - this is the ForStandbyCTRL.bck piece copied across later
  BACKUP CURRENT CONTROLFILE FOR STANDBY FORMAT '/oraarch/RACDB1/forstandby/ForStandbyCTRL.bck';
}

We allocated four parallel channels to create the backup. In hindsight, we could have allocated eight channels, as we run an 8-CPU host hyper-threaded to 16 logical CPUs.
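If in doubt, cpu_count is a reasonable guide to how many channels the instance can comfortably drive:

-- How many CPUs does Oracle believe are available to the instance?
select name, value from v$parameter where name = 'cpu_count';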

vi rman.sh

rman cmdfile=rman.cmd msglog '/oraarch/RACDB1/forstandby/rman.log'

to run the script:

nohup sh rman.sh &

Once the RMAN backup is running, it can be monitored from v$session_longops:

SELECT SID, SERIAL#, SOFAR, TOTALWORK,
       ROUND(SOFAR/TOTALWORK*100,2) PCT_COMPLETE
FROM   V$SESSION_LONGOPS
WHERE  OPNAME LIKE 'RMAN%'
AND    OPNAME NOT LIKE '%aggregate%'
AND    TOTALWORK != 0
AND    SOFAR <> TOTALWORK;

You can also get an estimated completion time by running:

col dbsize_mbytes for 99,999,990.00 justify right head "DBSIZE_MB"
col input_mbytes for 99,999,990.00 justify right head "READ_MB"
col output_mbytes for 99,999,990.00 justify right head "WRITTEN_MB"
col output_device_type for a10 justify left head "DEVICE"
col complete for 990.00 justify right head "COMPLETE %" 
col compression for 990.00 justify right head "COMPRESS|% ORIG"
col est_complete for a20 head "ESTIMATED COMPLETION"
col recid for 9999999 head "ID"

select recid
 , output_device_type
 , dbsize_mbytes
 , input_bytes/1024/1024 input_mbytes
 , output_bytes/1024/1024 output_mbytes
 , (output_bytes/input_bytes*100) compression
 , (mbytes_processed/dbsize_mbytes*100) complete
 , to_char(start_time + (sysdate-start_time)/(mbytes_processed/dbsize_mbytes),'DD-MON-YYYY HH24:MI:SS') est_complete
 from v$rman_status rs
 , (select sum(bytes)/1024/1024 dbsize_mbytes from v$datafile) 
 where status='RUNNING'
 and output_device_type is not null;

On an NFS filesystem, the files being created will be hidden until they are fully written, so it may be a while before you see a written file... unless you look for hidden files:

ls -lah

The end result when the backup is complete will look something like this:

-rw-r----- 1 oracle oinstall 15G Feb 20 22:21 ForStandby_0ssrntdu_1_1
-rw-r----- 1 oracle oinstall 28G Feb 20 22:38 ForStandby_0qsrntds_1_1
-rw-r----- 1 oracle oinstall 22G Feb 20 22:38 ForStandby_0tsrnte0_1_1
-rw-r----- 1 oracle oinstall 22G Feb 20 23:01 ForStandby_0rsrntdt_1_1
-rw-r----- 1 oracle oinstall 21G Feb 21 00:21 ForStandby_10sro3bd_1_1
-rw-r----- 1 oracle oinstall 15G Feb 21 00:40 ForStandby_0usro2c2_1_1
-rw-r----- 1 oracle oinstall 13G Feb 21 01:10 ForStandby_0vsro3ba_1_1
-rw-r----- 1 oracle oinstall 17G Feb 21 01:28 ForStandby_11sro4n4_1_1
-rw-r----- 1 oracle oinstall 22G Feb 21 02:47 ForStandby_12sro9co_1_1
-rw-r----- 1 oracle oinstall 25G Feb 21 02:56 ForStandby_15sroda0_1_1
-rw-r----- 1 oracle oinstall 22G Feb 21 03:45 ForStandby_13sroafn_1_1
-rw-r----- 1 oracle oinstall 30M Feb 21 03:45 ForStandby_18srolbl_1_1
-rw-r----- 1 oracle oinstall 14G Feb 21 04:03 ForStandby_14sroc7m_1_1
-rw-r----- 1 oracle oinstall 15G Feb 21 05:51 ForStandby_16srohte_1_1
-rw-r----- 1 oracle oinstall 27G Feb 21 05:54 ForStandby_17sroifq_1_1

Our RMAN backup created 15 backup pieces. While monitoring the backup, I could see that each piece was reading around 500GB of data from the database, so each file above represents the churn found in the corresponding 500GB chunk that RMAN read.

At this point I re-enabled the backups on the primary and, just in case, took a manual snapshot straight away.

Then scp the files over to the standby side – ideally into the same directory structure:

scp For* oracle@standbyhost:/oraarch/RACDB1/forstandby/.
oracle@xxx.xx.xxx.xxx's password:

ForStandby_0qsrntds_1_1 100%   27GB  81.3MB/s   05:46
ForStandby_0rsrntdt_1_1 100%   22GB  90.2MB/s   04:07
ForStandby_0ssrntdu_1_1 100%   15GB  62.1MB/s   04:02
ForStandby_0tsrnte0_1_1 100%   22GB  40.4MB/s   09:11
ForStandby_0usro2c2_1_1 100%   14GB  45.6MB/s   05:19
ForStandby_0vsro3ba_1_1 100%   13GB  76.4MB/s   02:53
ForStandby_10sro3bd_1_1 100%   21GB  92.5MB/s   03:51
ForStandby_11sro4n4_1_1 100%   17GB  78.6MB/s   03:36
ForStandby_12sro9co_1_1 100%   21GB  90.3MB/s   04:01
ForStandby_13sroafn_1_1 100%   21GB  91.8MB/s   03:56
ForStandby_14sroc7m_1_1 100%   13GB  91.3MB/s   02:30
ForStandby_15sroda0_1_1 100%   25GB  76.1MB/s   05:33
ForStandby_16srohte_1_1 100%   14GB  69.0MB/s   03:34
ForStandby_17sroifq_1_1 100%   27GB  92.0MB/s   04:57
ForStandby_18srolbl_1_1 100%   30MB  29.8MB/s   00:00
ForStandbyCTRL.bck 100%   30MB  29.8MB/s   00:00

The next step is to apply the RMAN incremental backup to the existing standby database – again we used scripts to do this:

vi rman.cmd

connect target /
CATALOG START WITH '/oraarch/RACDB1/forstandby' NOPROMPT;
run {
  allocate channel t1 type disk;
  allocate channel t2 type disk;
  allocate channel t3 type disk;
  allocate channel t4 type disk;
  RECOVER DATABASE NOREDO;
}

vi rman.sh

rman cmdfile=rman.cmd msglog '/oraarch/RACDB1/forstandby/rman.log'

Before running the recovery, I spooled out the current datafile names on the standby:

spool datafile_names_step9.txt 
set lines 200 
col name format a60 
select file#, name from v$datafile order by file# ; 
spool off

Run the recovery with nohup:

nohup sh rman.sh &

The recovery should be a lot quicker than the backup, as it only has to read in the changes that have occurred and write them to the datafiles. Progress can be monitored using the same script provided above for monitoring the backup.

You can also grep the log to see completed pieces:

less rman.log | grep "restore complete"
channel t3: restore complete, elapsed time: 00:10:45
channel t4: restore complete, elapsed time: 00:12:11
channel t2: restore complete, elapsed time: 00:16:17
channel t1: restore complete, elapsed time: 00:18:04
channel t4: restore complete, elapsed time: 00:10:27
channel t3: restore complete, elapsed time: 00:15:09
channel t1: restore complete, elapsed time: 00:10:38
channel t2: restore complete, elapsed time: 00:17:10

After the database has been recovered, the standby control file needs to be restored from the backup taken on the primary.

From RMAN, connect to STANDBY database and restore the standby control file:

Note: in a RAC environment I would advise not running the shutdown/startup commands via RMAN, because it will conflict with CRS and could cause your instances to crash. The MOS document assumes a single-instance database. I would suggest something similar to the below instead:

srvctl stop database -d RACDB1DG 
srvctl start instance -d RACDB1DG -i RACDB11 -o nomount
rman target / 
RMAN> RESTORE STANDBY CONTROLFILE FROM '<Backup_Path>/ForStandbyCTRL.bck'; <<< Controlfile backup piece name 
srvctl stop database -d RACDB1DG 
srvctl start instance -d RACDB1DG -i RACDB11 -o mount

The last few steps are to clear the standby logfiles and restart the MRP process.
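Roughly, that looks like the following on the standby; group 29 is just one of our standby redo log groups, so repeat the clear for each group returned by the first query. If apply is managed through the broker, edit database 'RACDB1DG' set state='APPLY-ON' achieves the same as the final command.

-- List the standby redo log groups, then clear each one
select group#, thread# from v$standby_log;
alter database clear logfile group 29;

-- Restart managed recovery (MRP) with real-time apply
alter database recover managed standby database disconnect from session using current logfile;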

This should kick everything into life and the standby should start applying redo.

DGMGRL> validate database 'RACDB1DG'

Database Role: Physical standby database
 Primary Database: RACDB1

Ready for Switchover: Yes
 Ready for Failover: Yes (Primary Running)

After a couple of days the incremental backup can be deleted:

RMAN> report obsolete; 
RMAN> delete obsolete;

The REPORT OBSOLETE and DELETE OBSOLETE commands work in two steps:

1. For each datafile for which there are full backup, datafile copy, or level 0 incremental backups, RMAN identifies the oldest full or level 0 backup or copy that is not obsolete under the retention policy being tested. Any full backup, level 0 incremental backup, or datafile copy of a datafile older than the one identified in this step is considered obsolete.
2. Any archived logs and level 1 incremental backups that are older than the oldest non-obsolete full backup are then obsolete, because there is no full or level 0 backup to which they can be applied.
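What counts as obsolete is driven by the configured retention policy, which can be checked and, if required, adjusted from RMAN (the seven-day window below is only an example):

RMAN> SHOW RETENTION POLICY;
RMAN> CONFIGURE RETENTION POLICY TO RECOVERY WINDOW OF 7 DAYS;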

There is a possible bug that causes this kind of redo log corruption, but Oracle has not yet confirmed that it was the cause of our issue.



ADRCI error – DIA-48448: This command does not support multiple ADR homes

Problem: when running adrci on an Oracle server with multiple homes, you can get the following error when trying to complete some actions:

DIA-48448: This command does not support multiple ADR homes

The reason is the existence of multiple Oracle database instance, ASM, and listener homes on the server. The solution is to set the ADRCI home for the instance you want to operate on.

As the error above stipulates, this is because the adrci home is not set. To check the homes available on the server:

ADRCI: Release – Production on Tue Feb 21 22:03:57 2017

ADR base = “/u01/app/oracle”
adrci> show homes
ADR Homes:

To set the home to the listener_scan3 home, run the following command:

adrci> SET HOME diag/tnslsnr/myhost/listener_scan3

You should now be able to complete actions against this home, such as purging alert logs over X number of days.
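For example, to purge alert log entries older than 30 days from the selected home (the -age argument is in minutes):

adrci> purge -age 43200 -type ALERT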

ADRCI – Automatic Diagnostic Repository Command Interpreter

Notes on using ADRCI to package incidents for Oracle support.

From the OS command line, export the Oracle SID and run adrci to launch the command-line tool.
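For example (the instance name here is hypothetical):

export ORACLE_SID=MYDB1
adrci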



Some useful commands once in the tool

show home
show incident
show problem
ips pack problem <problem_number>

Some example output:

ADRCI: Release – Production on Fri Dec 4 11:46:46 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u01/app/oracle"

adrci> show home

ADR Homes:

adrci> show incident

ADR Home = /u01/app/oracle/diag/rdbms/mydb/INST22:

INCIDENT_ID          PROBLEM_KEY                                                                            CREATE_TIME
-------------------- ----------------------------------------------------------- ----------------------------------------

327230               ORA 7445 [qmcxdAddNmspcAttrFromPfxId()+216]                 2015-12-03 09:33:35.465000 +00:00
330918               ORA 7445 [qmcxdAddNmspcAttrFromPfxId()+216]                 2015-12-04 10:38:12.696000 +00:00

2 rows fetched

adrci> show problem

ADR Home = /u01/app/oracle/diag/rdbms/mydb/INST22:

PROBLEM_ID           PROBLEM_KEY                                                 LAST_INCIDENT        LASTINC_TIME
-------------------- ----------------------------------------------------------- -------------------- ----------------------------------------

1                    ORA 7445 [kolarsAssign()+1408]                              35914                2013-07-03 02:21:55.866000 +01:00

1 rows fetched

adrci> ips pack problem 1

Generated package 3 in file /u01/app/oracle/diag/rdbms/mydb/INST22/trace/ORA7445qm_20151204120758_COM_1.zip, mode complete


Full Credit to a colleague who wrote this up!

ORA 600s after applying PSU7 for

We were seeing the following two ORA-600s after applying PSU7. Technically the issue wasn't related to PSU 7 itself, but rather to the switchover involved when patching…

SR 3-8646935731 : ORA 600 [ktbdchk1: bad dscn]
SR 3-8664869401 : ORA 600-[FILE:jsks.c LINE:2388 FUNCTION:jsksStartOCICall() ID:OCIKCallPus]


The solution was to set the following dynamic parameter on both the primary and standby database:

ALTER SYSTEM SET "_ktb_debug_flags"=8 SCOPE=BOTH;

This parameter is designed to heal blocks that have invalid dependent SCNs during switchover operations.
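To confirm the hidden parameter has taken effect, you can query the x$ tables as SYS. This is a general technique for inspecting underscore parameters rather than anything specific to this fix:

-- Current value of the hidden parameter (run as SYS)
select p.ksppinm parameter, v.ksppstvl value
from   x$ksppi p, x$ksppcv v
where  p.indx = v.indx
and    p.ksppinm = '_ktb_debug_flags';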

From the traces provided to Oracle, I was told that the affected object for this issue was ID 12152331.

Running the following SQL determines the object:

select owner,object_name,object_type,subobject_name,object_id,data_object_id
  from   dba_objects
  where  object_id in (12152331)
  or     data_object_id in (12152331);

The affected blocks were on index SYS.I_SCHEDULER_JOB_RUN_DETAILS on table SCHEDULER$_JOB_RUN_DETAILS.

As well as the dynamic parameter, I also did the following:



Retrieve Oracle SPID from unix OS PID

The Unix/Linux PID that you see in ‘top’ is visible as SPID in V$PROCESS. Then, you’d have to join V$PROCESS to V$SESSION to get the SID. (Note : PID in V$PROCESS is the Oracle PID, not the OS PID).

--Replace bind variable with OS PID List
select s.sid, s.serial#, s.username,
       to_char(s.logon_time,'DD-MON HH24:MI:SS') logon_time,
       p.pid oraclepid, p.spid "ServerPID", s.process "ClientPID",
       s.program clientprogram, s.module, s.machine, s.osuser,
       s.status, s.last_call_et
from  gv$session s, gv$process p
where p.spid=nvl('&unix_process',' ')
and s.paddr=p.addr
order by s.sid;
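On Linux, a quick way to find candidate PIDs to feed into &unix_process is to sort the oracle processes by CPU:

# Busiest oracle server processes on the host; the PID column feeds &unix_process
ps -eo pid,pcpu,comm --sort=-pcpu | grep oracle | head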

Unregister Mview Snapshot Cleanly

Useful SQL to dynamically create an unregister-mview script for all mviews for a certain site, schema, or both!

--Dynamic SQL - Unregister Snapshot info for a particular schema and database
select 'execute dbms_snapshot.unregister_snapshot(''' || OWNER || ''',''' || NAME || ''',''' || SNAPSHOT_SITE || ''');'
from dba_registered_snapshots rs
--where rs.owner = '&schema_owner'          -- optionally filter by schema
--  and rs.snapshot_site = '&snapshot_site' -- and/or by site
;
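Each row of output is a ready-to-run call, along the lines of the following (owner, mview and site names here are hypothetical):

execute dbms_snapshot.unregister_snapshot('MYSCHEMA','MY_MVIEW','MYSITE.WORLD');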

This links to my previous posts on this topic:

Find Site holding up mlog & Register Orphaned Mviews

Investigation SQL for Mview and Mlog growth

Also see metalink note “How to Manually Register/Unregister Snapshots” [ID 67371.1]

Investigating high number of sessions/scraping

Some queries to help investigate scraping or max session errors on the database:

--Total Session counts on database per schema
select username, Service_name, count(*) from gv$session
group by username, Service_name
HAVING count(*) > 100
order by 3 desc;
--Total Session counts split by specific instance 
-- Use if only certain instances are receiving max session errors
select username, inst_id, Service_name, count(*) from gv$session
where inst_id in (1,2)
group by username, inst_id, Service_name
HAVING count(*) > 100
order by 4 desc;
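It is also worth checking how close each instance is sitting to its configured session and process limits:

--How close is each instance to its sessions/processes limits?
select inst_id, resource_name, current_utilization, max_utilization, limit_value
from gv$resource_limit
where resource_name in ('sessions','processes')
order by inst_id, resource_name;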

Our machine names have Data Centre references, so we are able to further split:

--Split Sessions by Data Centre
SELECT CASE WHEN machine LIKE '%dc1%' THEN 'DC1'
            ELSE 'DC2'
       END data_centre
, count(*)
FROM gv$session
where SERVICE_NAME = 'MYSERVICE_1' and username = 'MYSCHEMA_2'
GROUP BY CASE WHEN machine LIKE '%dc1%' THEN 'DC1'
              ELSE 'DC2'
         END;