ADRCI – Automatic Diagnostic Repository Command Interpreter

Notes on using ADRCI to package incidents for Oracle support.

From the OS command line, export the Oracle SID and run adrci to launch the command-line tool.

export ORACLE_SID=<SID>

adrci

Some useful commands once inside the tool:

help
show home
show incident
show problem
ips pack problem <problem_number>

Some example output:

ADRCI: Release 11.2.0.3.0 – Production on Fri Dec 4 11:46:46 2015

Copyright (c) 1982, 2011, Oracle and/or its affiliates.  All rights reserved.

ADR base = "/u01/app/oracle"

adrci>
adrci> show home

ADR Homes:
diag/rdbms/mydb/INST22

adrci>
adrci> show incident

ADR Home = /u01/app/oracle/diag/rdbms/mydb/INST22:

*************************************************************************
INCIDENT_ID          PROBLEM_KEY                                                                            CREATE_TIME
-------------------- ----------------------------------------------------------- ----------------------------------------

327230               ORA 7445 [qmcxdAddNmspcAttrFromPfxId()+216]                 2015-12-03 09:33:35.465000 +00:00
330918               ORA 7445 [qmcxdAddNmspcAttrFromPfxId()+216]                 2015-12-04 10:38:12.696000 +00:00

2 rows fetched

adrci>
adrci> show problem

ADR Home = /u01/app/oracle/diag/rdbms/mydb/INST22:

*************************************************************************
PROBLEM_ID           PROBLEM_KEY                                                 LAST_INCIDENT        LASTINC_TIME
-------------------- ----------------------------------------------------------- -------------------- ----------------------------------------

1                    ORA 7445 [kolarsAssign()+1408]                              35914                2013-07-03 02:21:55.866000 +01:00

1 rows fetched

adrci>
adrci> ips pack problem 1

Generated package 3 in file /u01/app/oracle/diag/rdbms/mydb/INST22/trace/ORA7445qm_20151204120758_COM_1.zip, mode complete

adrci>
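
ADRCI can also be driven non-interactively, which is handy for scripting; the exec parameter takes one or more commands separated by semicolons. A quick sketch, reusing one of the incident numbers from the output above:

adrci exec="show home; show incident"

adrci exec="ips pack incident 327230"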

Full Credit to a colleague who wrote this up!

Unregister Mview Snapshot Cleanly

Useful SQL to dynamically generate an unregister-mview script for all mviews for a certain site or schema, or both!

--Dynamic SQL - Unregister Snapshot info for a particular schema and database
select OWNER, NAME, SNAPSHOT_SITE, 
'execute dbms_snapshot.unregister_snapshot(''' || OWNER || ''',''' || NAME || ''',''' || SNAPSHOT_SITE || ''');' 
from dba_registered_snapshots rs
WHERE rs.OWNER = 'MY_SCHEMA'
and SNAPSHOT_SITE = 'MY_SNAPSHOT_SITE';
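
Each row returned includes a ready-made call; for a hypothetical mview named MY_MVIEW the generated line looks like the one below and can be run at the master site:

execute dbms_snapshot.unregister_snapshot('MY_SCHEMA','MY_MVIEW','MY_SNAPSHOT_SITE');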

This links to my previous posts on this topic:

Find Site holding up mlog & Register Orphaned Mviews

Investigation SQL for Mview and Mlog growth

Also see metalink note “How to Manually Register/Unregister Snapshots” [ID 67371.1]

Investigating high number of sessions/scraping

Some queries to help investigate scraping or max session errors on the database:

--Total Session counts on database per schema
select username, Service_name, count(*) from gv$session
group by username, Service_name
HAVING count(*) > 100
order by 3 desc;
--Total Session counts split by specific instance 
-- Use if only certain instances are receiving max session errors
select username, inst_id, Service_name, count(*) from gv$session
where inst_id in (1,2)
group by username, inst_id, Service_name
HAVING count(*) > 100
order by 4 desc;
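
If it helps to see whether those sessions are busy or just sitting idle, the same counts can be broken down by session status (keeping the threshold of 100 from above):

--Total Session counts per schema split by session status
select username, Service_name, status, count(*) from gv$session
group by username, Service_name, status
HAVING count(*) > 100
order by 4 desc;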

Our machine names have Data Centre references, so we are able to split the counts further:

--Split Sessions by Data Centre
SELECT USERNAME, 
CASE 
    WHEN MACHINE LIKE '%dc1%' THEN 'DC1'
    ELSE 'DC2'
END AS DC
, count(*)
FROM gv$session
where SERVICE_NAME = 'MYSERVICE_1' and username = 'MYSCHEMA_2'
GROUP BY USERNAME, 
CASE 
    WHEN MACHINE LIKE '%dc1%' THEN 'DC1'
    ELSE 'DC2'
END 
ORDER BY 3;

OPENQUERY using Linked Server: SQL to Oracle

Handy T-SQL for manipulating data in a destination Oracle database from SQL Server.

Once the Linked Server has been created, the following DML can be used:

--SELECT
select * from openquery([ORADB.MYSCHEMA], 'select * from nvdimage')
--INSERT
insert openquery([ORADB.MYSCHEMA], 'SELECT IMA_IMAGEID,IMA_IMAGE FROM nvdimage')
SELECT IMA_IMAGEID, IMA_IMAGE FROM nvdimage
--DELETE
delete openquery([ORADB.MYSCHEMA], 'select * from nvdimage')
--UPDATE
update openquery([ORADB.MYSCHEMA], 
'select column_a, column_b from myschema.table_name where pk = pk_value')
SET column_a = 'value1', column_b = 'my_value2'
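
For statements that OPENQUERY cannot express (DDL, or anything that does not return a rowset), pass-through execution is an alternative; a sketch, assuming 'rpc out' is enabled on the linked server and using an example id of 12345:

--One-off setup: allow pass-through execution on the linked server
EXEC sp_serveroption 'ORADB.MYSCHEMA', 'rpc out', 'true';
--Run a statement directly at the Oracle side; 12345 is just an example value
EXEC ('delete from nvdimage where ima_imageid = ?', 12345) AT [ORADB.MYSCHEMA];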

OEM Grid Control 10.2 – Purge logs

Came across an issue whereby the OC4J (Oracle Application Server Containers for Java EE) logs had grown unchecked over time and the em-application.log file was over 40GB. The other big file was the default-web-access.log file, which was over 10GB.

As far as I am aware, these files are not purged automatically, so a manual workaround is required.

To clean up and remove these files, do the following. Only remove them if they are no longer required; if not, copy the files first (as in the example below).

Steps taken:

1. Stop Grid Control
2. Move the big files to backup copies
3. Start Grid Control
4. Once happy, remove the backup files

[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/opmn/bin : df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             135G   74G   7G  96% /u01
[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/opmn/bin : ./opmnctl stopall
opmnctl: stopping opmn and all managed processes…
[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/opmn/bin : ./opmnctl status
Unable to connect to opmn.
Opmn may not be up.
[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/opmn/bin : cd /u01/app/oracle/product/10.2/gc/oms10g/j2ee/OC4J_EM/log/OC4J_EM_default_island_1
[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/j2ee/OC4J_EM/log/OC4J_EM_default_island_1 : ls -ltrh
total 49G
drwxrwx---    2 oracle   dba          4.0K Nov  8  2005 oc4j
-rw-r--r--    1 oracle   dba          8.9G Jun 22 10:59 default-web-access.log
-rw-rw----    1 oracle   dba          828M Jun 22 10:59 server.log
-rw-rw----    1 oracle   dba           40K Jun 22 10:59 rmi.log
-rw-rw----    1 oracle   dba           20K Jun 22 10:59 jms.log
-rw-rw----    1 oracle   dba           93K Jun 22 10:59 global-application.log
-rw-r--r--    1 oracle   dba           39G Jun 22 10:59 em-application.log
[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/j2ee/OC4J_EM/log/OC4J_EM_default_island_1 : mv em-application.log em-application.log.backup
[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/j2ee/OC4J_EM/log/OC4J_EM_default_island_1 : mv default-web-access.log default-web-access.log.backup
[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/j2ee/OC4J_EM/log/OC4J_EM_default_island_1 : cd /u01/app/oracle/product/10.2/gc/oms10g/opmn/bin/
[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/opmn/bin : ./opmnctl startall
opmnctl: starting opmn and all managed processes…
[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/opmn/bin : ./opmnctl status

Processes in Instance: EnterpriseManager0..tcy.tradermedia.net
-------------------+--------------------+---------+---------
ias-component      | process-type       |     pid | status
-------------------+--------------------+---------+---------
HTTP_Server        | HTTP_Server        |   24406 | Alive
LogLoader          | logloaderd         |     N/A | Down
dcm-daemon         | dcm-daemon         |   25611 | Alive
OC4J               | home               |   24407 | Alive
OC4J               | OC4J_EMPROV        |   24408 | Alive
OC4J               | OC4J_EM            |   24409 | Alive
OC4J               | OCMRepeater        |   24417 | Alive
WebCache           | WebCache           |   24439 | Alive
WebCache           | WebCacheAdmin      |   24427 | Alive
DSA                | DSA                |     N/A | Down

[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/opmn/bin : cd -
/u01/app/oracle/product/10.2/gc/oms10g/j2ee/OC4J_EM/log/OC4J_EM_default_island_1
[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/j2ee/OC4J_EM/log/OC4J_EM_default_island_1 : rm em-application.log.backup
[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/j2ee/OC4J_EM/log/OC4J_EM_default_island_1 : rm default-web-access.log.backup
[**LIVE**]:/u01/app/oracle/product/10.2/gc/oms10g/j2ee/OC4J_EM/log/OC4J_EM_default_island_1 : df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sdb1             135G   74G   55G  58% /u01

Querying Oracle with Case Insensitivity

Before 10g R2, the only way to do case-insensitive querying was to use functions such as LOWER, UPPER or NLS_UPPER to put the data in a state in which it could be compared.

For example, take a scenario where a table has 100 entries: 60 have the category 'cars' and 40 have the category 'CARS'.

select Category, count(*) from SCHEMA.CATEGORY_TABLE
Group by Category;

CATEGORY    COUNT(*)

cars              60
CARS              40

To return all the data as one category, the LOWER or UPPER function can be used:

select count(*) from SCHEMA.CATEGORY_TABLE
WHERE lower(category)='cars';

The count returned is 100.

To then speed up the data retrieval, a function-based index can be added to the table (if the retrieval saving outweighs the computation cost of maintaining the index):

CREATE INDEX SCHEMA.I_LOWER_CATEGORY ON SCHEMA.CATEGORY_TABLE
(LOWER("CATEGORY"));
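
To check the optimizer is actually picking the function-based index up (the predicate has to use the same LOWER(CATEGORY) expression as the index), the plan can be inspected; a quick sketch:

EXPLAIN PLAN FOR
select count(*) from SCHEMA.CATEGORY_TABLE
WHERE lower(category)='cars';

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);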

Since 10g R2, case insensitivity can be set at session level:

This is handled by the NLS_COMP session parameter (together with NLS_SORT). Its purpose is to determine how the predicates in SQL statements are evaluated when a comparison is required. The typical place for a comparison is the WHERE clause, but this also covers ORDER BY, START WITH, HAVING, etc.

SQL> alter session set NLS_COMP=ANSI;

Session altered.

SQL> alter session set NLS_SORT=BINARY_CI;

Session altered.

SQL> select count(*) from schema.table where category = 'CaRs';

  COUNT(*)
----------
       100
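
Note that with these session settings an ordinary index on the column is not usable for the comparison; if the linguistic settings are going to be used routinely, a case-insensitive function-based index can support them. A sketch, using a hypothetical index name on the same example table:

CREATE INDEX SCHEMA.I_CI_CATEGORY ON SCHEMA.CATEGORY_TABLE
(NLSSORT("CATEGORY", 'NLS_SORT=BINARY_CI'));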

This is discussed in full detail here.

Find and Kill Scraping sessions

Firstly find out the offending schemas:

select username, count(*) from gv$session
group by username
order by 2 desc;

Then, if applicable, find the offending machine/terminal:

select username, machine, terminal, count(*) from gv$session
where username = 'SCHEMA' 
group by username, machine, terminal
order by 4 desc;

If the sessions are not dying off by stopping the application/process, generate kill statements for each session:

SET LINESIZE 100
COLUMN spid FORMAT A10
COLUMN username FORMAT A10
COLUMN program FORMAT A45

SELECT --s.inst_id,
       --s.sid,
       --s.serial#,
       --p.spid,
       --s.username,
       --s.program,
    'ALTER SYSTEM KILL SESSION ''' || s.sid || ',' || s.serial# || ''' IMMEDIATE;'
FROM   gv$session s
       JOIN gv$process p ON p.addr = s.paddr AND p.inst_id = s.inst_id
WHERE  s.type != 'BACKGROUND' and s.username='SCHEMA';
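
On RAC, the session returned by gv$session may belong to a different instance from the one you are connected to; from 11g the instance can be named in the kill statement itself, so a variation of the generated statement (a sketch) is:

--Include the instance id so the kill can be issued from any node (11g onwards)
SELECT 'ALTER SYSTEM KILL SESSION ''' || s.sid || ',' || s.serial# || ',@' || s.inst_id || ''' IMMEDIATE;'
FROM   gv$session s
WHERE  s.type != 'BACKGROUND' and s.username='SCHEMA';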

Full details on how to kill Oracle sessions here.

Investigation SQL for Mview and Mlog growth

--Find oldest snaptime for a mlog - i.e. time the mlog has been building up since 
select * from sys.slog$
where mowner = 'SCHEMA'
and Master = 'TABLE'
ORDER BY SNAPTIME DESC;
--row growth every time the mlog updates
select count(*)
from ATD_UK_FP.MLOG$_FP_INET_OPTIONS
group by SNAPTIME$$;
-- Snapshot sites for a particular mview
select owner, name ,snapshot_site, snapshot_id, refresh_method 
from dba_registered_snapshots rs
WHERE rs.OWNER = 'SCHEMA'
and rs.Name = 'TABLE'
order by rs.owner, rs.name;
--Find which snapshot_site is holding up the mlog
select sl.*, rs.snapshot_site, rs.owner, rs.name, rs.refresh_method 
from sys.slog$ sl
left join dba_registered_snapshots rs on rs.SNAPSHOT_ID = sl.SNAPID
where mowner = 'SCHEMA'
and Master = 'TABLE'
ORDER BY SNAPTIME DESC;
--Find unregistered snaps and snaps older than a certain number of days
--not being refreshed
select mowner, master, snapid, nvl(r.snapsite, 'not registered') snapsite, snaptime
from   sys.slog$ s, sys.reg_snap$ r where  s.snapid=r.snapshot_id(+) and
s.snaptime < (sysdate-0)  -- replace 0 with the number of days to look back
and mowner = 'SCHEMA';
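
Once the site holding things up has been dealt with (refreshed or unregistered), the mlog backlog can be trimmed manually with DBMS_MVIEW.PURGE_LOG; a sketch with placeholder names (note this can break fast refresh for the snapshot whose entries are purged):

--Purge entries no longer needed by the least recently refreshed snapshot
execute dbms_mview.purge_log(master => 'SCHEMA.TABLE', num => 1, flag => 'delete');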

Useful Oracle Metalink Docs

236233.1 – Materialized View Refresh : Log Population and Purge

727632.1 – What to do when a Materialized View Log is not cleared automatically after a Fast Refresh

258634.1 – Materialized View registration at Master Site

ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

When trying to rebuild an index, the following error occurred:

SQL> ALTER INDEX myschema.I_MV_CTR_ADVERT_ID rebuild online;
ALTER INDEX myschema.I_MV_CTR_ADVERT_ID rebuild online
*
ERROR at line 1:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

If the index is based on an mview, then the following error can also occur when trying to refresh the mview:

ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-08102: index key not found, obj# 4145783, file 172, block 364789 (3)
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2545
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2751
ORA-06512: at "SYS.DBMS_SNAPSHOT", line 2720
ORA-06512: at line 2

This can also be related to ORA-08104: this index object 4145783 is being online built or rebuilt

Metalink gives some useful info on the ORA-01452 error – ORA-01452: Cannot Create Unique Index; Duplicate Keys Found [ID 332494.1]

Run the following to find duplicate rows:

-- Script find duplicate rows
select  
--Enter name(s) of column(s) which should be unique
&&c 
from 
-- Enter name of table with duplicate rows
&&t
where rowid not in (select min(rowid) from &&t group by &&c);
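
Once the duplicates have been reviewed (and backed up if needed), the same pattern can be used to remove them, keeping the row with the lowest rowid in each group:

-- Script to delete duplicate rows, keeping one row per group
delete from &&t
where rowid not in (select min(rowid) from &&t group by &&c);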

ORA-08104: this index object 4145783 is being online built or rebuilt

Whilst running an online index rebuild, my session terminated abnormally and was killed. When trying to rerun the index rebuild statement, the following error occurs:

Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.7.0 - 64bit Production
With the Partitioning, Real Application Clusters, OLAP, Data Mining and 
Real Application Testing options

SQL> ALTER INDEX myschema.I_MV_CTR_ADVERT_ID rebuild online;
ALTER INDEX myschema.I_MV_CTR_ADVERT_ID rebuild online
*
ERROR at line 1:
ORA-08104: this index object 4145783 is being online built or rebuilt

Cause:

A session failure during an online index rebuild can leave the data dictionary in a state indicating that a rebuild is ongoing when in fact it is not.

Solution:

The dbms_repair.online_index_clean function has been created to clean up failed online index rebuilds. More details about the function are on Metalink – Session Was Killed During The Rebuild Of Index ORA-08104 [ID 375856.1]

Run the following to resolve:

declare
  isclean boolean;
begin
  isclean := false;
  -- keep calling the cleanup until it reports the failed online rebuild has been cleared
  while isclean = false
  loop
    isclean := dbms_repair.online_index_clean(dbms_repair.all_index_id, dbms_repair.lock_wait);
    dbms_lock.sleep(10);
  end loop;
end;
/
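
Once the block completes (isclean comes back TRUE), the original rebuild can be reissued:

ALTER INDEX myschema.I_MV_CTR_ADVERT_ID rebuild online;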