Oracle RAC


Exadata 
Most important features: Smart Flash Cache and Smart Scan, which offloads data search and retrieval processing to the storage layer

Storage node objects: physical disk, LUN, cell, grid disk
cellcli
CellCLI> list physicaldisk
CellCLI> list lun
CellCLI> list cell
CellCLI> list griddisk
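A sketch of drilling into the details with CellCLI (standard LIST ... DETAIL / ATTRIBUTES syntax):

CellCLI> list cell detail
CellCLI> list griddisk attributes name, size, status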

Exam 1Z0-070 (Oracle Exadata Administration)

 
Oracle RAC CRS Architecture And Processes
Overview: 11gR2 vs. 10g/11gR1
CRS: Cluster Ready Services
OHASD: Oracle High Availability Services daemon; spawns the agents cssdagent, cssdmonitor, orarootagent, and oraagent
       OHASD brings up the GPnP daemon and the CSS daemon
CSSD: Cluster Synchronization Services (daemon)
cssdagent
CRSD: Cluster Ready Services daemon: maintains and monitors the availability of cluster resources
EVMD: Event Manager (daemon)

Clusterware level (started by ohasd):
orarootagent: crsd, ACFS drivers, ctssd, diskmon
oraagent: ASM, evmd, gipcd, gpnpd

Database level (started by crsd):
orarootagent: GNS, GNS VIP, node VIP, SCAN VIP, network
oraagent (grid): ASM, node listener, SCAN listener, ONS
oraagent (oracle): database, instance, service

OLR (Oracle Local Registry): stored on the local file system; provides the data OHASD needs to complete initialization
OCR: Oracle Cluster Registry
Voting disk: always configure an odd number of voting disks

GPnP: Grid Plug and Play
The CSS daemon has access to the GPnP profile stored on the local file system.
The location of the voting disks stored in ASM is read from the GPnP profile.
The voting disks can even be read with the kfed utility while ASM is down.
/u01/app/11.2.0/grid/gpnp/paw-racnode1/profiles/peer/profile.xml
Main usage: with the voting disks and OCR on ASM disks, Oracle RAC reads the GPnP profile to locate them before any ASM instance has started.
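To inspect the profile, a sketch using gpnptool from the Grid home (or simply read the XML file above):

$GRID_HOME/bin/gpnptool get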


ONS: Oracle Notification Service: lets Oracle RAC communicate node status,
typically UP and DOWN events, to the Oracle Universal Connection Pool (UCP).
To take advantage of ONS, configure the application server to use Oracle UCP instead of the application server's own connection pooling feature.
# Oracle RAC ONS configuration
onsctl
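A quick ONS health check sketch:

onsctl ping    # is the local ONS daemon running?
onsctl debug   # dump connection and topology details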

GDS: Global Data Services (managed with gdsctl)

SCAN listener: the SCAN listeners distribute connections based on the load information registered through the LOCAL_LISTENER and REMOTE_LISTENER parameters
LOAD_BALANCE=ON 
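A minimal client tnsnames.ora sketch (the SCAN name myscan.abc.com and service orcl are examples):

ORCL =
  (DESCRIPTION =
    (LOAD_BALANCE = ON)
    (ADDRESS = (PROTOCOL = TCP)(HOST = myscan.abc.com)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = orcl))
  )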

Fast Application Notification (FAN)

FCF: Fast Connection Failover


Step-by-step guide to Fast Connection Failover with JDBC
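For OCI clients the equivalent is AQ HA notifications enabled per service; a sketch assuming database orcl and service AP:

srvctl modify service -d orcl -s AP -q TRUE
srvctl config service -d orcl -s AP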

 

Oracle RAC 12.2

# vip relocation
crsctl relocate resource  MyTestVIP


srvctl config database -d orcl

srvctl status srvpool -a
srvctl status srvpool -detail
srvctl config srvpool -serverpool TopPriority 

 

# rac 12.2

srvctl status rhpserver
srvctl status  mgmtdb 
srvctl status ioserver
nslookup rhpserver

 

GES Locks and Deadlocks


select be.inst_id,  be.pid, resource_name1, be.state, owner_node, 
blocked, blocker
from gv$ges_blocking_enqueue be
where be.resource_name1 like '%TX%';


select * from gv$lock where type='TX';
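A common blocker/waiter sketch, joining gv$lock on the lock identifiers (in RAC, BLOCK = 2 marks a global blocker):

select l1.inst_id blocker_inst, l1.sid blocker_sid, l2.inst_id waiter_inst, l2.sid waiter_sid
from gv$lock l1, gv$lock l2
where l1.block > 0 and l2.request > 0
and l1.id1 = l2.id1 and l1.id2 = l2.id2;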
 


GNS: Grid Naming Service
# Oracle RAC GNS
$GRID_HOME/bin/srvctl config gns
$GRID_HOME/bin/srvctl config gns -list

crsctl get node role status -all
 


# scan listener
srvctl config scan
srvctl config scan_listener 

srvctl status scan 
srvctl status scan_listener

lsnrctl status listener_scan2

nslookup  myscan.abc.com
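To see the services registered with a given SCAN listener (assuming the listener name listener_scan2 from above):

lsnrctl services listener_scan2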

 


# get the Oracle RAC master node
oclumon manage -get MASTER

# get the management repository db info: mgmtdb
oclumon manage -get reppath

oclumon dumpnodeview -allnodes

 


crsctl get cluster mode status
# "flex" mode
crsctl get cluster class
crsctl get cluster name

# Oracle rac nodeapps
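The node applications (VIPs, network, ONS) can be checked with:

srvctl status nodeapps
srvctl config nodeapps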

 


Exadata plugin installation on Oracle Enterprise Manager Cloud Control 13c

See the official Exadata docs for new features, installation prerequisites, and upgrade notes.

OEDA deployment: install.sh runs the configuration in steps (-s 1 through 18).

Grid Infrastructure 12.1.0.2

./install.sh -cf client-mycluster.xml -s 1
./install.sh -cf client-mycluster.xml -s 2
./install.sh -cf client-mycluster.xml -s 3
./install.sh -cf client-mycluster.xml -s 4

./install.sh -cf client-mycluster.xml -s 18




latch: shared pool 
library cache: mutex X
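These are classic shared pool / cursor contention waits; a sketch for spotting sessions currently waiting on them:

select inst_id, event, count(*)
from gv$session
where event in ('latch: shared pool', 'library cache: mutex X')
group by inst_id, event;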

Oracle ASMLib
Three RPM packages are needed:
oracleasm kernel driver (kernel-version specific; SMP variants exist)
oracleasm-support
oracleasmlib

rpm -Uvh oracleasm-support-version.arch.rpm \
oracleasm-kernel-version.arch.rpm \
oracleasmlib-version.arch.rpm

$ su -
# rpm -Uvh oracleasm-support-1.0.2-1.i386.rpm \
oracleasmlib-1.0.0-1.i386.rpm \
oracleasm-2.4.21-EL-1.0.0-1.i686.rpm

/etc/init.d/oracleasm configure

/usr/sbin/oracleasm configure -i


/etc/init.d/oracleasm disable

/etc/init.d/oracleasm enable

su -
# cd /etc/rc3.d
# ln -s ../init.d/oracleasm S99oracleasm
# ln -s ../init.d/oracleasm K01oracleasm

# lsmod | grep oracleasm
oracleasm 14224 1
# dmesg | grep oracleasm
ASM: oracleasmfs mounted with options: 

# chown oracle:dba /dev/sd*
# chmod 660 /dev/sd*

/etc/init.d/oracleasm createdisk VOL1 /dev/sdb
/etc/init.d/oracleasm createdisk VOL2 /dev/sdc
/etc/init.d/oracleasm createdisk VOL3 /dev/sdd
/etc/init.d/oracleasm createdisk VOL4 /dev/sde

/etc/init.d/oracleasm listdisks

/etc/init.d/oracleasm scandisks
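To verify a labeled disk afterward (querydisk is part of the same init script):

/etc/init.d/oracleasm querydisk VOL1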


asmcmd ls ---> V$ASM_ALIAS, V$ASM_FILE

asmcmd lsattr -lm -G <diskGroup> 
    
asmcmd lsdsk

asmcmd lsdsk --statistics 

asmcmd iostat 

asmcmd lsop ---> V$ASM_OPERATION 

asmcmd lsct 
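asmcmd lsdg is also useful; it summarizes disk group state and capacity:

asmcmd lsdg ---> V$ASM_DISKGROUP_STAT (by default)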

ALTER DISKGROUP FRA ADD DISK 'ORCL:SDE5' NAME 'FRA_DISK3','ORCL:SDE6' NAME 'FRA_DISK4';

ALTER DISKGROUP FRA DROP DISK FRA_DISK1, FRA_DISK4;

CREATE DISKGROUP mydgroup1 NORMAL REDUNDANCY
FAILGROUP mycontroller1 DISK
'/devices/diska1',
'/devices/diska2',
'/devices/diska3',
'/devices/diska4'
FAILGROUP mycontroller2 DISK
'/devices/diskb1',
'/devices/diskb2',
'/devices/diskb3',
'/devices/diskb4'
ATTRIBUTE 'sector_size'='4096';
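After adding or dropping disks, a manual rebalance can be started and monitored; a sketch using the disk group created above:

ALTER DISKGROUP mydgroup1 REBALANCE POWER 4;
SELECT * FROM V$ASM_OPERATION;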

http://www.puschitz.com/

http://www.puschitz.com/InstallingOracle10g.shtml

SQL*Plus
 SELECT * FROM V$PWFILE_USERS 

ASMCMD lspwusr

init.ora
INSTANCE_TYPE = ASM 
CLUSTER_DATABASE = TRUE 

ALTER SYSTEM SET ASM_DISKSTRING='ORCL:*' SID='*' SCOPE=SPFILE;

$ srvctl -h
$ srvctl start asm -n host01
$ srvctl start asm -n host02
$ srvctl status asm -n host01
$ srvctl status asm -n host02
$ srvctl status vip -n node1

$ srvctl stop asm
$ srvctl status asm -n host01
$ srvctl status asm

$ export ORACLE_SID=+ASM1
$ export ORACLE_HOME=/u01/app/11.2.0/grid
$ $ORACLE_HOME/bin/sqlplus / AS SYSASM
SQL> startup 

create diskgroup dgroup1 normal redundancy
failgroup fgroup1 disk 'ORCL:VOL1','ORCL:VOL2'
failgroup fgroup2 disk 'ORCL:VOL3','ORCL:VOL4';

ALTER DISKGROUP DGROUP1 ADD FAILGROUP FGROUP1 DISK
'ORCL:VOL5' FAILGROUP FGROUP2 DISK 'ORCL:VOL6';

CREATE DISKGROUP ACFS NORMAL REDUNDANCY 
FAILGROUP ACFS2 
DISK 'ORCL:DATA7' SIZE 1953 M ,'ORCL:DATA8' SIZE 2329 M 
FAILGROUP ACFS1 
DISK 'ORCL:DATA5' SIZE 1953 M ,'ORCL:DATA6' SIZE 1953 M 
ATTRIBUTE 'compatible.rdbms' = '11.2.0.0.0', 
		'compatible.asm' = '11.2.0.0.0', 
		'compatible.advm' = '11.2.0.0.0' ;

CREATE DISKGROUP FRA NORMAL REDUNDANCY 
DISK 'ORCL:SDD11' NAME 'FRA_DISK1' SIZE 977 M,    'ORCL:SDD12' NAME 'FRA_DISK2' SIZE 977 M;

asmcmd mkdg disk_config.xml
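A minimal sketch of the mkdg XML config file (device paths are examples; tags per the asmcmd mkdg format):

<dg name="mydgroup1" redundancy="normal">
  <fg name="fg1">
    <dsk string="/devices/diska1"/>
  </fg>
  <fg name="fg2">
    <dsk string="/devices/diskb1"/>
  </fg>
  <a name="compatible.asm" value="11.2"/>
</dg>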

ALTER DISKGROUP ACFS ADD VOLUME TESTVOL SIZE 500M UNPROTECTED;

-- alter diskgroup dgroup1 drop disk VOL4;

volcreate -G ACFS -s 500M --redundancy unprotected testvol
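Once the volume exists, an ACFS file system can be created and mounted; a sketch (the /dev/asm device suffix varies, check with volinfo):

asmcmd volinfo -G ACFS testvol
/sbin/mkfs -t acfs /dev/asm/testvol-123
mkdir -p /u01/app/acfsmounts/test
/bin/mount -t acfs /dev/asm/testvol-123 /u01/app/acfsmounts/test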

col name for a10
select name,state from v$asm_diskgroup;

select label,failgroup from v$asm_disk;

SELECT disk_number, label FROM V$asm_disk;

ALTER DISKGROUP dgroup1 MOUNT;

col name for a10
col type for a10
col value for a20
show parameter asm_diskgroups

-- ALTER DISKGROUP ALL DISMOUNT;

-- alter diskgroup dgroup1 drop disk VOL2;

SELECT group_number, operation, state, est_work, sofar, est_rate,
est_minutes FROM v$asm_operation;

select name,allocation_unit_size,total_mb from v$asm_diskgroup;

select * from v$asm_client;

V$ASM_ALIAS
V$ASM_ATTRIBUTE
V$ASM_CLIENT
V$ASM_DISK
V$ASM_DISK_IOSTAT
V$ASM_DISK_STAT
V$ASM_DISKGROUP
V$ASM_DISKGROUP_STAT
V$ASM_FILE
V$ASM_OPERATION
V$ASM_TEMPLATE
V$ASM_ACFSVOLUME
V$ASM_FILESYSTEM

The V$ASM_* views exist in both ASM and database instances. The rows returned will vary.

-- create tablespace ts_test datafile '+DGROUP1' size 200M;
-- create user kamus identified by pass default tablespace ts_test;
-- grant dba to kamus;

== =================================================================

/etc/init.d/init.ohasd
cat /etc/inittab

dig @192.0.2.155 cluster01-scan.cluster01.example.com


/etc/security/limits.conf 

grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536

/etc/pam.d/login 
session    required     pam_limits.so
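To verify the limits take effect at login, a quick check:

su - grid
ulimit -Sn   # soft nofile, expect 1024
ulimit -Hn   # hard nofile, expect 65536
ulimit -Su   # soft nproc, expect 2047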

== =============================================


systemctl status ohas.service

ps -ef|grep lsnr|grep -v 'grep'|grep -v 'ocfs'|awk '{print$9}'

su - grid

# alias crs_stat="crsctl stat res"
crs_stat -t

crs_stat -t -v

$ORACLE_GRID_HOME/bin/crsctl stat res -t

crsctl check crs

crsctl check cluster

crsctl query css votedisk

srvctl status asm -a

srvctl status database -d orcl 

-- 
olsnodes

olsnodes -n

olsnodes -n -i -s -t

crsctl query css votedisk
crs_stat -t
crs_stat -t -init

crsctl status resource -t
crsctl status res -t

crsctl check crs
# CRS-1714:Unable to discover any voting files
# /oracle/11.2.0/grid/log/testdb01/cssd/ocssd.log


cluvfy comp ocr -n all -verbose


# grid account
srvctl status asm
srvctl status scan
srvctl status scan_listener
srvctl status listener






crsctl start crs

crsctl stop crs 

crsctl stop cluster -all

crsctl enable crs
crsctl disable crs 

cat /etc/oracle/ocr.loc
ocrconfig_loc=+DATA
local_only=FALSE

grep voting $GRID_HOME/log/`hostname`/cssd/ocssd.log

cluvfy comp ocr -n all -verbose
cluvfy comp ocr -n all

ocrcheck

ocrcheck -local

# crsctl delete css votedisk path_to_voting_disk
# crsctl add css votedisk path_to_voting_disk

# crsctl add css votedisk path_to_new_voting_disk
# crsctl delete css votedisk path_to_old_voting_disk

crsctl replace votedisk +asm_disk_group

crsctl replace votedisk {+asm_disk_group | path_to_voting_disk}

ocrconfig -showbackup auto

ocrconfig -backuploc <backup_directory>

# ocrconfig -add +DATA2
# ocrconfig -add /dev/sde1

ocrconfig -replace /dev/sde1 -replacement +DATA2

ocrconfig -repair -add +DATA1

ocrconfig -delete +DATA2
# ocrconfig -delete /dev/sde1

crsctl query crs releaseversion
crsctl query crs softwareversion
crsctl query crs softwareversion [node_name] 
crsctl query crs activeversion

ocrconfig -add +DATA2

ocrconfig -delete /dev/raw/raw1
ocrconfig -delete /dev/raw/raw2

crsctl query crs activeversion

ocrconfig -add /dev/sde1
ocrconfig -add /dev/sdf1

ocrconfig -delete +DATA2

ocrconfig -manualbackup

ocrconfig -showbackup manual

ocrconfig -export /home/oracle/ocr.backup

ocrconfig -showbackup
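To restore the OCR, a sketch (run as root with the clusterware stack down; the paths are examples):

# from a physical backup:
ocrconfig -restore /u01/app/11.2.0/grid/cdata/cluster01/backup00.ocr
# from a logical export:
ocrconfig -import /home/oracle/ocr.backup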

srvctl stop database -d orcl 

# grid or oracle user
# only stop current node
${GRID_HOME}/bin/crsctl stop crs 

crsctl stop cluster

crsctl stop cluster -all


== ==============================================

oifcfg iflist -p -n

oifcfg getif

srvctl config nodeapps -a

srvctl stop service -d RDBA -s sales,oltp -n host01

srvctl config vip -n host01

srvctl stop vip -n host01

srvctl modify nodeapps -n host01 -A 192.168.2.125/255.255.255.0/eth0

srvctl start vip -n host01

cluvfy comp nodecon -n all -verbose

oifcfg setif -global eth2/192.0.2.0:cluster_interconnect

# oifcfg getif
# crsctl stop crs

ifconfig eth2 192.0.2.15 netmask 255.255.255.0 broadcast 192.0.2.255

oifcfg delif -global eth1/192.168.1.0
# crsctl start crs

crsctl status resource

srvctl add scan -n cluster01-scan
srvctl remove scan [-f]

srvctl add scan_listener 
srvctl add scan_listener -p 1551 ## using a nondefault port number ##

srvctl remove scan_listener [-f]

srvctl modify scan -n new-scan

srvctl modify scan_listener -u

# scan vip
srvctl config scan

# scan listener
srvctl config scan_listener


1) cvuqdisk RPM: shipped in the rpm directory of the Grid Infrastructure installation media

./runcluvfy.sh stage -pre crsinst -n rac1,rac2
./runcluvfy.sh stage -pre crsinst -n rac1,rac2 -fixup -verbose

./runcluvfy.sh stage -post hwos -n cnsz001,cnsz002 -verbose  

addNode.sh

cluvfy stage -post hwos -n host03

cluvfy comp peer -refnode host01 -n host03 -orainv oinstall -osdba asmdba -verbose 

cluvfy stage -post nodeadd -n host03 -verbose

crsctl unpin css -n host03
./rootcrs.pl -deconfig -force

./runInstaller -detachHome  ORACLE_HOME=/u01/app/11.2.0/grid

./deinstall -local

cd Grid_home/oui/bin
[grid@host01]$ ./runInstaller -updateNodeList \
ORACLE_HOME=/u01/app/11.2.0/grid \
"CLUSTER_NODES={host01,host02}" CRS=TRUE

cluvfy stage -post nodedel -n host03 [-verbose]

== ==============================================

crsctl query crs softwareversion [hostname]

crsctl query crs activeversion

srvctl stop nodeapps -n host01 -r

cd patchset_directory/Disk1
$ ./runInstaller

crsctl stop crs

# cd /u01/app/11.2.0/grid/install
# ./root*.sh

== ================================================

prerootpatch.sh (as root)
prepatch.sh (as the clusterware owner)
postpatch.sh (as the clusterware owner)
postrootpatch.sh (as root)

$ export ORACLE_HOME=/u01/app/11.2.0/grid
$ opatch command [options] 
opatch query -all patch_location | grep -i Rolling

opatch lsinventory -detail -oh /u01/app/11.2.0/grid 

crsctl stop crs

cd 
# ./custom/scripts/prerootpatch.sh -crshome /u01/app/11.2.0/grid -crsuser grid 

./custom/scripts/prepatch.sh -crshome /u01/app/11.2.0/grid 

[grid]$ opatch apply -local -oh Grid_home patch_location

./custom/scripts/postpatch.sh -crshome Grid_home

./custom/scripts/postrootpatch.sh -crshome Grid_home

opatch lsinventory -detail -oh Grid_home

== ==============================================

crsctl status server -f

crsctl add serverpool SP1 -attr "MIN_SIZE=2, MAX_SIZE=5, IMPORTANCE=3"

srvctl add srvpool -g SP1 -l 2 -u 5 -i 3 -n "server1,server2"

crsctl add serverpool SP1 -file /usr/local/bin/SP1_attributes.txt

crsctl delete serverpool SP1

srvctl remove srvpool -g SP1 

crsctl modify serverpool SP2 -attr "MIN_SIZE=4, MAX_SIZE=8, IMPORTANCE=7"

srvctl modify srvpool -g SP2 -l 4 -u 8 -i 7
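The clusterware-level view of the same pools:

crsctl status serverpool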

Script agent: CRS_HOME/bin/scriptagent

Application agent: CRS_HOME/bin/appagent

== ====================================================================

srvctl config nodeapps -s 
-->ONS daemon exists. Local port 6100, remote port 6200

srvctl add nodeapps -l 6100 -r 6200

srvctl start nodeapps
srvctl stop  nodeapps

== ====================================================================

diagcollection.pl
Grid_home/bin/diagcollection.pl 
/u01/app/11.2.0/grid/bin/diagcollection.pl --collect

cluvfy comp -list

vi /etc/sysconfig/ntpd
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"

cluvfy comp crs -n all -verbose

crsctl set log res "ora.host01.vip:1" 
crsctl set log res "ora.host01.vip:0" 

crsctl lsmodules css

crsctl set log {crs | css | evm} "component_name=debugging_level,[...]" 

crsctl set log crs "CRSEVT=1","CRSCOMM=2","CRSTIMER=5" 

crsctl set log res "myResource1=1"

$ export SRVM_TRACE=TRUE
$ srvctl config database -d orcl > /tmp/srvctl.trc
$ cat /tmp/srvctl.trc

# crsctl set css diagwait 13 -force
# crsctl unset css diagwait

crsctl get css misscount
crsctl set css misscount 45

ocrcheck
ocrconfig

== ================================

ALTER TABLESPACE myspace ADD DATAFILE '+DATA/mydir/myspace02.dbf' SIZE 50M;

ALTER TABLESPACE myspace ADD DATAFILE '+DATA(mytemplate)/mydir/myspace03.dbf';

ALTER TABLESPACE myspace ADD DATAFILE '+DATA' SIZE 50M;

ALTER DATABASE ADD LOGFILE;

RMAN> BACKUP AS COPY DATAFILE "+DATA/rdbms/datafile/tbs_5.256.565313879" FORMAT "+DATA2";

init.ora
DB_CREATE_ONLINE_LOG_DEST_1 = '+DATA'
DB_CREATE_ONLINE_LOG_DEST_2 = '+FRA'

-- --------------------------

srvctl start instance -d orcl -i orcl1,orcl2
srvctl stop instance -d orcl -i orcl1,orcl2

srvctl start database -d orcl
srvctl start database -d orcl -o open

srvctl start database -d orcl -n node1,node2

srvctl config database -d orcl -a

srvctl modify database -d orcl -y MANUAL


srvctl add service -d PROD1 -s GL -r RAC02 -a RAC01

srvctl add service -d PROD1 -s AP -r RAC01 -a RAC02

srvctl add service -d PROD2 -s BATCH -g SP1 -c singleton -y manual

srvctl add service -d PROD2 -s ERP -g SP2 -c UNIFORM -y manual

srvctl start service -d orcl -s AP
srvctl stop service -d orcl -s AP 

srvctl disable service -d orcl -s AP -i orcl4
srvctl modify service -d orcl -s AP -i orcl5 -r

srvctl relocate service -d orcl -s AP -i orcl5 -t orcl4

srvctl modify service -d db_unique_name -s service_name -j LONG|SHORT

srvctl modify service -s gl.example.com -q TRUE -P BASIC -e SELECT -z 180 -w 5 -j LONG
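What the switches in the last example mean (11.2 srvctl syntax):

# -q TRUE  : enable AQ HA (FAN) notifications
# -P BASIC : TAF policy
# -e SELECT: failover type
# -z 180   : failover retries
# -w 5     : failover delay (seconds)
# -j LONG  : connection load balancing goal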

EXECUTE DBMS_SERVER_ALERT.SET_THRESHOLD(
METRICS_ID => DBMS_SERVER_ALERT.ELAPSED_TIME_PER_CALL
, warning_operator => DBMS_SERVER_ALERT.OPERATOR_GE
, warning_value => '500000'
, critical_operator => DBMS_SERVER_ALERT.OPERATOR_GE
, critical_value => '750000'
, observation_period => 30
, consecutive_occurrences => 5
, instance_name => NULL
, object_type => DBMS_SERVER_ALERT.OBJECT_TYPE_SERVICE
, object_name => 'servall');


EXEC DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(
service_name => 'ERP', module_name=> 'PAYROLL', 
action_name => 'EXCEPTIONS PAY');

EXEC DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(service_name => 'ERP', module_name=> 'PAYROLL', action_name => NULL);

EXEC DBMS_MONITOR.SERV_MOD_ACT_STAT_ENABLE(service_name => 'HOT_BATCH', module_name =>'POSTING', action_name => NULL);

V$SESSION
V$ACTIVE_SESSION_HISTORY
	
V$SERVICE_STATS
V$SERVICE_EVENT
V$SERVICE_WAIT_CLASS
V$SERVICEMETRIC
V$SERVICEMETRIC_HISTORY
V$SERV_MOD_ACT_STATS
DBA_ENABLED_AGGREGATIONS
DBA_ENABLED_TRACES
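A sketch for checking which aggregations are enabled and reading the collected statistics (column names per the standard dictionary definitions):

SELECT aggregation_type, primary_id service_name, qualifier_id1 module, qualifier_id2 action
FROM dba_enabled_aggregations;

SELECT stat_name, value FROM v$serv_mod_act_stats
WHERE service_name = 'ERP' AND module = 'PAYROLL';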

lsnrctl status LISTENER_SCAN2
Restore a RAC database (e.g., a 12c Oracle RAC database):

srvctl disable database -d compdb
# traditional rman restore and recover
srvctl enable database -d compdb
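The traditional restore step in the middle might look like this sketch (single instance mounted; assumes usable backups):

srvctl stop database -d compdb
rman target /
RMAN> startup mount;
RMAN> restore database;
RMAN> recover database;
RMAN> alter database open;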

OCR and voting disk backup and recovery

# ocrconfig -export ocr_backup_`date +%Y%m%d`.dmp
# ocrconfig -manualbackup

# # dd backup of voting disks is not supported in 11gR2 and later (12c, 18c, 19c)
# dd if=/dev/raw/raw2 of=vod_backup_`date +%Y%m%d`.dmp bs=1k count=500k

Since 11gR2, the voting disk data is automatically backed up in the OCR as part of any configuration change and
is automatically restored to any voting disk that is added.
That is to say, as long as you have the latest OCR backup, you automatically have a voting disk backup too.

# # to restore the voting disks, use:
# crsctl replace votedisk +DATA