Oracle 12c New Features

I am in the process of putting my notes into web pages; please check back later for more content.

1Z0-060: New Features of Oracle Database 12c


Enterprise Manager and Other Tools
    Use EM Express
    Use OUI, DBCA for installation and configuration

EM Express:
alter session set container=pdborcl;

select dbms_xdb.gethttpsport() from dual;

select dbms_xdb_config.gethttpsport() from dual;

lsnrctl status | grep HTTP
 --> show em port info

exec dbms_xdb_config.sethttpsport(5501);

grant EM_EXPRESS_BASIC to <username>;
grant EM_EXPRESS_ALL to <username>;

The EM_EXPRESS_BASIC role enables users to connect to EM Express and to view the pages in read-only mode. 
The EM_EXPRESS_BASIC role includes the SELECT_CATALOG_ROLE role.

The EM_EXPRESS_ALL role enables users to connect to EM Express and use all the functionality provided by EM Express 
(read/write access to all EM Express features). 
The EM_EXPRESS_ALL role includes the EM_EXPRESS_BASIC role

Depending on which packs you own, set the CONTROL_MANAGEMENT_PACK_ACCESS initialization parameter accordingly.
For example, if you have the Diagnostics Pack but not the Tuning Pack,
set CONTROL_MANAGEMENT_PACK_ACCESS=DIAGNOSTIC.
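For instance (a sketch; the value must match your licensed packs):

```sql
-- Valid values: NONE, DIAGNOSTIC, DIAGNOSTIC+TUNING
ALTER SYSTEM SET control_management_pack_access = 'DIAGNOSTIC' SCOPE = BOTH;
```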
DBCA, DBUA
perl scripts: catctl.pl
$ORACLE_HOME/perl/bin/perl catctl.pl -n 8 catupgrd.sql   -- -n 8: run the upgrade with up to 8 parallel processes
$ORACLE_HOME/perl/bin/perl catctl.pl catupgrd.sql        -- serial upgrade

method 1: convert to cdb
perl catcon.pl -u SYS -d $ORACLE_HOME/rdbms/admin -e -s -b create_dictionary catcdb.sql


method 2: 
1): on source db: exec DBMS_PDB.DESCRIBE('PDB1.xml');
2): On target db: create pluggable database PDB1 using 'PDB1.xml' nocopy tempfile reuse;
3): run @?/rdbms/admin/noncdb_to_pdb.sql

MOS Note 884522.1

preupgrd.sql
preupgrade_fixups.sql
postupgrade_fixups.sql


Basics of Multitenant Container Database (CDB)
    Identify the benefits of the multitenant container database
    Explain root and multitenant architecture

ALTER SESSION SET CONTAINER=CDB$ROOT;
ALTER SESSION SET CONTAINER=PDB$SEED;
ALTER SESSION SET CONTAINER=PDB1;
how to login to pdb
1): login into cdb, then ALTER SESSION SET container = pdb1;
2): login into pdb directly using tns entry for pdb

PDB1 =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = 10.10.10.10)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = pdb1)
    )
  )

$ORACLE_HOME/network/admin/listener.ora
USE_SID_AS_SERVICE_listener=on
lsnrctl reload

-- Where to get the service name:
SELECT name, pdb FROM   v$services ORDER BY name;

NAME			       PDB
------------------------------ ------------------------------
SYS$BACKGROUND                 CDB$ROOT
SYS$USERS                      CDB$ROOT
cdb1                           CDB$ROOT
cdb1XDB                        CDB$ROOT
pdb1                           PDB1
pdb2                           PDB2

lsnrctl service

LSNRCTL for Linux: Version 12.1.0.1.0 - Production on 20-MAY-2014 09:01:34

Copyright (c) 1991, 2013, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1521)))
Services Summary...
Service "cdb1" has 1 instance(s).
... ...
Service "pdb1" has 1 instance(s).
  Instance "cdb1", status READY, has 1 handler(s) for this service...
... ...
Background Processes /SGA (shared by root & all PDBs)

Character Set shared by root & all PDBs

Redo shared by root and all PDBs

Undo shared by root and all PDBs

Temporary Tablespace: can create for each PDB

Time Zones: can be set for each PDB

Initialization parameters: some can be set by PDB

Separate SYSTEM & SYSAUX for root & each PDB

Data files separate for root & each PDB (same block size)


Configuring and Creating CDBs and PDBs
    Create and configure a CDB
    Create and configure a PDB
    Migrate a non-CDB to a PDB database

CREATE DATABASE newcdb
USER SYS IDENTIFIED BY sys_password USER SYSTEM IDENTIFIED BY system_password
LOGFILE GROUP 1 ('/u01/logs/my/redo01a.log','/u02/logs/my/redo01b.log') SIZE 100M BLOCKSIZE 512,
GROUP 2 ('/u01/logs/my/redo02a.log','/u02/logs/my/redo02b.log') SIZE 100M BLOCKSIZE 512,
GROUP 3 ('/u01/logs/my/redo03a.log','/u02/logs/my/redo03b.log') SIZE 100M BLOCKSIZE 512 MAXLOGHISTORY 1
MAXLOGFILES 16 MAXLOGMEMBERS 3 MAXDATAFILES 1024 CHARACTER SET AL32UTF8 NATIONAL
CHARACTER SET AL16UTF16
EXTENT MANAGEMENT LOCAL
DATAFILE '/u01/app/oracle/oradata/newcdb/system01.dbf' SIZE 700M REUSE AUTOEXTEND ON NEXT 10240K MAXSIZE
UNLIMITED
SYSAUX DATAFILE '/u01/app/oracle/oradata/newcdb/sysaux01.dbf' SIZE 550M REUSE AUTOEXTEND ON NEXT 10240K
MAXSIZE UNLIMITED
DEFAULT TABLESPACE deftbs DATAFILE '/u01/app/oracle/oradata/newcdb/deftbs01.dbf' SIZE 500M REUSE AUTOEXTEND
ON MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE tempts1 TEMPFILE '/u01/app/oracle/oradata/newcdb/temp01.dbf' SIZE 20M REUSE
AUTOEXTEND ON NEXT 640K MAXSIZE UNLIMITED
UNDO TABLESPACE undotbs1 DATAFILE '/u01/app/oracle/oradata/newcdb/undotbs01.dbf' SIZE 200M REUSE AUTOEXTEND
ON NEXT 5120K MAXSIZE UNLIMITED 
ENABLE PLUGGABLE DATABASE
SEED FILE_NAME_CONVERT = ('/u01/app/oracle/oradata/newcdb/', '/u01/app/oracle/oradata/pdbseed/')
SYSTEM DATAFILES SIZE 125M AUTOEXTEND ON NEXT 10M MAXSIZE UNLIMITED
SYSAUX DATAFILES SIZE 100M
USER_DATA TABLESPACE usertbs DATAFILE '/u01/app/oracle/oradata/pdbseed/usertbs01.dbf' SIZE 200M REUSE
AUTOEXTEND ON MAXSIZE UNLIMITED


CREATE PLUGGABLE DATABASE dwpdb ADMIN USER dwadm IDENTIFIED BY password;

CREATE PLUGGABLE DATABASE dwpdb ADMIN USER dwadm IDENTIFIED BY password ROLES=(SELECT_CATALOG_ROLE,GATHER_SYSTEM_STATISTICS);

The PDB_DBA role is also granted to the PDB admin user, in addition to the specifically granted roles above.
CREATE PLUGGABLE DATABASE dwpdb ADMIN USER dwadm
IDENTIFIED BY
password
STORAGE (
MAXSIZE 10G
MAX_SHARED_TEMP_SIZE 100M) DEFAULT TABLESPACE dw
DATAFILE '/disk1/oracle/dbs/dwpdb/dw1.dbf' SIZE 2G
AUTOEXTEND ON PATH_PREFIX = '/disk1/oracle/dbs/dwpdb/'
FILE_NAME_CONVERT = ('/disk1/oracle/dbs/pdbseed/',
'/disk1/oracle/dbs/dwpdb/');

CREATE PLUGGABLE DATABASE
pdb2 FROM pdb1
;

CREATE PLUGGABLE DATABASE
pdb2 FROM pdb1
PATH_PREFIX = '/disk2/oracle/pdb2'
FILE_NAME_CONVERT = ('/disk1/oracle/pdb1/',
'/disk2/oracle/pdb2/');

CREATE PLUGGABLE DATABASE
pdb2 FROM pdb1
FILE_NAME_CONVERT = ('/disk1/oracle/pdb1/',
'/disk2/oracle/pdb2/') STORAGE (MAXSIZE 2G
MAX_SHARED_TEMP_SIZE 100M);
CREATE PLUGGABLE DATABASE
pdb2 FROM
pdb1@pdb1_link
;

BEGIN
DBMS_PDB.DESCRIBE(
pdb_descr_file => '/disk1/oracle/ncdb.xml');
END;
/

Run the noncdb_to_pdb.sql script:
@$ORACLE_HOME/rdbms/admin/noncdb_to_pdb.sql

CREATE PLUGGABLE DATABASE dwpdb USING '/disk1/usr/dwpdb.xml' NOCOPY TEMPFILE REUSE;

CREATE PLUGGABLE DATABASE dwpdb USING '/disk1/usr/dwpdb.xml' 
SOURCE_FILE_NAME_CONVERT = ('/disk1/oracle/dw/', '/disk2/oracle/dw/') NOCOPY 
STORAGE (MAXSIZE 4G MAX_SHARED_TEMP_SIZE 100M) TEMPFILE REUSE;

ALTER PLUGGABLE DATABASE dwpdb UNPLUG INTO '/oracle/data/dwpdb.xml';

DROP PLUGGABLE DATABASE dwpdb KEEP DATAFILES;

DROP PLUGGABLE DATABASE dwpdb INCLUDING DATAFILES;



Managing CDBs and PDBs
    Establish connection to a CDB/PDB
    Start up and shut down a CDB/PDB
    Change instance parameters for a CDB/PDB

notes:
init.ora
dispatchers="(PROTOCOL=TCP)(SERVICE=XDB)"

ALTER SESSION SET CONTAINER=PDB1;

ALTER SESSION SET CONTAINER=CDB$ROOT;

ALTER SESSION SET CONTAINER=PDB$SEED;

ALTER SESSION SET CONTAINER=pdb_ss; -- (not case sensitive)

SELECT * FROM DBA_CONTAINER_DATA;

alter session set container=CDB$ROOT;

select name,open_mode,open_time from v$pdbs;

alter pluggable database ALL open read only ; -- (from CDB)
ALTER PLUGGABLE DATABASE PDB_SS, PDB1 CLOSE ;

ALTER PLUGGABLE DATABASE CLOSE IMMEDIATE;
alter pluggable database all except pdb1 close immediate;

SQL> ALTER PLUGGABLE DATABASE OPEN READ WRITE;

SQL> ALTER PLUGGABLE DATABASE CLOSE; -- ( shutdown )

Alter pluggable database open upgrade; -- (to migrate)

ALTER PLUGGABLE DATABASE PDB_SS, PDB1 open;

-- Pluggable database altered.
select name,open_mode,open_time from v$pdbs;
alter pluggable database pdb$seed close immediate;
ERROR at line 1:
ORA-65017: seed pluggable database may not be dropped or altered

startup pluggable database pdb1 open; -- read/write
(or, while in pdb1, just run STARTUP)

Startup pluggable database pdb1 open read only;

Startup pluggable database pdb1 force; (closes/opens)
V$SESSION & V$INSTANCE have a CON_ID column for containers 
new V$IO_OUTLIER 

SELECT d.con_ID, p.PDB_NAME,d.FILE_ID,d.TABLESPACE_NAME, d.FILE_NAME 
FROM CDB_PDBS p, CDB_DATA_FILES d WHERE p.PDB_ID (+) = d.CON_ID order by d.con_id ;

SELECT p.PDB_ID, p.PDB_NAME, t.OWNER, t.TABLE_NAME
FROM CDB_PDBS p, CDB_TABLES t 
where p.PDB_ID = t.CON_ID AND T.OWNER ='ORDDATA' ORDER BY t.TABLE_NAME;

select name,service_id,con_name,con_id from v$active_services order by 1;
init.ora
ENABLE_PLUGGABLE_DATABASE = TRUE (CDB) | FALSE (non-CDB)
SELECT NAME, CDB, CON_ID FROM V$DATABASE;
SELECT SYS_CONTEXT ('USERENV', 'CON_NAME') FROM DUAL;

COL PDB_NAME FOR A30
SELECT PDB_ID,PDB_NAME,STATUS,CON_ID FROM CDB_PDBS;
SELECT NAME,OPEN_MODE,TOTAL_SIZE/1024/1024/1024 FROM V$PDBS;

ALTER SESSION SET CONTAINER=SALESPDB;
ALTER SESSION SET CONTAINER=PDB$SEED;
ALTER SESSION SET CONTAINER=CDB$ROOT;

SELECT NAME FROM V$PARAMETER WHERE ISPDB_MODIFIABLE = 'TRUE' AND NAME LIKE 'optim%';

Key ones modifiable:
cursor_sharing,open_cursors,result_cache_mode,sort_area_size

Key ones NOT modifiable:
shared_pool_size,db_cache_size,memory_target
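A minimal sketch of changing a PDB-modifiable parameter from inside a PDB (pdb1 is a placeholder name):

```sql
ALTER SESSION SET CONTAINER = pdb1;
-- CONTAINER = CURRENT scopes the change to this PDB only
ALTER SYSTEM SET cursor_sharing = 'EXACT' CONTAINER = CURRENT;
```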

-- The following commands can be run at cdb and pdb level
ALTER SYSTEM FLUSH SHARED_POOL
ALTER SYSTEM FLUSH BUFFER_CACHE
ALTER SYSTEM SET USE_STORED_OUTLINES
ALTER SYSTEM SUSPEND/RESUME
ALTER SYSTEM CHECKPOINT
ALTER SYSTEM KILL SESSION
ALTER SYSTEM DISCONNECT SESSION
ALTER SYSTEM SET initialization_parameter


Managing Tablespaces, Common and Local Users, Privileges and Roles
    Manage tablespaces in a CDB/PDB
    Manage users and privileges for CDB/PDB

CREATE TABLESPACE TS_PDB DATAFILE '/dbs/CONDB/PDB/ts_pdb_01.dbf' SIZE 10M;
ALTER PLUGGABLE DATABASE PDB DEFAULT TABLESPACE TS_PDB;
select name, open_mode , open_time from v$pdbs ; -- Query the PDBs
SELECT NAME, CREATED, CDB, CON_ID FROM V$DATABASE;

CDB$ROOT
PDB$SEED

In a CDB: only one CDB$ROOT (root) and only one PDB$SEED (seed),
plus any PDBs that you create or plug in (up to 252 more; 253 including the seed)

Entire CDB => Container ID = 0
Root (CDB$ROOT) => Container ID = 1
Seed (PDB$SEED) => Container ID = 2
PDBs => Container ID = 3 to 254

SQL> SHOW CON_ID
SQL> SHOW CON_NAME
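The container ID mapping above can be checked from the root with V$CONTAINERS, e.g.:

```sql
-- CON_ID 1 = CDB$ROOT, 2 = PDB$SEED, 3+ = user PDBs
SELECT con_id, name, open_mode FROM v$containers ORDER BY con_id;
```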


Backup, Recovery and Flashback for a CDB/PDB
    Perform backup of CDB and PDB
    Perform recovery of CDB and PDB
    Perform Flashback for a CDB

RMAN> alter pluggable database all open; (great command!)
RMAN> alter pluggable database pdb1 close;
RMAN> restore pluggable database pdb1;
RMAN> recover pluggable database pdb1 until SCN 777070;
RMAN> alter pluggable database pdb1 open resetlogs;

srvctl add service ... -pdb pdb_name


Information Lifecycle Management and Storage Enhancements
    Use ILM features
    Perform tracking and automated data placement
    Move a data file online

alter database move datafile 'xxx' to 'yyy';
alter database move datafile 'xxx' to 'yyy' keep; -- keep source file
alter database move datafile 'xxx' to 'yyy' reuse; -- overwrite target file

alter table ... online operations:
alter table dept3 move partition d1 tablespace sysaux online;
Automatic Data Optimization (ADO) with Oracle Database 12c

exec dbms_ilm_admin.set_heat_map_start(start_date => sysdate - 30);

alter system set heat_map=on  scope=both;
alter system set heat_map=off scope=both;

select object_name, segment_write_time, SEGMENT_READ_TIME, FULL_SCAN FROM dba_heat_map_segment WHERE OBJECT_NAME='MY_TABLE' AND OWNER = 'XXXX';
select object_name, track_time, segment_write, full_scan, lookup_scan from DBA_HEAT_MAP_SEG_HISTOGRAM where object_name='MY_TABLE' and owner = 'XXXX';

select compression, compress_for from dba_tables where table_name = 'MY_TABLE' and owner = 'XXXX';
select partition_name, tablespace_name, compression, compress_for from dba_tab_partitions where table_owner = 'FOREX' and table_name = 'EXCHANGE_RATE' order by partition_name;

alter table my_table ilm add policy tier to low_cost_store_ts;
alter table my_table ilm add policy tier to data2 read only after 180 days of no modification; 

alter table my_table ilm add policy row store  compress advanced segment after 7 days of no modification; 
alter table my_table ilm add policy row store  compress advanced row     after 1 day  of no modification; 
alter table my_table ilm add policy row store  compress for OLTP row after 3 days  of no modification; 

alter table my_table ilm add policy column store compress for query   high segment after  30 days of no modification; 
alter table my_table ilm add policy column store compress for archive high segment after 180 days of no modification; 


-- Row Store for fast OLTP
-- Compressed Column Store for fast analytics
-- Archive Compressed Column Store for max compression
-- table policy inherited from tablespace, partition policy inherited from table

alter table my_table ilm delete_all;

select * from dba_ilmparameters;

exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.tbs_percent_used,85);
exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.tbs_percent_free,25);
exec dbms_ilm_admin.customize_ilm(dbms_ilm_admin.tbs_percent_used,90);
 

declare
v_executionid number;
begin
dbms_ilm.execute_ILM (ILM_SCOPE => dbms_ilm.SCOPE_SCHEMA,execution_mode => dbms_ilm.ilm_execution_offline, task_id => v_executionid);
end;
/

select policy_name, action_type, scope, compression_level, tier_tablespace, condition_type, condition_days from user_ILMDataMovementPolicies;


select policy_name, object_name, inherited_from, enabled from user_ilmobjects;
select * from user_ilmpolicies;

select task_id, task_owner, to_char(start_time, 'mm/dd/yy hh24:mi:ss') start_time, to_char(completion_time, 'mm/dd/yy hh24:mi:ss') completion_time from dba_ilmtasks where task_owner != 'SYS';
select task_id, start_time as start_time from user_ilmtasks;

select task_id, job_name, job_state, to_char(start_time, 'mm/dd/yy hh24:mi:ss') start_time, to_char(completion_time, 'mm/dd/yy hh24:mi:ss') completion_time from dba_ilmresults where task_id = 3;
select * from user_ilmresults;


select task_id, policy_name, object_name, selected_for_execution, job_name from user_ilmevaluationdetails where task_id=11;


In-Database Archiving and Valid-Time Temporal
    Differentiate between ILM and Valid-Time Temporal
    Set and use Valid Time Temporal
    Use In-Database archiving


Valid-Time Temporal
alter table hr.emp_temp add period for valid_time;

exec dbms_flashback_archive.enable_at_valid_time('CURRENT');
exec dbms_flashback_archive.enable_at_valid_time('ALL');

select first_name, to_char(valid_time_start,'dd-mon-yyyy') "Start", to_char(valid_time_end,'dd-mon-yyyy') "End"
from hr.emp_temp order by 2;

select first_name, to_char(valid_time_start,'dd-mon-yyyy') "Start", to_char(valid_time_end,'dd-mon-yyyy') "End"
from hr.emp_temp as of period for valid_time to_date('01-JUN-2011') order by 2;

select first_name, to_char(valid_time_start,'dd-mon-yyyy') "Start", to_char(valid_time_end,'dd-mon-yyyy') "End"
from hr.emp_temp versions period for valid_time between to_date('01-SEP-1995') and to_date('01-SEP-1996') order by 2;
Using In-Database Row Archiving
ORA_ARCHIVE_STATE

CREATE TABLE test_tbl( id NUMBER ) ROW ARCHIVAL;
ALTER TABLE test_tbl NO ROW ARCHIVAL;
ALTER TABLE test_tbl ROW ARCHIVAL;

UPDATE test_tbl SET ora_archive_state=DBMS_ILM.ARCHIVESTATENAME(1) WHERE ID IN (1,2);

SELECT ORA_ARCHIVE_STATE, ID FROM test_tbl;

ALTER SESSION SET ROW ARCHIVAL VISIBILITY=ALL;
ALTER SESSION SET ROW ARCHIVAL VISIBILITY=ACTIVE;

DBMS_ILM.ARCHIVESTATENAME(1)


Auditing
    Enable and configure Unified Audit Data Trail
    Create and enable audit policies


before:
SYS.AUD$ for the database audit trail,
SYS.FGA_LOG$ for fine-grained auditing,
DVSYS.AUDIT_TRAIL$ for Oracle Database Vault, Oracle Label Security,

now:
view:  UNIFIED_AUDIT_TRAIL  
tables are in schema: audsys

select count(*) from unified_audit_trail;

select action_name, dbusername from unified_audit_trail where dbusername='HR';

os file location:
$ORACLE_BASE/audit

init.ora: UNIFIED_AUDIT_SGA_QUEUE_SIZE
exec dbms_audit_mgmt.flush_unified_audit_trail

Relink Oracle with the uniaud_on option. 
cd $ORACLE_HOME/rdbms/lib
make -f ins_rdbms.mk uniaud_on ioracle

SELECT VALUE FROM V$OPTION WHERE PARAMETER = 'Unified Auditing';

role:
AUDIT_ADMIN:  audit configuration and audit trail administration, 
AUDIT_VIEWER: viewing and analyzing audit data
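A sketch of separating the audit duties (user names are hypothetical):

```sql
GRANT AUDIT_ADMIN  TO sec_admin;   -- configure policies, manage/purge the trail
GRANT AUDIT_VIEWER TO sec_auditor; -- read-only access to the audit data
```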

ORA_SECURECONFIG policy

select distinct POLICY_NAME from AUDIT_UNIFIED_ENABLED_POLICIES where policy_name='ORA_SECURECONFIG';

select POLICY_NAME, AUDIT_OPTION, CONDITION_EVAL_OPT from AUDIT_UNIFIED_POLICIES where POLICY_NAME='AUD_SYSPRIV_POL';

select POLICY_NAME, AUDIT_OPTION from AUDIT_UNIFIED_POLICIES where policy_name='ORA_SECURECONFIG' order by 2 ;

New functionality:
Conditional Auditing
Ability to Audit Any Role.
Auditing Application Context Values.
Auditing Oracle Database Real Application Security Events.
Auditing Oracle Recovery Manager Events.
Auditing Oracle Database Vault Events.
Auditing Oracle Label Security Events.
Auditing Oracle Data Mining Events.
Auditing Oracle Data Pump Events.
Auditing Oracle SQL*Loader Direct Load Path Events.
Moving Operating System Audit Records into the Unified Audit Trail.
-- rman auditing
select  DBUSERNAME, RMAN_OPERATION from UNIFIED_AUDIT_TRAIL where RMAN_OPERATION is not null;
create audit policy DP_POL actions component=datapump export;
audit policy DP_POL;
select * from AUDIT_UNIFIED_ENABLED_POLICIES  where POLICY_NAME like '%DP%';
select DBUSERNAME, DP_TEXT_PARAMETERS1, DP_BOOLEAN_PARAMETERS1 from UNIFIED_AUDIT_TRAIL where  DP_TEXT_PARAMETERS1 is not null;
EXEC SYS.DBMS_AUDIT_MGMT.FLUSH_UNIFIED_AUDIT_TRAIL
select DBUSERNAME, DP_TEXT_PARAMETERS1, DP_BOOLEAN_PARAMETERS1 from UNIFIED_AUDIT_TRAIL where  DP_TEXT_PARAMETERS1 is not null;
create audit policy aud_syspriv_pol PRIVILEGES select any table, create library
WHEN 'SYS_CONTEXT(''USERENV'',''SESSION_USER'')=''OE''' EVALUATE PER STATEMENT;

create audit policy aud_action_pol ACTIONS select, update, delete on hr.employees;

grant SELECT ANY TABLE to oe;
audit policy aud_syspriv_pol;

select DBUSERNAME, ACTION_NAME, SYSTEM_PRIVILEGE_USED from unified_audit_trail where DBUSERNAME in ('HR','OE');

select POLICY_NAME, AUDIT_OPTION, CONDITION_EVAL_OPT from AUDIT_UNIFIED_POLICIES where POLICY_NAME='AUD_SYSPRIV_POL';
select POLICY_NAME, ENABLED_OPT, USER_NAME, SUCCESS, FAILURE from AUDIT_UNIFIED_ENABLED_POLICIES where POLICY_NAME ='AUD_SYSPRIV_POL';

select UNIFIED_AUDIT_POLICIES, DBUSERNAME, ACTION_NAME,SYSTEM_PRIVILEGE_USED,to_char(EVENT_TIMESTAMP,'DD-MON-YY HH:MI') "DATE"
from unified_audit_trail where DBUSERNAME in ('DEV','OE') and ACTION_NAME not in ('LOGON', 'LOGOFF') order by 4;

create audit policy aud_role_pol ROLES mgr_role;
create audit policy aud_dba_pol  ROLES dba;

audit policy AUD_ROLE_POL WHENEVER SUCCESSFUL;
audit policy AUD_DBA_POL  WHENEVER SUCCESSFUL;

select POLICY_NAME, AUDIT_OPTION, CONDITION_EVAL_OPT from AUDIT_UNIFIED_POLICIES where POLICY_NAME in ('AUD_ROLE_POL','AUD_DBA_POL');

select POLICY_NAME, ENABLED_OPT, USER_NAME, SUCCESS, FAILURE from   AUDIT_UNIFIED_ENABLED_POLICIES where POLICY_NAME in ('AUD_ROLE_POL','AUD_DBA_POL');

select UNIFIED_AUDIT_POLICIES, DBUSERNAME, ACTION_NAME, SYSTEM_PRIVILEGE_USED from unified_audit_trail
where DBUSERNAME in ('JIM','DBA_JUNIOR') and ACTION_NAME not in ('LOGON', 'LOGOFF') 
and (UNIFIED_AUDIT_POLICIES like '%AUD_ROLE_POL%' or UNIFIED_AUDIT_POLICIES like '%AUD_DBA_POL%');

create audit policy aud_mixed_pol ACTIONS create table, drop table, truncate table ROLES storage_role;
audit policy aud_mixed_pol;
noaudit policy aud_mixed_pol;
drop audit policy dp_pol;
select UNIFIED_AUDIT_POLICIES,DBUSERNAME,ACTION_NAME, SYSTEM_PRIVILEGE_USED from unified_audit_trail 
where  DBUSERNAME in ('JIM','DEV') and    UNIFIED_AUDIT_POLICIES like '%AUD_MIXED_POL%'
and    ACTION_NAME not in ('LOGON', 'LOGOFF');

select * from AUDIT_UNIFIED_ENABLED_POLICIES where  POLICY_NAME like '%MIXED%';

exec DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP ( -
     AUDIT_TRAIL_TYPE  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, -
     LAST_ARCHIVE_TIME => sysdate) 
exec DBMS_AUDIT_MGMT.CLEAN_AUDIT_TRAIL( -
     AUDIT_TRAIL_TYPE        => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, -    
     USE_LAST_ARCH_TIMESTAMP => TRUE)

exec DBMS_AUDIT_MGMT.SET_LAST_ARCHIVE_TIMESTAMP ( -
     AUDIT_TRAIL_TYPE  => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, -
     LAST_ARCHIVE_TIME => sysdate)

exec DBMS_AUDIT_MGMT.CREATE_PURGE_JOB (-
     AUDIT_TRAIL_TYPE           => DBMS_AUDIT_MGMT.AUDIT_TRAIL_UNIFIED, -
     AUDIT_TRAIL_PURGE_INTERVAL => 1, -
     AUDIT_TRAIL_PURGE_NAME     => 'Audit_Trail_PJ', -
     USE_LAST_ARCH_TIMESTAMP    => TRUE)

select JOB_NAME, STATUS, ACTUAL_START_DATE from dba_scheduler_job_run_details where JOB_NAME='AUDIT_TRAIL_PJ' order by ACTUAL_START_DATE;


Privileges
    Use administrative privileges
    Create, enable and use privilege analysis

notes:
SYSBACKUP - Admin privilege that allows running RMAN backups (you don't need SYSDBA for RMAN)
SYSDG - Admin privileges for Data Guard activities
SYSKM - Admin privileges for Transparent Data Encryption (TDE) wallet 
AUDIT_ADMIN 
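For example, a backup operator no longer needs SYSDBA (user name hypothetical):

```sql
GRANT SYSBACKUP TO bkpadm;
-- connect with: rman target '"bkpadm@cdb1 AS SYSBACKUP"'
```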


Oracle Data Redaction
    Use and manage Oracle Data Redaction policies



RMAN and Flashback Data Archive
    Use RMAN enhancements
    Implement the new features in Flashback Data Archive

rman enhancements: 
1): SYSBACKUP privilege
2): direct SQL support
RMAN> create table test_tbl(id number);
RMAN> select * from test_tbl;
RMAN> desc test_tbl
12c Flashback Data Archive New Features

FDA can now be used with HCC-compressed tables on Exadata and other Oracle storage.

FLASHBACK ARCHIVE object privilege
FLASHBACK ARCHIVE ADMINISTER privilege

CREATE TABLESPACE fda_1yr DATAFILE '... ...';
CREATE FLASHBACK ARCHIVE FDA1 TABLESPACE fda_1yr QUOTA 10G RETENTION 1 YEAR;
CREATE FLASHBACK ARCHIVE FBA_NO_OPTIMIZATION TABLESPACE FBA_NO_OPTIMIZATION RETENTION 1 MONTH NO OPTIMIZE DATA;
CREATE FLASHBACK ARCHIVE FBA_OPTIMIZATION TABLESPACE FBA_OPTIMIZATION RETENTION 1 MONTH OPTIMIZE DATA;

ALTER FLASHBACK ARCHIVE FDA1 MODIFY RETENTION 2 YEAR;
ALTER FLASHBACK ARCHIVE FDA1 PURGE ALL;
DROP FLASHBACK ARCHIVE FDA1;

create table tb_products(prod_id number, prod_desc varchar(50)) flashback archive;
CREATE TABLE T_NO_OPTIM (ID NUMBER(19), DESCRIPTION VARCHAR2(42)) FLASHBACK ARCHIVE FBA_NO_OPTIMIZATION;
CREATE TABLE T_OPTIM (ID NUMBER(19), DESCRIPTION VARCHAR2(42)) FLASHBACK ARCHIVE FBA_OPTIMIZATION;

ALTER TABLE emp FLASHBACK ARCHIVE FDA1;
SELECT * FROM emp AS OF TIMESTAMP SYSDATE - 180;
ALTER TABLE emp NO FLASHBACK ARCHIVE;

select TABLE_NAME, FLASHBACK_ARCHIVE_NAME, ARCHIVE_TABLE_NAME, STATUS from DBA_FLASHBACK_ARCHIVE_TABLES;

select * from DBA_FLASHBACK_ARCHIVE;
select * from DBA_FLASHBACK_ARCHIVE_TABLES;
select * from DBA_FLASHBACK_ARCHIVE_TS;

What are its limitations so far?
It cannot be enabled on nested, clustered, temporary, remote, or external tables
It cannot be enabled on tables with LONG or nested table columns
ORA-65131: The feature Flashback Data Archive  is not supported in a pluggable database.

12c new features:
exec dbms_flashback_archive.set_context_level(level => 'ALL'); -- track session context

select xid, operation, prod_id, prod_desc from SYS_FBA_HIST_91091;

select s.authenticated_identity, s.host, s.module, h.operation, h.prod_id, h.prod_desc
from sys.sys_fba_context_aud s, msc.sys_fba_hist_91091 h where h.xid=s.xid;

-- or get_sys_context function

declare
  v_xid RAW(8);
  output varchar2(32767);
    begin  
    --Set the variable v_xid with the XID value
    select xid into v_xid from msc.sys_fba_hist_91091 where xid='05001100D6020000';
  
    --Displays the username
    output:= dbms_flashback_archive.get_sys_context (v_xid, 'USERENV', 'SESSION_USER');
    dbms_output.put_line('User: '|| output);
  
    --Displays the hostname of the terminal from where the user is connected
    output:= dbms_flashback_archive.get_sys_context (v_xid,'USERENV','HOST');
    dbms_output.put_line('Hostname: '||output);
  
    --Displays the module from where the user is connected
    output:= dbms_flashback_archive.get_sys_context (v_xid,'USERENV','MODULE');
    dbms_output.put_line('Module: '||output);
  end;
/
-- hardening
exec dbms_flashback_archive.register_application(application_name=>'PRODUCTS',flashback_archive_name=>'FBA1');

exec dbms_flashback_archive.add_table_to_application(application_name=>'PRODUCTS',table_name=>'TB_PRODUCTS',schema_name=>'MSC');
exec dbms_flashback_archive.add_table_to_application(application_name=>'PRODUCTS',table_name=>'TB_PROVIDERS',schema_name=>'MSC');


select a.appname, b.object_name, c.obj# from SYS_FBA_APP a, DBA_OBJECTS b, SYS_FBA_APP_TABLES c
where a.app#=c.app# and b.object_id=c.obj#;

exec dbms_flashback_archive.enable_application(application_name=>'PRODUCTS');

select table_name, flashback_archive_name, archive_table_name, status from DBA_FLASHBACK_ARCHIVE_TABLES;

exec dbms_flashback_archive.create_temp_history_table('MSC', 'TB_PRODUCTS');
exec DBMS_FLASHBACK_ARCHIVE.IMPORT_HISTORY('MSC', 'TB_PRODUCTS', 'TEMP_HISTORY');



Real-Time Database Operation Monitoring
    Implement real-time database operation monitoring



SQL Tuning
    Use Adaptive Execution Plans
    Use enhanced features of statistics gathering
    Use Adaptive SQL Plan Management

Create EXTENDED Statistic Group

select dbms_stats.create_extended_stats('SH','CUSTOMERS', '(country_id,cust_state_province)') from dual;

Manage the new statistics; drop extended statistics when no longer needed:
exec dbms_stats.gather_table_stats('SH','CUSTOMERS', method_opt =>'for all columns size skewonly');

exec dbms_stats.drop_extended_stats('SH', 'CUSTOMERS', '(country_id, cust_state_province)');

select column_name, num_distinct, histogram from user_tab_col_statistics where table_name = 'CUSTOMERS';
Adaptive query optimization allows the optimizer to adjust execution plans at run time when additional/better information is available.
--> The optimizer peeks at user-defined bind values during plan selection on the hard parse.

--> Adaptive plans require the following parameters:
OPTIMIZER_FEATURES_ENABLE=12.1.0.1 (or later)
OPTIMIZER_ADAPTIVE_REPORTING_ONLY=FALSE
(set OPTIMIZER_ADAPTIVE_REPORTING_ONLY=TRUE to collect adaptive-plan information without actually changing plans)
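With those settings in place, the adaptive portions of a cursor's plan can be inspected with DBMS_XPLAN's +ADAPTIVE format (a sketch, shown for the session's last statement):

```sql
SELECT * FROM TABLE(
  DBMS_XPLAN.DISPLAY_CURSOR(format => '+ADAPTIVE'));
-- inactive rows of the adaptive plan are marked with a leading '-'
```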

Dynamic Statistics (called dynamic sampling in 10g/11g)
was previously used only in the absence of stats on one of the tables in a multi-table join;
now it is also used when existing statistics are not sufficient.
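Dynamic statistics are controlled by the sampling level; level 11 (new in 12c) lets the optimizer decide automatically, e.g.:

```sql
ALTER SESSION SET optimizer_dynamic_sampling = 11; -- 11 = automatic dynamic statistics
```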

PARALLEL_DEGREE_POLICY = ADAPTIVE

-- ---------------------------------------

-- to deactivate bind peeking you can set:
alter system set "_OPTIM_PEEK_USER_BINDS"=FALSE;

select sql_id, peeked, executions, rows_processed, cpu_time
from v$sql_cs_statistics;
(shows whether the peeked value was used on the 2nd+ execution)

-- Bind Peeking
select sql_id , executions,is_bind_sensitive,is_bind_aware from v$sql;
When the DBMS_STATS CONCURRENT preference is enabled, DBMS_STATS.GATHER_TABLE_STATS on a partitioned table gathers statistics for multiple partitions concurrently, using scheduler jobs
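Concurrency is enabled through a DBMS_STATS preference (in 12c the values are OFF/MANUAL/AUTOMATIC/ALL rather than TRUE/FALSE); a sketch:

```sql
EXEC DBMS_STATS.SET_GLOBAL_PREFS('CONCURRENT', 'ALL');
-- partitions of SH.SALES are now gathered by concurrent scheduler jobs
EXEC DBMS_STATS.GATHER_TABLE_STATS('SH', 'SALES');
```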


Emergency Monitoring, Real-Time ADDM, Compare Period ADDM, and Active Session History (ASH) Analytics
    Perform emergency monitoring and real-time ADDM
    Generate ADDM Compare Period
    Diagnose performance issues using ASH enhancements



Resource Manager and Other Performance Enhancements
    Use Resource Manager for a CDB and PDB
    Explain Multi-process Multi-threaded Oracle architecture
    Use Flash Cache

note:
Use Flash Cache
Solaris, Oracle Linux
DB_FLASH_CACHE_FILE = /dev/sda, /dev/sdb, /dev/sdc
DB_FLASH_CACHE_SIZE = 32G, 32G, 64G

V$FLASHFILESTAT



Index and Table Enhancements
    Use Index enhancements
    Use Table enhancements
    Use Online operation enhancements

notes:
The Invisible Index

alter session set optimizer_use_invisible_indexes = true;


create index deptno_invisible_idx on dept_rich(deptno) invisible;
alter index dept_rich_inv_idx visible;
alter index dept_unique1 invisible;

select index_name,visibility from dba_indexes ;

select a.table_name, a.index_name,
b.column_name, a.uniqueness, a.visibility
from user_indexes a, user_ind_columns b
where a.index_name = b.index_name
and a.table_name = 'DEPT';

select column_name, num_distinct, histogram
from user_tab_col_statistics where table_name = 'CUSTOMERS';

alter table emp modify (sal invisible, comm invisible);
ALTER TABLE EMP MODIFY (SAL VISIBLE, COMM VISIBLE );

SQL*Plus: SET COLINVISIBLE ON --> DESC then shows invisible columns

Invisible columns are still selectable when named explicitly:
SELECT SAL, JOB, ENAME, DEPTNO FROM emp;
DDL_LOCK_TIMEOUT - 11g
alter session set DDL_LOCK_TIMEOUT = 30

Enhanced DDL Capabilities - 12c
DROP INDEX ONLINE
DROP CONSTRAINT ONLINE
SET UNUSED COLUMN ONLINE
ALTER INDEX VISIBLE
ALTER INDEX INVISIBLE
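For example (object names hypothetical):

```sql
DROP INDEX hr.emp_name_idx ONLINE;               -- DML on the table continues during the drop
ALTER TABLE hr.emp SET UNUSED (comments) ONLINE; -- mark a column unused without blocking DML
```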


ADR and Network Enhancements
    Explain ADR enhancements



Oracle Data Pump, SQL*Loader, External Tables and Online Operations Enhancements
    Use Oracle Data Pump enhancements
    Use SQL*Loader and External table enhancements

impdp command line option (or use DBMS_DATAPUMP)

impdp hr TABLES=hr.employees DIRECTORY=dpump_dir1 DUMPFILE=hr.dmp TRANSFORM=TABLE_COMPRESSION_CLAUSE:NOCOMPRESS
impdp scott2/tiger TABLES=dept2 TRANSFORM=TABLE_COMPRESSION_CLAUSE:compress:table 

Use the TABLE_COMPRESSION_CLAUSE:
TABLE_COMPRESSION_CLAUSE=NONE
TABLE_COMPRESSION_CLAUSE=NOCOMPRESS
TABLE_COMPRESSION_CLAUSE=COMPRESS BASIC
TABLE_COMPRESSION_CLAUSE=ROW STORE COMPRESS ADVANCED

impdp scott2/tiger VIEWS_AS_TABLES=schema.view_name:table_name

expdp scott2/tiger VIEWS_AS_TABLES=emp_dept


Partitioning Enhancements
    Explain Partitioning enhancements
    Explain Index enhancements for partitioned tables



SQL Enhancements
    Use Oracle Database Migration Assistant for Unicode
    Use Row limiting clause, and secure file LOBs enhancements
    Configure extended datatypes

DMU: Database Migration Assistant for Unicode, replacing CSSCAN and CSALTER (desupported)
select * from my_table order by 1
FETCH FIRST 10 ROWS ONLY;

There is no "FETCH LAST" clause: sort in reverse order instead
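For example, the last 10 rows by the sort key:

```sql
SELECT * FROM my_table ORDER BY 1 DESC FETCH FIRST 10 ROWS ONLY;
```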

select * from my_table order by 1
OFFSET 10 ROWS FETCH NEXT 20 ROWS ONLY;

select * from my_table order by 1
FETCH NEXT 3 PERCENT ROWS ONLY;

init.ora db_securefile=permitted

BasicFile:
create table contracts_basic
(
        contract_id     number(12),
        contract_name   varchar2(80),
        file_size       number,
        orig_file       blob
)
tablespace users
lob (orig_file)
(
        tablespace users
        enable storage in row
        chunk 4096
        pctversion 20
        nocache
        nologging
);

-- SecureFile need ASSM tablespace
-- init.ora: db_securefile=permitted

-- Deduplication feature
alter table contracts_sec modify lob(orig_file) (deduplicate);
alter table contracts_sec modify lob(orig_file) (keep_duplicates);
alter table contracts_sec modify lob(orig_file) (compress high);
create table docs ( doc_id  number, clearance varchar2(20), doc_body clob) LOB(doc_body) store as securefile ( compress low );
alter table docs modify LOB(doc_body) ( nocompress );
alter table docs modify LOB(doc_body) ( compress low );
alter table contracts_sec modify lob(orig_file) (encrypt using 'AES128');
alter table contracts_sec modify lob(orig_file) (cache);

select table_name, column_name, encryption_alg from dba_encrypted_columns;

sqlnet.ora: ENCRYPTION_WALLET_LOCATION=
   (SOURCE=
       (METHOD=FILE)
       (METHOD_DATA=
          (DIRECTORY=/opt/oracle/orawall)))

alter system set encryption key identified by "mypass";
alter system set encryption wallet open identified by "mypass";

create table contracts_sec 
(
        contract_id     number(12),
        contract_name   varchar2(80),
        file_size       number,
        orig_file       blob
)
tablespace users
lob (orig_file)                            
store as securefile
(
        tablespace users
        enable storage in row
        chunk 4096
        pctversion 20
        nocache
        LOGGING | NOLOGGING | FILESYSTEM_LIKE_LOGGING
)
/
init.ora: MAX_STRING_SIZE=STANDARD|EXTENDED (EXTENDED raises the limit to 32767 bytes)

With STANDARD (the default): VARCHAR2(4000), NVARCHAR2(4000), RAW(2000)
With EXTENDED: VARCHAR2, NVARCHAR2, and RAW up to 32767 bytes

Extended columns are internally stored as SecureFiles LOBs or BasicFile LOBs, but the DBMS_LOB package cannot be used on them
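Switching MAX_STRING_SIZE to EXTENDED is a one-way change and, per the 12c documentation, must be done in UPGRADE mode followed by utl32k.sql. A sketch of the documented sequence:

```sql
-- one-way change: EXTENDED cannot be switched back to STANDARD
SHUTDOWN IMMEDIATE
STARTUP UPGRADE
ALTER SYSTEM SET max_string_size = EXTENDED SCOPE = SPFILE;
@?/rdbms/admin/utl32k.sql
SHUTDOWN IMMEDIATE
STARTUP

-- afterwards, declarations beyond the old 4000/2000-byte limits are allowed:
CREATE TABLE big_text (t VARCHAR2(32767));
```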

top   prev   next  

Key DBA Skills


Core Administration
    Explain the fundamentals of DB architecture
    Install and configure a database
    Configure server and client network for a database
    Monitor database alerts
    Perform daily administration tasks
    Apply and review patches
    Back up and recover the database
    Troubleshoot network and database issues
    Detect and repair data failures with Data Recovery Advisor
    Implement Flashback Technology
    Load and Unload Data
    Miscellaneous
        Relocate SYSAUX occupants
        Create a default permanent tablespace
        Use the Redo Logfile Size Advisor
        Use Secure File LOBs
        Use Direct NFS


top   prev   next  

Performance Management
    Design the database layout for optimal performance
    Monitor performance
    Manage memory
    Analyze and identify performance issues
    Perform real application testing
    Use Resource Manager to manage resources
    Implement Application Tuning

alter system set RESOURCE_LIMIT = TRUE CONTAINER = ALL;
(dynamically enable resource limits for all containers )

alter system set RESOURCE_LIMIT = TRUE CONTAINER = CURRENT;
(dynamically enables resource limits for the root only; CONTAINER = CURRENT is the default,
so ALTER SYSTEM SET RESOURCE_LIMIT=TRUE is equivalent)

SELECT se.sid sess_id, co.name consumer_group,
       se.state, se.consumed_cpu_time cpu_time,
       se.cpu_wait_time, se.queued_time
FROM   v$rsrc_session_info se, v$rsrc_consumer_group co
WHERE  se.current_consumer_group_id = co.id;
BEGIN DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
     plan => 'newcdb_plan',
     pluggable_database => 'pdb1',
     shares => 3,
     utilization_limit=> 70,
     parallel_server_limit=> 70);
END;
/
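A CDB plan directive only takes effect inside a plan created and submitted through a pending area, then activated in the root; a sketch of the full workflow (the plan and PDB names follow the example in these notes):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(
    plan    => 'newcdb_plan',
    comment => 'CDB plan allocating shares to PDBs');
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan                  => 'newcdb_plan',
    pluggable_database    => 'pdb1',
    shares                => 3,
    utilization_limit     => 70,
    parallel_server_limit => 70);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
END;
/

-- activate the plan in the root
ALTER SYSTEM SET resource_manager_plan = 'newcdb_plan';
```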

SELECT PLAN, STATUS, COMMENTS FROM DBA_CDB_RSRC_PLANS ORDER BY PLAN;

SELECT PLAN, PLUGGABLE_DATABASE, SHARES,UTILIZATION_LIMIT, PARALLEL_SERVER_LIMIT
FROM DBA_CDB_RSRC_PLAN_DIRECTIVES ORDER BY PLAN;

SELECT DB_NAME, CON_ID, PDB_NAME, OPERATION,OP_TIMESTAMP, CLONED_FROM_PDB_NAME
FROM CDB_PDB_HISTORY WHERE CON_ID > 2 ORDER BY CON_ID; 

top   prev   next  

Storage
    Manage database structures
    Administer ASM
    Manage ASM disks and diskgroups
    Manage ASM instance
    Manage VLDB
    Implement Space Management

Flex ASM in Oracle 12c
Database instances (ASM clients) are automatically reconnected to another
ASM instance if their Oracle ASM instance fails. If necessary, clients can
also be relocated manually, e.g.:
SQL> ALTER SYSTEM RELOCATE CLIENT 'client-id';
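Before relocating a client manually, the connected clients can be identified from the ASM instance; a sketch using V$ASM_CLIENT:

```sql
-- run in the Flex ASM instance: lists connected database clients
-- (instance_name:db_name forms the client identifier used by RELOCATE CLIENT)
SELECT instance_name, db_name, status
FROM   v$asm_client;
```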

top   prev   next  

Security
    Develop and implement a security policy
    Configure and manage auditing
    Create the password file
    Implement column and tablespace encryption

Others

init.ora
PGA_AGGREGATE_LIMIT --> new in 12c (a hard limit on total PGA usage)
PGA_AGGREGATE_TARGET --> pre-12c (a target, not a hard limit)
init.ora
show parameter result_cache

V$RESULT_CACHE_STATISTICS  Displays cache settings and memory usage statistics, helping you determine the memory currently allocated to the result cache.

V$RESULT_CACHE_MEMORY

V$RESULT_CACHE_OBJECTS

V$RESULT_CACHE_DEPENDENCY
plsql enhancement:

RELIES_ON : 
FUNCTION GetDeptInfo (dept_id NUMBER) RETURN DeptInfoRec RESULT_CACHE RELIES_ON (EMP); 

The RELIES_ON Clause specifies tables or views that the Function Results are dependent on
HEAT MAP --> frequency of data usage --> storage
Automatic Data Optimization with Heat Map
q: scheduling operations in a pluggable database (PDB)
1): Scheduler Attributes setting only at CDB Level
2): Scheduler Objects for a PDB --> expdp/impdp ok
3): Can be created by common user or pdb user at CDB or PDB level
4): A job defined in a PDB can be run even when PDB is not open
Improve rman backup performance:
if asynchronous I/O is not available on the platform, set dbwr_io_slaves to a nonzero value to simulate it
remove the rate parameter, if specified, in the allocate channel command
use compression provided by media manager rather than use RMAN compression

q: rman list command:
stored scripts in the recovery catalog
backup sets and image copies that are obsolete
backups of tablespaces
not list for: available archived redo log files
not list for: backups that are marked obsolete according to the current retention policy 

Q: RECOVER DATABASE USING BACKUP CONTROLFILE UNTIL CANCEL;
The current online redo log file is missing.
A data file belonging to a noncritical tablespace is missing.
not: All the control files are missing.

Q: rman table level recover (in order):
1. Determine which backup contains the table that needs to be recovered.
2. Issue the RECOVER TABLE command with an auxiliary destination defined and the point in time specified.
3. RMAN creates a Data Pump export dump file containing the recovered table from the auxiliary instance.
4. RMAN imports the Data Pump export dump file into the target database.
-- Q: compare performance between two time periods:
Automatic Database Diagnostic Monitor (ADDM) Compare Period report
-- ------------------
sqlplus:
DESCRIBE employees
SET COLINVISIBLE ON
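COLINVISIBLE controls whether DESCRIBE shows invisible columns; a sketch with a hypothetical demo_inv table:

```sql
CREATE TABLE demo_inv (a NUMBER, b NUMBER INVISIBLE);

-- SQL> DESCRIBE demo_inv       -- lists only A
-- SQL> SET COLINVISIBLE ON
-- SQL> DESCRIBE demo_inv       -- now also lists B, marked INVISIBLE
```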
-- ------------------

SET DOCUMENT OFF
-- controls whether DOC> documentation blocks are displayed in sqlplus
-- ------------------

SELECT DISTINCT namespace, object_type
FROM DBA_OBJECTS
ORDER BY namespace;

-- -------------------

-- define as identity column
CREATE TABLE oca_ident1 (
 id NUMBER GENERATED BY DEFAULT AS IDENTITY,
 memo VARCHAR2 (30));
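With GENERATED BY DEFAULT, the sequence supplies the value only when the column is omitted; a usage sketch against the table above:

```sql
-- identity value generated automatically
INSERT INTO oca_ident1 (memo) VALUES ('auto id');

-- BY DEFAULT (unlike GENERATED ALWAYS) still permits an explicit value
INSERT INTO oca_ident1 (id, memo) VALUES (100, 'explicit id');

SELECT id, memo FROM oca_ident1;
```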

-- -------------------

ALTER TABLE products READ ONLY;
ALTER TABLE products READ WRITE;

-- -------------------

set transaction read only;
-- -------------------

select db_unique_name, cdb from v$database;

-- -------------------

ALTER DATABASE MOVE DATAFILE '/disk1/oradata/DB01/userdata2.dbf' TO '+ORADATA' KEEP; --> keep source file

ALTER DATABASE MOVE DATAFILE '/disk1/oradata/DB01/userdata2.dbf' TO '+ORADATA' REUSE; --> if target file exists, reuse it
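MOVE DATAFILE is an online operation in 12c; the result can be checked afterwards in DBA_DATA_FILES:

```sql
-- verify the new location and status after the online move
SELECT file_name, online_status
FROM   dba_data_files
WHERE  file_name LIKE '%userdata2%';
```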

DBA_THRESHOLDS: lists the threshold settings defined for server-generated alerts

-- --------------------
CREATE OR REPLACE TRIGGER alert_on_resumable_error
AFTER SUSPEND ON DATABASE
DECLARE
-- Declare any variables
BEGIN
-- Add space allocation code
-- Code to Notify DBA.
COMMIT;
END;
/
1):
LOG_ARCHIVE_DUPLEX_DEST: an additional location for redundant archived redo logs
LOG_ARCHIVE_MIN_SUCCEED_DEST: the minimum number of destinations that must receive redo successfully before the log can be reused

2):
CREATE SEQUENCE session_seq START WITH 1 INCREMENT BY 1 SESSION;
CREATE SEQUENCE global_seq START WITH 1 INCREMENT BY 1 GLOBAL;
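A SESSION sequence returns values that are private to the session, which makes it a natural pairing with a global temporary table; a sketch using the session_seq above and a hypothetical gtt_demo table:

```sql
-- session-private rows keyed by a session-private sequence
CREATE GLOBAL TEMPORARY TABLE gtt_demo (id NUMBER, txt VARCHAR2(30))
ON COMMIT PRESERVE ROWS;

INSERT INTO gtt_demo VALUES (session_seq.NEXTVAL, 'session-private row');
```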



THREADED_EXECUTION

DNFS_ENABLE=TRUE