
ORACLE 12c DBA HANDS-ON

OPERATING SYSTEM FUNCTIONALITY

[Diagram: USER <-> RAM <-> HARD DISK]

1. Whenever a user sends a request (like clicking on a video file, opening a Word document
etc.), the OS will first search for the data in RAM. If the information is available, it will
be given to the user. If the data is not available in RAM, the OS will search the hard disk
and will copy that info to RAM. Once the data has been copied to RAM, it will be
given to the user.
2. The request and response between RAM and hard disk is called an I/O operation. More
I/O operations will decrease the performance of the system.
3. Accessing data from memory is always faster than accessing it from disk.
4. A system is said to perform well when its response time is low. This is possible when
fewer I/O operations are done.

Questions:

1. Why copy data to RAM?
a. It benefits subsequent users who request the same data.
2. What happens if the data size is more than the RAM size?
a. The data will be split into parts before copying to RAM. The first part will be
copied to RAM and, once it has been read, it will be moved to the
swap area on disk so that the next part of the data can be copied in. This is called
swapping (in Windows, this is called paging).
3. How is the data managed in RAM?
a. Using the swapping method, which in turn uses the Least Recently Used (LRU)
algorithm. The LRU algorithm flushes inactive data from RAM to the swap area.

ORACLE DATABASE FUNCTIONALITY

[Diagram: USER <-> INSTANCE <-> DATABASE]

1. The functionality of the Oracle database is similar to that of an operating system.


2. Whenever a user sends a request (like a select statement), Oracle will first search for
the data in the instance. If the data is available, it will be given to the user. If the data
is not available in the instance, Oracle will search in the database and will copy that
info to the instance. Once the data has been copied to the instance, it will be
given to the user.
3. The instance also follows the LRU algorithm.

[Diagram: INSTANCE = memory structures + background processes; DATABASE = logical structures + physical structures]
ORACLE 10g DATABASE ARCHITECTURE
[Architecture diagram: SGA (shared pool with library cache and data dictionary cache; database buffer cache with LRU list, MRU end and write list; log buffer cache; large pool; Java pool; streams pool), background processes (SMON, PMON, DBWRn, LGWR, CKPT, ARCHn), user process, server process with PGA, and the physical files that make up the DATABASE (parameter file, password file, datafiles, redolog files, control files, archived redolog files)]
INTERNALS OF USER CONNECTIVITY

1. Whenever a user starts an application, a user process will be created on the client side.
E.g.: when we start a sqlplus window, the sqlplusw.exe process is started in the
background.
2. This user process will send a request to establish a connection to the database by
providing login credentials like username and password.
3. On the database side, a component called the Listener will accept the connection and will
hand over the user credentials (username, password) to one of the background processes,
called PMON (process monitor).
4. PMON will first check for base tables in the data dictionary cache (of the shared pool). Base
tables are the ones which store data related to the database, like tables info, user info
etc.
5. PMON will copy the required base tables to the data dictionary cache. Once copied, PMON
will authenticate the user with the help of the base tables.
6. Once authenticated, PMON will send either a success or failure acknowledgement,
depending on the authentication result.
7. If the connection is successful, PMON will create a server process. The memory allocated
to the server process is called the PGA (Program Global Area).

Note: A process consumes some memory and CPU to complete its work. In an Oracle database
too, every process consumes memory and CPU. The memory area for the server process is
exclusively named the PGA.

8. The server process is the one which does work on behalf of the user process.

E.g.: when a user issues a select statement, the processing and data retrieval will be done by the
server process.
BASE TABLES
1. Base tables are the components which store database-related information like
tables info, user info, indexes info etc., which is helpful for database operation. This
info is also called dictionary information.
2. Base tables are named in the form XXX$ (i.e. name suffixed with a $ sign) and reside
in the SYSTEM tablespace.
3. Information in base tables is stored in a cryptic format.
4. Oracle processes will create/delete/modify base tables. A manual attempt to modify
base tables may lead to database corruption.
5. Base tables are created at the time of database creation using the SQL.BSQ script.
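As an illustration, a base table can be peeked at read-only from a SYSDBA session (never
modify these tables directly):

SQL> connect / as sysdba
SQL> select user#, name from user$ where rownum <= 5;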

VIEWS ON BASE TABLES


1. As the data in base tables is in a cryptic format, Oracle has provided two types of views
for the DBA to access the data
a. Data dictionary views, which are in the format dba_XXX (name prefixed
with dba_) and provide permanent info about the database
Eg: how many users are created in the database?
b. Dynamic performance views, which are in the format v$XXX (name
prefixed with v$) and provide ongoing (current) activity of the database
Eg: how many users are connected to the database right now?
2. These views are created after database creation by executing the CATALOG.SQL script.
3. Other procedures and packages which are helpful for the DBA are created
using the CATPROC.SQL script.
4. CATALOG.SQL and CATPROC.SQL are executed automatically if we create the database
using DBCA (an automated tool); we need to execute them manually if we are creating
the database manually.
5. These two scripts reside in the ORACLE_HOME/rdbms/admin location.
ORACLE_HOME is the location where we install the Oracle software.
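For example, the two view families answer the two example questions above (standard
dictionary views, queried from a privileged session):

SQL> select username, created from dba_users;        -- permanent info
SQL> select username, status from v$session
     where username is not null;                     -- current activity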
PHASES OF SQL EXECUTION
1. Any SQL statement will undergo the following phases to get executed
a. PARSING: This phase has the below sub-phases
i. Syntax checking
ii. Semantic checking, i.e. checking for the privileges using base tables
iii. Dividing the statement into literals (converting it to a parse tree)
b. EXECUTION: This phase has the below sub-phases
i. Converting the statement into ASCII format
ii. Compilation
iii. Execution
c. FETCH: Data will be retrieved in this phase by performing either a logical read or
a physical read
2. Retrieving data from the instance is called a logical read (buffer read or buffer get) and
getting data from disk is called a physical read (disk read or buffer miss).
3. Data in the database is stored in the form of blocks. The standard block size is 8k.
4. Depending on the size of the table, either one block can hold data of multiple tables or one
table's data can spread across multiple blocks.
Eg: If a table's size is 32kb, it will spread across 4 blocks. If a table's size is 3kb, it will
occupy only one block, leaving the remaining space in that block for another table.

Note: For a PL/SQL program, BINDING happens after the PARSING phase (so it has 4
phases to go through).
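A simple way to see logical vs physical reads, assuming the SCOTT demo schema is installed
and the session has the PLUSTRACE role:

SQL> set autotrace on statistics
SQL> select count(*) from scott.emp;
-- in the statistics, "consistent gets" are logical reads and "physical reads" are disk reads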
SELECT STATEMENT PROCESSING

1. The user process will issue the statement.
2. The server process will receive the statement and send it to the library cache (of the shared
pool).
3. The 1st phase, i.e. parsing, will take place in the library cache. For the first time, the server
process will copy the required base tables into the data dictionary cache to complete semantic
checking.
4. After parsing, the OPTIMIZER (an algorithm residing in the library cache) will generate
execution plans (plans on how to fetch the data from the table) and it will choose the
best one among them based on response time and resource (CPU and memory)
utilization.
5. The server process will then take the parsed tree to the PGA, where the 2nd phase, i.e.
execution, will take place.
6. After execution, the server process will scan the LRU list for the data from bottom to top. If it
finds the data, it will be given to the user. If it doesn't find any data, it will search the
datafiles and will copy those blocks to the top portion (MRU end) of the LRU list. After
copying, the data will be given to the user if filtering is not required.
7. In case filtering is required, blocks will be copied to the PGA and the data will get filtered. The
server process will then give the filtered data to the user.

Note: for statements issued a second time, the parsing and fetch phases are skipped, subject
to the availability of the data and the parsed statement in the instance.
LOGICAL STRUCTURES OF DATABASE

1. The following are the logical structures of the database and are helpful for easy
manageability of the database
a. TABLESPACE - an alias name for a group of datafiles (or) a group of segments (or)
a space where tables reside
b. SEGMENT - a group of extents (or) an object that occupies space
c. EXTENT - a group of Oracle data blocks (or) a memory unit allocated to the object
d. ORACLE DATA BLOCK - the basic unit of data storage (or) a group of operating
system blocks

2. The following tablespaces are mandatory in a 10g database
a. SYSTEM - stores base tables (dictionary information)
b. SYSAUX - auxiliary tablespace to SYSTEM, which also stores base tables
required for reporting purposes
c. TEMP - used for performing sort operations
d. UNDO - used to store before images, helpful for rollback of transactions

Note: Oracle 9i has all the above tablespaces except SYSAUX. SYSAUX was introduced
in 10g to reduce the burden on the SYSTEM tablespace.

Note: Generally, to store user data, we will create a separate tablespace


DML STATEMENT PROCESSING

1. The server process performs parsing in the library cache and execution in the PGA, just like
for a select statement.
2. The server process will search for the data in the LRU list. If the data is available, it will copy
an undo block from the undo tablespace. If the data is not available, it will copy both the data
block and an undo block from the respective tablespaces.
3. Both blocks will again be copied to the PGA. The server process will modify the values in
both blocks and, as new values are inserted, they are called modified blocks or
dirty blocks.
4. As two blocks are modified, we say two changes have happened in the database.
Oracle will keep a record of these changes in the form of a redo entry.
5. A redo entry is a log that maintains the below information whenever a change happens
in the database:
SCN, date and time, table name, tablespace name, block id, new value,
committed or not committed etc.

Note: The SCN (system change number) is a unique sequence number generated by Oracle
whenever a transaction modifies data in the database.

6. Redo entries that are generated in the PGA will be copied to the log buffer cache by the
server process.
7. After that, the dirty blocks will be copied to the write list.
8. LGWR will write the redo entries from the log buffer cache to the redolog files. Then, DBWR
will write the dirty blocks from the write list to the corresponding datafiles.

Note: LGWR writing before DBWR is called WRITE-AHEAD protocol
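Redo generation can be observed system-wide through standard statistics (run before and
after a DML statement and compare the values):

SQL> select name, value from v$sysstat
     where name in ('redo entries', 'redo size');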

DDL STATEMENT PROCESSING

1. DDL statement processing is the same as DML processing, because DDL statements modify
base tables with insert/delete/update statements.
ARCHITECTURE
COMPONENTS OF ORACLE DATABASE ARCHITECTURE

SERVER PROCESS - It is the process which does work on behalf of the user process on the
server side.

PROGRAM GLOBAL AREA (PGA)

1. Memory area allocated to the server process.
2. Used to perform execution of the SQL statement & to store session information.
3. The size of the memory allocated is defined using PGA_AGGREGATE_TARGET.
4. Sorting will take place in the PGA if the data is small. This is called an in-memory sort.
5. If the data size is larger than the sort area size of the PGA, Oracle will use both the PGA and
the TEMP tablespace, which requires a number of I/Os, and database performance will
degrade accordingly.
ORACLE INSTANCE - It is a combination of memory structures and background processes
which help in database operations.

SHARED GLOBAL AREA (SGA) - It is the memory area which contains several memory caches
helpful in reading and writing data.

SHARED POOL

1. The shared pool contains the following components
a. Library cache - it contains shared SQL & PL/SQL statements
b. Data dictionary cache - it contains dictionary information in the form of rows,
hence it is also called the row cache
2. The size of the shared pool is defined using SHARED_POOL_SIZE

DATABASE BUFFER CACHE

1. It is the memory area where a copy of the data is placed in the LRU list and dirty blocks
are stored in the WRITE list
2. The size of the DBC is defined using DB_CACHE_SIZE

LOG BUFFER CACHE - It is the memory area where a copy of the redo entries is maintained. Its
size is defined by LOG_BUFFER.

Note: The LBC should be allotted the smallest size of any memory component in the SGA.

LARGE POOL

1. The large pool is used efficiently at the time of RMAN backup.
2. The large pool can dedicate some of its memory to the shared pool and gets it back whenever
the shared pool is observing less free space.
3. Its size is defined using LARGE_POOL_SIZE.
JAVA POOL - Java pool memory is used in server memory for all session-specific Java code
and data within the JVM. Its size is defined using JAVA_POOL_SIZE.

STREAMS POOL

1. It is the memory area used when replicating a database using Oracle Streams.
2. This component was introduced in 10g and its size can be defined using STREAMS_POOL_SIZE.

Pre-requisites for SMON

1. A single redolog file cannot accommodate the redo entries that are generated in the
database.
2. To overcome this, Oracle designed its architecture so that LGWR writes into 2 or
more redolog files in a cyclic order (shown in the below diagram)

[Diagram: LGWR writes to redolog file 1 and redolog file 2 in a cycle; the move between them is a LOG SWITCH]

3. Oracle provides an option to keep a backup of redolog files in the form of archive log files
if archive log mode is enabled.

LGWR moving from one redolog file to another is called a LOG SWITCH. At the time of a log
switch, the following actions take place:

- A checkpoint event will occur - DBWR should write dirty blocks to datafiles (Eg: it's just
like the automatic saving of an email while composing in Gmail)
- The CKPT process will update the latest SCN in the datafile headers and controlfiles, taking
the info from the redolog files
- DBWR will write the corresponding dirty blocks to the datafiles
- The ARCH process will back up the redolog files to archives if the database is in archivelog mode

Note: A checkpoint event does not occur only at a log switch. It can occur at repeated intervals,
and Oracle will take care of this if we set FAST_START_MTTR_TARGET=0.
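Both events can also be triggered manually, which is handy for observing this behaviour in a
test database:

SQL> alter system switch logfile;      -- forces a log switch
SQL> alter system checkpoint;          -- forces a checkpoint event
SQL> select group#, status from v$log; -- shows CURRENT/ACTIVE/INACTIVE states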

SMON - It is the background process responsible for the following actions

1. Instance recovery happens when an instance crash occurs. An instance crash occurs
because of power failure, RAM failure or any memory-related issue. SMON will do
instance recovery in the following phases
a. Roll forward - committed data should be present in the database

[Diagram: rollforward example - transactions 1, 2 and 3 are committed across two redolog
files, transaction 4 is in progress; the controlfile records SCN 3]
In the above diagram, assume that transactions 1, 2 and 3 are committed and 4 is going
on. Also, as a checkpoint occurred at the log switch, the complete data of 1 and 2 and part of 3
were written to the datafiles.
Assume LGWR is writing to the second file for transaction 4 and an instance crash occurs.
During recovery, SMON will start comparing the SCN between the datafile headers and the
redolog files. It will also check for the commit point.
In the above example, 1 & 2 are committed and their data is written to the datafiles. But for 3,
only half of the data is written. To write the other half, SMON will initiate DBWRn. But
DBWRn will be unable to do that, as the dirty blocks were cleaned out of the write list (due to
the instance restart).
Now DBWRn will take the help of the server process, which will regenerate the dirty blocks
with the help of the new values in the redo entries already written to the redolog files.

b. Opens the database for user access
c. Rolls back uncommitted transactions with the help of the undo tablespace

[Diagram: rollback example - transaction 3 is uncommitted, but part of its data has been
written to the datafiles; the controlfile records SCN 3]

In the above example, transaction 3 is not yet committed, but because the log switch
triggered a checkpoint event, part of 3's data was written to the datafiles.
Assume an instance crash occurred and SMON is performing instance recovery.
SMON will start comparing SCNs as usual and, when it comes to 3, it identifies that the data is
written to the datafiles but 3 is actually not committed. So, this data needs to be rolled back.
A rollback is nothing but the reverse of the DML statement (example: the rollback of an update
is an update back to the previous value, the rollback of an insert is a delete, and the rollback of
a delete is an insert).
SMON will take the help of DBWR and the server process to generate dirty blocks with the
help of the old values in the undo tablespace. These dirty blocks will be written to the datafiles.
Note: redo entries help in rollforward and undo helps in rollback

2. It coalesces the tablespaces which are defined with automatic segment space
management.
3. It releases the temporary segments occupied by transactions when they are
completed (a more detailed post is available @
http://pavandba.com/2010/04/20/how-temporary-tablespace-works/ )

PMON - It is responsible for the following actions

1. releases the locks and resources (memory) held by abruptly terminated sessions

Note: whenever a user performs DML transactions on a table, Oracle will apply a lock.
This is to maintain read consistency

2. authenticates the user


3. registers the listener information with the instance
4. restarts dead dispatchers at the time of instance recovery in case of shared server
architecture

DBWRn - It is responsible for writing dirty blocks from the write list to datafiles. It will do this
in the following situations

1. after LGWR writes


2. when write list reaches threshold value
3. at every checkpoint
4. when tablespace is taken offline or placed in read-only mode
5. when database is shutdown cleanly

LGWR - It is responsible for writing redo entries from the log buffer cache to redolog files. It
will perform this in the following situations

1. before DBWRn writes


2. whenever commit occurs
3. when log buffer cache is 1/3rd full
4. when 1 MB of redo is generated
5. every 3 sec

CKPT - It updates the latest SCN in the control files and datafile headers by taking the
information from the redolog files. This happens at every checkpoint event

ARCHn - It generates archives, which are backups of redolog files, in the specified location.
This is done only if the database is in archivelog mode

DATAFILES - actual files where user data is stored

REDOLOG FILES - contain redo entries which are helpful in database recovery.

CONTROL FILES - These files store crucial database information like

1. database name and creation timestamp


2. latest SCN
3. checkpoint information
4. location and sizes of redolog files and datafiles
5. parameters that define the size of controlfile
a. MAXDATAFILES
b. MAXLOGFILES
c. MAXLOGMEMBERS
d. MAXLOGHISTORY etc.

ARCHIVED REDOLOG FILES - These files are created by the ARCHn process if archivelog mode
is enabled.

PARAMETER FILE

1. It contains parameters that are helpful for database functionality.
2. It is a text file, in the form init<SID>.ora [where SID = instance name], and
resides in ORACLE_HOME/dbs (on Unix) and ORACLE_HOME/database (on Windows)
example: if SID is TEST, then the file name will be inittest.ora.
3. Parameters are divided into two types
a. Static parameters - the value of these parameters cannot be changed while
the database is up and running
b. Dynamic parameters - the value of these parameters can be changed while the DB
is up and running

Note: The instance name can be different from the database name. This provides additional security.

SERVER PARAMETER FILE (SPFILE)

1. This file is a binary copy of the pfile.
2. The spfile is in the form spfile<SID>.ora and resides in ORACLE_HOME/dbs (on
Unix) and ORACLE_HOME/database (on Windows)
example: If SID is test, then the spfile name will be spfiletest.ora
3. We can create an spfile from a pfile, or a pfile from an spfile, at any moment, whether the
database is up or not
a. SQL> create spfile from pfile;
b. SQL> create pfile from spfile;
4. During database startup, Oracle will check for the spfile first; if it is not there, it will check
for the pfile.
5. To change the value of a parameter immediately (but only temporarily), we use
SCOPE=MEMORY while modifying the value. To change the value only after restarting the
database, we use SCOPE=SPFILE, and to do both we use SCOPE=BOTH.
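For instance (the parameter and value here are purely illustrative):

SQL> alter system set db_cache_size=512m scope=memory;  -- now, lost at restart
SQL> alter system set db_cache_size=512m scope=spfile;  -- only after next restart
SQL> alter system set db_cache_size=512m scope=both;    -- now and after restarts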

ASMM (AUTOMATIC SHARED MEMORY MANAGEMENT)

1. In 9i, SGA memory component sizes had to be specified with separate parameters:
a. SHARED_POOL_SIZE
b. DB_CACHE_SIZE
c. LOG_BUFFER
d. LARGE_POOL_SIZE
e. JAVA_POOL_SIZE
2. From 10g, with ASMM, the entire SGA memory is managed automatically by
Oracle with the help of the SGA_TARGET parameter.

Note: LOG_BUFFER will not be automatically sized in any version, as it is a static parameter.

3. We can also define a maximum size for the SGA with SGA_MAX_SIZE. The SGA size will start
with SGA_TARGET and, if the load increases, it will grow till SGA_MAX_SIZE.
4. This automatic memory management is done by the background process MMAN
(Memory Manager).
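A hedged example (the size is arbitrary and must not exceed SGA_MAX_SIZE):

SQL> alter system set sga_target=800m scope=both;
SQL> select component, current_size/1024/1024 as mb
     from v$sga_dynamic_components;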

ALERT LOG & OTHER TRACE FILES

1. The alert log is the monitoring file which records certain information useful for the DBA in
diagnosing problems/errors in the database
a. All Oracle-related errors
b. Every startup and shutdown timestamp
c. Archive generation information
d. Checkpoint information
e. All DDL statements issued against logical/physical structures
2. Oracle also generates trace files, which provide more information related to an
error/problem. The location and name of the trace file for the error are visible in the alert
log itself.
3. Oracle provides 3 types of trace files
a. Background trace files - if there is any issue with a background process
b. Core trace files - if there is any issue with the OS
c. User trace files - if there is any issue with user transactions
4. In 10g, these files used to reside in 3 different locations. Since 11g, all the files are in a
single location, which is $ORACLE_BASE/diag/rdbms/<dbname>/<SID>/trace. We can
change this default location using the DIAGNOSTIC_DEST parameter
5. Oracle 11g also provides an XML-formatted alert log file, which resides in
$ORACLE_BASE/diag/rdbms/<dbname>/<SID>/alert. We can use the ADRCI utility to read
the content of this alert log file.
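A quick ADRCI session might look like this (the home path is an example; 'show homes' lists
the real ones):

$ adrci
adrci> show homes
adrci> set home diag/rdbms/proddb/proddb
adrci> show alert -tail 50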
OPTIMAL FLEXIBLE ARCHITECTURE (OFA)

1. Reading and writing data from/to a hard disk is done with the help of an I/O header
(disk spindle).
2. A hard disk traditionally has only one I/O header. Recent hard disks have 2 I/O
headers.
3. As per OFA, Oracle recommends storing all the physical files separately on different
hard drives. In that case, different I/O headers will be working for different hard drives,
thereby increasing database performance.

STARTUP PHASES

Command to start up:

SQL> startup
(or, phase by phase)
SQL> startup nomount
SQL> alter database mount;
SQL> alter database open;
SHUTDOWN TYPES

Command to shut down:

SQL> shutdown normal or
SQL> shutdown transactional or
SQL> shutdown immediate or
SQL> shutdown abort
SHARED SERVER ARCHITECTURE

[Shared server architecture diagram: user processes (Up1, Up2, Up3) -> dispatcher -> request
queue -> shared server processes -> response queue -> dispatcher; the UGA replaces the PGA;
SGA caches (shared pool, database buffer cache, log buffer cache, large pool, Java pool,
streams pool), background processes (SMON, PMON, DBWRn, LGWR, CKPT, ARCHn) and
physical files are the same as in the dedicated server architecture]
1. User requests are received by the dispatcher, which then places the request in the
request queue
2. Shared server processes take the request from the request queue and process
it inside the database
3. The results are placed in the response queue, from where the dispatcher sends
them to the corresponding users
4. Instead of the PGA, statements get executed in the UGA (User Global Area) in shared server
architecture
5. Shared server architecture can be enabled by specifying the following parameters
a. DISPATCHERS
b. MAX_DISPATCHERS
c. SHARED_SERVERS
d. MAX_SHARED_SERVERS
6. Shared server architecture has the following advantages
a. Reduces the number of processes on the operating system
b. Reduces instance PGA memory
c. Increases application scalability and the number of clients that can
simultaneously connect to the database. May be faster than dedicated server
when the rate of client connections and disconnections is high
7. Shared server has several disadvantages, including slower response time in some
cases, incomplete feature support, and increased complexity for setup and tuning. As
a general guideline, only use shared server when you have more concurrent
connections to the database than the operating system can handle.
8. Users should use SERVER=SHARED in their TNSNAMES.ORA file to connect through
shared server mode (see the sketch below).
9. A single database can handle both dedicated and shared server connections at the same
time.
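A sketch of such a tnsnames.ora entry (host, port and service name are placeholders):

PRODDB_SHARED =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = proddb)
      (SERVER = SHARED)
    )
  )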
ORACLE 11g DATABASE ARCHITECTURE

[Oracle 11g architecture diagram: identical to the 10g architecture with one addition - the
RESULT CACHE inside the SGA]
RESULT CACHE

1. It is a new component introduced in 11g which stores execution plan ids and the
corresponding rows for a SQL statement.
2. We can enable the result cache by setting the RESULT_CACHE_MODE parameter. The
possible values are MANUAL or FORCE. FORCE makes all SQL statements use the
result cache by default. MANUAL gives the chance to use the result cache only for specific
SQL statements. If set to MANUAL, we need to use the /*+ RESULT_CACHE */ hint in the SQL
statement.
3. ASMM will automate the size of the result cache, but it can be explicitly specified using the
RESULT_CACHE_MAX_SIZE parameter.
4. Oracle recommends enabling the result cache only if the database is hit with a lot of
frequently repeated statements, which happens in OLTP environments
5. When any DML/DDL statement modifies a table's data or structure, the data in the result
cache becomes invalid and needs to be processed again
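A minimal sketch of the MANUAL mode (the HR sample schema is assumed to exist):

SQL> alter system set result_cache_mode=MANUAL scope=both;
SQL> select /*+ RESULT_CACHE */ department_id, count(*)
     from hr.employees
     group by department_id;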

Note: http://pavandba.com/2010/07/15/how-result-cache-works/

SELECT STATEMENT PROCESSING

Steps 1-7 (parsing in the library cache, plan generation by the optimizer, execution in the
PGA, and fetch via the LRU list) are the same as in the 10g SELECT STATEMENT PROCESSING
described earlier. 11g adds two more steps:

8. After this, the server process will store the execution plan id and the rows that were given
to the user in the result cache.
9. For statements issued a second time, the server process will get the parsed tree and plan
id from the library cache and will check if the same plan id is available in the result cache.
If it exists, the corresponding rows will be given to the user.

AUTOMATIC MEMORY MANAGEMENT

1. This is a new feature of 11g which enables the DBA to manage both the SGA and PGA
automatically by setting the MEMORY_TARGET parameter.
2. MEMORY_TARGET = SGA_TARGET + PGA_AGGREGATE_TARGET
3. MEMORY_TARGET is a dynamic parameter. Its max size can be defined using
MEMORY_MAX_TARGET (a static parameter)
4. We can get memory size advice using V$MEMORY_TARGET_ADVICE
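For example (the 1g value is arbitrary and cannot exceed MEMORY_MAX_TARGET):

SQL> alter system set memory_target=1g scope=both;
SQL> select memory_size, memory_size_factor, estd_db_time
     from v$memory_target_advice order by memory_size;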

Note: More about AMM @ http://pavandba.com/2010/07/21/automatic-memory-management-in-11g/

Oracle 8i            Oracle 9i              Oracle 10g             Oracle 11g
-------------------  ---------------------  ---------------------  --------------
SORT_AREA_SIZE       PGA_AGGREGATE_TARGET   PGA_AGGREGATE_TARGET   MEMORY_TARGET
WORK_AREA_SIZE                                                     LOG_BUFFER
HASH_AREA_SIZE
BITMAP_WORK_AREA
SHARED_POOL_SIZE     SHARED_POOL_SIZE       SGA_TARGET
DB_CACHE_SIZE        DB_CACHE_SIZE          LOG_BUFFER
LARGE_POOL_SIZE      LARGE_POOL_SIZE
JAVA_POOL_SIZE       JAVA_POOL_SIZE
LOG_BUFFER           LOG_BUFFER
IN-MEMORY COLUMN STORE

1. Oracle 12c has introduced a new SGA component called the in-memory column store.
2. It stores a copy of the data in a special columnar format, unlike the block format in the
database buffer cache.
3. Its size is defined using the INMEMORY_SIZE parameter (static).
4. Only objects specified as INMEMORY in their DDL will be stored in this memory. Oracle
will populate the data when it is useful. We can change this by setting the INMEMORY
PRIORITY to LOW, MEDIUM, HIGH or CRITICAL
5. IMCS can be enabled at the column, table, materialized view, tablespace and partition
levels
6. IMCS is useful for performing fast full scans of large tables
7. As data is stored in columnar format, OLAP environments get the maximum benefit when
used with IMCS
8. There is no change in DML processing when using IMCS
9. IMCS is not applicable to the SYS user or the SYSTEM and SYSAUX tablespaces
10. Oracle reads data from disk in its row format, pivots the rows to create columns, and
then compresses the data into In-Memory Compression Units (IMCUs - memory
areas inside the IM column store)
11. This entire activity is done by worker processes (Wnnn).
The INMEMORY_MAX_POPULATE_SERVERS parameter specifies the maximum
number of worker processes to use. By default, the setting is one half
of CPU_COUNT. More worker processes result in faster population, but they use
more CPU resources.
12. As it is a memory component, the data must be read again once the database is restarted
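A short sketch (the SALES table is hypothetical; INMEMORY_SIZE is static, so a restart is
needed):

SQL> alter system set inmemory_size=512m scope=spfile;
SQL> shutdown immediate
SQL> startup
SQL> alter table sales inmemory priority high;
SQL> select segment_name, populate_status from v$im_segments;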

Installing Oracle 10g on Linux 4.7

https://www.youtube.com/watch?v=e07GwQ6M6y0

Installing Oracle 11g on Linux 4.7

https://www.youtube.com/watch?v=My7VvRgiUbQ

Installing Oracle 12c on Linux 7.2

https://www.youtube.com/watch?v=JlD_E_T31xQ

Setting Environment Variables

MANUAL DATABASE CREATION

1. Copy an existing database pfile to a new name. If there is no other pfile, we can take
the help of the sample file provided by default
2. Open the pfile with the vi editor, make the necessary changes like changing the database
name, dump locations etc. and save it
3. Create the necessary directories mentioned in the pfile at OS level
4. Copy the database creation script to the server and edit it
5. Export the SID
$ export ORACLE_SID=testdb
6. Start the instance in nomount phase using the pfile
SQL> startup nomount
7. Execute the create database script
SQL> @create_db.sql
8. Once the database is created, it will be opened automatically
9. Execute the catalog.sql and catproc.sql scripts
SQL> @$ORACLE_HOME/rdbms/admin/catalog.sql
SQL> @$ORACLE_HOME/rdbms/admin/catproc.sql
10. Finally, add this database entry to the oratab file
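A minimal create_db.sql sketch (all paths, sizes and passwords are examples only):

create database testdb
  user sys identified by oracle
  user system identified by oracle
  logfile group 1 ('/u01/app/oracle/oradata/testdb/redo01.log') size 50m,
          group 2 ('/u01/app/oracle/oradata/testdb/redo02.log') size 50m
  datafile '/u01/app/oracle/oradata/testdb/system01.dbf' size 500m
  sysaux datafile '/u01/app/oracle/oradata/testdb/sysaux01.dbf' size 300m
  default temporary tablespace temp
    tempfile '/u01/app/oracle/oradata/testdb/temp01.dbf' size 100m
  undo tablespace undotbs1
    datafile '/u01/app/oracle/oradata/testdb/undotbs01.dbf' size 200m
  character set AL32UTF8;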

MULTIPLEXING REDOLOG FILES

1. Redolog files are mainly used for recovering the database and to ensure data commit
2. If a redolog file is lost, it will lead to data loss. To avoid this, we need to maintain
multiplexed copies of redolog files in different locations. These copies together are
called a redolog group and the individual files are called redolog members
3. Oracle recommends maintaining a minimum of 2 redolog groups with a minimum of 2
members in each group

[Diagram: GROUP 1 (redolog member 1, redolog member 2) and GROUP 2 (redolog member 1,
redolog member 2)]
4. LGWR will write into the members of the same group in parallel only if ASYNC I/O is enabled
at the OS level
5. Redolog files have 3 states, which always change in cyclic order
a. CURRENT - LGWR is currently writing into this group
b. ACTIVE - ARCH is taking a backup of the redolog file
c. INACTIVE - no activity
6. We cannot have different sizes for members within the same group, whereas we can have
different sizes for different groups, though this is not recommended

COMMANDS

# To check redolog file members

SQL> select member from v$logfile;

# To check redolog group info, status and size

SQL> select group#, members, status, sum(bytes/1024/1024) from v$log
     group by group#, members, status;

# To add a redolog file group

SQL> alter database add logfile group 4
     ('/u01/app/oracle/oradata/proddb/redo04a.log',
      '/u01/app/oracle/oradata/proddb/redo04b.log') size 50m;

# To add a redolog member

SQL> alter database add logfile member '/u01/app/oracle/oradata/proddb/redo01b.log' to


group 1;

# To drop a redolog group

SQL> alter database drop logfile group 4;


# To drop a redolog member

SQL> alter database drop logfile member '/u02/app/oracle/oradata/proddb/redo02b.log';

Note: We cannot drop a member or a group which is in the CURRENT state

Note: Even after we drop a logfile group or member, the file will still exist at OS level

# Reusing a member

SQL> alter database add logfile member '/u02/app/oracle/oradata/proddb/redo02b.log'


reuse to group 2;

# Steps to rename (or) relocate a redolog member

1. SQL> shutdown immediate;


2. SQL> !mv /u01/app/oracle/oradata/proddb/redo01.log
/u01/app/oracle/oradata/proddb/redo01a.log
3. SQL> startup mount
4. SQL> alter database rename file '/u01/app/oracle/oradata/proddb/redo01.log' to
'/u01/app/oracle/oradata/proddb/redo01a.log';
5. SQL> alter database open;

Note: We cannot resize a redolog member; instead, we need to create a new group with the
required size and drop the old group

# Handling a corrupted redolog file

SQL> alter database clear logfile '/u02/prod/redo01a.log';
Or
SQL> alter database clear unarchived logfile '/u02/prod/redo01a.log';
MULTIPLEXING OF CONTROL FILES

1. The control file contains crucial database information, and loss of this file will lead to loss
of important data about the database.
2. It could also lead to data loss, and recovery might not be possible in all cases. So, it is
recommended to have multiplexed copies of this file in different locations

COMMANDS

# Steps to multiplex control file using spfile

1. SQL> show parameter spfile


2. SQL> select name from v$controlfile;
3. SQL> alter system set
control_files='/u01/app/oracle/oradata/proddb/control01.ctl','/u01/app/oracle/fast
_recovery_area/proddb/control02.ctl','/u02/app/oracle/oradata/proddb/control03.c
tl' scope=spfile;
4. SQL> shutdown immediate
5. SQL> !cp /u01/app/oracle/oradata/proddb/control01.ctl
/u02/app/oracle/oradata/proddb/control03.ctl
6. SQL> startup

Note: we can create a maximum of 8 copies of controlfiles

ARCHIVELOG FILES

1. Archive log files (also called archive logs or archives) are offline copies of redolog
files
2. These are required to recover the database if we have an old backup
3. The default location for archives from 10g onwards is the Fast Recovery Area (FRA). The
location and size of the FRA can be known using the DB_RECOVERY_FILE_DEST and
DB_RECOVERY_FILE_DEST_SIZE parameters respectively. We can change the archive
location from the default FRA to another location
4. Whenever the archive destination is full, the database will hang. In such scenarios, do the
following
a. If time permits, take the backup of archives using the delete input clause
b. If time doesn't permit, temporarily move archives to some other mount point
c. If no mount point has free space, then delete the archives and take a full
backup of the database immediately, without fail
d. If we are using RMAN to take the backup, then use the below command after
deleting archives at OS level
RMAN> crosscheck archivelog all;
5. We can specify multiple archive log destinations to do multiplexing of archives

COMMANDS

# Check if database is in archive log mode

SQL> archive log list;

# Enabling archivelog mode in 10g/11g

1. SQL> shutdown immediate


2. SQL> startup mount
3. SQL> alter database archivelog;
4. SQL> alter database open;

# To change the archive destination from FRA to a customized location

1. SQL> !mkdir /u03/archives
2. SQL> alter system set log_archive_dest_1='location=/u03/archives' scope=both;

# To do archive multiplexing

1. SQL> !mkdir /u02/archives
2. SQL> alter system set log_archive_dest_2='location=/u02/archives' scope=both;
# Disabling archivelog mode in 9i/10g/11g

1. SQL> shutdown immediate


2. SQL> startup mount
3. SQL> alter database noarchivelog;
4. SQL> alter database open;

BLOCK SPACE UTILIZATION PARAMETERS


[Diagram: data block with a block header; the PCTFREE level is marked at 20% from the top
and the PCTUSED level at 40%]

The following are the block space utilization parameters

a. INITRANS and MAXTRANS - represent the no. of concurrent transactions that can
access a block. MAXTRANS was fixed at 255 and removed from 10g
b. PCTFREE - it is the space reserved for future updates (an update statement may
or may not increase the row size). In case the row size increases, it will take space from
PCTFREE. If the database is OLTP and has very frequent updates, then we
should set PCTFREE to a higher value
c. PCTUSED - it is the level which will be compared with the data level for insertion
after deletion. If the database is OLTP and insert/delete statements are frequent,
we should set PCTUSED to a higher value

Note: Block space utilization parameters are not required if we create the database
as OLTP (in DBCA) and if we use locally managed tablespaces
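An illustrative DDL (table name and values are arbitrary; PCTUSED applies only with manual
segment space management):

SQL> create table orders (id number, status varchar2(10))
     pctfree 30
     initrans 2
     tablespace mytbs;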

LOCAL Vs DICTIONARY managed tablespaces

1. In DMT, free block information is maintained in the base tables. Every time
DBWRn requires a free block, it will perform an I/O to get the free block information from
the base tables.
2. Once DBWR completes writing into a block, the status will be updated in the base tables.
3. When more and more free block information is required, a high number of I/Os will be
performed, which will degrade database performance
4. In a locally managed tablespace, free block information is maintained in the datafile header
itself in the form of bitmap blocks. These bitmaps are represented with 0 = free block
and 1 = used block
5. Once DBWR completes writing into a block, the bitmap status will be updated. As there
is no extra I/O involved, database performance will not be impacted

Note: We cannot create a dictionary managed tablespace if the SYSTEM tablespace is LOCAL

COMMANDS

# To create a tablespace

SQL> create tablespace testtbs datafile '/u01/app/oracle/oradata/proddb/testtbs01.dbf'


size 50m;

# To create a tablespace with NOLOGGING option

SQL> create tablespace testtbs datafile '/u01/app/oracle/oradata/proddb/testtbs01.dbf'
     size 10m nologging;

Note: It is generally said that NOLOGGING will not generate any redo entries. But this is
true only in the following situations:

1. create table B as select * from A nologging;
2. insert into B select * from A nologging;
3. alter index <index_name> rebuild online nologging;
4. create index <index_name> on table_name(column_name) nologging;
5. any DML operations on LOB segments

For any other situation or DML processing, Oracle will still generate redo entries in order to
be able to perform instance recovery.
# To create a dictionary managed tablespace

SQL> create tablespace mytbs
     datafile '/u02/prod/mytbs01.dbf' size 10m
     extent management dictionary;
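For contrast, an explicitly locally managed tablespace with automatic segment space
management (the default from 10g onwards):

SQL> create tablespace mytbs_lmt
     datafile '/u02/prod/mytbs_lmt01.dbf' size 10m
     extent management local
     segment space management auto;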

# To convert DMT to LMT or vice versa

SQL> exec dbms_space_admin.tablespace_migrate_to_local('MYDATA');

SQL> exec dbms_space_admin.tablespace_migrate_from_local('MYDATA');

# To view tablespace information

SQL> select allocation_type, extent_management, contents from dba_tablespaces where
     tablespace_name='MYDATA';

# To view datafile information

SQL> select file_name, sum(bytes), autoextensible, sum(maxbytes) from dba_data_files
     where tablespace_name='MYDATA' group by file_name, autoextensible;

# To enable/disable autoextend

SQL> alter database datafile '/u02/prod/mytbs01.dbf' autoextend on maxsize 100m;

SQL> alter database datafile '/u02/prod/mytbs01.dbf' autoextend off;

# To resize a datafile

SQL> alter database datafile '/u02/prod/mytbs01.dbf' resize 20m;

# To add a datafile

SQL> alter tablespace mytbs add datafile '/u02/prod/mytbs02.dbf' size 10m;

Note: If we have multiple datafiles, extents will be allocated in round robin fashion

Note: Adding a datafile is the best option for increasing a tablespace's size if we have
multiple hard disks
# To rename a tablespace (from 10g onwards)

SQL> alter tablespace mytbs rename to mydata;

# To rename or relocate a datafile

1. SQL> alter tablespace mydata offline;
2. SQL> !mv /u02/prod/mytbs01.dbf /u02/prod/mydata01.dbf
3. SQL> alter tablespace mydata rename datafile '/u02/prod/mytbs01.dbf' to
'/u02/prod/mydata01.dbf';
4. SQL> alter tablespace mydata online;

# To rename or relocate system datafile

1. SQL> shutdown immediate
2. SQL> !mv /u02/prod/system.dbf /u02/prod/system01.dbf
3. SQL> startup mount
4. SQL> alter database rename file '/u02/prod/system.dbf' to
'/u02/prod/system01.dbf';
5. SQL> alter database open;

# To drop a tablespace

SQL> drop tablespace mydata;
This removes the tablespace info from the base tables, but the datafiles still exist at OS level
Or
SQL> drop tablespace mydata including contents;
This removes the tablespace info and also clears the datafiles (i.e. it empties their contents)
Or
SQL> drop tablespace mydata including contents and datafiles;
This removes it at the Oracle level and also at the OS level

# To reuse a datafile

SQL> alter tablespace mydata add datafile '/u02/prod/mydata02.dbf' reuse;


BIGFILE TABLESPACE

1. In a normal (smallfile) tablespace, a single datafile can grow to a max of 32GB. If we have a
very large database with many datafiles, managing them is tough. For easy
manageability, Oracle introduced the bigfile tablespace in 10g
2. In a bigfile tablespace, a single datafile can grow till 32TB
3. Bigfile tablespaces should be created only when we have striping and mirroring
(usually called RAID levels) implemented at the storage level
4. Bigfile tablespaces can be created only as LMT and with ASSM

# To create a big file tablespace

SQL> create bigfile tablespace bigtbs
     datafile '/u02/prod/bigtbs01.dbf' size 50m;

CAPACITY PLANNING

1. It is the process of estimating space requirements for future data storage
2. Purchasing hardware in real time involves many teams, processes and approvals,
which is very time consuming. So, clients prefer to purchase and store the hardware
well in advance to avoid delays in processing.
3. Capacity planning will be done by all teams like unix admin, network admin, DBA,
application team etc. It will be done either for the next 3 months (quarter) or 6 months.
4. As a DBA, we will check the average database consumption on a daily, weekly or monthly
basis. Based on that, we will first check if the required space is available at OS level or
in the hardware inventory. If not, we will propose the required space to the client.
5. The following query is used to find free space info in a tablespace
SQL> select tablespace_name, sum(bytes/1024/1024) from dba_free_space group by
tablespace_name;
Eg: If we observe that tablespace TESTTBS free space is reducing by 1GB daily, it means for
the next 1 month we need 1GB*30 = 30GB. So for 3 months, it would be 90GB, and this is
the value we will mention in the report we submit to the client

Note: more information can be obtained from this link

https://pavandba.files.wordpress.com/2009/11/capacity-planning-with-oracle.doc

UNDO MANAGEMENT

1. Undo tablespace features are enabled by setting the following parameters
a. UNDO_MANAGEMENT - the default value is AUTO, which means undo tablespace
management will be completely taken care of by Oracle
b. UNDO_TABLESPACE - specifies which undo tablespace is currently in use
c. UNDO_RETENTION - specifies for how many seconds committed undo data is
retained in the undo tablespace before it expires
2. Even though we can create multiple undo tablespaces, only one will be active at a
given time

Imp points to remember:

1. An undo block has three statuses in the undo tablespace

ACTIVE - a transaction is currently using the undo block

UNEXPIRED - the transaction is committed or rolled back, but the undo data is still within
the UNDO_RETENTION period

EXPIRED - the transaction is committed or rolled back and the retention period has elapsed,
so the undo value is no longer required

Note: any transaction will first prefer to use free undo blocks (free space in the undo
tablespace); if there are none, it will use EXPIRED undo blocks

2. Oracle will show the old value to a select statement if a DML is in process on the same table.
For example, a bank account balance is 5000. User A is checking the balance from an ATM and
at the same time user B is depositing 10000 into the same account. Until user B commits, A
will see 5000 only.

ORA-1555 error (snapshot too old error)

1. Consider this situation: Tx1 issued an update statement on table A and committed.
Assume this used the entire undo tablespace. As the transaction is committed, all of its
undo blocks will be in the expired state. Also, assume the dirty blocks of A are not yet written
to the datafile because a checkpoint didn't occur.
2. Tx2 is updating table B and, because of the non-availability of free undo blocks, it has
overwritten the expired undo blocks of A.
3. Tx3 is selecting the data from A. Normally, Tx3 will first check for the data in the datafile.
But, as the dirty blocks are not yet written to the datafiles, it will not be able to get the new
data. Then, Oracle will try to check for the old data in the undo tablespace, which also fails,
as those blocks were already overwritten by Tx2 (i.e. by B). So, it gets neither the new data
nor the old data, in which case Tx3 will result in an ORA-1555 error.

Solutions to avoid ORA-1555

1. If we get this error rarely, ask the user to re-issue the SELECT statement
2. If it occurs very frequently, first increase the undo tablespace size
3. If we cannot increase the undo tablespace further, increase the undo_retention value
(see the sketch below)
4. If the problem still exists, ask the application team to avoid frequent commits
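Illustrative commands for points 2 and 3 (the file path and values are examples only):

SQL> alter database datafile '/u02/prod/undotbs01.dbf' resize 500m;
SQL> alter system set undo_retention=3600 scope=both;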

COMMANDS

# To create UNDO tablespace

SQL> create undo tablespace undotbs2
     datafile '/u02/prod/undotbs2_01.dbf' size 30m;

# To change the undo tablespace

SQL> alter system set undo_tablespace='UNDOTBS2' scope=memory/spfile/both;

# To create a temporary tablespace

SQL> create temporary tablespace mytemp
     tempfile '/u02/prod/mytemp01.dbf' size 30m;

# To add a tempfile

SQL> alter tablespace mytemp add tempfile '/u02/prod/mytemp02.dbf' size 30m;

# To resize a tempfile

SQL> alter database tempfile '/u02/prod/mytemp01.dbf' resize 50m;

# To create a temporary tablespace group

SQL> create temporary tablespace mytemp
     tempfile '/u02/prod/mytemp01.dbf' size 30m
     tablespace group grp1;

# To view tablespace group information

SQL> select * from dba_tablespace_groups;

# To view temp tablespace information

SQL> select file_name, sum(bytes) from dba_temp_files where tablespace_name='MYTEMP'
     group by file_name;

# To move a temp tablespace between groups

SQL> alter tablespace mytemp tablespace group grp2;

Note: As UNDO and TEMP are used for transaction purposes and don't store any permanent
data, autoextend should not be turned ON for these tablespaces

ORACLE MANAGED FILES (OMF)

1. OMF gives flexibility in managing (creating, naming and deleting) controlfiles,
redologs and datafiles. When using OMF, Oracle will automatically create the file in
the specified location and will even delete the file at OS level when it is dropped.
2. We can use OMF by setting the below parameters
db_create_file_dest
db_create_online_log_dest_n          (both are dynamic parameters)

COMMANDS

# To configure OMF parameters

SQL> alter system set db_create_file_dest='/u02/prod' scope=memory;

SQL> alter system set db_create_online_log_dest_1='/u02/prod' scope=memory;


USER MANAGEMENT

1. User management contains two terms which are frequently used
SCHEMA - owner of objects in the database
USER - accesses objects of a schema and is not the owner of any object
2. The application team will request user creation. They will specify the username
(sometimes the password also) and the privileges that should be granted to that user.
3. Every user will have a default permanent tablespace (to store tables) and a default
temporary tablespace (to do sorting)
4. If we don't assign a default tablespace and temporary tablespace, Oracle will take the
database default values
5. A user requires quota on a tablespace in order to create tables
6. Privileges are of 2 types
a. System level privileges - any privilege that modifies the database.
Eg: create table, create view etc
b. Object level privileges - any privilege used to access an object in another schema.
Eg: select on A, update on B etc
7. A role is a group of privileges. In real time, it is always recommended to grant privileges
to a user using a role (see the sketch after this list)
8. A user should be granted at least the CONNECT role, which allows connecting to the
database
9. The RESOURCE role internally used to contain the UNLIMITED TABLESPACE privilege,
which is removed from 12c onwards
10. Passwords are case sensitive from 11g onwards
11. To find out the roles and privileges assigned to a user, use the following views
a. DBA_SYS_PRIVS
b. DBA_TAB_PRIVS
c. DBA_ROLE_PRIVS
d. ROLE_SYS_PRIVS
e. ROLE_TAB_PRIVS
f. ROLE_ROLE_PRIVS
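A sketch of the role-based approach from point 7 (all names are examples):

SQL> create role myrole;
SQL> grant create session, create table to myrole;
SQL> grant select on scott.emp to myrole;
SQL> grant myrole to user1;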
COMMANDS

# To create a user

SQL> create user user1 identified by user1


default tablespace mytbs
temporary tablespace temp;

# To grant permissions to user

SQL> grant connect to user1;

# To revoke any permissions from user

SQL> revoke create table from scott;

# To change password of user

SQL> alter user user1 identified by oracle;

# To allocate quota on tablespace

SQL> alter user user1 quota 10m on testtbs;

# To change the default tablespace or temporary tablespace

SQL> alter user user1 default tablespace test;

SQL> alter user user1 temporary tablespace mytemp;

# To check the default permanent & temporary tablespace for a user

SQL> select default_tablespace, temporary_tablespace from dba_users where
     username='SCOTT';

# To lock or unlock a user

SQL> alter user scott account lock;

SQL> alter user scott account unlock;

# To check the database default permanent tablespace and temporary tablespace

SQL> select property_name, property_value from database_properties where
     property_name like 'DEFAULT%';

# To change default permanent tablespace

SQL> alter database default tablespace mydata;

# To change default temporary tablespace

SQL> alter database default temporary tablespace mytemp;

# To check system privileges for a user

SQL> select privilege from dba_sys_privs where grantee='SCOTT';

# To check object level privileges

SQL> select owner, table_name, privilege from dba_tab_privs where grantee='SCOTT';

# To check roles assigned to a user

SQL> select granted_role from dba_role_privs where grantee='SCOTT';

# To check permissions assigned to a role

SQL> select privilege from role_sys_privs where role='MYROLE';

SQL> select owner, table_name, privilege from role_tab_privs where role='MYROLE';

SQL> select granted_role from role_role_privs where role='MYROLE';

# To drop a user

SQL> drop user user1;

Or

SQL> drop user user1 cascade;
(cascade drops the user along with all objects owned by that user)


PROFILE MANAGEMENT

1. Profiles are assigned to control access to resources and to provide enhanced security
2. Profile management comprises
a. Password management
b. Resource management
3. The following are parameters for password policy management

FAILED_LOGIN_ATTEMPTS - Specifies how many times a user can fail to log in to the
database. The user account will be locked if it exceeds the failed
attempts

PASSWORD_LOCK_TIME - Specifies for how long the account will be locked; after that it
will be unlocked automatically (the DBA can also manually
unlock it)

PASSWORD_LIFE_TIME - Specifies the no. of days after which the user should change the
password

PASSWORD_GRACE_TIME - Provides a grace period to change the user password. If the
password is still not changed, the account will be locked and only
the DBA can unlock it

PASSWORD_REUSE_TIME - Specifies the no. of days after which the user can reuse the same
password

PASSWORD_REUSE_MAX - How many times the same password can be reused

PASSWORD_VERIFY_FUNCTION - Defines password rules

1. Should be 8 characters long
2. Should have a capital letter
3. Should have a special character
4. Should have a number etc.

4. The following parameters are used for resource management

SESSIONS_PER_USER - How many concurrent sessions can be connected from a user

IDLE_TIME - How much time a session can be idle

CONNECT_TIME - How much time a user can stay connected to the database

COMMANDS

# To create a profile

SQL> create profile my_profile limit


failed_login_attempts 3
password_lock_time 1/24/60
sessions_per_user 1
idle_time 5;

# To assign a profile to user

SQL> alter user scott profile my_profile;

# To alter a profile value

SQL> alter profile my_profile limit sessions_per_user 3;

# To create default password verify function

SQL> @$ORACLE_HOME/rdbms/admin/utlpwdmg.sql

Note: Resource management parameters are effective only if RESOURCE_LIMIT is set to
TRUE. The default value is FALSE till 11g and TRUE in 12c
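To turn it on without a restart (RESOURCE_LIMIT is a dynamic parameter):

SQL> alter system set resource_limit=true scope=both;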
AUDITING

1. It is the process of monitoring and tracking activities in the database
2. Auditing is enabled by setting the AUDIT_TRAIL parameter to
a. NONE - no auditing
b. DB - audited information will be recorded in the AUD$ table and can be viewed
using the DBA_AUDIT_TRAIL view (default)
c. OS - audited information will be stored in the form of trace files at OS level
d. DB, EXTENDED - same as the DB option, but additionally records info like SQL_TEXT,
BIND_VALUE etc.
e. XML - it generates XML files to store the auditing information
f. XML, EXTENDED - same as XML but records much more information
3. The different levels of auditing are
a. Statement level - monitors only one statement
b. Schema level - monitors all activities of a schema
c. Object level - monitors a particular object
4. By default, some activities like startup & shutdown of the database and any structural
changes to the database are audited and recorded in the alert log file
5. SYS user activities can also be captured by setting AUDIT_SYS_OPERATIONS to TRUE.
The default value is FALSE in 11g and TRUE in 12c

COMMANDS

# To enable auditing

SQL> alter system set audit_trail='DB' scope=spfile;

Note: The AUDIT_TRAIL parameter is static and requires a restart of the database

# To audit what is required

SQL> audit create table;            -- statement level auditing

SQL> audit update on scott.salary;  -- object level auditing

SQL> audit all by scott;            -- schema level auditing

SQL> audit session by scott;

SQL> audit all privileges;

# To turn off auditing

SQL> noaudit all by scott;

SQL> noaudit update on scott.salary;
