Thursday, December 6, 2012

What is Oracle Database?

An Oracle Database is a Relational Database Management System (RDBMS) used to store & retrieve related information.

An Oracle Database server/instance consists of shared memory structures, background processes & storage, which together handle the functional requirements of managing concurrent & shared data access by users.

The Oracle Database product has evolved through its 8i, 9i, 10g & 11g versions.

The Oracle database server is part of a multitier architecture that includes a Client Machine/Webserver, Middleware Application & Database Server.
Client Machine/Webserver:
The client is the end user who accesses the DB to retrieve information.
A client can access the DB in various ways: SQL*Plus, SQL Developer, third-party tools like TOAD/PLSQL Developer, or a web URL.
A client can be remote or local to the DB server, which means that the Webserver & Middleware layers are optional & the DB can be accessed from its local server itself.
In complex & critical application setups, a multitier approach is followed for efficient administration, security enforcement, patching/upgrades, backup, restoration, monitoring, license management & hardware management of every component.
Middleware Application:
It stands as a middleware layer between the client and the database, consisting of data retrieval policies, functions, application/Java/PLSQL code, user interface etc. Oracle CRM, Fusion Middleware and other vendor application products are found in this layer.
Database Server:
Here comes the Oracle Database itself, located on a server running any supported platform such as Windows, Solaris, AIX, HP-UX or Linux.
The following sections simplify the correlation between the basic database components & their usage, with reference to the Oracle architecture above. (I say "basic" because various new components were added by Oracle from 8i to 11g; I have picked the most common of them all, which will be easier to understand rather than adding more confusion.)

1.       Database Buffer
2.       Shared Pool
3.       Large Pool
4.       Java Pool
5.       Redo Buffer
6.       Fixed SGA
7.       Stream Pool
8.       Flash Back Buffer
9.       Program Global Area(PGA)
10.    User Global Area (UGA)

Memory components are divided into the SGA, PGA & UGA based on their functionality, manageability & the types of processes which access them.

The SGA (System Global Area), along with the background processes, constitutes the Oracle instance that manages the physical/logical components of the database. The SGA is OS shared memory held by the Oracle instance for its various internal memory components.

The upper limit of OS memory that can be shared with Oracle is defined in the system parameter file of the server, along with other limits like stack size, concurrent open files/blocks, semaphores & the number of processes that can be utilized by Oracle & its dependent components. During DB startup, OS memory is allocated to the SGA & the same is released back to the OS during shutdown. Memory blocks inside the SGA can be read/written by Oracle processes based on internal algorithms. The SGA size is set by the parameter SGA_MAX_SIZE. The SGA is known as dynamic for its ability to resize its components online, without a shutdown/startup of the instance/DB. The unit of allocation of memory blocks is known as a granule. Granule size is 4 MB if the SGA is < 128 MB, else 16 MB.

Oracle maintains granule information for each of the components of the SGA.
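As a quick check (view & column names per the standard data dictionary, available 10g onwards), the current granule size can be read from v$sgainfo:

select name, bytes from v$sgainfo where name = 'Granule Size';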

We will walk through the SGA memory components in detail below.

1. Database Buffer

The database buffer cache plays a significant role in the SGA by keeping blocks read from datafiles & allowing user processes to read/write those blocks concurrently. The DB cache size is driven by the parameter DB_CACHE_SIZE.

The database buffer cache manages blocks by an internal algorithm using the write & LRU (Least Recently Used) lists.

The write list holds all the dirty blocks, i.e. blocks which were modified by user processes in memory & must be written back to DB files to retain a consistent image of a row/data.

The LRU list has two ends: MRU (Most Recently Used) & LRU (Least Recently Used).

The LRU end holds dirty, pinned & free buffers/blocks. A pinned buffer is a block currently in use; a free buffer is simply free.

Dirty blocks held at the LRU end of the LRU list are moved to the write list.

Whenever a user process accesses a buffer, it is moved to the MRU end of the LRU list; freshly/recently read blocks from data files are mostly found at the MRU end.

Cache hit/miss: when an Oracle process requests a block and finds it in the database buffer cache, it is known as a cache hit; otherwise the process must fetch the block from the data file into the buffer (a direct I/O), which is considered a cache miss.
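As a rough illustration (the classic formula based on v$sysstat counters; interpret with care on a freshly started instance), the overall cache hit ratio can be estimated with:

select 1 - (phy.value / (db.value + con.value)) "Cache Hit Ratio"
from v$sysstat db, v$sysstat con, v$sysstat phy
where db.name = 'db block gets'
and con.name = 'consistent gets'
and phy.name = 'physical reads';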

The database buffer cache also holds two static components: the keep buffer (DB_KEEP_CACHE_SIZE) & the recycle buffer (DB_RECYCLE_CACHE_SIZE).
Data blocks of segments allocated to the KEEP buffer cache are retained in memory.
Data blocks of segments allocated to RECYCLE are wiped out of memory as soon as they are no longer needed, making room for other RECYCLE segment blocks.
The DEFAULT buffer pool holds segment blocks which are not assigned to either of the above buffer pools.
By default, segments are allocated to the DEFAULT buffer pool.
Oracle also supports non-default DB block sizes of 2K, 4K, 8K, 16K & 32K in the database buffer cache, via the parameters DB_2K_CACHE_SIZE, DB_4K_CACHE_SIZE & so on.
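For illustration (the tablespace name & datafile path are hypothetical), a non-default cache must be sized before a tablespace of that block size can be created:

alter system set db_16k_cache_size = 64M;
create tablespace ts_16k datafile '/u01/oradata/ts_16k01.dbf' size 100M blocksize 16k;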

A DB buffer flush occurs when...

- A checkpoint occurs or is forced by ALTER SYSTEM CHECKPOINT;
- The dirty buffer list is full & no more free blocks are available for an incoming block
- ALTER SYSTEM FLUSH BUFFER_POOL; is executed

Moving a segment from the DEFAULT pool to KEEP:

select owner,segment_name,buffer_pool from dba_segments where owner='SEBS' and segment_name='CDRV_RIC_PART';

alter table SEBS.CDRV_RIC_PART  storage ( buffer_pool keep);

Verifying Buffer Pool stats:
select name,BLOCK_SIZE,CURRENT_SIZE,BUFFERS,RESIZE_STATE  from  v$buffer_pool;

          2.  Shared Pool

The shared pool in the SGA is further divided into the library cache & the dictionary cache. The SHARED_POOL_SIZE parameter reserves space for the shared pool.


Library Cache:
It holds the shared/private SQL areas, SQL & PL/SQL code such as functions & procedures & control structures like locks/enqueues.
Each user process has its private SQL area (in shared server mode) & the shared SQL area is stored in the shared pool.
The shared SQL area holds parse trees, explain plans & compiled program unit information of the SQL statements, procedures, functions & triggers executed by a user process, which is shared among all other user processes executing a similar statement.
Each executing session has its own private SQL area; in dedicated server mode it is located in the PGA, whereas in shared server mode it is in the shared pool.
An LRU algorithm is used to remove the least-used library information & make way for new SQL statements' information.
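Library cache efficiency can be reviewed from v$librarycache; a low GETHITRATIO or a high RELOADS count suggests excessive hard parsing or an undersized shared pool:

select namespace, gets, gethitratio, pins, pinhitratio, reloads from v$librarycache;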

Dictionary Cache:

It is also known as the row cache, as it holds data as complete rows instead of specific blocks.
The dictionary cache contains metadata of objects & views, their structure & their users/privileges.
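Dictionary cache activity can be reviewed from v$rowcache; a high GETMISSES-to-GETS ratio suggests an undersized shared pool:

select parameter, gets, getmisses from v$rowcache order by gets desc;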

Oracle flushes the shared pool when…
- An ANALYZE statement is used to update object stats
- An object is modified
- An object is invalidated/re-validated or re-compiled
- ALTER SYSTEM FLUSH SHARED_POOL; is executed

Use below query to get all components of shared pool:
select pool, name, bytes   from v$sgastat where pool like '%shared_pool%' order by pool, name;

    3. Large Pool

The large pool was introduced to offload the shared pool by isolating heavy-I/O server operations like backup/restoration; it is sized by the parameter LARGE_POOL_SIZE.
The large pool allocates memory required during disk I/O server process operations of backup & recovery.
Parallel query buffers & Oracle XA memory allocation are also taken care of by the large pool.

select pool, name, bytes   from v$sgastat where pool like '%large_pool%' order by pool, name;

    4. Java Pool

Java pool memory is used for all session-specific Java code and data within the JVM.
The Java Pool Advisor statistics provide information about library cache memory used for Java and predict how changes in the size of the Java pool can affect the parse rate. The Java Pool Advisor is internally turned on when STATISTICS_LEVEL is set to TYPICAL or higher. These statistics reset when the advisor is turned off.

select pool, name, bytes   from v$sgastat where pool like '%java%' order by pool, name;

    5.  Redo Buffer

The redo log buffer is a circular buffer residing in the SGA. It holds redo entries generated by DB changes. Redo entries contain the information needed to reconstruct the changes made to the database by DML (insert, update, delete) & DDL (alter, create or replace, drop, truncate) operations. The redo buffer is flushed to the redo log files, which are required for recovery.
The LOG_BUFFER parameter sets the buffer size in the SGA; it is not auto-tuned by ASMM (Automatic Shared Memory Management).
Server processes generate redo data into the log buffer as they make changes to data blocks in the buffer cache. LGWR subsequently writes entries from the redo log buffer to the online redo log.
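Redo buffer pressure can be gauged from v$sysstat; a steadily growing 'redo log space requests' count suggests sessions are waiting on the log buffer or log files:

select name, value from v$sysstat where name in ('redo entries', 'redo log space requests');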
   6. Fixed SGA

The fixed SGA is an unalterable component of the SGA that varies in size based on platform & release.
The fixed SGA contains references to the other components of the SGA and variables that contain the values of various parameters.
It is a bootstrap section used by Oracle internally to find the other bits and pieces of the SGA.

select pool, name, bytes   from v$sgastat where name like '%fixed_sga%'  order by pool, name;

  7.  Stream Pool

The streams pool is used exclusively by Oracle Streams.

It stores buffered queue messages, and it provides memory for Oracle Streams capture processes and apply processes.

The initial size of the streams pool is set by the STREAMS_POOL_SIZE parameter; its default value is zero.
The pool size grows dynamically as needed when Oracle Streams is used.

select pool, name, bytes   from v$sgastat where pool like '%stream%' order by pool, name; 
   8. Flash Back Buffer

The Oracle Flashback feature supports retrieving the historical state of a DB object using database flashback logs, undo, or the recyclebin.
Flashback requires past images of objects, retrieved using various methods, e.g. flashback version query to fetch data as of a historical timestamp, or the recyclebin to recover a dropped table, in which the table is renamed to a hexadecimal naming format & retained until free-space pressure arrives in the particular tablespace.

To support the flashback mechanism, the flashback buffer stores altered/dirty buffers from the database buffer cache before they are moved to the database flashback logs on disk by the RVWR background process (discussed at a later stage).

   9. Program Global Area (PGA)

For every user connecting to the DB, a PGA area is created from PGA memory in OS RAM.
The PGA consists of stack space, session/cursor information & the private SQL area (in dedicated server mode).
From Oracle 10g onwards, PGA size is managed by the PGA_AGGREGATE_TARGET & WORKAREA_SIZE_POLICY (value AUTO) parameters; otherwise it can be managed by setting the SORT_AREA_SIZE and HASH_AREA_SIZE parameters.


The PGA has a great impact on performance for queries involving huge sorts (ORDER BY and GROUP BY commands in SQL).
Apart from this, the PGA supports hash joins, bitmap-related operations & write buffers used by bulk load operations.
The queries below can be used for optimal re-sizing of the PGA & to review PGA stats.

select * from v$pga_target_advice order by pga_target_for_estimate;
select * from v$pgastat;
select max(pga_used_mem), max(pga_alloc_mem), max(pga_max_mem) from v$process;

   10. UGA

The User Global Area (UGA) is memory residing in OS RAM that holds session information, like the sorts & hash joins of each user process.
In dedicated server mode the UGA is part of the PGA, there being a one-to-one relationship between user & server process.
In shared server mode it is moved to the SGA instead, as user sessions are served by multiple server processes.

select name, sum(value/1024) "Value in KB"
 from v$statname n,
 v$session s,
 v$sesstat t
 where s.sid = t.sid
 and n.statistic# = t.statistic#
 and s.type = 'USER'
 and s.username is not NULL
 and n.name in ('session pga memory', 'session pga memory max',
 'session uga memory', 'session uga memory max')
 group by name;

Oracle Background Processes:

    1.   PMON
    2.   SMON
    3.   DBWn
    4.   LGWR
    5.   CKPT
    6.   ARCn
    7.   RECO
    8.   RVWR
    9.   MMON
  10.   MMNL
  11.   MMAN
  12.   CJQn

         1.  PMON

PMON is an Oracle background process called the process monitor.
PMON is responsible for releasing locks & cleaning up the cache after abrupt termination of a user process holding resources.
PMON also communicates workload advisories to the listener in a Real Application Clusters environment, which helps the listener redirect connections to the least loaded instance.
PMON restarts dispatcher & server processes if they fail.
PMON is responsible for service registration with the listener, which avoids manual configuration of services in the listener.ora file, by supplying information like the database/instance name, its current/maximum load & dispatcher/server mode (shared/dedicated) info.
         2.  SMON

SMON is the system monitor process.
SMON is responsible for performing instance recovery during startup of the DB after an abrupt shutdown/termination of the instance.
SMON performs two operations while performing instance recovery, i.e. roll forward & roll back, during which it refers to the UNDO & online redo log files.
SMON coalesces adjacent free extents into larger free extents in dictionary-managed tablespaces.
It wakes up every 5 minutes to perform housekeeping activity; killing the SMON background process terminates the instance.
In RAC, the SMON process of one instance can perform instance recovery of another instance that has failed.

SMON cleans up temporary segments that are no longer in use and recovers dead transactions which were skipped during system failure/instance recovery because of file-read errors or offline file status. Such transactions are recovered by SMON when the tablespace or data file is brought back online.  

        3. DBWn

DBWn is the database writer background process; the number of processes n ranges over 0-9 & a-j, 20 max in total.
The initialization parameter DB_WRITER_PROCESSES specifies the number of DBWn processes.
It writes modified blocks from the database buffer cache to the datafiles.
The database selects an appropriate default setting for this initialization parameter, based upon the number of CPUs and processor groups, if it is not set at startup.
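The writer processes actually started can be listed from v$bgprocess (the PADDR <> '00' predicate filters to running processes):

select name, description from v$bgprocess where name like 'DBW%' and paddr <> '00';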

DBWn writes buffers when:
- A DBWn timeout occurs (every 3 seconds).
- A normal or incremental checkpoint occurs.
- The number of dirty buffers reaches a threshold value.
- A process scans a specified number of blocks when scanning free buffers and cannot find any.
- A normal or temporary tablespace is placed offline.
- A tablespace is put into read-only mode. 
- A tablespace is dropped or truncated.
- ALTER TABLESPACE <tablespace_name> BEGIN BACKUP; is executed

Modified or new data is not necessarily written to a datafile immediately. To reduce the amount of disk access and to increase performance, data is pooled in memory and written to the appropriate data files all at once (in bulk mode), as determined by the background process database writer process (DBWn).

The disk write can happen before or after the commit.

After a COMMIT, the database writes the redo buffers to disk but does not immediately write data blocks to disk. Instead, database writer (DBWn) performs lazy writes in the background.

        4. LGWR

LGWR is the log writer background Oracle process.
The LGWR process writes the redo log buffer to the online redo log files, writing to all members of a group simultaneously.
It also signals the archiver process when a redo log group is about to be overwritten, if the DB is in ARCHIVELOG mode.

Oracle Database uses a fast commit mechanism to improve performance for committed transactions.
When a user issues a COMMIT statement, the transaction is assigned a system change number (SCN). LGWR puts a commit record in the redo log buffer and writes it to disk immediately, along with the commit SCN and transaction's redo entries.

The redo log buffer is circular. When LGWR writes redo entries from the redo log buffer to an online redo log file, server processes can copy new entries over the entries in the redo log buffer that have been written to disk.
LGWR normally writes fast enough to ensure that space is always available in the buffer for new entries, even when access to the online redo log is heavy.

The atomic write of the redo entry containing the transaction's commit record is the single event that determines the transaction has committed. Oracle Database returns a success code to the committing transaction although the data buffers have not yet been written to disk. The corresponding changes to data blocks are deferred until it is efficient for DBWn to write them to the data files.

LGWR can write redo log entries to disk before a transaction commits.
The redo entries become permanent only if the transaction later commits.
When activity is high, LGWR can use group commits.
For example, a user commits, causing LGWR to write the transaction's redo entries to disk. During this write, LGWR can also write the redo entries of other waiting (not yet committed) user processes in one operation, instead of performing an I/O for every user process commit. In this way, the database minimizes disk I/O and maximizes performance. If commit requests continue at a high rate, then every write by LGWR can contain multiple commit records.

LGWR writes synchronously to the active mirrored group of online redo log files. If a log file is inaccessible, then LGWR continues writing to other files in the group and writes an error to the LGWR trace file and the alert log. If all files in a group are damaged, or if the group is unavailable because it has not been archived, then LGWR cannot continue to function and DB hangs.

In the following circumstances, LGWR writes to the redo log files:
 -A user commits a transaction
 -An online redo log switch occurs.
 -Three seconds have passed since LGWR last wrote.
 -The redo log buffer is one-third full or contains 1 MB of buffered data.
 -DBWn must write modified buffers to disk.

       5.  CKPT

CKPT is an Oracle background process that keeps the database consistent by ensuring that all committed transactions have been written to datafiles.
A checkpoint manages/ensures the following three operations:
        a)    Every dirty block in the buffer cache is written to the data files (by DBWn).
        b)    The latest SCN is written (updated) into the datafile headers.
        c)    The latest SCN is also written to the control files.

Events that trigger a checkpoint are:
- A redo log switch
- LOG_CHECKPOINT_INTERVAL has been reached
- ALTER SYSTEM CHECKPOINT; is executed
- Tablespaces/the database are put in hot backup

Note: redo log switches cause a checkpoint, but checkpoints don't cause a log switch.
Set LOG_CHECKPOINTS_TO_ALERT=TRUE to see checkpoint information in the alert log.
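The effect of a checkpoint can be observed in the datafile headers, whose checkpoint SCN advances after ALTER SYSTEM CHECKPOINT:

select file#, checkpoint_change#, checkpoint_time from v$datafile_header;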

An incremental checkpoint writes checkpoint information to the control file in the following cases:

1. Every three seconds.
2. At the time of a log switch. (Sometimes log switches may trigger a complete checkpoint, if the next log where the log switch is to take place is still active.)

In a complete checkpoint, checkpoint information is written to the control file and the datafile headers, and dirty blocks are written by DBWR to the datafiles.

A full checkpoint happens in the following cases:

 1. To meet FAST_START_MTTR_TARGET.
 2. Before a clean shutdown.
 3. On some log switches, if the next log where the log switch is to take place is still active.
 This has more chance of happening when the redo log files are small in size and continuous transactions are taking place.
 4. When the ALTER SYSTEM CHECKPOINT command is issued.

A partial checkpoint happens in the following cases:

 1. Before BEGIN BACKUP.
 2. Before taking a tablespace offline.
 3. Before placing a tablespace in read-only mode.
 4. Before dropping a tablespace.
 5. Before taking a datafile offline.
 6. When the checkpoint queue exceeds its threshold.
 7. Before a segment is dropped.
 8. Before adding or removing columns from a table.

        6. ARCn

ARCn is a background process spawned when the DB runs in ARCHIVELOG mode, where n ranges from 0-9.
ARCn copies online redo log files to a designated storage device after:
- A log switch has occurred
- ALTER SYSTEM ARCHIVE LOG CURRENT; is executed
- No more online redo log files are available to be overwritten

An Oracle instance can have up to 10 ARCn processes (ARC0 to ARC9). The LGWR process starts a new ARCn process whenever the current number of ARCn processes is insufficient to handle the workload. The alert file keeps a record of when LGWR starts a new ARCn process.

The dynamic parameter LOG_ARCHIVE_MAX_PROCESSES can be set to a maximum of 10.
Starting from one ARCn process, Oracle automatically starts up more ARCn processes, up to LOG_ARCHIVE_MAX_PROCESSES, when the database workload requires more.

The archiver can be stopped and started dynamically with ALTER SYSTEM ARCHIVE LOG STOP|START, or using the commands below (deprecated 10g onwards):

archive log list
archive log stop
archive log start
archive log next
archive log all
archive log n

Query v$archive_processes to find out how many archiver processes are running:
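For example (columns per the standard v$archive_processes view):

select process, status, log_sequence from v$archive_processes;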

       7. RECO

In a distributed database, the recoverer process (RECO) automatically resolves failures in distributed transactions.
The RECO process of a node automatically connects to other databases involved in an in-doubt distributed transaction. When RECO re-establishes a connection between the databases, it automatically resolves all in-doubt transactions, removing from each database's pending transaction table any rows that correspond to the resolved transactions

       8. RVWR

The recovery writer (RVWR) process is responsible for writing to the flashback logs in the flash recovery area.

These logs are required to restore the database to a previous point in time by using the flashback option for Oracle databases (Oracle 10g and later).

       9. MMON

MMON is the memory monitor background process (10g onwards).
MMON gathers memory statistics (snapshots) & stores this information in the AWR (Automatic Workload Repository).
MMON is also responsible for issuing alerts for metrics that exceed their thresholds.
Statistics gathered by MMON snapshots are used by ADDM (Automatic Database Diagnostic Monitor) to generate database workload & performance reports helpful for tuning.
MMON computes short-duration metrics every 15 seconds and long-duration metrics every 60 seconds.

     10. MMNL

MMNL is a Memory Monitor Light background process that assists the MMON process.
This process will flush the ASH buffer to AWR tables when the buffer is full or a snapshot is taken.

      11.  MMAN

MMAN is the memory manager background process (10g onwards).
MMAN manages ASMM (Automatic Shared Memory Management) by dynamically resizing the SGA's dynamic components.
MMAN is spawned whenever the SGA_TARGET parameter is set to a non-zero value, i.e. when ASMM is enabled.
It dynamically resizes the database buffer cache, shared pool, large pool & Java pool memory components based on workload.
The v$sga_dynamic_components view shows the memory components & their dynamic values.
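For example (columns per the standard view):

select component, current_size, min_size, max_size from v$sga_dynamic_components;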

     12. CJQn

CJQn is a Job Queue background Process.
Oracle Database uses job queue processes to run user jobs, often in batch mode.

A job is a user-defined task scheduled to run one or more times. For example, you can use a job queue to schedule a long-running update in the background. Given a start date and a time interval, the job queue processes attempt to run the job at the next occurrence of the interval.

Oracle Database manages job queue processes dynamically, thereby enabling job queue clients to use more job queue processes when required. The database releases resources used by the new processes when they are idle.
Dynamic job queue processes can run a large number of jobs concurrently at a given interval. The sequence of events is as follows:

The job coordinator process (CJQ0) is automatically started and stopped as needed by Oracle Scheduler.
The coordinator process periodically selects jobs that need to be run from the system JOB$ table.
New jobs selected are ordered by time.

The coordinator process dynamically spawns job queue slave processes (Jnnn) to run the jobs.
The job queue process runs one of the jobs that was selected by the CJQ0 process for execution. Each job queue process runs one job at a time to completion.

After the process finishes execution of a single job, it polls for more jobs. If no jobs are scheduled for execution, then it enters a sleep state, from which it wakes up at periodic intervals and polls for more jobs. If the process does not find any new jobs, then it terminates after a preset interval.

The initialization parameter JOB_QUEUE_PROCESSES represents the maximum number of job queue processes that can concurrently run on an instance. However, clients should not assume that all job queue processes are available for job execution.

The coordinator process is not started if the initialization parameter JOB_QUEUE_PROCESSES is set to 0.
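As a small sketch of the flow above (the job name, action & schedule are hypothetical), a job can be defined with the DBMS_SCHEDULER package; the CJQ0 coordinator & Jnnn slaves then run it:

begin
  dbms_scheduler.create_job (
    job_name        => 'NIGHTLY_CLEANUP',   -- hypothetical job name
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'begin null; end;',  -- placeholder action
    start_date      => systimestamp,
    repeat_interval => 'FREQ=DAILY',
    enabled         => TRUE);
end;
/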

Oracle Database Files:

  1.  Parameter
  2.  Password
  3.  tnsnames
  4.  Listener
  5.  Sqlnet
  6.  Control file
  7.  Data files
  8.  Temp file
  9.  Online Redo Log File
10. Archive Log

          1.  Parameter

The parameter file is a mandatory file, required to start the database instance into the nomount stage by reading the instance characteristics.
Parameter files hold DB parameters & their respective values.
The query below can be used to find which parameter file was used during DB startup:

SELECT DECODE(value, NULL, 'PFILE', 'SPFILE') "Init File Type"  FROM v$parameter WHERE name = 'spfile';

Parameter files are located in the default location $ORACLE_HOME/dbs (UNIX) & %ORACLE_HOME%\database (Windows)


PFILE is a text file & is editable using editors like vi on Unix & Notepad on Windows.
PFILE is the static parameter file, i.e. reflecting modified parameter values requires a restart of the instance.
Although PFILE is supported in all DB versions, it was the only supported parameter file in Oracle 8i & lower.


SPFILE is a binary file that can be updated using “ALTER SYSTEM SET <parameter>=<value>”.
If the SPFILE is modified using an editor, then the errors below are returned while starting up the DB:

ORA-01078: failure in processing system parameters
ORA-01565: error in identifying file 'C:\oracle\SPFILEMRTEDEV.ora'
ORA-27046:file size is not a multiple of logical block size

SPFILE can be backed up using RMAN


Dynamic parameters can be updated in the SPFILE without the need for an instance restart.
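The two formats can be converted into each other (the explicit path in the last example is illustrative; paths default to $ORACLE_HOME/dbs):

create spfile from pfile;
create pfile from spfile;
create pfile='/tmp/initMYDB_backup.ora' from spfile;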

Identifying a parameter's characteristics (columns of v$parameter):

ISDEFAULT:
TRUE if the value is the default and has not been changed since instance creation.
Also note that even if the value is set back to the default value after changes, it needs a DB restart, followed by “alter system reset <parameter>”, to reflect the value as TRUE in the view.
FALSE if the value is modified, or the default value was changed at least once since instance creation without a reset/restart.

ISSES_MODIFIABLE:
TRUE if the parameter can be modified by “alter session set <parameter>=<value>;”, else FALSE.

ISSYS_MODIFIABLE:
Has the below values, based on the ability to modify the parameter using “ALTER SYSTEM SET <parameter>=<value>;”
IMMEDIATE — can be altered regardless of what type of parameter file was used to start the instance; changes take effect immediately
DEFERRED — can be altered regardless of what type of parameter file was used to start the instance; changes take effect in subsequent sessions
FALSE — can only be changed when an SPFILE is used, and you need to bounce the instance.


ISINSTANCE_MODIFIABLE:
It is related to RAC.
If this value is ‘TRUE’ for a parameter, it means that there can be different values of the same parameter in different instances in RAC.
If the value is ‘FALSE’, it means that the particular parameter should always have the same value in all RAC instances.
If the ISSYS_MODIFIABLE value is ‘FALSE’ for a specific parameter, then ISINSTANCE_MODIFIABLE is always ‘FALSE’ for the same parameter.


ISMODIFIED:
MODIFIED - the parameter has been modified with ALTER SESSION
SYSTEM_MOD - the parameter has been modified with ALTER SYSTEM (which causes all the currently logged-in sessions' values to be modified)
FALSE - the parameter has not been modified after instance startup


ISADJUSTED:
Indicates whether Oracle adjusted the input value to a more suitable value (for example, the parameter value should be prime, but the user input a non-prime number, so Oracle adjusted the value to the next prime number)


ISDEPRECATED:
Indicates whether the parameter has been deprecated (TRUE) or not (FALSE)

Parameter Modification Options:

Parameters can be modified using “ALTER SYSTEM SET <parameter>=<value> SCOPE=<SPFILE/MEMORY/BOTH>”
SCOPE = SPFILE  (For both static and dynamic parameters, changes are recorded in the spfile, takes effect on restart)
SCOPE = MEMORY (For dynamic parameters, changes are applied in memory only. No static parameter change is allowed.)
SCOPE = BOTH (For dynamic parameters, the change is applied in both the server parameter file and memory. No static parameter change is allowed.)
For dynamic parameters, we can also specify the DEFERRED keyword. When specified, the change is effective only for future sessions
ALTER SYSTEM SET <parameter> = <value> [DEFERRED]
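For example (the parameter names & values are illustrative):

alter system set processes=300 scope=spfile;        -- static parameter: effective after restart
alter system set db_cache_size=512M scope=both;     -- dynamic: immediate & persisted in spfile
alter system set sort_area_size=1048576 deferred;   -- effective for future sessions only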

     2. Password

If the DBA wants to start up a stopped Oracle instance whose data dictionary is not accessible, how would Oracle authenticate the DBA?
With the help of the password file & the parameter REMOTE_LOGIN_PASSWORDFILE, or using OS authentication.
This allows authentication of the DBA to happen outside of the database, either using the password file or through the operating system (sqlplus “/ as sysdba”) from the Oracle owner.
The init parameter REMOTE_LOGIN_PASSWORDFILE specifies whether a password file is used to authenticate the DBA. If it is set either to SHARED or EXCLUSIVE, a password file will be used.
The default location for the password file is: $ORACLE_HOME/dbs/orapw$ORACLE_SID on Unix and %ORACLE_HOME%\database\PWD%ORACLE_SID%.ora on Windows.

Password files creation:

orapwd file=<fname> entries=<users> force=<y/n> ignorecase=<y/n> nosysdba=<y/n>

file: The complete file path is to be specified along with the name, e.g. FILE=/u01/app/ora11g/product/11.2.0/dbhome_1/orapwMYDB.
Supplying only a file name will create the password file in the current directory.
The contents of this file are encrypted.

entries: The number of distinct users allowed to connect to the DB as SYSDBA/SYSOPER.
Entries can be reused as users are added to and removed from the password file.
ENTRIES can be validated using v$pwfile_users.
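The users present in the password file can be verified with:

select username, sysdba, sysoper from v$pwfile_users;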


force: If set to Y, an existing password file is overwritten.
If not set, or set to N, an error is returned if a password file of the same name already exists.

ignorecase: If set to Y, passwords are case-insensitive.
If not set, or set to N, passwords are case-sensitive.


In addition to creating the password file, initialization parameter REMOTE_LOGIN_PASSWORDFILE should be set to the appropriate value. The possible values are:

NONE: Oracle Database behaves as if the password file does not exist. That is, no privileged connections are allowed over nonsecure connections.

EXCLUSIVE: (default) An EXCLUSIVE password file can be used with only one instance of one database. It enables you to add, modify, and delete users & to change the SYS password with the ALTER USER command.

SHARED: A SHARED password file can be used by multiple databases running on the same server, or multiple instances of an Oracle Real Application Clusters (RAC) database. A SHARED password file cannot be modified. This means that you cannot add users to a SHARED password file. Any attempt to do so or to change the password of SYS or other users with the SYSDBA or SYSOPER privileges generates an error. All users needing SYSDBA or SYSOPER system privileges must be added to the password file when REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE. After all users are added, you can change REMOTE_LOGIN_PASSWORDFILE to SHARED, and then share the file.
This option is useful if you are administering multiple databases or a RAC database.

If REMOTE_LOGIN_PASSWORDFILE is set to EXCLUSIVE or SHARED and the password file is missing, this is equivalent to setting REMOTE_LOGIN_PASSWORDFILE to NONE.

        3. Tnsnames

tnsnames.ora is a configuration file mostly required by clients (but located on both client and server) connecting to the database. It resolves the connection specification: it contains net service names mapped to connect descriptors, or net service names mapped to listener protocol addresses.
File Located in $ORACLE_HOME/network/admin

Sample TNS entry (reconstructed here from the connect descriptor shown in the tnsping output below):

    REMTST =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = remedy-ebu-test-db1)(PORT = 1521))
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = REMTST)
        )
      )

A successful connection can be made if tnsping to the alias succeeds and ping/telnet to the specified host/IP resolve.

[ora11g@remedy-ebu-test-db1 ~]$ tnsping REMTST
TNS Ping Utility for Linux: Version - Production on 04-DEC-2012 18:07:14
Copyright (c) 1997, 2011, Oracle.  All rights reserved.
Used parameter files:
Used TNSNAMES adapter to resolve the alias
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = remedy-ebu-test-db1)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = REMTST)))
OK (50 msec)

       4.  Listener
listener.ora is a SQL*Net configuration file used to configure Oracle database listeners, which are required to accept remote connection requests.
File Located in $ORACLE_HOME/network/admin

      (ADDRESS = (PROTOCOL = TCP) (HOST = remedy-ebu-test-db1)(PORT = 1526))
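A minimal listener.ora built around that address might look like the following (the listener name is the default; only the address line above comes from the original example):

```
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = remedy-ebu-test-db1)(PORT = 1526))
    )
  )
```

The listener is then started and checked with `lsnrctl start LISTENER` and `lsnrctl status LISTENER`.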

      5. Sqlnet

The "sqlnet.ora" file contains client side network configuration parameters located at "$ORACLE_HOME/network/admin" or "$ORACLE_HOME/net80/admin" directory on the client.
This file will also be present on the server if client style connections are used on the server itself.
Here is an example of an "sqlnet.ora" file.
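A minimal sqlnet.ora using the parameters described below might look like this (the method list shown is just one common choice):

```
NAMES.DIRECTORY_PATH = (TNSNAMES, EZCONNECT)
SQLNET.AUTHENTICATION_SERVICES = (NONE)
```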


NAMES.DIRECTORY_PATH specifies the order of the naming methods used for client name resolution lookups:

tnsnames: Resolve a net service name through the tnsnames.ora file on the client.
ldap: Resolve a database service name, net service name, or net service alias through a directory server.
ezconnect or hostname: Enable clients to use a TCP/IP connect identifier consisting of a host name and an optional port and service name.
cds: Resolve an Oracle database name in a Distributed Computing Environment (DCE).
nis: Resolve an Oracle database name through the Network Information Service (NIS).

SQLNET.AUTHENTICATION_SERVICES enables one or more authentication services.
If authentication has been installed, it is recommended that this parameter be set either to none or to one of the installed authentication methods:

NONE: No external authentication methods; a valid username and password must be used to access the database.
ALL: All authentication methods.
NTS: Windows NT native authentication.

       6. Control file

Control file is a small binary file that records the physical structure of the database. The control file also includes:

-The database name
-Names and locations of associated datafiles and redo log files
-The timestamp of the database creation
-The current log sequence number
-Checkpoint information

The control file must be available for writing by the Oracle Database server whenever the database is open.
It is first read at the MOUNT stage; without the control file, the database cannot be mounted.

The control file of an Oracle database is created at the same time as the database. By default, at least one copy of the control file is created during database creation; keeping more than one copy (multiplexing) is recommended.

select * from v$controlfile;
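Multiplexed copies show up both in v$controlfile and in the CONTROL_FILES parameter; a sketch (the file names and paths below are illustrative):

```
SQL> show parameter control_files
SQL> select name from v$controlfile;

NAME
--------------------------------------
/u01/oradata/REMTST/control01.ctl
/u02/oradata/REMTST/control02.ctl
```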

       7. Data files

Datafiles are physical files stored on disk that hold user, system, and undo segments.
Datafiles constitute the actual database and typically account for almost 90% of its size.
Database datafiles are written to only by the DBWR processes introduced earlier.
These datafiles are associated with Oracle tablespaces, which are logical containers for tables and indexes.
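The tablespace-to-datafile mapping is established when the tablespace is created; a sketch (the tablespace name, path, and sizes are illustrative):

```
SQL> create tablespace users_data
     datafile '/u01/oradata/REMTST/users_data01.dbf' size 100m
     autoextend on next 10m maxsize 2g;
```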

To get DB file details:

select * from v$datafile;  <Used generally when db is in mount/open stage>
select * from dba_data_files;  <Used generally when db is in open stage>

To get free space in dbfiles:
select dt.tablespace_name,round(sum(df.bytes)/1024/1024/1024,0) "Free_GB"
from dba_free_space df,dba_tablespaces dt
where df.tablespace_name=dt.tablespace_name(+)
and df.tablespace_name not in (select tablespace_name from dba_temp_files)
group by dt.tablespace_name
order by 1;

To get allocated space in dbfiles:

select dt.tablespace_name,round(sum(bytes)/1024/1024/1024,0) "GB"
from dba_data_files df,dba_tablespaces dt
where df.tablespace_name=dt.tablespace_name(+)
and df.tablespace_name not in (select tablespace_name from dba_temp_files)
group by dt.tablespace_name
order by 1;

Datafiles are an integral part of the database and are recorded in the control file; the loss of any online datafile leaves the database down until the file is taken offline or replaced from a valid backup and recovered.

        8. Temp file

Temporary tablespaces are used to manage space for database sort operations and for storing global temporary tables.
For example, if you join two large tables, and Oracle cannot do the sort in memory, space will be allocated in a temporary tablespace for doing the sort operation.
Other SQL operations that might require disk sorting are CREATE INDEX, ANALYZE, SELECT DISTINCT, ORDER BY, GROUP BY, UNION, INTERSECT, MINUS, sort-merge joins, etc.

To get temp file details:
select * from v$tempfile; <Used generally when db is in mount/open stage>
select  * from dba_temp_files; <Used generally when db is in open stage>

To get  free space in Temp tablespace :

select TABLESPACE_NAME,round(sum(FREE_SPACE)/1024/1024,0) "FREE_MB" from dba_temp_free_space group by tablespace_name; <11g onwards>

Temp files hold no permanent data, so the loss of a temp file can be tolerated: the database still opens and the file can simply be re-created. If no usable temporary tablespace is available, sorts for the affected users fall back to the SYSTEM tablespace.
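Re-adding a temp file is a one-liner (the tablespace name, path, and size below are illustrative):

```
SQL> alter tablespace TEMP add tempfile '/u01/oradata/REMTST/temp01.dbf' size 2g;
```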

          9.  Online Redo Log File

The online redo log holds redo entries for every change processed in the database, written sequentially by LGWR into the available redo log file groups.
The redo log plays an important role in instance recovery after a crash, where it is used to identify the status (committed/uncommitted) of each transaction.

Each database holds a minimum of two online redo log groups: the current one being written, and another available for archiving.
At each log switch, a new unique sequence number is assigned within the redo thread.

A log switch is the point at which Oracle stops writing to the current online redo log file and begins writing to the next available one. A log switch can be forced manually with "alter system switch logfile;".
A single-instance database contains a single thread with a minimum of two groups of one member each.
In RAC, the number of threads equals the number of instances running in the cluster, with each thread representing/used by one instance.
Redo log members multiplex the redo log files to safeguard against disk damage/loss.
Multiple members within the same group hold duplicate copies of the redo data.
The members of a group can be spread across multiple disks/controllers so that the failure of a single disk does not cause the loss of a complete redo group.
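Multiplexing can be inspected via v$log/v$logfile and extended by adding a member to a group (the group number and path below are illustrative):

```
SQL> select group#, thread#, sequence#, members, status from v$log;
SQL> alter database add logfile member '/u02/oradata/REMTST/redo01b.log' to group 1;
```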

       10.  Archive Log

Archive log files are historical images of the online redo log files.
When the database is in ARCHIVELOG mode, before a redo log file is overwritten its contents are copied to an archive log file in the destination specified by the init parameter log_archive_dest, in the format specified by log_archive_format.

The archive log file name is formed from the sequence#, thread#, and resetlogs id (10g onwards).

Archive log files can be backed up using RMAN and are required for recovery.

The SQL*Plus command "archive log list" provides details of the archive mode, the current/archived redo sequence, and the archive destination.
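A sample "archive log list" session follows; the destination path and sequence numbers are illustrative:

```
SQL> archive log list
Database log mode              Archive Mode
Automatic archival             Enabled
Archive destination            /u01/arch/REMTST
Oldest online log sequence     120
Next log sequence to archive   122
Current log sequence           122
```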

That's it!!!

