Thursday, 15 December 2016

Oracle Multitenant Architecture



The Multitenant Architecture enables an Oracle database to function as a multitenant container database (CDB).
A CDB includes zero, one, or many customer-created pluggable databases (PDBs). A PDB is a portable collection of schemas, schema objects, and nonschema objects that appears to an Oracle Net client as a non-CDB. All Oracle databases before Oracle Database 12c were non-CDBs.

In this post we will look at the components of a CDB, with each component's functionality simplified.

CDB$ROOT:
 This container holds the Oracle-supplied metadata, and possibly a small amount of common user data. There can be only one CDB$ROOT per CDB.

PDB$SEED:
 This container holds no user data; it serves as a template for creating new PDBs. There is exactly one PDB$SEED per CDB.

PDB:
 A PDB contains the user data and user metadata. A single CDB can hold many PDBs.




* Each of CDB$ROOT, PDB$SEED, and the PDBs is called a container. Each container has a unique container ID (CON_ID) and container name, and each PDB also has a globally unique identifier.

* A PDB's SYSTEM tablespace contains only user metadata; the Oracle-supplied metadata is not duplicated in each PDB.

* Oracle-supplied metadata is stored once, in a central location, for all the PDBs.

* Each PDB holds pointers to that metadata in the root, so every PDB can access it without duplicating it into each PDB.

* A CDB has the same kinds of files and background processes as a non-CDB database. Some of these are shared by the whole CDB and its PDBs; others are not.

Common Files:

* BG processes: There is a single set of background processes for the CDB; PDBs do not have separate background processes attached to them.

* Redo log files: These are common to the entire CDB. Each PDB's changes are tracked by the corresponding change numbers in the redo and archived logs. A non-RAC instance has a single set of redo logs; in RAC, each instance has its own redo thread.

* Memory: Memory is allocated only at the CDB level, because there is a single instance for the entire multitenant architecture.

* Controlfiles: The controlfile holds the information about all the PDBs.

* Metadata: Oracle-supplied metadata is shared among all the PDBs.

* Temp tablespace: There is a common temp tablespace for the CDB root; each PDB can also have its own temp tablespace for its users.

* Undo: All PDBs share the same undo tablespace; there is one active undo tablespace per instance.

Exclusive for PDBs:

* Tablespaces and application objects created inside a PDB are not shared with the other PDBs or with the root.

* Temp Tablespace
* Local database users
* Local user Metadata
* Resource Manager Plan




Cheers..!!
Happy Learning..!!





Tuesday, 22 November 2016

New Background processes in 12c

BWnn  There can be 1 to 100 Database Writer Processes. The names of the first 36 Database Writer Processes are DBW0-DBW9 and DBWa-DBWz. The names of the 37th through 100th Database Writer Processes are BW36-BW99. The database selects an appropriate default setting for the DB_WRITER_PROCESSES parameter or adjusts a user-specified setting based on the number of CPUs and processor groups.

FENC (Fence Monitor Process) Processes fence requests for RDBMS instances which are using Oracle ASM instances
 

IPC0 (IPC Service Background Process) Common background server for basic messaging and RDMA primitives based on IPC (Inter-process communication) methods.
 

LDDn (Global Enqueue Service Daemon Helper Slave) Helps the LMDn processes with various tasks
 

LGnn (Log Writer Worker) On multiprocessor systems, LGWR creates worker processes to improve the performance of writing to the redo log. LGWR workers are not used when there is a SYNC standby destination. Possible processes include LG00-LG99.
 

LREG  (Listener Registration Process) Registers the instance with the listeners
 

OFSD (Oracle File Server Background Process) Serves file system requests submitted to an Oracle instance
 

RPOP (Instant Recovery Repopulation Daemon) Responsible for re-creating and/or repopulating data files from snapshot files and backup files
 

SAnn (SGA Allocator) Allocates the SGA. The SAnn process allocates the SGA in small chunks and exits upon completion of the allocation.
 

SCRB (ASM Disk Scrubbing Master Process) Coordinates Oracle ASM disk scrubbing operations
 

SCRn  (ASM Disk Scrubbing Slave Repair Process) Performs Oracle ASM disk scrubbing repair operation
 

SCVn  (ASM Disk Scrubbing Slave Verify Process) Performs Oracle ASM disk scrubbing verify operation

Happy Learning..!!
Cheers..!!

ORA-00018 maximum number of sessions exceeded

ORA-00018 maximum number of sessions exceeded
Cause: All session state objects are in use.
Action: Increase the value of the SESSIONS initialization parameter.

ORA-00018 comes under "Oracle Database Server Messages". These messages are generated 
by the Oracle database server when running any Oracle program.

How to increase the SESSIONS initialization parameter:

1. Login as sysdba
 sqlplus / as sysdba
 
2. Check Current Setting of Parameters
 sql> show parameter sessions
 sql> show parameter processes
 sql> show parameter transactions

3. If you are planning to increase the "sessions" parameter, you should also plan to increase
the "processes" and "transactions" parameters.
A basic formula for determining  these parameter values is as follows:
  processes=x
  sessions=x*1.1+5
  transactions=sessions*1.1
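As a quick sanity check, the formula above can be evaluated in a short script (the formula itself is the only thing carried over from this post; Python is used purely for illustration):

```python
# Sketch: evaluate the SESSIONS/TRANSACTIONS sizing formula quoted above.
def size_params(processes):
    """Given a target 'processes' value, derive sessions and transactions."""
    sessions = int(processes * 1.1 + 5)   # sessions = x * 1.1 + 5
    transactions = int(sessions * 1.1)    # transactions = sessions * 1.1
    return sessions, transactions

print(size_params(500))  # -> (555, 610)
```

These are exactly the values used in step 4 below.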
  
4. These parameters can't be modified in memory. You have to modify the spfile only 
(scope=spfile) and bounce the instance.
 sql> alter system set processes=500 scope=spfile;
 sql> alter system set sessions=555 scope=spfile;
 sql> alter system set transactions=610 scope=spfile;
 sql> shutdown abort
 sql> startup 


Cheers..!!
Happy Learning..!!

Monday, 21 November 2016

RMAN-03002 and ORA-03114

I am getting these RMAN-03002 and ORA-03114 errors:
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of allocate command at 19/13/2015 21:13:22
ORA-03114: not connected to ORACLE



How do I fix this ORA-03114 and RMAN-03002 error?

Option 1) Try flushing the shared pool as below and check whether it is working now.


Connect to the database and execute the below

sql>alter system flush shared_pool;

Once the shared pool is flushed RMAN should start working.

If the above doesn't help, try next step

Option 2) 

sql>alter system set cursor_sharing=exact scope=both;
 
 
Cheers..!!
Happy Learning..!! 

Oracle 12c silent installation using response file.

Let us assume you have extracted the RDBMS software zip file and created the Oracle user and groups.

-- Before proceeding with the installation, we need to make a few changes on the server (pre-checks).

  1. Change the kernel parameters in the /etc/sysctl.conf file.

fs.file-max = 6815744
kernel.sem = 250 32000 100 128
kernel.shmmni = 4096
kernel.shmall = 1073741824
kernel.shmmax = 4398046511104
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
fs.aio-max-nr = 1048576
net.ipv4.ip_local_port_range = 9000 65500
 -- You can proceed with the installation without updating the kernel parameters, but you may face performance issues later.

-- kernel.shmmax and kernel.shmall will already have values from the OS installation. However, the values above are Oracle's recommendations.
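As a quick consistency check on the two shared-memory values above: kernel.shmall is counted in pages while kernel.shmmax is in bytes, and the recommended pair should satisfy shmall >= shmmax / page_size (the 4 KB page size is an assumption for x86-64; check with `getconf PAGE_SIZE`):

```python
# Sketch: verify that kernel.shmall (pages) can cover kernel.shmmax (bytes).
PAGE_SIZE = 4096            # assumed x86-64 page size
shmmax = 4398046511104      # bytes, value from the sysctl.conf above
shmall = 1073741824         # pages, value from the sysctl.conf above

assert shmall * PAGE_SIZE >= shmmax
print(shmmax // PAGE_SIZE)  # -> 1073741824, exactly the shmall value above
```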

To make the changes effective, run the sysctl -p command.

  2. Add the following lines in the /etc/security/limits.conf file.


oracle   soft   nofile    1024
oracle   hard   nofile    65536
oracle   soft   nproc    16384
oracle   hard   nproc    16384
oracle   soft   stack    10240
oracle   hard   stack    32768


3. Install the below packages if not already installed.

yum install binutils -y
yum install compat-libcap1 -y
yum install compat-libstdc++-33 -y
yum install compat-libstdc++-33.i686 -y
yum install gcc -y
yum install gcc-c++ -y
yum install glibc -y
yum install glibc.i686 -y
yum install glibc-devel -y
yum install glibc-devel.i686 -y
yum install ksh -y
yum install libgcc -y
yum install libgcc.i686 -y
yum install libstdc++ -y
yum install libstdc++.i686 -y
yum install libstdc++-devel -y
yum install libstdc++-devel.i686 -y
yum install libaio -y
yum install libaio.i686 -y
yum install libaio-devel -y
yum install libaio-devel.i686 -y
yum install libXext -y
yum install libXext.i686 -y
yum install libXtst -y
yum install libXtst.i686 -y
yum install libX11 -y
yum install libX11.i686 -y
yum install libXau -y
yum install libXau.i686 -y
yum install libxcb -y
yum install libxcb.i686 -y
yum install libXi -y
yum install libXi.i686 -y
yum install make -y
yum install sysstat -y
yum install unixODBC -y
yum install unixODBC-devel -y

 4. Append the lines below to the .bashrc file, which is in the home directory of the oracle user.


# Oracle Settings
export TMP=/tmp
export TMPDIR=$TMP

export ORACLE_HOSTNAME=localhost.localdomain
export ORACLE_UNQNAME=orcl
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/12.1.0/db_1
export ORACLE_SID=orcl

export PATH=/usr/sbin:$PATH
export PATH=$ORACLE_HOME/bin:$PATH

export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

 5. Create the necessary directories for Oracle home and Inventory location.

$mkdir -p /u01/app/oracle/product/12.1.0/db_1
$mkdir -p /u01/app/oracle/oraInventory 



 6. Prepare the response file. Below are the important parameters for installing the Oracle RDBMS software.

oracle.install.responseFileVersion=/oracle/install/rspfmt_dbinstall_response_schema_v12.1.0
oracle.install.option=INSTALL_DB_SWONLY
ORACLE_HOSTNAME=localhost.localdomain
UNIX_GROUP_NAME=oinstall
INVENTORY_LOCATION=/u01/app/oracle/oraInventory
SELECTED_LANGUAGES=en
ORACLE_BASE=/u01/app/oracle/
ORACLE_HOME=/u01/app/oracle/product/12.1.0/db_1
oracle.install.db.InstallEdition=EE
oracle.install.db.DBA_GROUP=dba
oracle.install.db.OPER_GROUP=dba
oracle.install.db.BACKUPDBA_GROUP=dba
oracle.install.db.DGDBA_GROUP=dba
oracle.install.db.KMDBA_GROUP=dba
SECURITY_UPDATES_VIA_MYORACLESUPPORT=false
DECLINE_SECURITY_UPDATES=true
oracle.installer.autoupdates.option=SKIP_UPDATES



- Installation.

Go to the directory where you extracted the software and run the installer as the oracle user.

$ ./runInstaller -showProgress -silent -responseFile <Responsefile location>/db_install.rsp

- showProgress: with this parameter we can see which stage the installation is in and what percentage has completed.

Once the installation is done, we need to run two scripts as the root user.


/u01/app/oracle/oraInventory/orainstRoot.sh 
/u01/app/oracle/product/12.1.0/db_1/root.sh
 
 
Now you can create your database.
 
Cheers..!!
 
Happy Learning..!!



RMAN backup skip tablespace using EXCLUDE feature

To skip a tablespace during a full database backup:
 
export ORACLE_SID=PROD

sqlplus / as sysdba

SQL> select * from v$tablespace;

       TS# NAME                           INC BIG FLA ENC
---------- ------------------------------ --- --- --- ---
         0 SYSTEM                         YES NO  YES
         1 SYSAUX                         YES NO  YES
         2 UNDOTBS1                       YES NO  YES
         3 TEMP                           NO  NO  YES
         4 USERS                          YES NO  YES
         5 PROD_DATA                      YES NO  YES
         6 PROD_INDX                      YES NO  YES

7 rows selected.

SQL> exit
rman target /
RMAN> show exclude;
using target database control file instead of recovery catalog
RMAN configuration parameters for database with db_unique_name PROD are:
RMAN configuration has no stored or default parameters
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE PROD_DATA;

Tablespace PROD_DATA will be excluded from future whole database backups

new RMAN configuration parameters are successfully stored
 
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE PROD_INDX;
Tablespace PROD_INDX will be excluded from future whole database backups
new RMAN configuration parameters are successfully stored.

RMAN> show exclude;
RMAN configuration parameters for database with db_unique_name PROD are:
CONFIGURE EXCLUDE FOR TABLESPACE 'PROD_DATA';
CONFIGURE EXCLUDE FOR TABLESPACE 'PROD_INDX';
 
RMAN> BACKUP DATABASE;
Starting backup at 08-JUNE-10
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=129 device type=DISK
file 5 is excluded from whole database backup
file 6 is excluded from whole database backup
channel ORA_DISK_1: starting full datafile backup set
channel ORA_DISK_1: specifying datafile(s) in backup set
input datafile file number=00001 name=/u01/oradata/SYSTEM01.DBF
…….
We can override the exclude feature explicitly during database backup.
RMAN> show exclude;
RMAN configuration parameters for database with db_unique_name PROD are:
CONFIGURE EXCLUDE FOR TABLESPACE 'PROD_DATA';
CONFIGURE EXCLUDE FOR TABLESPACE 'PROD_INDX';
RMAN> BACKUP DATABASE NOEXCLUDE;

To Clear the EXCLUDE feature:
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE PROD_DATA CLEAR;
Tablespace PROD_DATA will be included in future whole database backups
old RMAN configuration parameters are successfully deleted
RMAN> CONFIGURE EXCLUDE FOR TABLESPACE PROD_INDX CLEAR;
Tablespace PROD_INDX will be included in future whole database backups
old RMAN configuration parameters are successfully deleted
RMAN> show EXCLUDE;
RMAN configuration parameters for database with db_unique_name PROD are:
RMAN configuration has no stored or default parameters
Note: This works only for backups taken with RMAN. While cloning or duplicating a database, we can use "skip tablespace tbs_name" to skip a particular tablespace.
 
Cheers..!!
 
Happy Learning..!!
 
 

Monday, 2 May 2016

Top 10 Oracle Wait Events

The following are 10 of the most common causes for wait events, along with explanations and potential solutions:

1. DB File Scattered Read:

 
This generally indicates waits related to full table scans. As full table scans are pulled into memory, they rarely fall into contiguous buffers but instead are scattered throughout the buffer cache. A large number here indicates that your table may have missing or suppressed indexes. Although it may be more efficient in your situation to perform a full table scan than an index scan, check to ensure that full table scans are necessary when you see these waits. Try to cache small tables to avoid reading them in over and over again, since a full table scan is put at the cold end of the LRU (Least Recently Used) list.


2. DB File Sequential Read
 

This event generally indicates a single block read (an index read, for example). A large number of waits here could indicate poor joining orders of tables, or unselective indexing. It is normal for this number to be large for a high-transaction, well-tuned system, but it can indicate problems in some circumstances. You should correlate this wait statistic with other known issues within the Statspack report, such as inefficient SQL. Check to ensure that index scans are necessary, and check join orders for multiple table joins. The DB_CACHE_SIZE will also be a determining factor in how often these waits show up. Problematic hash-area joins should show up in the PGA memory, but they're also memory hogs that could cause high wait numbers for sequential reads. They can also show up as direct path read/write waits.

3. Free Buffer
This indicates your system is waiting for a buffer in memory, because none is currently available. Waits in this category may indicate that you need to increase the DB_BUFFER_CACHE, if all your SQL is tuned. Free buffer waits could also indicate that unselective SQL is causing data to flood the buffer cache with index blocks, leaving none for this particular statement that is waiting for the system to process. This normally indicates that there is a substantial amount of DML (insert/update/delete) being done and that the Database Writer (DBWR) is not writing quickly enough; the buffer cache could be full of multiple versions of the same buffer, causing great inefficiency. To address this, you may want to consider accelerating incremental checkpointing, using more DBWR processes, or increasing the number of physical disks.


4. Buffer Busy

 
This is a wait for a buffer that is being used in an unshareable way or is being read into the buffer cache. Buffer busy waits should not be greater than 1 percent. Check the Buffer Wait Statistics section (or V$WAITSTAT) to find out if the wait is on a segment header. If this is the case, increase the freelist groups or increase the pctused to pctfree gap. If the wait is on an undo header, you can address this by adding rollback segments; if it's on an undo block, you need to reduce the data density on the table driving this consistent read or increase the DB_CACHE_SIZE. If the wait is on a data block, you can move data to another block to avoid this hot block, increase the freelists on the table, or use Locally Managed Tablespaces (LMTs). If it's on an index block, you should rebuild the index, partition the index, or use a reverse key index. To prevent buffer busy waits related to data blocks, you can also use a smaller block size: fewer records fall within a single block in this case, so it's not as "hot." When a DML (insert/update/delete) occurs, Oracle Database writes information into the block, including all users who are "interested" in the state of the block (Interested Transaction List, ITL). To decrease waits in this area, you can increase the initrans, which will create the space in the block to allow multiple ITL slots. You can also increase the pctfree on the table where this block exists (this writes the ITL information up to the number specified by maxtrans, when there are not enough slots built with the initrans that is specified).


5. Latch Free

 
Latches are low-level queuing mechanisms (they're accurately referred to as mutual exclusion mechanisms) used to protect shared memory structures in the system global area (SGA). Latches are like locks on memory that are very quickly obtained and released. Latches are used to prevent concurrent access to a shared memory structure. If the latch is not available, a latch free miss is recorded. Most latch problems are related to the failure to use bind variables (library cache latch), redo generation issues (redo allocation latch), buffer cache contention issues (cache buffers LRU chain), and hot blocks in the buffer cache (cache buffers chain). There are also latch waits related to bugs; check MetaLink for bug reports if you suspect this is the case. When latch miss ratios are greater than 0.5 percent, you should investigate the issue.


6. Enqueue

 
An enqueue is a lock that protects a shared resource. Locks protect shared resources, such as data in a record, to prevent two people from updating the same data at the same time. An enqueue includes a queuing mechanism, which is FIFO (first in, first out). Note that Oracle's latching mechanism is not FIFO. Enqueue waits usually point to the ST enqueue, the HW enqueue, the TX4 enqueue, and the TM enqueue. The ST enqueue is used for space management and allocation for dictionary-managed tablespaces. Use LMTs, or try to pre-allocate extents or at least make the next extent larger for problematic dictionary-managed tablespaces. HW enqueues are used with the high-water mark of a segment; manually allocating the extents can circumvent this wait. TX4s are the most common enqueue waits. TX4 enqueue waits are usually the result of one of three issues. The first issue is duplicates in a unique index; you need to commit/rollback to free the enqueue. The second is multiple updates to the same bitmap index fragment. Since a single bitmap fragment may contain multiple rowids, you need to issue a commit or rollback to free the enqueue when multiple users are trying to update the same fragment. The third and most likely issue is when multiple users are updating the same block. If there are no free ITL slots, a block-level lock could occur. You can easily avoid this scenario by increasing the initrans and/or maxtrans to allow multiple ITL slots and/or by increasing the pctfree on the table. Finally, TM enqueues occur during DML to prevent DDL to the affected object. If you have foreign keys, be sure to index them to avoid this general locking issue.


7. Log Buffer Space

 
This wait occurs because you are writing the log buffer faster than LGWR can write it to the redo logs, or because log switches are too slow. To address this problem, increase the size of the log files, or increase the size of the log buffer, or get faster disks to write to. You might even consider using solid-state disks, for their high speed.


8. Log File Switch

 
All commit requests are waiting for "log file switch (archiving needed)" or "log file switch (checkpoint incomplete)". Ensure that the archive disk is not full or slow. DBWR may be too slow because of I/O. You may need to add more or larger redo logs, and you may potentially need to add database writers if DBWR is the problem.


9. Log File Sync

 
When a user commits or rolls back data, the LGWR flushes the session's redo from the log buffer to the redo logs. The log file sync process must wait for this to successfully complete. To reduce wait events here, try to commit more records (try to commit a batch of 50 instead of one at a time, for example). Put redo logs on a faster disk, or alternate redo logs on different physical disks, to reduce the archiving effect on LGWR. Don't use RAID 5, since it is very slow for applications that write a lot; potentially consider using file system direct I/O or raw devices, which are very fast at writing information.


10. Idle Event.
 

There are several idle wait events listed after the output; you can ignore them. Idle events are generally listed at the bottom of each section and include such things as SQL*Net message to/from client and other background-related timings. Idle events are listed in the stats$idle_event table.



Happy Learning..!!

Tuesday, 29 March 2016

TNS-01106: Listener using listener name listener1 has already been started

Today I came across an issue with my listeners. In my production environment multiple listeners are running. The first two listeners started normally.

But when I tried to start another listener, I got the TNS-01106 error.

In my initial investigation I couldn't find any issue with the listener.ora file.

After some time I realized that the problem was with the KEY values in the listener.ora file: two listeners had the same KEY value, so I changed the value from EXTPROC to EXTPROC4.

After changing that KEY value, the listener started successfully.
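As a sketch, the fix amounts to giving each listener its own IPC key in listener.ora (the listener name, host and port below are illustrative placeholders, not the real production values):

```
LISTENER3 =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      # was (KEY = EXTPROC), which clashed with another listener's entry
      (ADDRESS = (PROTOCOL = IPC)(KEY = EXTPROC4))
      (ADDRESS = (PROTOCOL = TCP)(HOST = prod-host)(PORT = 1523))
    )
  )
```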



Cheers..!! 
Happy Learning..!!

Monday, 29 February 2016

IO Stats

select
FILE_NO,
FILETYPE_NAME,
SMALL_READ_MEGABYTES "Single-block MegaBytes Reads",
SMALL_WRITE_MEGABYTES "Single-block MegaBytes Writes",
SMALL_READ_REQS "Single-block Read Requests",
SMALL_WRITE_REQS "Single-block Write Requests",
SMALL_READ_SERVICETIME "Total S-Block Read Time",
SMALL_WRITE_SERVICETIME "Total S-Block Write Time",
-- decode(SMALL_READ_REQS,0,0,SMALL_READ_SERVICETIME/SMALL_READ_REQS) "Per SBlock Read Response T",
SMALL_SYNC_READ_REQS,
SMALL_SYNC_READ_LATENCY "S-Block Sync Read Latency (ms)",
LARGE_READ_MEGABYTES "Multi-block MegaBytes Reads",
LARGE_WRITE_MEGABYTES "Multi-block MegaBytes Writes",
LARGE_READ_REQS "Multi-block Read Requests",
LARGE_WRITE_REQS "Multi-block Write Requests",
LARGE_READ_SERVICETIME "Total M-Block Read Time",
LARGE_WRITE_SERVICETIME "Total M-Block Write Time",
ASYNCH_IO,
RETRIES_ON_ERROR
from V$IOSTAT_FILE
order by FILE_NO
/


Top session CPU consumers

SELECT n.username, s.sid, s.value
FROM v$sesstat s,v$statname t, v$session n
WHERE s.statistic# = t.statistic#
AND n.sid = s.sid
AND t.name='CPU used by this session'
AND s.value <> 0
ORDER BY s.value desc;

CPU Utilization

One-hour history of the Host CPU Utilization

select BEGIN_TIME, END_TIME, GROUP_ID, METRIC_ID, METRIC_NAME, VALUE, METRIC_UNIT
from V$SYSMETRIC_HISTORY
where METRIC_NAME LIKE '%Host CPU%';




Happy Learning..!!
Cheers..!!

Saturday, 27 February 2016

How to create a database manually on Linux

Create the necessary directories for admin, oradata and the recovery destination.

mkdir -p /u01/app/oracle/admin/prod/adump
mkdir -p /u01/app/oracle/oradata/prod/
mkdir -p /u01/app/oracle/flash_recovery_area/prod
mkdir -p /u01/app/oracle/diag

From 11g onward, your diagnostic location should be under the Oracle base directory.

Create a script called db_create.sql with the content below.

I am going to create a database called prod.

CREATE DATABASE "prod"
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXDATAFILES 100
MAXINSTANCES 1
MAXLOGHISTORY 292
LOGFILE
GROUP 1 '/u01/app/oracle/oradata/prod/redo1.log' SIZE 10M,
GROUP 2 '/u01/app/oracle/oradata/prod/redo2.log' SIZE 10M,
GROUP 3 '/u01/app/oracle/oradata/prod/redo3.log' SIZE 10M
DATAFILE
'/u01/app/oracle/oradata/prod/system.dbf' size 1000m,
'/u01/app/oracle/oradata/prod/usr04.dbf' size 100m
sysaux datafile '/u01/app/oracle/oradata/prod/sysaux.dbf' size 100m
undo tablespace undotbs
datafile '/u01/app/oracle/oradata/prod/undo.dbf' size 50m
CHARACTER SET US7ASCII
;


Set the environment. 

$ export ORACLE_SID=prod


Create an init file in the $ORACLE_HOME/dbs directory. Your init file name should be initprod.ora.

prod.__db_cache_size=88080384
prod.__java_pool_size=4194304
prod.__large_pool_size=4194304
prod.__shared_pool_size=62914560
prod.__streams_pool_size=0
*.audit_file_dest='/u01/app/oracle/admin/prod/adump'
*.compatible='11.2.0.1.0'
*.control_files='/u01/app/oracle/oradata/prod/control01.ctl','/u01/app/oracle/oradata/prod/control02.ctl','/u01/app/oracle/oradata/prod/control03.ctl'
*.db_block_size=8192
*.db_domain=''
*.db_file_multiblock_read_count=16
*.db_name='prod'
*.db_recovery_file_dest='/u01/app/oracle/flash_recovery_area/prod'
*.db_recovery_file_dest_size=2147483648
*.diagnostic_dest='/u01/app/oracle/diag'
*.dispatchers='(PROTOCOL=TCP) (SERVICE=prodXDB)'
*.job_queue_processes=10
*.open_cursors=300
*.pga_aggregate_target=202375168
*.processes=150
*.remote_login_passwordfile='EXCLUSIVE'
*.sga_target=609222656
*.undo_management='AUTO'
*.undo_tablespace='undotbs'


Once your init file is created, start the database in the nomount stage.

Once the instance is started, Run the db_create.sql script.

SQL>@db_create.sql;

Database created.


After the database creation, your database will come up in the default log mode, which is noarchivelog mode.

To put your database in archivelog mode, shut it down and start it in the mount state.

Once the database is up in mount mode:

SQL> alter database archivelog;

SQL> alter database open;


Post creation, we should run the scripts below.

SQL> @?/rdbms/admin/catalog.sql;
SQL> @?/rdbms/admin/catblock.sql;
SQL> @?/rdbms/admin/catproc.sql;


Run the pupbld.sql script as the system user.

SQL> conn system/manager
connected
SQL>@?/sqlplus/admin/pupbld.sql;

Now your database is ready to use.


Happy Learning..!!
Cheers..!!








Sunday, 21 February 2016

Monitor Log Shipping

SET PAGESIZE 124
COL DB_NAME FORMAT A8
COL HOSTNAME FORMAT A12
COL LOG_ARCHIVED FORMAT 999999
COL LOG_APPLIED FORMAT 999999
COL LOG_GAP FORMAT 9999
COL APPLIED_TIME FORMAT A12
SELECT DB_NAME, HOSTNAME, LOG_ARCHIVED, LOG_APPLIED,APPLIED_TIME,
LOG_ARCHIVED-LOG_APPLIED LOG_GAP
FROM
(
SELECT NAME DB_NAME
FROM V$DATABASE
),
(
SELECT UPPER(SUBSTR(HOST_NAME,1,(DECODE(INSTR(HOST_NAME,'.'),0,LENGTH(HOST_NAME),
(INSTR(HOST_NAME,'.')-1))))) HOSTNAME
FROM V$INSTANCE
),
(
SELECT MAX(SEQUENCE#) LOG_ARCHIVED
FROM V$ARCHIVED_LOG WHERE DEST_ID=1 AND ARCHIVED='YES'
),
(
SELECT MAX(SEQUENCE#) LOG_APPLIED
FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES'
),
(
SELECT TO_CHAR(MAX(COMPLETION_TIME),'DD-MON/HH24:MI') APPLIED_TIME
FROM V$ARCHIVED_LOG WHERE DEST_ID=2 AND APPLIED='YES'
);




Happy Learning..!!
Cheers..!!

Long running sessions

SELECT SID, SERIAL#, opname, SOFAR, TOTALWORK,
ROUND(SOFAR/TOTALWORK*100,2) COMPLETE
FROM V$SESSION_LONGOPS
WHERE
TOTALWORK != 0
AND SOFAR != TOTALWORK
order by 1;


Happy learning..!!
Cheers..!!

Get table and index DDL

set heading off;
set echo off;
Set pages 999;
set long 90000;

spool ddl_list.sql

select dbms_metadata.get_ddl('TABLE','DEPT','SCOTT') from dual;

select dbms_metadata.get_ddl('INDEX','DEPT_IDX','SCOTT') from dual;

spool off;


============================================================================================


Now we can modify the syntax to extract a whole schema.

This is easily done by selecting dbms_metadata.get_ddl and specifying USER_TABLES and USER_INDEXES:



set pagesize 0
set long 90000
set feedback off
set echo off 

spool scott_schema.sql 

connect scott/tiger;

SELECT DBMS_METADATA.GET_DDL('TABLE',u.table_name)
FROM USER_TABLES u;

SELECT DBMS_METADATA.GET_DDL('INDEX',u.index_name)
FROM USER_INDEXES u;

spool off;


Happy Learning..!!
Cheers..!!

ORA-19809: limit exceeded for recovery files

ORA-19809: limit exceeded for recovery files

The flash recovery area is full:

To verify this run the following query. It will show the size of the recovery area and how full it is:

set lines 100
col name format a60
select name
, floor(space_limit / 1024 / 1024) "Size MB"
, ceil(space_used / 1024 / 1024) "Used MB"
from v$recovery_file_dest
order by name
/
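The two columns that query returns are all you need to see how close you are to the limit; as a sketch (the sample numbers here are hypothetical, not taken from a real system):

```python
# Percentage of the flash recovery area in use, from the query's two columns.
size_mb = 4096   # "Size MB" column (hypothetical sample)
used_mb = 3995   # "Used MB" column (hypothetical sample)

pct_used = round(used_mb / size_mb * 100, 1)
print(pct_used)  # -> 97.5
```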

To fix the problem, you need to either make the flash recovery area larger, or remove some files from it.

If you have the disk space available, make the recovery area larger (replace <new_size> with a value such as 10G):

alter system set db_recovery_file_dest_size=<new_size> scope=both
/

To remove files you must use RMAN.
Manually moving or deleting files will have no effect, as Oracle will be unaware of the change.
The obvious choice is to backup and remove some archive log files.
However, if you usually write your RMAN backups to disk, this could prove tricky.

RMAN will attempt to write the backup to the flash recovery area...which is full.

You could try sending the backup elsewhere using a command such as this:

rman target / catalog user/pass@rmancat

run {allocate channel t1 type disk;
backup archivelog all delete input
format '//arch_%d_%u_%s';
release channel t1;
}

This will back up all archive log files to a location of your choice and then remove them.



Happy Learning..!!
Cheers..!!

No of archives generated everyday

SELECT A.*,
Round(A.Count#*B.AVG#/1024/1024) Daily_Avg_Mb
FROM
(
SELECT
To_Char(First_Time,'YYYY-MM-DD') DAY,
Count(1) Count#,
Min(RECID) Min#,
Max(RECID) Max#
FROM
v$log_history
GROUP
BY To_Char(First_Time,'YYYY-MM-DD')
ORDER
BY 1 DESC
) A,
(
SELECT
Avg(BYTES) AVG#,
Count(1) Count#,
Max(BYTES) Max_Bytes,
Min(BYTES) Min_Bytes
FROM
v$log
) B;
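The Daily_Avg_Mb column is simply the day's log-switch count multiplied by the average redo log size, converted to MB; as a sketch with hypothetical numbers:

```python
# Daily archive volume estimate: switches per day * average log size, in MB.
switches_per_day = 48        # Count# from subquery A (hypothetical)
avg_log_bytes = 52428800     # AVG# from subquery B: 50 MB logs (hypothetical)

daily_avg_mb = round(switches_per_day * avg_log_bytes / 1024 / 1024)
print(daily_avg_mb)          # -> 2400
```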


Happy Learning..!!
Cheers..!!

Which schemas are taking up space

set pages 999
col "size MB" format 999,999,999
col "Objects" format 999,999,999
select obj.owner "Owner"
, obj_cnt "Objects"
, decode(seg_size, NULL, 0, seg_size) "size MB"
from (select owner, count(*) obj_cnt from dba_objects group by owner) obj
, (select owner, ceil(sum(bytes)/1024/1024) seg_size
from dba_segments group by owner) seg
where obj.owner = seg.owner(+)
order by 3 desc ,2 desc, 1
/



Happy Learning..!!
Cheers..!!

List segments of a tablespace, their properties and sizes

set lines 137
set pages 10000

clear break

col TSname heading 'TSpace|Name|'
col TSname format a25
col SgmntOwner heading 'Sgmnt|Owner|'
col SgmntOwner format a15
col SgmntType heading 'Sgmnt|Type|'
col SgmntType format a15
col SgmntName heading 'Sgmnt|Name|'
col SgmntName format a35
col MinExt heading 'Min|No of|Ext'
col MinExt format 99999999999
col MaxExt heading 'Max|No of|Ext'
col MaxExt format 99999999999
col SgmntSize heading 'Sgmnt|Size|Mb'
col SgmntSize format 9999
col SgmntExentNo heading 'No of|Extent|'
col SgmntExentNo format 9999999999

SELECT
ds.tablespace_name as "TSname",
ds.owner as "SgmntOwner",
ds.segment_type as "SgmntType",
ds.segment_name as "SgmntName",
ds.min_extents as "MinExt",
ds.max_extents as "MaxExt",
ROUND(ds.bytes/1024/1024,0) as "SgmntSize",
SUM(ds.extents) as "SgmntExentNo"
FROM
dba_segments ds
WHERE tablespace_name = 'SYSAUX'
GROUP BY
ds.tablespace_name,
ds.owner,
ds.segment_type,
ds.segment_name,
ds.min_extents,
ds.max_extents,
ds.bytes
ORDER BY
ds.tablespace_name,
ds.owner,
ds.segment_type,
ds.segment_name
;


Happy Learning..!!
Cheers..!!