Tuesday, 19 January 2016

Snapshot too old error

ORA-01555 "Snapshot too old" - Detailed Explanation
===================================================

Overview
~~~~~~~~

This article will discuss the circumstances under which a query can return the Oracle
error ORA-01555 "snapshot too old (rollback segment too small)". The article will then
proceed to discuss actions that can be taken to avoid the error and finally will provide
some simple PL/SQL scripts that illustrate the issues discussed.


1. READ CONSISTENCY:
====================

Read consistency is documented in the Oracle Server Concepts manual and so will not be
discussed in depth here. However, for the purposes of this article it should be read and
understood if it is not understood already.

Oracle Server provides multi-version read consistency, which is invaluable because it
guarantees that you see a consistent view of the data (no 'dirty reads').
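
As a minimal sketch of what this guarantees (table name and values are hypothetical),
consider two concurrent sessions:

   -- Session 1 starts a long-running query:
   SQL> select a, b from bigemp;        -- query begins at, say, SCN 50

   -- While that query is still fetching, Session 2 updates and commits:
   SQL> update bigemp set b = 'changed' where a = 1;
   SQL> commit;                         -- commit happens at SCN 51

   -- Session 1's query continues to return the data as it was at SCN 50;
   -- Oracle reconstructs any changed block from the rollback information
   -- rather than returning the newer (or uncommitted) data.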


2. DELAYED BLOCK CLEANOUT:
==========================

This is best illustrated with an example: Consider a transaction that updates a million
row table. This obviously visits a large number of database blocks to make the change to
the data. When the user commits the transaction Oracle does NOT go back and revisit these
blocks to make the change permanent. It is left for the next transaction that visits any
block affected by the update to 'tidy up' the block (hence the term 'delayed block
cleanout').

Whenever Oracle changes a database block (index, table, cluster) it stores a pointer in
the header of the data block which identifies the rollback segment used to hold the
rollback information for the changes made by the transaction. (This is required if the
user later elects to not commit the changes and wishes to 'undo' the changes made.)

Upon commit, the database simply marks the relevant rollback segment header entry as
committed. Now, when one of the changed blocks is revisited Oracle examines the header of
the data block which indicates that it has been changed at some point. The database needs
to confirm whether the change has been committed or whether it is currently uncommitted.
To do this, Oracle determines the rollback segment used for the previous transaction
(from the block's header) and then checks the rollback segment header to see whether
that transaction has been committed or not.

If it is found that the transaction has been committed, then the header of the data block
is updated so that subsequent accesses to the block do not incur this processing.

This behaviour is illustrated in a very simplified way below. Here we walk through the
stages involved in updating a data block.

 STAGE 1 - No changes made

 Description: This is the starting point. At the top of the data block we have
              an area used to link active transactions to a rollback segment
              (the 'tx' part), and the rollback segment header has a table that
              stores information on all the latest transactions that have used
              that rollback segment.

              In our example, we have two active transaction slots (01 and 02)
              and the next free slot is slot 03 (we are free to overwrite
              committed transactions).

Data Block 500             Rollback Segment Header 5
+----+--------------+      +----------------------+---------+
| tx | None         |      | transaction entry 01 |ACTIVE   |
+----+--------------+      | transaction entry 02 |ACTIVE   |
| row 1             |      | transaction entry 03 |COMMITTED|
| row 2             |      | transaction entry 04 |COMMITTED|
| ... ..            |      |     ...     ...   .. |  ...    |
| row n             |      | transaction entry nn |COMMITTED|
+-------------------+      +--------------------------------+

 STAGE 2 - Row 2 is updated

 Description: We have now updated row 2 of block 500. Note that the data block
              header is updated to point to rollback segment 5, transaction
              slot 3 (5.3), and that the change is marked uncommitted (active).

Data Block 500             Rollback Segment Header 5
+----+--------------+     +----------------------+---------+
| tx |5.3uncommitted|-+   | transaction entry 01 |ACTIVE   |
+----+--------------+ |   | transaction entry 02 |ACTIVE   |
| row 1             | +-->| transaction entry 03 |ACTIVE   |
| row 2 *changed*   |     | transaction entry 04 |COMMITTED|
| ... ..            |     |     ...     ...   .. |  ...    |
| row n             |     | transaction entry nn |COMMITTED|
+-------------------+     +--------------------------------+

 STAGE 3 - The user issues a commit

 Description: Next the user issues a commit. Note that all this does is update
              the rollback segment header's corresponding transaction slot as
              committed. It does *nothing* to the data block.

Data Block 500                   Rollback Segment Header 5
+----+--------------+       +----------------------+---------+
| tx |5.3uncommitted|--+    | transaction entry 01 |ACTIVE   |
+----+--------------+  |    | transaction entry 02 |ACTIVE   |
| row 1             |  +--->| transaction entry 03 |COMMITTED|
| row 2 *changed*   |       | transaction entry 04 |COMMITTED|
| ... ..            |       |     ...     ...   .. |  ...    |
| row n             |       | transaction entry nn |COMMITTED|
+-------------------+       +--------------------------------+

 STAGE 4 - Another user selects data block 500

 Description: Some time later another user (or the same user) revisits data
              block 500. According to the data block's header there is an
              uncommitted change in the block.

              Oracle then uses the data block header to look up the
              corresponding rollback segment transaction table slot, sees that
              it has been committed, and changes data block 500 to reflect the
              true state of the block (i.e. it performs delayed block cleanout).

Data Block 500                   Rollback Segment Header 5
+----+--------------+      +----------------------+---------+
| tx | None         |      | transaction entry 01 |ACTIVE   |
+----+--------------+      | transaction entry 02 |ACTIVE   |
| row 1             |      | transaction entry 03 |COMMITTED|
| row 2             |      | transaction entry 04 |COMMITTED|
| ... ..            |      |     ...     ...   .. |  ...    |
| row n             |      | transaction entry nn |COMMITTED|
+-------------------+      +--------------------------------+
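
The effect of delayed block cleanout can be observed indirectly through the
cleanout-related statistics (a sketch; the exact statistic names vary slightly
between versions):

   SQL> select name, value
          from v$sysstat
         where name like '%cleanout%';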


ORA-01555 Explanation
~~~~~~~~~~~~~~~~~~~~~

There are two fundamental causes of the error ORA-01555 that are a result of Oracle
trying to attain a 'read consistent' image. These are :

  o The rollback information itself is overwritten, so that Oracle is unable to roll back
the (committed) transaction entries to attain a sufficiently old version of the block.

  o The transaction slot in the rollback segment's transaction table (stored in the
rollback segment's header) is overwritten, and Oracle cannot roll back the transaction
table sufficiently to derive the original rollback segment transaction slot.

  Note: If User A's transaction has not committed, its rollback segment entries will NOT
  be reused. Once User A commits, however, the entries become free for reuse; if a
  long-running query of User B then 'meets' one of the overwritten entries, User B gets
  the error.

Both of these situations are discussed below with the series of steps that cause the
ORA-01555. In the steps, reference is made to 'QENV'. 'QENV' is short for 'Query
Environment', which can be thought of as the environment that existed when a query is
first started and to which Oracle is trying to attain a read consistent image. Associated
with this environment is the SCN
(System Change Number) at that time and hence, QENV 50 is the query environment with SCN
50.
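
The SCN on which a new query environment would be based can be obtained directly
(DBMS_FLASHBACK is available from 9i; V$DATABASE.CURRENT_SCN from 10g):

   SQL> select dbms_flashback.get_system_change_number from dual;
   SQL> select current_scn from v$database;    -- 10g and later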

 CASE 1 - ROLLBACK OVERWRITTEN

 This breaks down into two cases: another session overwriting the rollback information
that the current session requires, or the current session overwriting the rollback
information that it requires. The latter is discussed in this article because it is
usually the harder one to understand.

  Steps:

    1. Session 1 starts query at time T1 and QENV 50

    2. Session 1 selects block B1 during this query

    3. Session 1 updates the block at SCN 51

    4. Session 1 does some other work that generates rollback information.

    5. Session 1 commits the changes made in steps '3' and '4'.
       (Now other transactions are free to overwrite this rollback information)

    6. Session 1 revisits the same block B1 (perhaps for a different row).

       Now, Oracle can see from the block's header that it has been changed at a point
later than the required QENV (which was 50). Therefore we need to get an image of the
block as of this QENV.

       If an old enough version of the block can be found in the buffer cache then we
will use this, otherwise we need to rollback the current block to generate another
version of the block as at the required QENV.

       It is under this condition that Oracle may be unable to find the required rollback
information, because Session 1's own changes have generated rollback information that has
overwritten it, and so Oracle returns the ORA-1555 error.

 CASE 2 - ROLLBACK TRANSACTION SLOT OVERWRITTEN

    1. Session 1 starts query at time T1 and QENV 50

    2. Session 1 selects block B1 during this query

    3. Session 1 updates the block at SCN 51

    4. Session 1 commits the changes
       (Now other transactions are free to overwrite this rollback information)

    5. A session (Session 1, another session, or a number of other sessions) then uses
the same rollback segment for a series of committed transactions.

       These transactions each consume a slot in the rollback segment transaction table
such that it eventually wraps around (the slots are written to in a circular fashion) and
overwrites all the slots. Note that Oracle is free to reuse these slots since all
transactions are committed.

    6. Session 1's query then visits a block that has been changed since the initial QENV
was established. Oracle therefore needs to derive an image of the block as at that point
in time.

       Next Oracle attempts to look up the rollback segment header's transaction slot
pointed to by the top of the data block. It then realises that this has been overwritten
and attempts to roll back the changes made to the rollback segment header to get the
original transaction slot entry.

       If it cannot rollback the rollback segment transaction table sufficiently it will
return ORA-1555 since Oracle can no longer derive the required version of the data block.


 It is also possible to encounter a variant of the transaction slot being overwritten
when using block cleanout. This is briefly described below :

 Session 1 starts a query at QENV 50. After this, another process updates the blocks
that Session 1 will require. When Session 1 encounters these blocks it determines that
the blocks have been changed but have not yet been cleaned out (via delayed block
cleanout). Session 1 must determine whether the rows in the block existed at QENV 50 or
were subsequently changed.

 In order to do this, Oracle must look at the relevant rollback segment transaction
table slot to determine the committed SCN. If this SCN is after the QENV then Oracle must
try to construct an older version of the block; if it is before, then the block just
needs cleaning out to be good enough for the QENV.

 If the transaction slot has been overwritten and the transaction table cannot be rolled
back to a sufficiently old version, then Oracle cannot derive the block image and will
return ORA-1555.

 (Note: Normally Oracle can use an algorithm for determining a block's SCN during block
cleanout even when the rollback segment slot has been overwritten. But in this case
Oracle cannot guarantee that the version of the block has not changed since the start of
the query).

Solutions
~~~~~~~~~

This section lists some of the solutions that can be used to avoid the ORA-01555 problems
discussed in this article. It addresses the cases where rollback segment information is
overwritten by the same session and when the rollback segment transaction table entry is
overwritten.

It is worth highlighting that if a single session experiences the ORA-01555 and it is not
one of the special cases listed at the end of this article, then the session must be
using an Oracle extension whereby fetches across commits are tolerated. This does not
follow the ANSI model and in the rare cases where
ORA-01555 is returned one of the solutions below must be used.

 CASE 1 - ROLLBACK OVERWRITTEN

  1.  Increase the size of the rollback segments, which will reduce the likelihood of
overwriting rollback information that is still needed.

  2.  Reduce the number of commits (same reason as 1).

  3.  Run the processing against a range of data rather than the whole table. (Same
reason as 1).

  4.  Add additional rollback segments. This will allow the updates etc. to be spread
across more rollback segments thereby reducing the chances of overwriting required
rollback information.

  5.  If fetching across commits, the code can be changed so that this is not done (a
sketch follows this list).

  6.  Ensure that the outer select does not revisit the same block at different times
during the processing. This can be achieved by :

        - Using a full table scan rather than an index lookup
        - Introducing a dummy sort so that we retrieve all the data, sort it and then
sequentially visit these data blocks.
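
 As a sketch of solutions '3' and '5' combined (using the bigemp table from the examples
later in this article), the work can be driven one key range at a time, with the cursor
opened and closed inside each range so that no fetch ever spans a commit:

  declare
   -- parameterised cursor: one range of 'a' values per open/close
   cursor c1 (p_a number) is
     select rowid rid from bigemp where a = p_a;
  begin
   for r in 0..19 loop            -- process each key range separately
     for c1rec in c1(r) loop
       update bigemp set done = 'Y' where rowid = c1rec.rid;
     end loop;                    -- cursor c1 is closed here
     commit;                      -- commit only between ranges
   end loop;
  end;
  /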

 CASE 2 - ROLLBACK TRANSACTION SLOT OVERWRITTEN

  1. Use any of the methods outlined above except for '6'. This will allow transactions
to spread their work across multiple rollback segments, thereby reducing the likelihood
of rollback segment transaction table slots being consumed.

  2. If it is suspected that the block cleanout variant is the cause, then force block
cleanout to occur prior to the transaction that returns the ORA-1555. This can be
achieved by issuing the following in SQL*Plus, SQL*DBA or Server Manager :

      alter session set optimizer_goal = rule;
      select count(*) from table_name;

     If indexes are being accessed then the problem may be an index block, and cleanout
can be forced by ensuring that the entire index is traversed. E.g., if the index is on a
numeric column with a minimum value of 25, then the following query will force cleanout
of the index :

      select index_column from table_name where index_column > 24;

Examples
~~~~~~~~

Listed below are some PL/SQL examples that can be used to illustrate the ORA-1555 cases
given above. For these PL/SQL examples to return this error, the database must be
configured as follows :

  o Use a small buffer cache (db_block_buffers).

    REASON: You do not want the session executing the script to be able to find old
versions of the block in the buffer cache which can be used to satisfy a block visit
without requiring the rollback information.

  o Use one rollback segment other than SYSTEM.

    REASON: You need to ensure that the work being done is generating rollback
information that will overwrite the rollback information required.

  o Ensure that the rollback segment is small.

    REASON: See the reason for using one rollback segment.
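
For instance, a deliberately small non-SYSTEM rollback segment could be set up as
follows (tablespace name and storage values are illustrative only):

  create rollback segment rbs_small
    tablespace rbs
    storage (initial 16k next 16k optimal 32k minextents 2 maxextents 4);

  alter rollback segment rbs_small online;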

 ROLLBACK OVERWRITTEN

rem * 1555_a.sql -
rem * Example of getting ora-1555 "Snapshot too old" by
rem * session overwriting the rollback information required
rem * by the same session.

  drop table bigemp;
  create table bigemp (a number, b varchar2(30), done char(1));

  drop table dummy1;
  create table dummy1 (a varchar2(200));

  rem * Populate the example tables.
  begin
   for i in 1..4000 loop
     insert into bigemp values (mod(i,20), to_char(i), 'N');
     if mod(i,100) = 0 then
       insert into dummy1 values ('ssssssssssss');
       commit;
     end if;
   end loop;
   commit;
  end;
  /

  rem * Ensure that table is 'cleaned out'.
  select count(*) from bigemp;

  declare
   -- Must use a predicate so that we revisit a changed block at a different
   -- time.

   -- If another tx is updating the table then we may not need the predicate
   cursor c1 is select rowid, bigemp.* from bigemp where a < 20;

  begin
   for c1rec in c1 loop

     update dummy1 set a = 'aaaaaaaa';
     update dummy1 set a = 'bbbbbbbb';
     update dummy1 set a = 'cccccccc';
     update bigemp set done='Y' where c1rec.rowid = rowid;
     commit;
   end loop;
  end;
  /

 ROLLBACK TRANSACTION SLOT OVERWRITTEN

  rem * 1555_b.sql - Example of getting ora-1555 "Snapshot too old" by
  rem *              overwriting the transaction slot in the rollback
  rem *              segment header. This just uses one session.

  drop table bigemp;
  create table bigemp (a number, b varchar2(30), done char(1));

  rem * Populate demo table.
  begin
   for i in 1..200 loop
     insert into bigemp values (mod(i,20), to_char(i), 'N');
     if mod(i,100) = 0 then
       commit;
     end if;
   end loop;
   commit;
  end;
  /

  drop table mydual;
  create table mydual (a number);
  insert into mydual values (1);
  commit;

  rem * Cleanout demo table.
  select count(*) from bigemp;

  declare

   cursor c1 is select * from bigemp;

  begin

   -- The following update is required to illustrate the problem if block
   -- cleanout has been done on 'bigemp'. If the cleanout (above) is commented
   -- out then the update and commit statements can be commented and the
   -- script will fail with ORA-1555 for the block cleanout variant.
   update bigemp set b = 'aaaaa';
   commit;

   for c1rec in c1 loop
     for i in 1..20 loop
       update mydual set a=a;
       commit;
     end loop;
   end loop;
  end;
  /

Special Cases
~~~~~~~~~~~~~

There are other special cases that may result in an ORA-01555. These are given below but
are rare and so not discussed in this article :

 o Trusted Oracle can return this if configured in OS MAC mode. Decreasing
LOG_CHECKPOINT_INTERVAL on the secondary database may overcome the problem.

 o If a query visits a data block that has been changed by using the Oracle discrete
transaction facility then it will return ORA-01555.

 o It is feasible that a rollback segment created with the OPTIMAL clause may cause a
query to return ORA-01555 if it has shrunk during the life of the query, causing rollback
segment information required to generate consistent read versions of blocks to be lost.

Reference: Note:40689.1


Happy Learning..!!
Cheers..!!

Views to monitor Datapump jobs

Oracle 10g provides two new views, DBA_DATAPUMP_JOBS and DBA_DATAPUMP_SESSIONS, that
allow the DBA to monitor the progress of all Data Pump operations.
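
For example, a quick check of the jobs currently defined (a sketch, columns as per the
10g dictionary):

   SQL> select owner_name, job_name, operation, job_mode, state,
               degree, attached_sessions
          from dba_datapump_jobs;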

But of course there are other views as well:

SQL> select view_name from dba_views where view_name like '%PUMP%';

VIEW_NAME
------------------------------
DATAPUMP_PATHS
DATAPUMP_PATHMAP
DATAPUMP_TABLE_DATA
DATAPUMP_OBJECT_CONNECT
DATAPUMP_DDL_TRANSFORM_PARAMS
DATAPUMP_REMAP_OBJECTS
V_$DATAPUMP_JOB
V_$DATAPUMP_SESSION
GV_$DATAPUMP_JOB
GV_$DATAPUMP_SESSION
USER_DATAPUMP_JOBS
DBA_DATAPUMP_JOBS
DBA_DATAPUMP_SESSIONS
AQ$KUPC$DATAPUMP_QUETAB_S
AQ$_KUPC$DATAPUMP_QUETAB_F
AQ$KUPC$DATAPUMP_QUETAB
AQ$KUPC$DATAPUMP_QUETAB_R



Happy Learning..!!
Cheers..!!



Datapump

Oracle Data Pump is a newer, faster and more flexible alternative to the "exp" and "imp" utilities used in previous Oracle versions. In addition to basic import and export functionality, Data Pump provides a PL/SQL API and support for external tables.

Advantages of Datapump:

  1. Data Pump Export and Import operations are processed in the database as a Data Pump job, which is much more efficient than the client-side execution of original Export and Import.
  2. Data Pump operations can take advantage of the server's parallel processes to read or write multiple data streams simultaneously.
  3. Data Pump differs from original Export and Import in that all jobs run primarily on the server using server processes. These server processes access files for the Data Pump jobs using directory objects that identify the location of the files. The directory objects enforce a security model that can be used by DBAs to control access to these files.
  4. Data Pump has a very powerful interactive command-line mode which allows the user to monitor and control Data Pump Export and Import operations.
  5. Because Data Pump jobs run entirely on the server, you can start an export or import job, detach from it, and later reconnect to the job to monitor its progress.
  6. Data Pump gives you the ability to pass data between two databases over a network (via a database link), without creating a dump file on disk.
  7. The Direct Path data access method (which permits the server to bypass SQL and go right to the data blocks on disk) has been rewritten to be much more efficient, and Data Pump uses it for both Export and Import.
  8. Another amazing feature is that you can "PAUSE" and "RESUME" Data Pump jobs on demand (a sketch follows this list).
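
As an illustration of points 4, 5 and 8 (schema, file and job names here are made up), a
job can be started, detached from, paused and resumed:

   $ expdp scott/tiger tables=emp directory=DATA_PUMP_DIR dumpfile=emp%U.dmp parallel=4 job_name=emp_exp

   (press Ctrl-C to drop into interactive mode)
   Export> stop_job                     <-- "pause" the job

   $ expdp scott/tiger attach=emp_exp   <-- reconnect later
   Export> status                       <-- monitor progress
   Export> start_job                    <-- "resume" the job
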
How it differs from traditional exp & imp:
  1. Data Pump operates on a group of files called a dump file set, whereas original Export operates on a single file.
  2. Data Pump accesses files on the server (using Oracle directory objects; a minimal setup is sketched after this list), whereas traditional Export can access files on both client and server.
  3. Original Export (exp/imp) represents database metadata as DDL in the dump file; Data Pump represents it in XML document format.
  4. Data Pump supports parallel execution; exp/imp is single-stream.
  5. Data Pump does not support sequential media such as tapes; traditional Export does.
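
Since Data Pump reads and writes files only via directory objects on the server, a
typical minimal setup looks like this (path and grantee are illustrative):

   SQL> create directory dp_dir as '/u01/app/oracle/dumps';
   SQL> grant read, write on directory dp_dir to scott;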

Happy Learning..!!
Cheers..!!


Monday, 18 January 2016

Resizing Redo Logs

ORA-16401 and ORA-16055 have been seen reported in the primary alert log when redo log
switches occur too frequently.

The suggestion in such cases is to increase the size of the Online Redo Logs to reduce
the redo log switch frequency; this may also be required to improve primary database
performance.

The purpose of this article is to show how to increase the size of the Redo Logs in a
Data Guard environment with a Physical Standby in place.
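
Before resizing, it is worth confirming how often the switches actually occur; a common
sketch is to count log switches per hour from V$LOG_HISTORY:

   SQL> select to_char(first_time,'YYYY-MM-DD HH24') hour, count(*) switches
          from v$log_history
         group by to_char(first_time,'YYYY-MM-DD HH24')
         order by 1;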



Solution

Primary Database:

$sqlplus sys/admin as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Feb 26 10:41:27 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select status,instance_name,database_role from v$database,v$instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         prim         PRIMARY

SQL> select group#,sum(bytes/1024/1024)"Size in MB" from v$log group by group#;

    GROUP# Size in MB
---------- ----------
         1         50
         2         50
         3         50

SQL> select group#,sum(bytes/1024/1024)"Size in MB" from v$standby_log group by group#;

    GROUP# Size in MB
---------- ----------
         6         50
         4         50
         5         50
         7         50

Standby Database:

$sqlplus sys/admin as sysdba

SQL*Plus: Release 11.2.0.3.0 Production on Tue Feb 26 10:48:30 2013

Copyright (c) 1982, 2011, Oracle.  All rights reserved.


Connected to:
Oracle Database 11g Enterprise Edition Release 11.2.0.3.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options

SQL> select status,instance_name,database_role from v$database,v$Instance;

STATUS       INSTANCE_NAME    DATABASE_ROLE
------------ ---------------- ----------------
OPEN         stdby         PHYSICAL STANDBY

SQL> select group#, sum(bytes/1024/1024)"Size in MB" from v$log group by group#;

    GROUP# Size in MB
---------- ----------
         1         50
         2         50
         3         50

SQL> select group#,sum(bytes/1024/1024)"Size in MB" from v$standby_log group by group#;

    GROUP# Size in MB
---------- ----------
         6         50
         4         50
         5         50
         7         50



Check whether the parameter standby_file_management on the standby database is set to AUTO or MANUAL. If it is not already set to MANUAL, then set it (redo log groups cannot be dropped or added on the standby while it is AUTO).

SQL> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      AUTO

SQL> alter system set standby_file_management=manual;

System altered.

SQL> show parameter standby_file_management

NAME                                 TYPE        VALUE
------------------------------------ ----------- ------------------------------
standby_file_management              string      MANUAL


On the primary database:

Check the status of the Online Redo Logs and resize them by dropping the INACTIVE redo logs and re-creating them with the new size.

SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         2 INACTIVE
         3 CURRENT

Here you can see that Online Redo Log groups 1 and 2 are INACTIVE. Hence we can drop them and re-create with the new size.

SQL> alter database drop logfile group 1;

Database altered.

SQL> alter database add logfile group 1 size 100M;

Database altered.



The files will be created in either the default or the current directory of the database
server, depending upon your operating system. Here they will be created in the
fast_recovery_area, since it is already configured.

You can fully specify filenames of new log members to indicate where the operating
system file should be created, for example:
ALTER DATABASE ADD LOGFILE GROUP 1 ('/u01/app/oracle/oradata/prim/redo01.log') SIZE 100M;



SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 UNUSED
         2 INACTIVE
         3 CURRENT

SQL> alter database drop logfile group 2;

Database altered.

SQL> alter database add logfile group 2 size 100M;

Database altered.

SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 UNUSED
         2 UNUSED
         3 CURRENT

Now that Online Redo Log Groups 1 and 2 are resized and the status of Group 3 is CURRENT, switch logfiles manually until Group 3 becomes INACTIVE.

After log switches, we can check the Status of Group 3

SQL> alter system switch logfile;

System altered.

SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 CURRENT
         2 UNUSED
         3 ACTIVE

SQL> alter system switch logfile;

System altered.

SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 ACTIVE
         2 CURRENT
         3 ACTIVE

SQL> alter system checkpoint;

System altered.

SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 INACTIVE
         2 CURRENT
         3 INACTIVE

Now that Group 3 is INACTIVE, we can drop it and re-create it with the new size.

SQL> alter database drop logfile group 3;

Database altered.

SQL> alter database add logfile group 3 size 100M;

Database altered.

Now, we have resized all the Online Redo Logs on the Primary Database from 50M to 100M.

SQL> select group#,sum(bytes/1024/1024)"size in MB" from v$log group by group#;

    GROUP# size in MB
---------- ----------
         1        100
         2        100
         3        100

Moving on to the Standby Redo Logs on the Primary Database:

SQL> select group#,sum(bytes/1024/1024)"size in MB" from v$standby_log group by group#;

    GROUP# size in MB
---------- ----------
         6         50
         4         50
         5         50
         7         50

SQL> select group#,status from v$standby_log;

    GROUP# STATUS
---------- ----------
         4 UNASSIGNED
         5 UNASSIGNED
         6 UNASSIGNED
         7 UNASSIGNED

The status of the Standby Redo Logs (SRLs) on the Primary database will be UNASSIGNED,
as they are used only when the primary database starts behaving as a standby (after a
switchover). We can safely drop the UNASSIGNED SRLs and re-create them with the new size.

SQL> alter database drop standby logfile group 4;

Database altered.

SQL> alter database add standby logfile group 4 size 100M;

Database altered.

Repeat this for the remaining SRLs on the primary database and we can see below that all the SRL on the Primary database have been resized.
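
In full, that repetition is simply (sizes as chosen above):

   SQL> alter database drop standby logfile group 5;
   SQL> alter database add standby logfile group 5 size 100M;
   SQL> alter database drop standby logfile group 6;
   SQL> alter database add standby logfile group 6 size 100M;
   SQL> alter database drop standby logfile group 7;
   SQL> alter database add standby logfile group 7 size 100M;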

SQL> select group#,sum(bytes/1024/1024)"size in MB" from v$standby_log group by group#;

    GROUP# size in MB
---------- ----------
         6        100
         4        100
         5        100
         7        100

Moving on to the standby database:


SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 CURRENT
         3 CLEARING
         2 CLEARING

Let's try to drop Online Redo Log Group 2, since Group 1 is in CURRENT status and cannot be dropped.

SQL> alter database drop logfile group 2;
alter database drop logfile group 2
*
ERROR at line 1:
ORA-01156: recovery or flashback in progress may need access to files

Here we hit the ORA-01156 error, which is self-explanatory: recovery (the MRP process) on the standby database is active, and it needs to be cancelled before we drop any redo log groups.

SQL> alter database recover managed standby database cancel;

Database altered.

Since the status of Group 2 is still CLEARING, let's clear it manually before dropping the group.

SQL> alter database clear logfile group 2;

Database altered.

SQL> alter database drop logfile group 2;

Database altered.

SQL> alter database add logfile group 2 size 100M;

Database altered.

SQL> select group#,status from v$log;

    GROUP# STATUS
---------- ----------------
         1 CURRENT
         3 CLEARING
         2 UNUSED

The same is the case with Group 3, whose status is CLEARING. We'll clear it manually, drop it, and re-create it with the new size.

SQL> alter database clear logfile group 3;

Database altered.

SQL> alter database drop logfile group 3;

Database altered.

SQL> alter database add logfile group 3 size 100M;

Database altered.

To drop Online Redo Log Group 1, which is in CURRENT status, generate an archive on the primary database by switching the logfile; then, on the standby database, clear the CURRENT online redo log group, drop it, and re-create it with the required size, as done for Groups 2 and 3 earlier.
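
A sketch of that sequence (group number as in this example):

   -- On the primary: force an archive so the standby's Group 1 can be cleared
   SQL> alter system switch logfile;

   -- On the standby:
   SQL> alter database clear logfile group 1;
   SQL> alter database drop logfile group 1;
   SQL> alter database add logfile group 1 size 100M;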

SQL> select group#,sum(bytes/1024/1024)"size in MB" from v$log group by group#;

    GROUP# size in MB
---------- ----------
         1        100
         2        100
         3        100

Resizing Standby Redo Logs on standby database:

SQL> select group#,sum(bytes/1024/1024)"size in MB" from v$standby_log group by group#;

    GROUP# size in MB
---------- ----------
         4         50
         5         50
         6         50
         7         50

SQL> select group#,status from v$standby_log;

    GROUP# STATUS
---------- ----------
         4 ACTIVE
         5 UNASSIGNED
         6 UNASSIGNED
         7 UNASSIGNED

Clear the SRL group whose status is ACTIVE as done earlier, then drop the group and re-create it with the same size as the Online Redo Logs.

For the SRL groups whose status is UNASSIGNED, just drop the group and re-create it with the same size as the Online Redo Logs.

SQL> select group#,sum(bytes/1024/1024)"size in MB" from v$log group by group#;

    GROUP# size in MB
---------- ----------
         1        100
         2        100
         3        100

SQL> select group#,sum(bytes/1024/1024)"size in MB" from v$standby_log group by group#;

    GROUP# size in MB
---------- ----------
         4        100
         5        100
         6        100
         7        100

Once all the Standby Redo Logs and Online Redo Logs have been resized on both the primary and the standby database, set STANDBY_FILE_MANAGEMENT back to AUTO on the standby database and restart the recovery (MRP) on the standby database.

SQL> alter system set standby_file_management=auto;

System altered.

SQL> alter database recover managed standby database disconnect from session using current logfile;

Database altered.

SQL> select process,status,sequence# from v$managed_standby;

PROCESS   STATUS       SEQUENCE#
--------- ------------ ----------
ARCH      CONNECTED             0
ARCH      CLOSING              66
ARCH      CONNECTED             0
ARCH      CLOSING              63
RFS       IDLE                  0
RFS       IDLE                  0
MRP0      WAIT_FOR_LOG         71
RFS       IDLE                 71
RFS       IDLE                  0

9 rows selected.

Primary:

SQL> select max(sequence#) from v$archived_log;

MAX(SEQUENCE#)
--------------
70

Standby:

SQL> select max(sequence#) from v$archived_log where applied='YES';

MAX(SEQUENCE#)
--------------
70


Happy Learning..!!
Cheers..!!

Standby Redo Logs

Audience: all DBAs and anyone interested in implementing Data Guard in Oracle 9i/10g/11g (9.x-11.x).


1. Standby Redo Logs - what are they and when to use them ??
-------------------------------------------------------------

Starting with Oracle 9i you have the opportunity to add Standby Redo Log Groups to your
Online Redo Log Groups. These Standby Redo Logs then store the redo received from the
Primary Database. In case of a Failover situation, you will have less data loss than
without Standby Redo Logs.

In Oracle 9i only a Physical standby supports SRLs, but from 10g onwards both Physical and Logical standbys support SRLs.

The great Advantage of Standby Redo Logs is that every Entry written into the Online RedoLogs of the Primary Database is transferred to the Standby Site and written into the Standby Redo Logs at the same time; therefore, you reduce the probability of Data Loss on the Standby Database.

Starting with 10g it is possible to start Real-Time Apply with Physical and Logical Standby Databases. With Real-Time Apply, the Redo is applied to the Standby Database from the Standby RedoLog instead of waiting until an Archive Log is created. So Standby Redo Logs are required for Real-Time Apply.


2. How Do I create Standby Redo Logs ??
------------------------------------------------------------

Standby Redo Logs are additional Redo Log Groups. If you were to query the V$LOGFILE view on the Standby Database, the output would typically look something like this:

   SQL> select * from v$logfile;

   GROUP# STATUS  TYPE    MEMBER
   ------ ------- ------- ------------------------------------
        1          ONLINE C:\ORACLE\ORADATA\STANDBY\REDO01.LOG
        2          ONLINE C:\ORACLE\ORADATA\STANDBY\REDO02.LOG
        3          ONLINE C:\ORACLE\ORADATA\STANDBY\REDO03.LOG

Before you add the Standby Redo Logs to your Standby Database, verify the number of maximum Logfile Groups and Logfile Members. If you don't remember these values, you can look at the CREATE CONTROLFILE script. To get such a
script, run the following command:

   SQL> alter database backup controlfile to trace;

This will create a *.TRC-file in your UDUMP Directory. When you edit or view this newly created file, you will find a script to recreate the Controlfile.

Notice the following relevant entries:

    MAXLOGFILES   8
    MAXLOGMEMBERS 3

In this example you can create a maximum of eight (8) Logfile groups.  Each group can contain a maximum of three (3) Members.

Now we can add Standby Redo Log files to the Standby Database (of course, the Standby Database must be in the MOUNT State):

   SQL> alter database add standby logfile group 4 ('C:\ORACLE\ORADATA\STANDBY\STBY04.LOG') SIZE 5M;

You can also add any further members to each group:

   SQL> alter database add standby logfile member 'C:\ORACLE\ORADATA\STANDBY\STBY14.LOG' to group 4;

Please keep in mind that for the RFS process to be able to write into a Standby Redo Log, the filesize of the Standby Redo Log MUST be equal to that of the current Online Redo Log of the Primary Database!!
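
You can verify what you have created from V$STANDBY_LOG, e.g.:

   SQL> select group#, thread#, bytes/1024/1024 as mb, status
          from v$standby_log;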

3. Limitations to Standby Redo Logs
----------------------------------------------

In an Oracle 9i/10g Data Guard environment, the RFS process on the Standby Database receives the data from the Primary and writes it to disk (either Standby Redo Logs or Archived Redo Logs).

If you consider using Standby Redo Logs, you must make certain that they are the same
size as the Online Redo Logs. If you have Online Redo Logs of different sizes, you have
to create corresponding Standby Redo Logs; the RFS process won't attach a Standby Redo
Log whose size differs from that of the Online Redo Log. It is recommended to have at
least one more Standby Redo Log Group than you have Online Redo Log Groups per thread
and size (a rule of thumb is sketched below).

Standby Redo Logs are filled with the same information that is written to the Online
Redo Logs of the Primary Database. Therefore, only the LGWR can provide this information
to the standby RFS process, so Standby Redo Logs will only be of benefit if you set LGWR
as the transmitter on the Primary Database in the LOG_ARCHIVE_DEST_n initialization
parameter.

The RFS process always tries to allocate the next available Standby RedoLog, so it is
possible that you encounter switches between only two Standby RedoLogs, although you
created many more of them. There is no rota defined here as there is for Online
RedoLogs. This is reported in Bug 2722195.

NOTE : Starting with 10.2.0, even the ARCH is able to write into Standby RedoLogs.

4. Differences in the Log Apply Services when using Standby Redo Logs
--------------------------------------------------------------------

If you do not have Standby Redo Logs, an Archived Redo Log is created by the RFS process,
and when it has completed, this Archived Redo Log is applied to the Standby Database by
the MRP (Managed Recovery Process), or by Logical Apply in Oracle 10g when using a
Logical Standby. An open (not fully written) ArchiveLog file cannot be applied on the
Standby Database and will not be used in a Failover situation. This causes a certain
data loss.

If you have Standby Redo Logs, the RFS process will write into the Standby Redo Log as mentioned above and when a log switch occurs, the Archiver Process of the Standby Database will archive this Standby Redo Log to an Archived Redo Log,
while the MRP process applies the information to the Standby Database.  In a Failover situation, you will also have access to the information already written in the Standby Redo Logs, so the information will not be lost.

Starting with Oracle 10g you also have the option to use Real-Time Apply with Physical
and Logical Standby Databases. When using Real-Time Apply, redo data is applied directly
from the Standby RedoLogs. Real-Time Apply is also not able to apply redo from partially
filled ArchiveLogs if there are no Standby RedoLogs. So Standby RedoLogs are mandatory
for Real-Time Apply.

NOTE : In 12c, MRP defaults to REAL-TIME APPLY mode.

Default standby recovery in real-time apply mode:
      SQL> alter database recover managed standby database disconnect;
To start MRP in non-real-time apply mode (in 12c):
      SQL> alter database recover managed standby database using archived logfile disconnect;

5. Standby RedoLogs and the Data Guard Broker
---------------------------------------------

If there is an active Data Guard Broker configuration on this Data Guard environment,
the following error can be raised in the Data Guard Broker configuration if the
configured Transport Method (LogXptMode) is 'SYNC' or 'ASYNC' and there are no Standby
RedoLogs configured:

 Warning: ORA-16809: multiple warnings detected for the database

In the corresponding DRC.LOG-Files and the Database Status you can find:

 ORA-16789: standby redo logs not configured

-> So please add corresponding Standby RedoLogs to the Standby Database. By default the Data Guard Broker tries to start Log Apply Services in Real-Time Apply mode, which is not possible without Standby RedoLogs. Note that starting with Oracle 11g the ARCH Log Transport Method is deprecated.

Happy Learning..!!

Cheers..!!

Friday, 15 January 2016

What if your listener goes into a hung state?

There are a few scenarios in which your listener can go into a hung state. In such cases we cannot even check the status of the listener; if we issue any of the commands like start, stop or status, it won't progress further.

In that scenario, we can kill the listener process at the OS level and restart the listener, as sketched below.
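
A typical sequence on Unix/Linux (listener name and PID are illustrative):

   $ ps -ef | grep tnslsnr        # find the process id of the hung listener
   $ kill -9 <pid>                # force-kill it
   $ lsnrctl start LISTENER       # restart the listener
   $ lsnrctl status LISTENER      # confirm it is up again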


Cheers..!!