Tuesday, August 12, 2014

Using the AWR History Tables to compare performance - Part 2

In Part 1 of this post, we saw how to use the AWR history tables to compare the top wait events between two different time periods.
We will now use the history tables to track down and identify problem SQL statements.
Suppose we have been informed that there was an I/O performance issue early in the morning, around 4.30 AM …
1) We obtain the SNAP_ID for the period in question
SQL> select snap_id,begin_interval_time,end_interval_time
from dba_hist_snapshot
where to_char(begin_interval_time,'DD-MON-YYYY HH24:MI')='28-JUL-2009 04:00';
   SNAP_ID BEGIN_INTERVAL_TIME
---------- ---------------------------------------------------------------------------
END_INTERVAL_TIME
---------------------------------------------------------------------------
     10951 28-JUL-09 04.00.08.054 AM
28-JUL-09 05.00.58.732 AM

2) We then obtain the Top 5 SQL statements executing during that period. We can sort on a number of columns such as disk reads, buffer gets, I/O waits, CPU time, etc. Here we sort by disk reads to find the top 5 SQL statements with the most disk reads.
SQL> select * from
     (select
        sql.sql_id c1,
        sql.buffer_gets_delta c2,
        sql.disk_reads_delta c3,
        sql.iowait_delta c4
      from
        dba_hist_sqlstat sql,
        dba_hist_snapshot s
      where
        s.snap_id = sql.snap_id
      and
        s.snap_id = 10951
      order by
        c3 desc)
     where rownum < 6;
C1                    C2         C3         C4
------------- ---------- ---------- ----------
gyvak7ftvukcy    6131099     216469  613703400
7vk0vmj5hubqt   16027977     157384  484194526
0ay748ut6y71y     293922     142957  852668499
0utjstq2kntvx   13269295     100043  299328212
ahx483x819uu1   23584192      81361  797258856

3) Now that we have the SQL ID, we will obtain the SQL Query for that SQL ID
SQL> select sql_text from dba_hist_sqltext
where sql_id='gyvak7ftvukcy';
SQL_TEXT
--------------------------------------------------------------------------------
select /*NORULE */ 'DATAPOINT extents_left ' || ' ' ||
nvl(min(a.MAXEXTS - a.EXTENTS), 111) || CHR(10) ||
'DATAPOINT extents_left_pct' || ' ' ||
round(nvl(min(round(a.MAXEXTS - a.EXTENTS) * 100 / a.MAXEXTS), 100), 0) bpb
from (select ds.header_file file#,ds.header_block block#,
ds.extents,ds.max_extents maxexts,st.ts#,su.user#
from dba_segments ds,sys.ts$ st,sys.user$ su where
st.name=ds.tablespace_name and
su.name=ds.owner
and segment_type not in ('SPACE HEADER','CACHE') ) a,
……………..
…………………
4) We can then examine a single SQL statement over time and look for performance trends. The output below shows that between 01:00 and 03:00 the I/O waits are highest for this particular query, so we can isolate the problem further by investigating what other concurrent activity is happening during the same time interval.
SQL> select
       s.snap_id,
       to_char(s.begin_interval_time,'HH24:MI') c1,
       sql.executions_delta c2,
       sql.buffer_gets_delta c3,
       sql.disk_reads_delta c4,
       sql.iowait_delta c5,
       sql.apwait_delta c6,
       sql.ccwait_delta c7,
       sql.cpu_time_delta c8,
       sql.elapsed_time_delta c9
     from
       dba_hist_sqlstat sql,
       dba_hist_snapshot s
     where
       s.snap_id = sql.snap_id
     and to_char(s.begin_interval_time,'DD-MON-YYYY')='28-JUL-2009'
     and
       sql.sql_id='gyvak7ftvukcy'
     order by 1;
   SNAP_ID C1            C2         C3         C4         C5         C6         C7         C8         C9
---------- ----- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------
     10947 00:00          2     294862     100954  580304536          0       7601   21400306  597529767
     10948 01:00          1     148163      50573 1284907667          0         39   10410248 1295096968
     10949 02:00          1     146961      71551 1313801418          0        177   11410211 1324273555
     10950 03:00          1     146961      71485 1249378673          0         74   11980186 1259394083
     10951 04:00          2     293922     142957  852668499          0          0   19720409  872476253
     10952 05:00          2     259310     115831  576725343          0         52   14970399  590388211
     10953 06:00          1     181574      77533  482981819          0          0   16200248  495323735
     10954 07:00          2     293922     142096  608989402          0         30   20550363  625696011
     10955 08:00          2     237469      94186  522814616          0          0   10070410  533824217
     10956 09:00          1     203949     127931 1022499631          0          0   15140178 1039516623
     10957 10:00          2     159078      87453  570435749          0          0    6830445  580726055
     10958 11:00          1     282066     134766 1071336554          0          0   15960187 1089728917
     10959 12:00          2     293922     142264  494751879          0         66   20480362  511719534
     10960 13:00          1     148228      69950  293789050          0          0    9280086  302702194
     10961 14:00          2     293922     141880  719430817          0         40   18620390  737121746

15 rows selected.
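To isolate the problem further, the same DBA_HIST_SQLSTAT join used in step 2 can be restricted to the 01:00-03:00 snapshots (10948-10950 here) and sorted by I/O wait time to see what else was running concurrently. This is only a sketch along the lines of the query above; adjust the snapshot range for your own interval.

SQL> select * from
     (select
        sql.sql_id,
        sql.disk_reads_delta,
        sql.iowait_delta
      from
        dba_hist_sqlstat sql,
        dba_hist_snapshot s
      where
        s.snap_id = sql.snap_id
      and s.snap_id between 10948 and 10950
      order by sql.iowait_delta desc)
     where rownum < 6;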

Using the AWR History Tables to compare performance

Using the DBA_HIST AWR history tables, we can compare the top wait events between different days or different time periods.
Let us assume we find that batch jobs that run at night, particularly between 1 and 2 AM, are experiencing performance issues.
In this case we are comparing the performance of a particular database on the 20th and 21st of July for the time period 01:00 to 02:00.
SQL> select snap_id,to_char(begin_interval_time,'DD-MON-YY HH24:MI:SS') "Runtime"
     from dba_hist_snapshot
     where trunc(begin_interval_time)='21-JUL-2009'
     order by snap_id;

   SNAP_ID Runtime
---------- ------------------
10781 21-JUL-09 00:00:14
10782 21-JUL-09 01:01:01
10783 21-JUL-09 02:00:27
10784 21-JUL-09 03:00:42
……..
Since the snapshots are collected every hour, for the same time period on the previous day we subtract 24 from the snap_ids - so the snap_ids for the 20th of July are 10758 and 10759.
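Alternatively (a sketch, not how the snap_ids above were derived), the previous day's snapshots for the same window can simply be looked up instead of being computed by subtraction:

SQL> select snap_id,to_char(begin_interval_time,'DD-MON-YY HH24:MI:SS') "Runtime"
     from dba_hist_snapshot
     where trunc(begin_interval_time) = to_date('20-JUL-2009','DD-MON-YYYY')
     order by snap_id;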
select * from
(select event, waits "Waits", time "Wait Time (s)", pct*100 "Percent of Total", waitclass "Wait Class"
from (select e.event_name event, e.total_waits - nvl(b.total_waits,0) waits,
(e.time_waited_micro - nvl(b.time_waited_micro,0))/1000000 time
, (e.time_waited_micro - nvl(b.time_waited_micro,0))/(select sum(e1.time_waited_micro - nvl(b1.time_waited_micro,0))
from dba_hist_system_event b1 , dba_hist_system_event e1
where b1.snap_id(+) = b.snap_id and e1.snap_id = e.snap_id and b1.dbid(+) = b.dbid
and e1.dbid = e.dbid and b1.instance_number(+) = b.instance_number
and e1.instance_number = e.instance_number
and b1.event_id(+) = e1.event_id
and e1.total_waits > nvl(b1.total_waits,0)
and e1.wait_class <> 'Idle'
) pct
, e.wait_class waitclass
from
dba_hist_system_event b ,
dba_hist_system_event e
where b.snap_id(+) = &pBgnSnap
and e.snap_id = &pEndSnap
and b.event_id(+) = e.event_id
and e.total_waits > nvl(b.total_waits,0)
and e.wait_class <> 'Idle'
order by waits desc
)
where rownum < 11)
;
Note: we are filtering the rows to display only the Top 10 Wait Events
Snap IDs 10758 and 10759 (20th July)
EVENT                                         Waits Wait Time (s) Percent of Total Wait Class
---------------------------------------- ---------- ------------- ---------------- --------------------
control file sequential read                4803994     79.589894        2.4698686 System I/O
PX qref latch                               1451898      3.587572       .111331112 Other
db file sequential read                       57463    1914.86419       59.4229078 User I/O
SQL*Net message to client                     52040       .077309       .002399087 Network
db file scattered read                         9169    223.497921        6.9356858 User I/O
log file parallel write                        6692    371.933451       11.5420025 System I/O
SQL*Net break/reset to client                  6610       3.75458       .116513778 Application
log file sync                                  3175    211.309884       6.55746127 Commit
direct path read                               2077      1.349465       .041877191 User I/O
control file parallel write                    1857    112.049356        3.4771649 System I/O

10 rows selected.
Snap IDs 10782 and 10783 (21st July)
EVENT                                         Waits Wait Time (s) Percent of Total Wait Class
---------------------------------------- ---------- ------------- ---------------- --------------------
control file sequential read                6006299    101.375645       .512584025 System I/O
SQL*Net message to client                   1045461       1.16951       .005913374 Network
db file sequential read                      312287    7147.10849       36.1378085 User I/O
log file sync                                 86597    8044.46297       40.6750873 Commit
log file parallel write                       62614    3257.61227       16.4714119 System I/O
SQL*Net more data from client                 25708      6.923053        .03500492 Network
SQL*Net break/reset to client                 24824    361.693308       1.82882399 Application
db file scattered read                        18289    178.619015       .903148421 User I/O
SQL*Net more data to client                   14721       .531947       .002689675 Network
db file parallel write                         3128    310.442873       1.56968725 System I/O

10 rows selected.


Looking at this output for the same time period on the two days, we find that on the 21st of July the top wait events are all I/O related. The wait event "log file sync" is significantly higher on the second day than on the first, and "db file sequential read" is also significantly higher on the second day.
We can use this information to quickly triage the problem and make further checks.
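For example, one such check (a hedged sketch, not part of the original checklist) is to compare the commit and redo activity between the two periods using DBA_HIST_SYSSTAT, since a jump in "log file sync" waits usually goes hand in hand with a higher commit rate. The snapshot IDs are the end snapshots identified above, and the query assumes consecutive hourly snapshots so that the delta can be taken against the preceding snapshot:

SQL> select e.snap_id,
            e.stat_name,
            e.value - b.value delta
     from dba_hist_sysstat b, dba_hist_sysstat e
     where b.snap_id = e.snap_id - 1
     and b.dbid = e.dbid
     and b.instance_number = e.instance_number
     and b.stat_id = e.stat_id
     and e.snap_id in (10759, 10783)
     and e.stat_name in ('user commits', 'redo size')
     order by e.stat_name, e.snap_id;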

Show the full name and path of the trace file for the current session

rem *********************************************************** 
rem
rem File: CurrentSessionTraceStatus.sql 
rem Description: Show the full name and path of the trace file for the current session 
rem   
rem From 'Oracle Performance Survival Guide' by Guy Harrison
rem  Chapter 3 Page 55
rem  ISBN: 978-0137011957
rem  See www.guyharrison.net for further information
rem  
rem  This work is in the public domain NSA 
rem   
rem
rem ********************************************************* 


SELECT s.sql_trace, s.sql_trace_waits, s.sql_trace_binds,
          traceid, tracefile
     FROM v$session s JOIN v$process p ON (p.addr = s.paddr)
    WHERE audsid = USERENV ('SESSIONID')
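As a quick usage note (my own addition, not part of the book script): tracing can be switched on for the current session with DBMS_SESSION before re-running the query above, which should then show SQL_TRACE enabled along with the trace file name.

SQL> exec DBMS_SESSION.session_trace_enable(waits => TRUE, binds => FALSE);
-- re-run the query above, then switch tracing off again
SQL> exec DBMS_SESSION.session_trace_disable;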

Time model unioned with wait data to show waits combined with CPU timings

rem *********************************************************** 
rem
rem File: timeModelSimple.sql 
rem Description: Time model unioned with wait data to show waits combined with CPU timings 
rem   
rem From 'Oracle Performance Survival Guide' by Guy Harrison
rem  Chapter 3 Page 73
rem  ISBN: 978-0137011957
rem  See www.guyharrison.net for further information
rem  
rem  This work is in the public domain NSA 
rem   
rem
rem ********************************************************* 


set lines 100
set pages 10000
set echo on 
column total_waits format  999999999


SELECT   event, total_waits,
         ROUND (time_waited_micro / 1000000) AS time_waited_secs,
         ROUND (time_waited_micro * 100 / 
            SUM (time_waited_micro) OVER (),2) AS pct_time
    FROM (SELECT event, total_waits, time_waited_micro
            FROM v$system_event
           WHERE wait_class <> 'Idle'
          UNION
          SELECT stat_name, NULL, VALUE
            FROM v$sys_time_model
           WHERE stat_name IN ('DB CPU', 'background cpu time'))
ORDER BY 3 DESC;

Non-idle wait times sorted by time waited

rem *********************************************************** 
rem
rem File: topWaits.sql 
rem Description: Non-idle wait times sorted by time waited
rem   
rem From 'Oracle Performance Survival Guide' by Guy Harrison
rem  Chapter 3 Page 70
rem  ISBN: 978-0137011957
rem  See www.guyharrison.net for further information
rem  
rem  This work is in the public domain NSA 
rem   
rem
rem ********************************************************* 


/* Formatted on 2008/08/15 13:37 (Formatter Plus v4.8.7) */
SET lines 100
SET pages 10000
COLUMN wait_class format a12
COLUMN event format a30
COLUMN total_waits format 999999
COLUMN total_us format 999999999
COLUMN pct_time format 99.99
COLUMN avg_us format 999999.99
SET echo on

SELECT   wait_class, event, total_waits AS waits,
         ROUND (time_waited_micro / 1000) AS total_ms,
         ROUND (time_waited_micro * 100 / SUM (time_waited_micro) OVER (),
                2
               ) AS pct_time,
         ROUND ((time_waited_micro / total_waits) / 1000, 2) AS avg_ms
    FROM v$system_event
   WHERE wait_class <> 'Idle'
ORDER BY time_waited_micro DESC;

Example of a login trigger that activates SQL trace

rem *********************************************************** 
rem
rem File: loginTrigger.sql 
rem Description: Example of a login trigger that activates SQL trace
rem   
rem From 'Oracle Performance Survival Guide' by Guy Harrison
rem  Chapter 3 Page 58
rem  ISBN: 978-0137011957
rem  See www.guyharrison.net for further information
rem  
rem  This work is in the public domain NSA 
rem   
rem
rem ********************************************************* 


CREATE OR REPLACE TRIGGER SYSTEM.trace_login_trigger
AFTER LOGON
ON DATABASE
BEGIN
IF SYS_CONTEXT ('USERENV', 'MODULE')  LIKE '%TOAD%'
   THEN
   
      DBMS_SESSION.session_trace_enable (waits  => TRUE,
                                             binds  => FALSE);
      EXECUTE IMMEDIATE 'alter session set tracefile_identifier=TOAD';

   END IF;
END;
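Once the trigger is in place, a quick way to confirm it fired for matching sessions (my own sanity check, not from the book) is to look at the SQL_TRACE column in V$SESSION:

SELECT sid, serial#, username, module, sql_trace
  FROM v$session
 WHERE module LIKE '%TOAD%';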

OPSG PKG Installation


Many scripts can be run without any installation, provided that the user has access to the V$ views.  However, a few require specialized views and need the OPSG_PKG package to be installed.  To install the package:

1. Open a command prompt within the "install" directory.
2. Execute either install_opsg.bat (Windows) or install_opsg.sh (Linux/Unix).
3. Respond to the prompts.

Here is an example session:

C:\tmp\opsg\install>install_opsg

C:\tmp\opsg\install>sqlplus /nolog @install_opsg

SQL*Plus: Release 11.1.0.7.0 - Production on Thu Sep 10 10:55:02 2009

Copyright (c) 1982, 2008, Oracle.  All rights reserved.

Enter password for the (new) OPSG user:opsg
Enter SYS password:
Enter TNSNAMES entry:g11r2ga

The script creates a user OPSG which has the privileges needed to run all of the scripts.  You can modify the script if you want to install a different user.  You do need the SYS password to install the user.

Top 10 cached sql statements by elapsed time

rem *********************************************************** 
rem
rem File: topsql1.sql 
rem Description: Top 10 cached sql statements by elapsed time 
rem   
rem From 'Oracle Performance Survival Guide' by Guy Harrison
rem  Chapter 3 Page 41
rem  ISBN: 978-0137011957
rem  See www.guyharrison.net for further information
rem  
rem  This work is in the public domain NSA 
rem   
rem
rem ********************************************************* 


/* Formatted on 2008/07/19 14:21 (Formatter Plus v4.8.7) */
SELECT sql_id,child_number,sql_text, elapsed_time 
  FROM (SELECT sql_id, child_number, sql_text, elapsed_time, cpu_time,
               disk_reads,
               RANK () OVER (ORDER BY elapsed_time DESC) AS elapsed_rank
          FROM v$sql)
 WHERE elapsed_rank <= 10;

Monday, August 11, 2014

ORA-01017: invalid username/password; logon denied error during standby database creation


I faced a peculiar error while configuring Data Guard today. After performing all the steps, when I tried to connect to the target and auxiliary databases I got the error below:
[oracle@dbserver2 dbs]$ rman target sys/sys123@proddb auxiliary sys/sys123@standb
 
Recovery Manager: Release 10.2.0.1.0 - Production on Wed Apr 25 20:38:34 2012
 
Copyright (c) 1982, 2005, Oracle.  All rights reserved.
 
RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-00554: initialization of internal recovery manager package failed
RMAN-04005: error from target database:
ORA-01017: invalid username/password; logon denied
To resolve this I tried changing the SYS user password several times, recreated the password file and copied it to the standby, but nothing worked.

Finally I resolved it by checking the following parameters, which were the culprits in my case:

REMOTE_LOGIN_PASSWORDFILE
REMOTE_OS_AUTHENT

By default the above parameters should be set to EXCLUSIVE and FALSE respectively. But for some reason they were set to NONE and TRUE in my case. I changed them back to EXCLUSIVE and FALSE and the issue was resolved.

Note: changing these parameter values requires a restart of the database.
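A rough sketch of the checks and the fix described above (both parameters are static, hence scope=spfile and the restart):

SQL> show parameter remote_login_passwordfile
SQL> show parameter remote_os_authent

SQL> alter system set remote_login_passwordfile=EXCLUSIVE scope=spfile;
SQL> alter system set remote_os_authent=FALSE scope=spfile;
SQL> shutdown immediate
SQL> startup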

So, whenever you face a similar invalid username/password issue during standby creation, check the above parameters, change the values if needed, and try again.

Bug#6035495 in 10g standby environment


I hit a bug in one of my Oracle 10g (10.2.0.3) standby databases. The details are as follows:
Error Number: ORA-19909
Error Message: datafile 1 belongs to an orphan incarnation
Cause: due to bug 6035495.
Action: Rebuild the standby database.
Permanent Fix: Apply the 10.2.0.4 patch to both the primary and standby databases.
For more information refer to MetaLink document 6035495.8.
P.S: The above error may also occur when an incomplete recovery was performed on the primary database.
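As an additional diagnostic (my own suggestion, not from the original note), the incarnation history can be compared on the primary and the standby to confirm whether the standby really is on an orphan incarnation:

SQL> select incarnation#, resetlogs_change#, resetlogs_time, status
     from v$database_incarnation
     order by incarnation#;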
Also, the following are the main things that are fixed and added when upgrading to the 10.2.0.4 patch set.

New additions

It synchronizes CPU performance with database performance (this feature is not usually included in every patch set, but this patch inherits the fixes from all patch sets up to January 2008 and includes them as a whole).

Oracle Configuration Manager 10.2.7.1 - This OCM will create a schema Oracle_OCM in the database, which will collect the database configuration information on a monthly basis.


Bug Fixes:

• ORA-12996 / ORA-12998 / corruption from ALTER TABLE DROP UNUSED COLUMNS CHECKPOINT
• ORA-600 [keltnfy-ldminit] - this ORA-600 error happens when resources are not available for Oracle. (In our case, this error has occurred only a couple of times. We can confirm the exact root cause only based on the number of occurrences; with so few occurrences, we can't determine which resource is not enough for Oracle.)
• Slow performance of Data Pump during export/import
• ALTER TABLE DDL can fail with ORA-2050 / ORA-60 if the table has
  - a TRIGGER using a %TYPE variable of another TABLE over a DBLINK, or
  - a materialized view (where the master table is the target table of the ALTER TABLE).
• The performance of some queries that involve large sorts can be CPU intensive due to the memory allocation pattern used by the sort.

Oracle Data Guard Concept

Oracle Data Guard is one of the most effective and comprehensive data availability, data protection and disaster recovery solutions available today for enterprise data.
Oracle Data Guard is the management, monitoring, and automation software infrastructure that creates, maintains, and monitors one or more standby databases to protect enterprise data from failures, disasters, errors, and corruptions. 
Data Guard maintains these standby databases as transactionally consistent copies of the production database. These standby databases can be located at remote disaster recovery sites thousands of miles away from the production data center, or they may be located in the same city, the same campus, or even in the same building. If the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, thus minimizing the downtime associated with the outage and preventing any data loss.
Available as a feature of the Enterprise Edition of the Oracle Database, Data Guard can be used in combination with other Oracle High Availability (HA) solutions such as Real Application Clusters (RAC), Oracle Flashback, and Oracle Recovery Manager (RMAN), to provide a very high level of data protection and data availability that is unprecedented in the industry.
The following diagram presents a high-level overview of Oracle Data Guard.


Overview of Oracle Data Guard Functional Components

Data Guard Configuration:
A Data Guard configuration consists of one production (or primary) database and up to nine standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on where the databases are located, provided that they can communicate with each other. However, for disaster recovery, it is recommended that the standby databases are hosted at sites that are geographically separated from the primary site.
Redo Apply and SQL Apply:
A standby database is initially created from a backup copy of the primary database. Once created, Data Guard automatically maintains the standby database as a transactionally consistent copy of the primary database by transmitting primary database redo data to the standby system and then applying the redo logs to the standby database.
Data Guard provides two methods to apply this redo data to the standby database and keep it transactionally consistent with the primary, and these methods correspond to the two types of standby databases supported by Data Guard:
  • Redo Apply, used for physical standby databases
  • SQL Apply, used for logical standby databases
A physical standby database provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The database schemas, including indexes, are the same. The Redo Apply technology applies redo data on the physical standby database using standard Oracle media recovery techniques.
A logical standby database contains the same logical information as the production database, although the physical organization and structure of the data can be different. The SQL apply technology keeps the logical standby database synchronized with the primary database by transforming the data in the redo logs received from the primary database into SQL statements and then executing the SQL statements on the standby database. This makes it possible for the logical standby database to be accessed for queries and reporting purposes at the same time the SQL is being applied to it. Thus, a logical standby database can be used concurrently for data protection and reporting.
Role Management:
Using Data Guard, the role of a database can be switched from a primary role to a standby role and vice versa, ensuring no data loss in the process and minimizing downtime. There are two kinds of role transitions - a switchover and a failover. A switchover is a role reversal between the primary database and one of its standby databases. This is typically done for planned maintenance of the primary system. During a switchover, the primary database transitions to a standby role and the standby database transitions to the primary role. The transition occurs without having to re-create either database. A failover is an irreversible transition of a standby database to the primary role. This is only done in the event of a catastrophic failure of the primary database, which is assumed to be lost; for the old primary to be used again in the Data Guard configuration, it must be re-instantiated as a standby from the new primary.
Data Guard Protection Modes:
In some situations, a business cannot afford to lose data at any cost. In other situations, some applications require maximum database performance and can tolerate a potential loss of data. Data Guard provides three distinct modes of data protection to satisfy these varied requirements:
  • Maximum Protection— This mode offers the highest level of data protection. Data is synchronously transmitted to the standby database from the primary database and transactions are not committed on the primary database unless the redo data is available on at least one standby database configured in this mode. If the last standby database configured in this mode becomes unavailable, processing stops on the primary database. This mode ensures no-data-loss.
  • Maximum Availability— This mode is similar to the maximum protection mode, including zero data loss. However, if a standby database becomes unavailable (for example, because of network connectivity problems), processing continues on the primary database. When the fault is corrected, the standby database is automatically resynchronized with the primary database.
  • Maximum Performance— This mode offers slightly less data protection on the primary database, but higher performance than maximum availability mode. In this mode, as the primary database processes transactions, redo data is asynchronously shipped to the standby database. The commit operation of the primary database does not wait for the standby database to acknowledge receipt of redo data before completing write operations on the primary database. If any standby destination becomes unavailable, processing continues on the primary database and there is little effect on primary database performance.

Data Guard Broker:
The Oracle Data Guard Broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. All management operations can be performed either through Oracle Enterprise Manager, which uses the Broker, or through the Broker’s specialized command-line interface (DGMGRL).
Data Guard Architecture Diagram
The following diagram shows an overview of the Oracle Data Guard architecture.
What’s New in Oracle Data Guard 10g Release 2?

This section highlights some of the key new features of Oracle Data Guard 10g Release 2. For details on these features, please refer to the following:
Fast-Start Failover

This capability allows Data Guard to automatically and quickly fail over to a previously chosen, synchronized standby database in the event of loss of the primary database, without requiring any manual steps to invoke the failover and without incurring any data loss. Following a fast-start failover, once the old primary database is repaired, Data Guard automatically reinstates it as a standby database. This restores high availability to the Data Guard configuration.
Improved Redo Transmission

Several enhancements have been made in the redo transmission architecture to make sure redo data generated on the primary database can be transmitted as quickly and efficiently as possible to the standby database(s). 
Easy conversion of a physical standby database to a reporting database

A physical standby database can be activated as a primary database, opened read/write for reporting purposes, and then flashed back to a point in the past to be easily converted back to a physical standby database. At this point, Data Guard automatically synchronizes the standby database with the primary database. This allows the physical standby database to be utilized for read/write reporting and cloning activities. 
Automatic deletion of applied archived redo log files in logical standby databases

Archived logs, once they are applied on the logical standby database, are automatically deleted, reducing storage consumption on the logical standby and improving Data Guard manageability. Physical standby databases have already had this functionality since Oracle Database 10g Release 1, with Flash Recovery Area. 
Fine-grained monitoring of Data Guard configurations
Oracle Enterprise Manager has been enhanced to provide granular, up-to-date monitoring of Data Guard configurations, so that administrators may make an informed and expedient decision regarding managing this configuration. 
What’s New in Oracle Data Guard 10g Release 1?
This section highlights some of the key new features of Oracle Data Guard 10g Release 1. For details on these features, please refer to the following:
General New Features:

Real Time Apply:
With this feature, redo data can be applied on the standby database (whether Redo Apply or SQL Apply) as soon as it has been written to a standby redo log (SRL). Prior releases of Data Guard required this redo data to be archived at the standby database in the form of archived logs before it could be applied.
The Real Time Apply feature allows standby databases to be closely synchronized with the primary database, enabling up-to-date and real-time reporting (especially for Data Guard SQL Apply). This also enables faster switchover and failover times, which in turn reduces planned and unplanned downtime for the business.
The impact of a disaster is often measured in terms of Recovery Point Objective (RPO - i.e. how much data a business can afford to lose in the event of a disaster) and Recovery Time Objective (RTO - i.e. how much time a business can afford to be down in the event of a disaster). With Oracle Data Guard, when Maximum Protection is used in combination with Real Time Apply, businesses get the benefits of both zero data loss and minimal downtime in the event of a disaster, which makes Oracle Data Guard the only solution available today with the best RPO and RTO benefits for a business.
Integration with Flashback Database:
Data Guard in 10g has been integrated with the Flashback family of features to bring the Flashback feature benefits to a Data Guard configuration.
One such benefit is human error protection. In Oracle9i, administrators could configure Data Guard with an apply delay to protect standby databases from possible logical data corruptions that occurred on the primary database. The side effects of such delays are that any reporting done on the standby database is done on old data, and switchover/failover is delayed because the accumulated logs have to be applied first. In Data Guard 10g, with the Real Time Apply feature, such delayed-reporting or delayed-switchover/failover issues do not exist, and if logical corruptions do end up affecting both the primary and standby database, the administrator may decide to use Flashback Database on both the primary and standby databases to quickly revert the databases to an earlier point in time to back out such user errors.
Another benefit that such integration provides is during failovers. In releases prior to 10g, following any failover operation, the old primary database must be recreated (as a new standby database) from a backup of the new primary database, if the administrator intends to bring it back in the Data Guard configuration. This may be an issue when the database sizes are fairly large, and the primary/standby databases are hundreds/thousands of miles away. However, in Data Guard 10g, after the primary server fault is repaired, the primary database may simply be brought up in mounted mode, “flashed back” (using flashback database) to the SCN at which the failover occurred, and then brought back as a standby database in the Data Guard configuration. No re-instantiation is required.
SQL Apply New Features:

Zero Downtime Instantiation:
A logical standby database can now be created from an online backup of the primary database, without shutting down or quiescing the primary database, as was required in prior releases. No shutdown of the primary system means production downtime is eliminated, and no quiesce means no waiting for quiescing to take effect and no dependence on Resource Manager.
Rolling Upgrades:
Oracle Database 10g supports database software upgrades (from Oracle Database 10g Patchset 1 onwards) in a rolling fashion, with near zero database downtime, by using Data Guard SQL Apply. The steps involve upgrading the logical standby database to the next release, running in a mixed mode to test and validate the upgrade, doing a role reversal by switching over to the upgraded database, and then finally upgrading the old primary database. While running in a mixed mode for testing purpose, the upgrade can be aborted and the software downgraded, without data loss. For additional data protection during these steps, a second standby database may be used.
By supporting rolling upgrades with minimal downtimes, Data Guard reduces the large maintenance windows typical of many administrative tasks, and enables the 24×7 operation of the business.
Additional Datatypes:
SQL Apply now supports the following additional data types.
  • NCLOB
  • LONG
  • LONG RAW
  • BINARY_FLOAT
  • BINARY_DOUBLE
  • IOTs (without overflows and without LOB columns)
This support for additional datatypes allows logical standby databases to recover and protect a wider variety of data, thus increasing the overall database protection and recovery options for Data Guard.
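Related to this, the tables that a logical standby still cannot maintain can be listed on the primary with the DBA_LOGSTDBY_UNSUPPORTED view (a simple check, not part of the original feature list):

SQL> select distinct owner, table_name
     from dba_logstdby_unsupported
     order by owner, table_name;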
Enterprise Manager / Data Guard Broker New Features:

RAC Support:
It is now possible to use the Data Guard Broker, and the Broker’s Command Line Interface (DGMGRL), as well as Enterprise Manager, to create and manage Data Guard configurations that contain RAC primary and RAC standby databases. In Oracle9i, such administration is possible only through SQL*Plus. In Data Guard 10g, Data Guard Broker interfaces with Oracle Clusterware such that it has control over critical operations during specific Data Guard state transitions, such as switchovers, failovers, protection mode changes and state changes.
Simplified Browser-based Interface
Administration of a Data Guard configuration can be done through the new streamlined browser-based HTML interface of Enterprise Manager that enables complete standby database lifecycle management. The focus of such streamlined administration is on:
  • Ease of use.
  • Management based on best practices.
  • Pre-built integration with other HA features.


Data Guard Benefits

  1. Disaster recovery and high availability: Data Guard provides an efficient and comprehensive disaster recovery and high availability solution. Automatic failover and easy-to-manage switchover capabilities allow quick role reversals between primary and standby databases, minimizing the downtime of the primary database for planned and unplanned outages.

  2. Complete data protection: A standby database also provides an effective safeguard against data corruptions and user errors. Storage level physical corruptions on the primary database do not propagate to the standby database. Similarly, logical corruptions or user errors that cause the primary database to be permanently damaged can be resolved. Finally, the redo data is validated at the time it is received at the standby database and further when applied to the standby database.

  3. Efficient utilization of system resources: A physical standby database can be used for backups and read-only reporting, thereby reducing the primary database workload and saving valuable CPU and I/O cycles. In Oracle Database 10g Release 2, a physical standby database can also be easily converted back and forth between being a physical standby database and an open read/write database. A logical standby database allows its tables to be simultaneously available for read-only access while they are updated from the primary database. A logical standby database also allows users to perform data manipulation operations on tables that are not updated from the primary database. Finally, additional indexes and materialized views can be created in the logical standby database for better reporting performance.

  4. Flexibility in data protection to balance availability against performance requirements: Oracle Data Guard offers the maximum protection, maximum availability, and maximum performance modes to help enterprises balance data availability against system performance requirements.

  5. Protection from communication failures: If network connectivity is lost between the primary and one or more standby databases, redo data cannot be sent from the primary to those standby databases. Once connectivity is re-established, the missing redo data is automatically detected by Data Guard and the necessary archive logs are automatically transmitted to the standby databases. The standby databases are resynchronized with the primary database, with no manual intervention by the administrator.

  6. Centralized and simple management: Data Guard Broker automates the management and monitoring tasks across the multiple databases in a Data Guard configuration. Administrators may use either Oracle Enterprise Manager or the Broker's own specialized command-line interface (DGMGRL) to take advantage of this integrated management framework.

  7. Integrated with the Oracle database: Data Guard is available as an integrated feature of the Oracle Database (Enterprise Edition) at no extra cost.

Configuration of Oracle 10g Data Guard


Primary Database Requirements for Data Guard:

FORCE LOGGING must be enabled:
SQL> select force_logging from v$database;
SQL> alter database force logging;

ARCHIVELOG mode and automatic archiving must be enabled (if not, see the sketch after this list):
SQL> archive log list

MAXLOGFILES >= (2 * Current Redo Log Groups) + 1:
SQL> select records_used "Current Groups", records_total "Max Groups" from v$controlfile_record_section where type = 'REDO LOG';
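If the database is not yet in ARCHIVELOG mode, a minimal sketch of enabling it (this requires a short outage to restart the database in mount mode):

SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;
SQL> alter database open;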
listener.ora Additions:
Define the standby database SID on the standby site:
(SID_DESC=
 (SID_NAME=PROD2)
 (ORACLE_HOME=/pgms/oracle/product/v9204)
)
(in $ORACLE_HOME/network/admin/listener.ora)
tnsnames.ora Additions:
Define the standby database connect string on the primary site:
myserver_prod2 =
   (DESCRIPTION =
     (ADDRESS_LIST =
       (ADDRESS = (PROTOCOL = TCP)(HOST = 123.45.67.89)(PORT = 1521))   # whatever host IP has PROD2
     )
     (CONNECT_DATA = (SID = PROD2))
   )
(define myserver_prod and myserver_prod2 on both primary and standby sites for quick switchovers)
sqlnet.ora and /etc/oratab Additions:
Enable dead connection detection on the primary and standby sites:
sqlnet.expire_time=2
 (in $ORACLE_HOME/network/admin/sqlnet.ora)
Add the standby database’s entry to /etc/oratab on the standby site:
PROD2:/pgms/oracle/product/v9204:N
Standby Database Parameter File:
Create the initPROD2.ora parameter file to be used for the standby database (done from the primary database):
If your primary is using an spfile:
sqlplus "/ as sysdba"
SQL> create pfile='$ORACLE_HOME/dbs/initPROD2.ora' from spfile;
Else, if your primary is using a pfile:
cp -p $ORACLE_HOME/dbs/initPROD.ora $ORACLE_HOME/dbs/initPROD2.ora
Note: We will be modifying both the primary and standby parameter files to handle being in either the primary or the standby mode for quick switchovers.
Standby Database Parameters (changes in the copy of the primary's values):
Change pathnames, such as control_files, background_dump_dest, core_dump_dest, user_dump_dest, and audit_file_dest, and add:
# log_archive_dest = /orcl/oradata/PROD2/archivelogs
log_archive_dest_1 = 'LOCATION=/orcl/oradata/PROD2/archivelogs MANDATORY'  # for switchover
log_archive_dest_state_1 = ENABLE  # for switchover
log_archive_dest_2 = 'SERVICE=myserver_prod LGWR SYNC'  # for switchover
log_archive_dest_state_2 = ENABLE  # for switchover
standby_archive_dest = /orcl/oradata/PROD2/archivelogs
standby_file_management = AUTO  # or MANUAL for raw devices
remote_archive_enable = TRUE  # TRUE or RECEIVE, change RECEIVE to SEND on switchover
instance_name = PROD2
lock_name_space = PROD2  # use when primary and standby are on the same system; same as instance_name
fal_server = myserver_prod  # "fal" is Fetch Archive Log, for log gap resolution
fal_client = myserver_prod2
db_file_name_convert = ('/PROD/','/PROD2/')
log_file_name_convert = ('/PROD/','/PROD2/')
Primary Database Parameters (changes in the primary's values):
# log_archive_dest = /orcl/oradata/PROD/archivelogs
log_archive_dest_1 = 'LOCATION=/orcl/oradata/PROD/archivelogs MANDATORY'
log_archive_dest_state_1 = ENABLE
log_archive_dest_2 = 'SERVICE=myserver_prod2 LGWR SYNC'
log_archive_dest_state_2 = ENABLE
standby_archive_dest = /orcl/oradata/PROD/archivelogs  # for switchover
standby_file_management = AUTO  # for switchover; or MANUAL for raw devices
remote_archive_enable = TRUE  # TRUE or SEND, change SEND to RECEIVE on switchover
instance_name = PROD
lock_name_space = PROD  # use when primary and standby are on the same system; same as instance_name
fal_server = myserver_prod2  # for switchover
fal_client = myserver_prod  # for switchover
db_file_name_convert = ('/PROD2/','/PROD/')  # for switchover
log_file_name_convert = ('/PROD2/','/PROD/')  # for switchover
(If the primary uses an spfile, wait until after the standby database files are copied/created to make these parameter changes.)
Standby Database Datafiles, etc.
Create the standby control file from the primary database:
SQL> alter database create standby controlfile as '/orcl/oradata/PROD2/ctrl_PROD_01.ctl';
Shut down the primary database and copy or FTP its datafiles, redo log files, and the just-created standby parameter file and standby control file, to the standby site.
Copy the standby control file on the standby site to the other file names listed in the control_files init.ora parameter.
Create the standby’s password file, if needed, on the standby site:
$ orapwd file=$ORACLE_HOME/dbs/orapwPROD2  password=<sys password> entries=5
Reload the listener on the primary and standby sites:
$ lsnrctl reload
Standby Database Startup:
Start the standby database in nomount mode, create the spfile if wanted, mount the standby database, and change to managed recovery:
$ . oraenv   PROD2
$ sqlplus "/ as sysdba"
SQL> create spfile from pfile;
SQL> startup nomount
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect from session;
SQL> exit
Primary Database Startup:
If your primary is using an spfile, set the primary database parameters in the spfile as listed earlier. Sample "alter system" commands are shown below:
SQL> startup nomount
SQL> alter system reset log_archive_dest scope=spfile sid='*';
SQL> alter system set log_archive_dest_1 = 'LOCATION=/orcl/oradata/PROD/archivelogs MANDATORY' scope=spfile;
... etc
SQL> shutdown
Start up the primary database with the new parameters:
SQL> startup
Start archiving to the standby database by issuing a log switch:
SQL> alter system switch logfile;


Congratulations!!!  You now have a working standby database for your primary database






Add Standby Redo Log Groups to Standby Database

Create standby redo log groups on standby database (start with next group number; create one more group than current number of groups) after switching out of managed recovery mode:
$ sqlplus "/ as sysdba"
SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> select max(group#) maxgroup from v$logfile;
SQL> select max(bytes) / 1024 “size (K)” from v$log;
SQL> alter database add standby logfile group 4 ('/orcl/oradata/PROD2/stby_log_PROD_4A.rdo','/orcl/oradata/PROD2/stby_log_PROD_4B.rdo') size 4096K; … etc …
SQL> column member format a55
SQL> select vs.group#,vs.bytes,vl.member from v$standby_log vs,v$logfile vl where vs.group# = vl.group# order by vs.group#,vl.member;

Add Tempfile To Standby
Add a tempfile to the standby database for switchover or read-only access, then, switch back to managed recovery:
SQL> alter tablespace temp add tempfile '/data/oradata/PROD2/temp_PROD_01.dbf' size 400064K reuse;
SQL> alter database recover managed standby database disconnect from session;
SQL> select * from v$tempfile;
SQL> exit


Add Standby Redo Log Groups to Primary Database
Create standby logfile groups on the primary database for switchovers (start with next group number; create one more group than current number of groups):
$ sqlplus "/ as sysdba"
SQL> select max(group#) maxgroup from v$logfile;
SQL> select max(bytes) / 1024 “size (K)” from v$log;
SQL> alter database add standby logfile group 4 ('/orcl/oradata/PROD/stby_log_PROD_4A.rdo', '/orcl/oradata/PROD/stby_log_PROD_4B.rdo') size 4096K; … etc …
SQL> column member format a55
SQL> select vs.group#,vs.bytes,vl.member from v$standby_log vs, v$logfile vl where vs.group# = vl.group# order by vs.group#,vl.member;

Switch To Maximum Availability Protection Mode
Switch to the desired “maximum availability” protection mode on the primary database (from the default “maximum performance”):
SQL> select value from v$parameter where name = 'log_archive_dest_2';  -- must show LGWR SYNC
SQL> shutdown normal
SQL> startup mount
SQL> alter database set standby database to maximize availability;
SQL> alter database open;
SQL> select protection_mode from v$database;


Test Updates Propagating to Standby
Try some edits on the primary and check to see that the changes made it to the standby:
On the primary:
SQL> update spriden set spriden_first_name = 'James' where spriden_pidm = 1234 and spriden_change_ind is null;
SQL> commit;
SQL> alter system switch logfile;


On the standby (wait a few seconds first):
SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> select * from spriden where spriden_pidm = 1234 and spriden_change_ind is null;
SQL> alter database recover managed standby database disconnect from session;

Running Reports with a Standby
Set standby to Read Only to run reports:
SQL> alter database recover managed standby database cancel;
SQL> alter database open read only;
SQL> @myreport.sql
SQL> alter database recover managed standby database disconnect from session;

Shutdown and Startup for Standby Database
To shut down a standby database:
If in read-only access, switch back to managed recovery (after terminating any other active sessions):
SQL> alter database recover managed standby database disconnect from session;
Cancel managed recovery and shutdown:
SQL> alter database recover managed standby database cancel;
SQL> shutdown immediate

To start up a standby database:
SQL> startup nomount
SQL> alter database mount standby database;
SQL> alter database recover managed standby database disconnect from session;


Switchover – Swapping Primary and Standby
End all activities on the primary and standby database.
On the primary (switchover status should show “TO STANDBY”):
SQL> select database_role,switchover_status from v$database;
SQL> alter database commit to switchover to physical standby;
SQL> shutdown immediate
SQL> startup nomount
SQL> alter database mount standby database;
On the standby (switchover status should show “SWITCHOVER PENDING”):
SQL> select database_role,switchover_status from v$database;
SQL> alter database commit to switchover to primary;
SQL> shutdown normal
SQL> startup
On the primary:
SQL> alter database recover managed standby database disconnect from session;
On the standby:
SQL> alter system archive log current;
Change tnsnames.ora entry on all servers to swap the connect strings (myserver_prod and myserver_prod2).


Failover – Standby Becomes Primary

End all activities on the standby database.
May need to resolve redo log gaps (not shown here).
On the standby:
SQL> alter database recover managed standby database finish;
SQL> alter database commit to switchover to primary;
SQL> shutdown immediate
SQL> startup
Change tnsnames.ora entry on all servers to point the primary connect string to the standby database.
New standby needs to be created.  Old primary is no longer functional.
Monitoring Standby Database
select count(*) from v$archive_gap;  
This query detects gaps in the logs that have been received.  If any rows are returned by
this query then there is a gap in the sequence numbers of the logs that have been received.
This gap must be resolved before logs can be applied.

SELECT decode(count(*),0,0,1) FROM v$managed_standby
WHERE (PROCESS='ARCH' AND STATUS NOT IN ('CONNECTED'))
   OR (PROCESS='MRP0' AND STATUS NOT IN ('WAIT_FOR_LOG','APPLYING_LOG'))
   OR (PROCESS='RFS' AND STATUS NOT IN ('IDLE','RECEIVING'));
This query detects bad statuses.  When a bad status is present this query will return a “1”.
The 'ARCH' process should always be 'CONNECTED'. The 'MRP0' process should always be waiting for a log or applying a log, and when this is not true it will report the error in the status. The 'RFS' process exists when the primary is connected to the standby and should always be 'IDLE' or 'RECEIVING'.

SELECT DECODE(COUNT(DISTINCT PROCESS),3,0,1)
FROM v$managed_standby;
This query detects missing processes.  If we do not have exactly 3 distinct processes then there is a problem, and this query will return a “1”. 
The most likely process to be missing is the ‘RFS’ which is the connection to the Primary database. You must resolve the problem preventing the Primary from connecting to the Standby before this process will start running again.
Verify all STANDBY PROCESSES are running normally on the STANDBY database.

SELECT PROCESS,STATUS,RESETLOG_ID,SEQUENCE#,ACTIVE_AGENTS
FROM V$MANAGED_STANDBY ;
Good results from this query show all processes connected with normal statuses.

SELECT SEQUENCE#, FIRST_TIME, NEXT_TIME, APPLIED
FROM V$ARCHIVED_LOG
WHERE FIRST_TIME > TRUNC(SYSDATE)
ORDER BY SEQUENCE#;

Data Guard Related Views
V$DATABASE:
  PROTECTION_LEVEL: current protection mode setting.
  FS_FAILOVER_STATUS: synchronization status.
DBA_LOGSTDBY_UNSUPPORTED: unsupported tables.
DBA_LOGSTDBY_EVENTS: monitor transaction activity.
V$LOG: redo log changes.
V$MANAGED_STANDBY: recovery progress.
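For example, a quick status check against V$DATABASE (illustrative only):

SQL> select database_role, protection_mode, protection_level, fs_failover_status
     from v$database;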



Oracle Data Guard Concept


Oracle Data Guard is one of the most effective and comprehensive data availability, data protection and disaster recovery solutions available today for enterprise data.
Oracle Data Guard is the management, monitoring, and automation software infrastructure that creates, maintains, and monitors one or more standby databases to protect enterprise data from failures, disasters, errors, and corruptions. 
Data Guard maintains these standby databases as transitional consistent copies of the production database. These standby databases can be located at remote disaster recovery sites thousands of miles away from the production data center, or they may be located in the same city, same campus, or even in the same building. If the production database becomes unavailable because of a planned or an unplanned outage, Data Guard can switch any standby database to the production role, thus minimizing the downtime associated with the outage, and preventing any data loss.
Available as a feature of the Enterprise Edition of the Oracle Database, Data Guard can be used in combination with other Oracle High Availability (HA) solutions such as Real Application Clusters (RAC)Oracle Flashback and Oracle Recovery Manager (RMAN), to provide a very high level of data protection and data availability that is unprecedented in the industry.
The following diagram presents a hi-level overview of Oracle Data Guard.
  dg1






Overview of Oracle Data Guard Functional Components

Data Guard Configuration:
A Data Guard configuration consists of one production (or primary) database and up to nine standby databases. The databases in a Data Guard configuration are connected by Oracle Net and may be dispersed geographically. There are no restrictions on where the databases are located, provided that they can communicate with each other. However, for disaster recovery, it is recommended that the standby databases are hosted at sites that are geographically separated from the primary site.

Redo Apply and SQL Apply:
A standby database is initially created from a backup copy of the primary database. Once created, Data Guard automatically maintains the standby database as a transactional consistent copy of the primary database by transmitting primary database redo data to the standby system and then applying the redo logs to the standby database.
Data Guard provides two methods to apply this redo data to the standby database and keep it transactional consistent with the primary, and these methods correspond to the two types of standby databases supported by Data Guard.
  • Redo Apply, used for physical standby databases
  • SQL Apply, used for logical standby databases
A physical standby database provides a physically identical copy of the primary database, with on-disk database structures that are identical to the primary database on a block-for-block basis. The database schemas, including indexes are the same. The Redo Apply technology applies redoes data on the physical standby database using standard Oracle media recovery techniques. 
A logical standby database contains the same logical information as the production database, although the physical organization and structure of the data can be different. The SQL apply technology keeps the logical standby database synchronized with the primary database by transforming the data in the redo logs received from the primary database into SQL statements and then executing the SQL statements on the standby database. This makes it possible for the logical standby database to be accessed for queries and reporting purposes at the same time the SQL is being applied to it. Thus, a logical standby database can be used concurrently for data protection and reporting.

Role Management:
Using Data Guard, the role of a database can be switched from a primary role to a standby role and vice versa, ensuring no data loss in the process, and minimizing downtime. There are two kinds of role transitions – a switchover and a failover. A switchover is a role reversal between the primary database and one of its standby databases. This is typically done for planned maintenance of the primary system. During a switchover, the primary database transitions to a standby role and the standby database transitions to the primary role. The transition occurs without having to re-create either database. A failover is an irreversible transition of a standby database to the primary role. This is only done in the event of a catastrophic failure of the primary database, which is assumed to be lost and to be used again in the Data Guard configuration, it must be re-instantiated as a standby from the new primary.


Data Guard Protection Modes:
In some situations, a business cannot afford to lose data at any cost. In other situations, some applications require maximum database performance and can tolerate a potential loss of data. Data Guard provides three distinct modes of data protection to satisfy these varied requirements:
  • Maximum Protection— This mode offers the highest level of data protection. Data is synchronously transmitted to the standby database from the primary database and transactions are not committed on the primary database unless the redo data is available on at least one standby database configured in this mode. If the last standby database configured in this mode becomes unavailable, processing stops on the primary database. This mode ensures no-data-loss.
  • Maximum Availability— This mode is similar to the maximum protection mode, including zero data loss. However, if a standby database becomes unavailable (for example, because of network connectivity problems), processing continues on the primary database. When the fault is corrected, the standby database is automatically resynchronized with the primary database.
  • Maximum Performance— This mode offers slightly less data protection on the primary database, but higher performance than maximum availability mode. In this mode, as the primary database processes transactions, redo data is asynchronously shipped to the standby database. The commit operation of the primary database does not wait for the standby database to acknowledge receipt of redo data before completing write operations on the primary database. If any standby destination becomes unavailable, processing continues on the primary database and there is little effect on primary database performance.

Data Guard Broker:
The Oracle Data Guard Broker is a distributed management framework that automates and centralizes the creation, maintenance, and monitoring of Data Guard configurations. All management operations can be performed either through Oracle Enterprise Manager, which uses the Broker, or through the Broker’s specialized command-line interface (DGMGRL).
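For example, a typical DGMGRL session might look like the following. This is a sketch only; 'chicago' and 'boston' are hypothetical database names in the Broker configuration:

$ dgmgrl sys/password@chicago
DGMGRL> show configuration;
DGMGRL> show database verbose 'boston';
DGMGRL> switchover to 'boston';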

Data Guard Architecture Diagram
The following diagram shows an overview of the Oracle Data Guard architecture.
[Diagram: Oracle Data Guard architecture overview]
What’s New in Oracle Data Guard 10g Release 2?

This section highlights some of the key new features of Oracle Data Guard 10g Release 2. For more details on these features, please refer to the Oracle Data Guard documentation.

Fast-Start Failover

This capability allows Data Guard to automatically and quickly fail over to a previously chosen, synchronized standby database in the event of loss of the primary database, without requiring any manual steps to invoke the failover and without incurring any data loss. Following a fast-start failover, once the old primary database is repaired, Data Guard automatically reinstates it as a standby database, restoring high availability to the Data Guard configuration.
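A hedged sketch of enabling this from DGMGRL is shown below; 'chicago' and 'boston' are hypothetical database names, and fast-start failover also requires Maximum Availability mode, Flashback Database, and an observer process running on a separate host:

DGMGRL> edit database 'chicago' set property FastStartFailoverTarget = 'boston';
DGMGRL> edit database 'boston' set property FastStartFailoverTarget = 'chicago';
DGMGRL> enable fast_start failover;

Then, from a DGMGRL session on the observer host:

DGMGRL> start observer;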

Improved Redo Transmission

Several enhancements have been made in the redo transmission architecture to make sure redo data generated on the primary database can be transmitted as quickly and efficiently as possible to the standby database(s). 

Easy conversion of a physical standby database to a reporting database

A physical standby database can be activated as a primary database, opened read/write for reporting purposes, and then flashed back to a point in the past to be easily converted back to a physical standby database. At this point, Data Guard automatically synchronizes the standby database with the primary database. This allows the physical standby database to be utilized for read/write reporting and cloning activities. 
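A hedged sketch of this flow on the standby follows, assuming Flashback Database is enabled and using a hypothetical guaranteed restore point named before_open (redo transport from the primary to this destination would normally be deferred while the database is open read/write):

SQL> alter database recover managed standby database cancel;
SQL> create restore point before_open guarantee flashback database;
SQL> alter database activate standby database;
SQL> alter database open;

-- ... read/write reporting work on the activated database ...

SQL> shutdown immediate
SQL> startup mount
SQL> flashback database to restore point before_open;
SQL> alter database convert to physical standby;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database recover managed standby database disconnect from session;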



Automatic deletion of applied archived redo log files in logical standby databases

Archived logs, once they are applied on the logical standby database, are automatically deleted, reducing storage consumption on the logical standby and improving Data Guard manageability. Physical standby databases have already had this functionality since Oracle Database 10g Release 1, with Flash Recovery Area. 
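In Oracle Database 10g Release 2 this behavior is on by default; as far as I am aware it can be toggled through DBMS_LOGSTDBY.APPLY_SET (SQL Apply must be stopped while apply parameters are changed). A sketch:

SQL> alter database stop logical standby apply;
SQL> execute dbms_logstdby.apply_set('LOG_AUTO_DELETE', 'FALSE');
SQL> alter database start logical standby apply immediate;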

Fine-grained monitoring of Data Guard configurations
Oracle Enterprise Manager has been enhanced to provide granular, up-to-date monitoring of Data Guard configurations, so that administrators can make informed and timely decisions when managing these configurations.

What’s New in Oracle Data Guard 10g Release 1?
This section highlights some of the key new features of Oracle Data Guard 10g Release 1. For more details on these features, please refer to the Oracle Data Guard documentation.

General New Features:

Real Time Apply:
With this feature, redo data can be applied on the standby database (whether Redo Apply or SQL Apply) as soon as it has been written to a standby redo log (SRL). Prior releases of Data Guard required this redo data to be archived at the standby database in the form of archived logs before it could be applied.
The Real Time Apply feature allows standby databases to be closely synchronized with the primary database, enabling up-to-date and real-time reporting (especially for Data Guard SQL Apply). This also enables faster switchover and failover times, which in turn reduces planned and unplanned downtime for the business.
The impact of a disaster is often measured in terms of Recovery Point Objective (RPO, i.e. how much data a business can afford to lose in the event of a disaster) and Recovery Time Objective (RTO, i.e. how much time a business can afford to be down in the event of a disaster). With Oracle Data Guard, when Maximum Protection is used in combination with Real Time Apply, businesses get the benefits of both zero data loss and minimal downtime in the event of a disaster, making Oracle Data Guard the only solution available today with the best RPO and RTO benefits for a business.
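For illustration, real-time apply is enabled with the following statements on the standby (a sketch, assuming standby redo logs have been configured):

-- Physical standby (Redo Apply):
SQL> alter database recover managed standby database using current logfile disconnect from session;

-- Logical standby (SQL Apply):
SQL> alter database start logical standby apply immediate;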

Integration with Flashback Database:
Data Guard in 10g has been integrated with the Flashback family of features to bring the Flashback feature benefits to a Data Guard configuration.
One such benefit is human error protection. In Oracle9i, administrators may configure Data Guard with an apply delay to protect standby databases from possible logical data corruptions that occurred on the primary database. The side effects of such delays are that any reporting done on the standby database is done on old data, and switchover/failover is delayed because the accumulated logs have to be applied first. In Data Guard 10g, with the Real Time Apply feature, such delayed-reporting or delayed-switchover/failover issues do not exist; if logical corruptions do end up affecting both the primary and standby databases, the administrator may use Flashback Database on both databases to quickly revert them to an earlier point in time and back out such user errors.
Another benefit that such integration provides is during failovers. In releases prior to 10g, following any failover operation, the old primary database must be recreated (as a new standby database) from a backup of the new primary database, if the administrator intends to bring it back in the Data Guard configuration. This may be an issue when the database sizes are fairly large, and the primary/standby databases are hundreds/thousands of miles away. However, in Data Guard 10g, after the primary server fault is repaired, the primary database may simply be brought up in mounted mode, “flashed back” (using flashback database) to the SCN at which the failover occurred, and then brought back as a standby database in the Data Guard configuration. No re-instantiation is required.
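A rough sketch of this reinstatement, assuming Flashback Database was enabled on the old primary before the failover (the SCN value shown is hypothetical):

-- On the new primary: find the SCN at which it became primary
SQL> select to_char(standby_became_primary_scn) from v$database;

-- On the old primary, once the server is repaired:
SQL> startup mount
SQL> flashback database to scn 1234567;
SQL> alter database convert to physical standby;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database recover managed standby database disconnect from session;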


SQL Apply New Features:

Zero Downtime Instantiation:
A logical standby database can now be created from an online backup of the primary database, without shutting down or quiescing the primary database, as was required in prior releases. No shutdown of the primary system means production downtime is eliminated, and no quiesce means no waiting for quiescing to take effect and no dependence on Resource Manager.
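A heavily simplified sketch of the approach, following the later (10g Release 2-style) procedure of converting a freshly created physical standby; 'boston' is a hypothetical database name and several intermediate steps are omitted:

-- On the physical standby being converted: stop Redo Apply
SQL> alter database recover managed standby database cancel;

-- On the primary: build the LogMiner dictionary into the redo stream
SQL> execute dbms_logstdby.build;

-- Back on the standby: convert it to a logical standby and start SQL Apply
SQL> alter database recover to logical standby boston;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database open resetlogs;
SQL> alter database start logical standby apply immediate;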

Rolling Upgrades:
Oracle Database 10g supports database software upgrades (from Oracle Database 10g Patchset 1 onwards) in a rolling fashion, with near zero database downtime, by using Data Guard SQL Apply. The steps involve upgrading the logical standby database to the next release, running in a mixed mode to test and validate the upgrade, doing a role reversal by switching over to the upgraded database, and then finally upgrading the old primary database. While running in a mixed mode for testing purpose, the upgrade can be aborted and the software downgraded, without data loss. For additional data protection during these steps, a second standby database may be used.

By supporting rolling upgrades with minimal downtimes, Data Guard reduces the large maintenance windows typical of many administrative tasks, and enables the 24×7 operation of the business.


Additional Datatypes:
SQL Apply now supports the following additional data types.
  • NCLOB
  • LONG
  • LONG RAW
  • BINARY_FLOAT
  • BINARY_DOUBLE
  • Index-organized tables (without overflow segments and without LOB columns)
This support for additional datatypes allows logical standby databases to recover and protect a wider variety of data, thus increasing the overall database protection and recovery options for Data Guard.
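To check in advance which tables on the primary contain datatypes that SQL Apply still cannot maintain, the DBA_LOGSTDBY_UNSUPPORTED view can be queried on the primary, for example:

SQL> select distinct owner, table_name
     from dba_logstdby_unsupported
     order by owner, table_name;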

Enterprise Manager / Data Guard Broker New Features:

RAC Support:
It is now possible to use the Data Guard Broker, and the Broker’s Command Line Interface (DGMGRL), as well as Enterprise Manager, to create and manage Data Guard configurations that contain RAC primary and RAC standby databases. In Oracle9i, such administration was possible only through SQL*Plus. In Data Guard 10g, the Data Guard Broker interfaces with Oracle Clusterware so that it has control over critical operations during specific Data Guard state transitions, such as switchovers, failovers, protection mode changes and state changes.
Simplified Browser-based Interface:
Administration of a Data Guard configuration can be done through the new streamlined browser-based HTML interface of Enterprise Manager that enables complete standby database lifecycle management. The focus of such streamlined administration is on:
  • Ease of use.
  • Management based on best practices.
  • Pre-built integration with other HA features.


Data Guard Benefits

  1. Disaster recovery and high availability: Data Guard provides an efficient and comprehensive disaster recovery and high availability solution. Automatic failover and easy-to-manage switchover capabilities allow quick role reversals between primary and standby databases, minimizing the downtime of the primary database for planned and unplanned outages.

  2. Complete data protection: A standby database also provides an effective safeguard against data corruptions and user errors. Storage-level physical corruptions on the primary database do not propagate to the standby database. Similarly, logical corruptions or user errors that cause the primary database to be permanently damaged can be resolved. Finally, the redo data is validated at the time it is received at the standby database and further when applied to the standby database.

  3. Efficient utilization of system resources: A physical standby database can be used for backups and read-only reporting, thereby reducing the primary database workload and saving valuable CPU and I/O cycles. In Oracle Database 10g Release 2, a physical standby database can also be easily converted back and forth between being a physical standby database and an open read/write database. A logical standby database allows its tables to be simultaneously available for read-only access while they are updated from the primary database. A logical standby database also allows users to perform data manipulation operations on tables that are not updated from the primary database. Finally, additional indexes and materialized views can be created in the logical standby database for better reporting performance.

  4. Flexibility in data protection to balance availability against performance requirements: Oracle Data Guard offers the maximum protection, maximum availability, and maximum performance modes to help enterprises balance data availability against system performance requirements.

  5. Protection from communication failures: If network connectivity is lost between the primary and one or more standby databases, redo data cannot be sent from the primary to those standby databases. Once connectivity is re-established, the missing redo data is automatically detected by Data Guard and the necessary archive logs are automatically transmitted to the standby databases. The standby databases are resynchronized with the primary database, with no manual intervention by the administrator.

  6. Centralized and simple management: Data Guard Broker automates the management and monitoring tasks across the multiple databases in a Data Guard configuration. Administrators may use either Oracle Enterprise Manager or the Broker’s own specialized command-line interface (DGMGRL) to take advantage of this integrated management framework.

  7. Integrated with the Oracle database: Data Guard is available as an integrated feature of the Oracle Database (Enterprise Edition) at no extra cost.