Tuesday, December 25, 2012

Standby Database

How does it work? In a nutshell, redo log data is shipped from the source database to the target database through Oracle Net services.
SOURCE: log_archive_dest_1='localpath/'  INSTANCE NAME: MHGTST
TARGET: log_archive_dest_1='localpath/'  INSTANCE NAME: STDTST
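The shipping loop can be sketched in a few lines of Python. This is a toy model of what the archiver does over Oracle Net, not Oracle code; the directory layout and the .arc suffix are assumptions for illustration:

```python
import shutil
from pathlib import Path

def ship_archived_logs(source_dir, standby_dir):
    """Copy any archived redo logs the standby does not have yet.
    A toy stand-in for the archiver shipping logs to log_archive_dest_n."""
    src, dst = Path(source_dir), Path(standby_dir)
    dst.mkdir(parents=True, exist_ok=True)
    shipped = []
    for log in sorted(src.glob("*.arc")):
        target = dst / log.name
        if not target.exists():      # ship only logs missing on the standby
            shutil.copy2(log, target)
            shipped.append(log.name)
    return shipped
```

Running it twice against the same source directory ships each log exactly once, which is the behavior you want from any re-runnable shipping job.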
Creating a Physical Standby

1. Verify that the database is in force logging mode.
SQL> SELECT FORCE_LOGGING FROM v$database;
FORCE_LOGGING
-------------
NO
2. If it is not in force logging mode set the database in force logging mode.
SQL> ALTER DATABASE FORCE LOGGING;
Also ensure that the database is in archivelog mode; with no archivelog mode there is no standby. :-)
3. Check whether log_archive_dest_2 is already set. If it is set then use log_archive_dest_3
SQL> ALTER SYSTEM SET log_archive_dest_3='SERVICE=stdtst';
System altered.
4. Shutdown the source database
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
5. cp the datafiles to the new location
cp -rp /home/oracle/oracle/oradata/mhgtst /home/oracle/oracle/oradata/stdtst
Copy the admin tree (bdump, cdump, pfile etc.) to the new location:
cp -rp /home/oracle/oracle/product/10.2.0/db_1/admin/mhgtst \
       /home/oracle/oracle/product/10.2.0/db_1/admin/stdtst
6. SQL> startup mount;
ORACLE instance started.
7. create the standby control file
SQL> alter database create standby controlfile as '/tmp/stdtst.ctl';
Database altered.
8. Open the source db for other users
SQL> alter database open;
Database altered.
9. Create the pfile from the spfile and make changes so that all references to mhgtst now point to stdtst. As the ORA-01103 error below shows, db_name must stay the same in both databases; only db_unique_name differs.
Make the following changes to the init.ora file of the STANDBY database:
*.user_dump_dest='/home/oracle/oracle/product/10.2.0/db_1/admin/stdtst/udump'
*.db_file_name_convert='mhgtst','stdtst'
*.log_file_name_convert='mhgtst','stdtst'
*.db_name='mhgtst'
*.db_unique_name=stdtst
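The *_name_convert parameters work as ordered (pattern, replacement) pairs applied to file paths, which is also why an odd-length list raises ORA-01678. A minimal Python model of the substitution (my own simplification, not Oracle's actual matching code):

```python
def convert_name(path, pairs):
    """Apply db_file_name_convert-style (pattern, replacement) pairs:
    the first pattern found in the path is substituted, mirroring the
    left-to-right pairing Oracle expects."""
    for pattern, replacement in pairs:
        if pattern in path:
            return path.replace(pattern, replacement, 1)
    return path

# the mhgtst -> stdtst mapping from the init.ora above
pairs = [("mhgtst", "stdtst")]
```

A path that matches no pattern is returned unchanged, just as a datafile outside the converted directories keeps its name.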
10. Create a password file for the standby database.
orapwd file=$ORACLE_HOME/dbs/orapwstdtst password=oracle entries=2
11. Make final changes:
Remove log_archive_dest_3 from the standby init.ora file.
Create the appropriate entries for log_archive_dest_1 and log_archive_dest_2:
mkdir -p /home/oracle/oracle/logarch/stdtst/dest_1
mkdir -p /home/oracle/oracle/logarch/stdtst/dest_2
12. Start the standby instance
export ORACLE_SID=stdtst
SQL> startup nomount;
ORACLE instance started.
13. Try to mount the database: copy the control file from the source database
[oracle@racdb1 stdtst]$ cp /tmp/stdtst.ctl ./control01.ctl
[oracle@racdb1 stdtst]$ cp /tmp/stdtst.ctl ./control02.ctl
[oracle@racdb1 stdtst]$ cp /tmp/stdtst.ctl ./control03.ctl
SQL> alter database mount standby database;
alter database mount standby database
*
ERROR at line 1:
ORA-01103: database name 'MHGTST' in control file is not 'STDTST'
In previous versions of Oracle Database we had to set the LOCK_NAME_SPACE parameter; this is an obsolete/deprecated parameter in 10g.
VERY IMPORTANT
The db_name in the standby instance should point to the source db name:
db_name=mhgtst (this is the source db name)
SQL> alter database mount standby database
  2  /
Database altered.
14. Recover the standby database
SQL> recover standby database;
ORA-00279: change 1174794 generated at 08/04/2006 08:48:11 needed for thread 1
ORA-00289: suggestion : /home/oracle/oracle/logarch/stdtst/dest_2/1_8_597228601.dbf
ORA-00280: change 1174794 for thread 1 is in sequence #8
Specify log: {suggested | filename | AUTO | CANCEL}
If you want recovery to run in the background, issue the command below:
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
15. Validation
Switch the logfile on the source so that the log file is applied to the target database (mhgtst -> stdtst).
The alert.log should have entries like the following in it:
Media Recovery Log /home/oracle/oracle/logarch/stdtst/dest_1/1_51_597228601.dbf
Media Recovery Log /home/oracle/oracle/logarch/stdtst/dest_1/1_52_597228601.dbf
Media Recovery Waiting for thread 1 sequence 53
Fri Aug  4 12:33:53 2006
RFS[1]: No standby redo logfiles created
RFS[1]: Archived Log: '/home/oracle/oracle/logarch/stdtst/dest_1/1_53_597228601.dbf'
Fri Aug  4 12:33:58 2006
Media Recovery Log /home/oracle/oracle/logarch/stdtst/dest_1/1_53_597228601.dbf
Media Recovery Waiting for thread 1 sequence 54
Fri Aug  4 12:34:50 2006
RFS[1]: No standby redo logfiles created
RFS[1]: Archived Log: '/home/oracle/oracle/logarch/stdtst/dest_1/1_54_597228601.dbf'
Fri Aug  4 12:34:53 2006
16. Cancelling managed recovery
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
Database altered.
17. ALTER DATABASE OPEN;
DB is now open for normal operations
18. Run queries until the application is up and running. In the meantime, remember there are a few log switches on the source database. On the TARGET, SHUTDOWN IMMEDIATE and put the physical standby back in recovery mode.
STARTUP MOUNT;
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION
/
ERRORS
ORA-01678: parameter log_file_name_convert must be pairs of pattern and replacement strings
There was a semicolon at the end of log_file_name_convert; remove that and the database is able to start in nomount mode. Please check your setting for the log_file_name_convert parameter.
Media Recovery Start
Managed Standby Recovery not using Real Time Apply
ORA-279 signalled during: ALTER DATABASE RECOVER standby database ...
SQL> alter database recover managed standby database disconnect from
  2  session;
ORA-12154: TNS:could not resolve the connect identifier specified
PING[ARC0]: Heartbeat failed to connect to standby 'stdtst'. Error is 12154.
This error occurs because we have not yet configured the stdtst net service name.
Configure the listener:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = mhgtst)
      (ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = stdtst)
      (ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
  )
Reload the listener:
Listening Endpoints Summary...
  (DESCRIPTION=(ADDRESS=(PROTOCOL=ipc)(KEY=EXTPROC1)))
  (DESCRIPTION=(ADDRESS=(PROTOCOL=tcp)(HOST=127.0.0.1)(PORT=1521)))
Services Summary...
Service "PLSExtProc" has 1 instance(s).
  Instance "PLSExtProc", status UNKNOWN, has 1 handler(s) for this service...
Service "mhgtst" has 1 instance(s).
  Instance "mhgtst", status UNKNOWN, has 1 handler(s) for this service...
Service "stdtst" has 1 instance(s).
  Instance "stdtst", status UNKNOWN, has 1 handler(s) for this service...
Configure tnsnames:
STDTST =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = STDTST)
    )
  )
Verify with tnsping:
Attempting to contact (DESCRIPTION = (ADDRESS = (PROTOCOL = TCP)(HOST = localhost.localdomain)(PORT = 1521)) (CONNECT_DATA = (SERVER = DEDICATED) (SERVICE_NAME = STDTST)))
OK (20 msec)
Switch the log file and see whether it is being sent to the service.
ORA-28547: connection to server failed, probable Oracle Net admin error
PING[ARC0]: Heartbeat failed to connect to standby 'stdtst'. Error is 28547.
Usually this error means there is some incompatibility between sqlnet.ora, tnsnames.ora and listener.ora. Removing (PROGRAM = extproc) from the database SID entries fixes it:
SID_LIST_LISTENER =
  (SID_LIST =
    (SID_DESC =
      (SID_NAME = PLSExtProc)
      (ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_1)
      (PROGRAM = extproc)
    )
    (SID_DESC =
      (SID_NAME = mhgtst)
      (ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_1)
    )
    (SID_DESC =
      (SID_NAME = stdtst)
      (ORACLE_HOME = /home/oracle/oracle/product/10.2.0/db_1)
    )
  )
SQL> alter database mount standby database;
alter database mount standby database
*
ERROR at line 1:
ORA-00205: error in identifying control file, check alert log for more info
This error occurs because we didn't copy in the control file created on the source when we issued the CREATE STANDBY CONTROLFILE command.


When implementing a backup and recovery strategy, you have the following solutions available:
- Recovery Manager (RMAN)
This tool integrates with sessions running on an Oracle database to perform a
range of backup and recovery activities, including maintaining an RMAN
repository of historical data about backups. You can access RMAN through the command line or through Oracle Enterprise Manager.
- User-managed backup and recovery
In this solution, you perform backup and recovery with a mixture of host
operating system commands and SQL*Plus recovery commands.
Both of the preceding solutions are supported by Oracle and are fully documented, but RMAN is the preferred solution for database backup and recovery.
RMAN performs the same types of backup and recovery available through user-managed techniques more easily, provides a common interface for backup tasks across different host operating systems, and offers a number of backup techniques not available through user-managed methods.
Most of this manual focuses on RMAN-based backup and recovery.
User-managed backup and recovery techniques are covered in Section VIII, “Performing User-Managed Backup and Recovery.” RMAN gives you access to several backup and recovery techniques and features not available with user-managed backup and recovery. The most noteworthy are the following:
- Incremental backups
An incremental backup stores only blocks changed since a previous backup.
Thus, incremental backups are more compact and make recovery faster, reducing the need to apply redo during datafile media recovery. If you enable block change tracking, then you can improve performance by avoiding full scans of every input datafile. You use the BACKUP INCREMENTAL command to perform incremental backups.
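The idea behind incremental backups with change tracking can be sketched in Python. This is a toy model of the concept only; real RMAN tracks changed blocks in a change tracking file rather than by checksumming every block:

```python
import hashlib

def level0_backup(blocks):
    """Full (level 0) backup: copy every block and record its checksum,
    which plays the role of the block change tracking data here."""
    tracking = {i: hashlib.sha1(b).hexdigest() for i, b in enumerate(blocks)}
    return dict(enumerate(blocks)), tracking

def level1_backup(blocks, tracking):
    """Incremental (level 1) backup: copy only blocks changed since the
    last backup, so the backup is smaller and recovery needs less redo."""
    changed = {}
    for i, b in enumerate(blocks):
        digest = hashlib.sha1(b).hexdigest()
        if tracking.get(i) != digest:
            changed[i] = b
            tracking[i] = digest
    return changed
```

After a level 0 backup, a level 1 backup taken with no intervening changes is empty, which is exactly the space saving the feature is about.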
- Block media recovery
You can repair a datafile with only a small number of corrupt data blocks without taking it offline or restoring it from backup. You use the RECOVER command to perform block media recovery.
- Unused block compression
In unused block compression, RMAN can skip data blocks that have never been used and, in some cases, used blocks that are currently unused.
- Binary compression
A binary compression mechanism integrated into Oracle Database reduces the size of backups.
- Encrypted backups
RMAN uses backup encryption capabilities integrated into Oracle Database to store backup sets in an encrypted format. To create encrypted backups on disk, the database must use the Advanced Security Option. To create encrypted backups directly on tape, RMAN must use the Oracle Secure Backup SBT interface, but does not require the Advanced Security Option.
Whether you use RMAN or user-managed methods, you can supplement physical backups with logical backups of schema objects made with the Data Pump Export utility.
You can later use Data Pump Import to re-create data after restore and recovery. Logical backups are for the most part beyond the scope of the backup and recovery documentation.
- Oracle Flashback Technology
As explained in Oracle Database Concepts, Oracle Flashback Technology complements your physical backup and recovery strategy. This set of features provides an additional layer of data protection. Specifically, you can use flashback features to view past states of data and rewind your database without restoring backups or performing point-in-time recovery. In general, flashback features are more efficient and less disruptive than media recovery in most situations in which they apply.
- Logical Flashback Features
Most of the flashback features of Oracle operate at the logical level, enabling you to view and manipulate database objects. The logical-level flashback features of Oracle do not depend on RMAN and are available whether or not RMAN is part of your backup strategy. With the exception of Flashback Drop, the logical flashback features rely on undo data, which are records of the effects of each database update and the values overwritten in the update.
Oracle Database includes the following logical flashback features:
- Oracle Flashback Query
You can specify a target time and run queries against a database, viewing results as they would have appeared at the target time. To recover from an unwanted change like an update to a table, you could choose a target time before the error and run a query to retrieve the contents of the lost rows.
Oracle Database Advanced Application Developer’s Guide explains how to use this feature.
- Oracle Flashback Version Query
You can view all versions of all rows that ever existed in one or more tables in a specified time interval. You can also retrieve metadata about the differing versions of the rows, including start and end time, operation, and transaction ID of the transaction that created the version. You can use this feature to recover lost data values and to audit changes to the tables queried.
Oracle Database Advanced Application Developer’s Guide explains how to use this feature.
- Oracle Flashback Transaction Query
You can view changes made by a single transaction, or by all the transactions during a period of time. Oracle Database Advanced Application Developer’s Guide explains how to use this feature.
- Oracle Flashback Transaction
You can reverse a transaction. Oracle Database determines the dependencies between transactions and in effect creates a compensating transaction that reverses the unwanted changes. The database rewinds to a state as if the transaction, and any transactions that could be dependent on it, had never happened. Oracle
Database Advanced Application Developer’s Guide explains how to use this feature.
- Oracle Flashback Table
You can recover a table or set of tables to a specified point in time in the past
without taking any part of the database offline. In many cases, Flashback Table eliminates the need to perform more complicated point-in-time recovery operations. Flashback Table restores tables while automatically maintaining
associated attributes such as current indexes, triggers, and constraints, in this way enabling you to avoid finding and restoring database-specific properties.
- Oracle Flashback Drop
You can reverse the effects of a DROP TABLE statement. “Rewinding a DROP
TABLE Operation with Flashback Drop” on page 16-7 explains how to use this
feature.
A flashback data archive enables you to use some of the logical flashback features to access data from far back in the past. A flashback data archive consists of one or more tablespaces or parts of tablespaces. When you create a flashback data archive, you specify the name, retention period, and tablespace. You can also specify a default flashback data archive. The database automatically purges old historical data the day after the retention period expires.
You can turn flashback archiving on and off for individual tables. By default, flashback archiving is turned off for every table.
- Flashback Database
At the physical level, Oracle Flashback Database provides a more efficient data protection alternative to database point-in-time recovery (DBPITR). If the current datafiles have unwanted changes, then you can use the RMAN command FLASHBACK DATABASE to revert the datafiles to their contents at a past time. The end product is much like the result of a DBPITR, but is generally much faster because it does not require restoring datafiles from backup and requires less redo than media recovery.
Flashback Database uses flashback logs to access past versions of data blocks and some information from archived redo logs. Flashback Database requires that you configure a flash recovery area for a database because the flashback logs can only be stored there. Flashback logging is not enabled by default. Space used for flashback logs is managed automatically by the database and balanced against space required for other files in the flash recovery area.
Oracle Database also supports restore points in conjunction with Flashback Database and backup and recovery. A restore point is an alias corresponding to a system change number (SCN). You can create a restore point at any time if you anticipate needing to return part or all of a database to its contents at that time. A guaranteed restore point ensures that you can use Flashback Database to return a database to the time of the restore point.
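Since a restore point is just a named alias for an SCN, its bookkeeping can be modeled in a few lines of Python. This is an illustration only; Oracle actually records restore points in the control file:

```python
class RestorePoints:
    """Name -> SCN aliases, as used by FLASHBACK DATABASE TO RESTORE POINT."""
    def __init__(self):
        self.points = {}

    def create(self, name, scn, guaranteed=False):
        # a guaranteed restore point additionally guarantees the flashback
        # logs needed to reach this SCN are retained
        self.points[name] = {"scn": scn, "guaranteed": guaranteed}

    def target_scn(self, name):
        """The SCN the database would rewind to for this restore point."""
        return self.points[name]["scn"]
```

Creating a guaranteed restore point before a risky change (such as an upgrade) gives you a fixed SCN you can always flash back to.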
- Data Recovery Advisor
Oracle Database includes a Data Recovery Advisor tool that automatically diagnoses persistent data failures, presents appropriate repair options, and executes repairs at your request. Data Recovery Advisor provides a single point of entry for Oracle backup and recovery solutions. You can use Data Recovery Advisor through the Enterprise Manager Database Control or Grid Control console or through the RMAN command-line client.
A database failure usually manifests itself as a set of symptoms: error messages, alerts, trace files and dumps, and failed data integrity checks. Data Recovery Advisor automatically diagnoses and informs you of these failures. Within the context of Data Recovery Advisor, a failure is a persistent data corruption that can be directly mapped to a set of repair actions. Each failure has a status of open or closed. Each failure also has a priority of critical, high, or low.
Failures are detected by data integrity checks, which are diagnostic procedures executed to assess the health of the database or its components. If a data integrity check reveals a failure, then Data Recovery Advisor automatically assesses the effect of a set of failures and maps it to a set of repair options. In most cases, Data Recovery Advisor presents both automated and manual repair options.
Data Recovery Advisor determines the best automated repair option and its effect on the database. The repair option may include repairs such as datafile restore and recovery, media recovery, Flashback Database, and so on. Before presenting an automated repair option, Data Recovery Advisor validates it with respect to the specific environment and the availability of media components required to complete the proposed repair.
If you choose an automated repair option, then RMAN coordinates sessions on the Oracle database to perform the repair for you. The Data Recovery Advisor tool verifies the repair success and closes the appropriate failures.

Happy Valentine’s Day

Posted in Backup and Recovery on February 14, 2011
Don’t wait until it’s too late
to tell someone how much you love,
how much you care.
Because when they’re gone,
no matter how loud you shout and cry,
they won’t hear you anymore.
Happy Valentine’s Day
This was the first question asked to me during an interview.
As you may already know from the Oracle documentation, during an Oracle tablespace hot backup a script or program or command puts a tablespace into backup mode, then copies the datafiles to disk or tape, then takes the tablespace out of backup mode. We can check the backup mode from the V$BACKUP view. In the case of a user-managed backup, the backup process starts after issuing the ALTER TABLESPACE tbs_name BEGIN BACKUP; or ALTER DATABASE BEGIN BACKUP; command, and ends with the ALTER TABLESPACE tbs_name END BACKUP; or ALTER DATABASE END BACKUP; command.
Hot backup is demonstrated inside the topic http://samadhandba.wordpress.com/2011/02/10/user-managed-hot-backup-of-oracle-database/
Although the process is well understood, there are many misconceptions around hot backup. The misconceptions start with what is actually done during a hot backup: does the data file remain writable during the backup process? Or are changes stored somewhere in the SGA, the redo logs, the rollback/undo segments, or some combination thereof, and then written back into the datafile when the tablespace is taken out of backup mode?
Around the writability issue there is another misconception: "During the hot backup process a huge amount of redo data is generated, which in fact slows down the database dramatically if the database is in archivelog mode."
Now let’s look at what actually happens during a hot backup. The hot backup steps are:
1)The corresponding tablespace is checkpointed.
2)The checkpoint SCN marker in the datafile headers ceases to increment with checkpoints.
3)Full images of changed DB blocks are written to the redologs.
Whenever you issue,
ALTER TABLESPACE tbs_name BEGIN BACKUP;
command, a checkpoint is performed against the target tablespace and the datafile header is frozen, so no more updates are allowed on it (the datafile header). This is how the database knows the last time the tablespace had a consistent image of the data.
But during the backup process, the corresponding datafiles in the tablespace allow normal read/write operations; that is, I/O activity is not frozen.
In the case of redo log generation, each block is recorded into the redo log files the first time the block is changed. So if a row is modified for the first time inside a data block since the hot backup started, the complete block image is recorded in the redo log files, but subsequent transactions on the block only record the transaction, just as normal.
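That first-change rule can be modeled in Python. This is a deliberate simplification: real redo records are change vectors in the log buffer, not tuples in a list:

```python
def log_change(redo, logged_full, block_id, block_image, change_vector,
               hot_backup=True):
    """In hot backup mode the first change to a block logs the full block
    image; every later change logs only the normal change vector."""
    if hot_backup and block_id not in logged_full:
        redo.append(("FULL_IMAGE", block_id, block_image))
        logged_full.add(block_id)
    else:
        redo.append(("CHANGE", block_id, change_vector))
```

So the redo overhead of a hot backup is one full block image per *changed* block, not per change, which is why the extra redo depends on how many distinct blocks are touched during the backup window.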
The above three steps are required to guarantee consistency during restore and recovery of the file. By freezing the checkpoint SCN in the file headers, any subsequent recovery on that backup copy of the file will know that it must commence at that SCN. Having an old SCN in the file header tells recovery that the file is an old one, and that it should look for the redolog file containing that SCN, and apply recovery starting there. Note that checkpoints to datafiles in hot backup mode are not suppressed during the backup, only the incrementing of the main checkpoint SCN flag. A "hot backup checkpoint" SCN marker in the file header continues to increment as periodic or incremental checkpoints progress normally.
By initially checkpointing the datafiles that comprise the tablespace and logging full block images to redo, Oracle guarantees that any blocks changed in the datafile while in hot backup mode will also be available in the redologs in case they are ever used for a recovery.
Now many claim that during the hot backup process there is excessive redo log generation compared to normal mode. It actually depends on the number of blocks changed during the hot backup, because the first time a block is changed the full image of the changed block is recorded to the redo logs. Normally, Oracle logs an entry in the redologs for every change in the database, but it does not log the whole image of the database block. By logging full images of changed DB blocks to the redologs during hot backup, Oracle eliminates the possibility of the backup containing irresolvable split blocks. To understand this reasoning, you must first understand what a split block is.
Typically, Oracle database blocks are a multiple of O/S blocks. For instance, most Windows filesystems have a default block size of 512 bytes and Unix filesystems a default block size of 2K, while Oracle’s default block size is 8K. This means that the filesystem stores data in 512-byte chunks, while Oracle performs reads and writes in 8K chunks, or multiples thereof. While backing up a datafile, your backup script makes a copy of the datafile from the filesystem, using O/S utilities such as copy, dd, cpio, or OCOPY. As it is making this copy, it is reading in O/S-block-sized increments. If the database writer happens to be writing a DB block into the datafile at the same time that your script is reading that block’s constituent O/S blocks, your backup copy of the DB block could contain some O/S blocks from before the database performed the write, and some from after. This would be a split block.
By logging the full block image of the changed block to the redologs, it guarantees that in the event of a recovery, any split blocks that might be in the backup copy of the datafile will be resolved by overlaying them with the full legitimate image of the block from the redologs. Upon completion of a recovery, any blocks that got copied in a split state into the backup will have been resolved through application of full block images from the redologs.
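The race between the O/S-level copy and DBWR can be simulated in Python. The block sizes match the example above; the fixed interleaving point is of course artificial:

```python
OS_BLOCK = 512
DB_BLOCK = 8192            # one 8K Oracle block = 16 O/S blocks

def copy_while_dbwr_writes(old_block, new_block, write_after_chunk):
    """Copy an 8K block in 512-byte chunks while DBWR rewrites the whole
    block partway through the copy, producing a split block."""
    copied = b""
    source = old_block
    for i in range(DB_BLOCK // OS_BLOCK):
        if i == write_after_chunk:
            source = new_block      # DBWR's write lands mid-copy
        copied += source[i * OS_BLOCK:(i + 1) * OS_BLOCK]
    return copied
```

The resulting copy matches neither the old nor the new block image, which is exactly why recovery must be able to overlay it with the full block image logged to redo.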
User-managed backup means you take a backup without any Oracle feature such as RMAN. For example, if you take a backup with operating system utilities, it is called a user-managed backup. The term hot backup means taking your backup while your database is in the open state.
To take full database backup follow the following steps.
1)Before proceeding, remember you can take an online/hot backup only when your database is in archivelog mode. If your database runs in noarchivelog mode you must take a consistent backup, that is, after a clean shutdown. To determine the archival mode, issue the query,
SQL> SELECT LOG_MODE FROM V$DATABASE;
LOG_MODE
————
ARCHIVELOG
If you see ARCHIVELOG then you can proceed further. To take a backup while you are in noarchivelog mode, follow the other post on my blog.
2)Determine the files that you need to back up.
Whenever you decide to take your database backup, back up the data files, online redo log files, control files and spfile.
To decide which files you need to back up, issue the following query.
SQL>SELECT NAME “File Need Backup” FROM V$DATAFILE
UNION ALL
SELECT MEMBER FROM V$LOGFILE
UNION ALL
SELECT NAME FROM V$CONTROLFILE
UNION ALL
SELECT VALUE FROM V$PARAMETER WHERE NAME='spfile';
File Need Backup
——————————————————————————–
/oradata2/data1/dbase/system01.dbf
/oradata2/data1/dbase/undotbs01.dbf
/oradata2/data1/dbase/sysaux01.dbf
/oradata2/data1/dbase/users01.dbf
/oradata2/data.dbf
/oradata2/data1/data02.dbf
/oradata2/6.dbf
/oradata2/DBASE/datafile/o1_mf_permanen_42l31vg0_.dbf
/oradata2/data_test.dbf
/oradata2/data1/dbase/redo03.log
/oradata2/data1/dbase/redo02.log
/oradata2/data1/dbase/redo01.log
/oracle/app/oracle/product/10.2.0/db_1/dbs/cntrldupbase.dbf
/oracle/app/oracle/product/10.2.0/db_1/dbs/spfiledupbase.ora
13 rows selected.
So after running the above query I can say I need to back up 13 files.
3)Place online tablespaces in backup mode; offline and read-only tablespaces do not need it. In the case of an offline or read-only tablespace you do not have to place the tablespace in backup mode, because the database does not permit changes to its datafiles.
You can check the status, tablespace_name and it’s associated data file name with the following query,
SELECT t.STATUS, t.TABLESPACE_NAME "Tablespace", f.FILE_NAME "Datafile"
FROM DBA_TABLESPACES t, DBA_DATA_FILES f
WHERE t.TABLESPACE_NAME = f.TABLESPACE_NAME
ORDER BY t.TABLESPACE_NAME;
You can easily generate a script for taking the online tablespaces into backup mode with the following query.
SQL>SELECT 'ALTER TABLESPACE ' ||TABLESPACE_NAME ||' BEGIN BACKUP;' "Script" FROM DBA_TABLESPACES WHERE STATUS NOT IN ('READ ONLY','OFFLINE');
Script
————————————————————-
ALTER TABLESPACE SYSTEM BEGIN BACKUP;
ALTER TABLESPACE UNDOTBS1 BEGIN BACKUP;
ALTER TABLESPACE SYSAUX BEGIN BACKUP;
ALTER TABLESPACE TEMP BEGIN BACKUP;
ALTER TABLESPACE USERS BEGIN BACKUP;
ALTER TABLESPACE TEMP_T BEGIN BACKUP;
6 rows selected.
Alternatively, you can issue
SQL>ALTER DATABASE BEGIN BACKUP;
4)Copy the datafiles to the backup location.
After putting a tablespace in backup mode, take a backup/copy of the associated datafiles. Here you can also generate a script to copy the datafiles to another location.
For an online tablespace you must first put it in backup mode. You can check whether backup mode is now active by issuing the following query,
SQL>SELECT t.name AS "TB_NAME", d.file# as "DF#", d.name AS "DF_NAME", b.status
FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';
SQL> SELECT 'host scp '|| FILE_NAME || ' &backup_location ' "Backup Command" FROM DBA_DATA_FILES;
Enter value for backup_location: /backup
old 1: SELECT 'host scp '|| FILE_NAME || ' &backup_location ' "Backup Command" FROM DBA_DATA_FILES
new 1: SELECT 'host scp '|| FILE_NAME || ' /backup ' "Backup Command" FROM DBA_DATA_FILES
Backup Command
——————————————————————————————
host scp /oradata2/data1/dbase/system01.dbf /backup
host scp /oradata2/data1/dbase/undotbs01.dbf /backup
host scp /oradata2/data1/dbase/sysaux01.dbf /backup
host scp /oradata2/data1/dbase/users01.dbf /backup
host scp /oradata2/data.dbf /backup
host scp /oradata2/data1/data02.dbf /backup
host scp /oradata2/6.dbf /backup
host scp /oradata2/DBASE/datafile/o1_mf_permanen_42l31vg0_.dbf /backup
host scp /oradata2/data_test.dbf /backup
9 rows selected.
You can also back up network files. Do a recursive search for *.ora starting in your Oracle home directory and under it.
To generate the copy script only for those tablespaces that are in backup mode, issue,
SQL>SELECT 'host scp '|| d.name ||' &backup_location' FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';
Run the script that you generated.
On Windows or other operating systems you can use a graphical file browser or the associated copy command.
5)When the copy is finished, take the tablespace out of backup mode. You can do BEGIN BACKUP .. SCP serially (put one tablespace in backup mode, copy its associated datafiles, and take it out of backup mode before moving to the next) or in parallel (put all tablespaces in backup mode, copy the associated datafiles of all tablespaces, then take the tablespaces out of backup mode).
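The serial variant can be expressed as a small script generator in Python. The tablespace name and file path in the usage below are taken from the query output earlier in the post; the function itself is my own sketch, not an Oracle utility:

```python
def serial_backup_script(tablespaces):
    """Emit BEGIN BACKUP / copy / END BACKUP statements serially, so only
    one tablespace at a time sits in backup mode (less redo overhead than
    the parallel variant, where every tablespace stays in backup mode
    for the whole copy)."""
    lines = []
    for ts, datafiles in tablespaces.items():
        lines.append(f"ALTER TABLESPACE {ts} BEGIN BACKUP;")
        for df in datafiles:
            lines.append(f"host scp {df} /backup")
        lines.append(f"ALTER TABLESPACE {ts} END BACKUP;")
    return lines
```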
Here you can also generate a script like,
SQL>SELECT 'ALTER TABLESPACE ' ||t.name ||' END BACKUP;' "End Backup Script"
FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';
End Backup Script
———————————————————-
ALTER TABLESPACE SYSTEM END BACKUP;
If you have put the whole database in backup mode, issue
SQL>ALTER DATABASE END BACKUP;
Error Description:
The scenario is that I have created a materialized view through a database link over a table residing on a remote database.
I used the ON COMMIT option and it failed with ORA-12054 as below.
SQL> create materialized view scott.mv_phones
2 TABLESPACE users
3 REFRESH FAST on commit
4 WITH rowid
5 AS select ID, PASS,
6 CONN_EXPIRY_DATE, CONN_STATUS
7 from PHONES@lnxdb where upper(IS_SOLD)='Y';
from PHONES@lnxdb where upper(IS_SOLD)='Y'
*
ERROR at line 7:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
Cause of the Problem
The ON COMMIT clause of CREATE MATERIALIZED VIEW has some restrictions, such as:
•The ON COMMIT clause is not supported for materialized views containing object types.
•With the ON COMMIT clause, a materialized view whose defining query selects rows from remote tables is not supported.
In our case we hit the second restriction, hence the error.
Solution of the Problem
To implement the solution, use the ON DEMAND clause instead. Create the materialized view as below.
SQL> create materialized view scott.mv_phones
2 TABLESPACE users
3 REFRESH FAST on demand
4 WITH rowid
5 AS select ID, PASS,
6 CONN_EXPIRY_DATE, CONN_STATUS
7 from PHONES@lnxdb where upper(IS_SOLD)='Y';
Minimum version of the database that can be directly upgraded to Oracle 11g Release 2:
Source Database                Target Database
9.2.0.8 or higher                    11.2.x
10.1.0.5 or higher                  11.2.x
10.2.0.2 or higher                  11.2.x
11.1.0.6 or higher                  11.2.x
The following database version will require an indirect upgrade path.
Source Database —> Upgrade Path for Target Database—>Target Database
7.3.3 (or lower)—–>   7.3.4 —>   9.2.0.8 —->11.2.x
8.0.5 (or lower)—->    8.0.6 —>    9.2.0.8 —->11.2.x
8.1.7 (or lower)—->    8.1.7.4—> 10.2.0.4—->11.2.x
9.0.1.3 (or lower)—-> 9.0.1.4– ->10.2.0.4—->11.2.x
9.2.0.7(or lower)—->9.2.0.8—->11.2.x
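Before choosing a path, confirm the exact patch level of the source database. A quick check (run against the live source instance; output will vary):

```sql
-- Either of these shows the current version of the source database:
SELECT banner FROM v$version WHERE banner LIKE 'Oracle Database%';
SELECT version FROM v$instance;
```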
Here I am upgrading my Oracle 10.2.0.4 database to Oracle 11.2.0.2
1. Copy Pre upgrade Information gathering Script:
  • Log in to the system as the owner of the Oracle Database 11g Release 2 (11.2) Oracle home directory.
  • Copy the Pre-Upgrade Information Tool (utlu112i.sql) and utltzuv2.sql from the Oracle Database 11g Release 2 (11.2) ORACLE_HOME/rdbms/admin directory to a directory outside of the Oracle home, such as the temporary directory on your system.
 
$ORACLE_HOME/rdbms/admin/utlu112i.sql
2. Execute Pre Upgrade Script:
  • Change to the directory where utlu112i.sql was copied in the previous step.
  • Start SQL*Plus and connect to the database instance as a user with SYSDBA privileges. Then run and spool the utlu112i.sql file. Please note that the database should be started using the Source Oracle Home.
$ sqlplus '/ as sysdba'
SQL> spool upgrade_info.log
SQL> @utlu112i.sql
SQL> spool off
SQL>
Check the spool file and examine the output of the upgrade information tool.
3. Check for the integrity of the source database:

Check for the integrity of the source database prior to starting the upgrade by downloading and running dbupgdiag.sql script from below My Oracle Support article
Note 556610.1  Script to Collect DB Upgrade/Migrate Diagnostic Information (dbupgdiag.sql) (Skip this step if you don't have support access.)
 
If the dbupgdiag.sql script reports any invalid objects, run $ORACLE_HOME/rdbms/admin/utlrp.sql (multiple times) to validate the invalid objects in the database, until there is no change in the number of invalid objects.
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus '/ as sysdba'
SQL> @utlrp.sql
After validating the invalid objects, re-run dbupgdiag.sql in the database once again and make sure that everything is fine.
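While looping utlrp.sql, you can watch the invalid-object count converge with a standard dictionary query; stop when the count no longer changes between passes:

```sql
-- Re-run after each pass of utlrp.sql:
SELECT COUNT(*) FROM dba_objects WHERE status = 'INVALID';
```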
4. Check the National Character Set:
Check whether the national character set (NLS_NCHAR_CHARACTERSET) is UTF8 or AL16UTF16:
SQL> select value from NLS_DATABASE_PARAMETERS where parameter = 'NLS_NCHAR_CHARACTERSET';
If it is UTF8 or AL16UTF16, no action is needed.
If it is not UTF8 or AL16UTF16, refer to the following article:
Note 276914.1 The National Character Set in Oracle 9i and 10g.
 
5. Optimizer Statistics:
When upgrading to Oracle Database 11g Release 2 (11.2), optimizer statistics are collected for dictionary tables that lack statistics. This statistics collection can be time consuming for databases with a large number of dictionary tables, but statistics gathering only occurs for those tables that lack statistics or are significantly changed during the upgrade
 
$ sqlplus '/ as sysdba'
SQL> EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
 
6. Ensure that no files need media recovery:
SQL> SELECT * FROM v$recover_file;
 
7. Stop the listener for the database:
 
Create a pfile from the spfile:
SQL> CREATE PFILE FROM SPFILE;
 
$ lsnrctl stop
 
8. Shutdown the Database:
Shutdown the database.
 
$ sqlplus '/ as sysdba'
SQL> shutdown immediate;
 
9. Back Up the Database:
1- Perform a cold backup
(or)
2- Take a backup using RMAN
 
Connect to RMAN: rman target / nocatalog
RUN
{
ALLOCATE CHANNEL chan_name TYPE DISK;
BACKUP DATABASE FORMAT '%U' TAG before_upgrade;
BACKUP CURRENT CONTROLFILE TO '';
}
 
10. Back up and change the pfile:
 
Make a backup of the init.ora file.
Comment out obsolete parameters:
* The DIAGNOSTIC_DEST initialization parameter replaces USER_DUMP_DEST and BACKGROUND_DUMP_DEST.
 
Note: Once the Parameter file is modified as per your requirement, copy the file to $ORACLE_HOME/dbs (11g Oracle Home )
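As a hedged illustration of that parameter change (the paths here are examples, not requirements), the edit to the init.ora looks roughly like:

```
# Obsolete in 11g -- comment out:
# *.background_dump_dest='/u01/app/oracle/admin/orcl/bdump'
# *.user_dump_dest='/u01/app/oracle/admin/orcl/udump'
# Replacement (11g derives the trace/dump locations under this directory):
*.diagnostic_dest='/u01/app/oracle'
```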
 
11. Set Environment Variables:
 
If your operating system is UNIX, complete this step; otherwise skip to the next step.
1. Make sure the following environment variables point to the Oracle 11g Release directories:
– ORACLE_BASE
– ORACLE_HOME
– PATH
$ export ORACLE_HOME=
$ export PATH=$ORACLE_HOME/bin:$PATH
$ export ORACLE_BASE=
Note: If ORACLE_BASE is not known, after setting PATH towards the 11g Oracle home, execute 'orabase', which will print the location of the base.
$ orabase
/u01/app/oracle
 
2. Update the oratab entry to set the new ORACLE_HOME for the orcl SID and disable automatic startup.
 
Sample /etc/oratab
#orcl:/opt/oracle/product/10.2/db_1:N
orcl:/opt/oracle/product/11.2/db_1:N
Note: After /etc/oratab is updated with the SID and the 11.2 Oracle home, you can execute oraenv (/usr/local/bin/oraenv) to set the environment. The input has to be the SID entered in /etc/oratab against the 11g home.
For instance,
[oracle@localhost ~]$ . oraenv
ORACLE_SID = [orcl] ? orcl
The Oracle base for ORACLE_HOME=/opt/oracle/product/11.2/db_1 is /u01/app/oracle
[oracle@localhost ~]$
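The SID-to-home lookup that oraenv performs against /etc/oratab can be sketched in a few lines of shell. This is an illustrative reimplementation, not the real script, and it writes a sample oratab to /tmp so the snippet is self-contained:

```shell
# Build a sample oratab file (format: SID:ORACLE_HOME:startup_flag)
oratab=/tmp/oratab.sample
cat > "$oratab" <<'EOF'
#orcl:/opt/oracle/product/10.2/db_1:N
orcl:/opt/oracle/product/11.2/db_1:N
EOF

# Look up the home for a given SID, skipping commented-out entries
sid=orcl
home=$(grep -v '^#' "$oratab" | awk -F: -v s="$sid" '$1 == s {print $2}')
echo "$home"
```

The real oraenv also exports ORACLE_HOME and adjusts PATH; this only shows the lookup itself.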
 
12. Upgrade the Database:
At the operating system prompt, change to the $ORACLE_HOME/rdbms/admin directory of 11gR2 Oracle Home.
 
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus '/ as sysdba'
SQL> startup UPGRADE
 
Set the system to spool results to a log file for later verification after the upgrade is completed and start the upgrade script.
 
SQL> set echo on
SQL> SPOOL upgrade.log
SQL> @catupgrd.sql
SQL> spool off
These measures are an important final step to ensure the integrity and consistency of the newly upgraded Oracle Database software. Also, if you encountered a message listing obsolete initialization parameters when you started the database for upgrade, then remove the obsolete initialization parameters from the parameter file before restarting. If necessary, convert the SPFILE to a PFILE so you can edit the file to delete parameters.
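If you do need to edit out obsolete parameters, the SPFILE-to-PFILE round trip mentioned above is two statements (the pfile path here is illustrative):

```sql
-- Dump the spfile to an editable text pfile:
CREATE PFILE='/tmp/initorcl.ora' FROM SPFILE;
-- After deleting the obsolete parameters, rebuild the spfile:
CREATE SPFILE FROM PFILE='/tmp/initorcl.ora';
```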
 
Post Upgrade Steps
 
13. Post Upgrade:
Start the database and run the Post-Upgrade Status Tool, $ORACLE_HOME/rdbms/admin/utlu112s.sql, which provides a summary of the upgrade at the end of the spool log. It displays the status of the database components in the upgraded database and the time required to complete each component upgrade. Any errors that occur during the upgrade are listed with each component and must be addressed.
$ sqlplus '/ as sysdba'
SQL> STARTUP
SQL> @utlu112s.sql
 
14. Recompile Invalid Objects:
utlrp.sql can be run concurrently with the Post-Upgrade Status Tool. Run utlrp.sql in another session to recompile any remaining stored PL/SQL and Java code.
SQL> @utlrp.sql
 
15. Check for the integrity of the upgraded database:
 
Check for the integrity of the upgraded database by running dbupgdiag.sql script from below Metalink article
Note 556610.1  Script to Collect DB Upgrade/Migrate Diagnostic Information (dbupgdiag.sql)
If the dbupgdiag.sql script reports any invalid objects, run $ORACLE_HOME/rdbms/admin/utlrp.sql (multiple times) to validate the invalid objects in the database, until there is no change in the number of invalid objects. After validating the invalid objects, re-run dbupgdiag.sql in the upgraded database once again and make sure that everything is fine.
16. Configure and Start the Listener: Modify the listener.ora file:
For the upgraded instance(s), modify the ORACLE_HOME parameter to point to the new ORACLE_HOME. Start the listener:
 
lsnrctl start

17. Crosscheck Environment Variables:
1. Make sure the following environment variables point to the Oracle 11g Release directories:
– ORACLE_BASE
– ORACLE_HOME
– PATH
Also check that your oratab file and any client scripts that set the value of ORACLE_HOME point to the Oracle Database 11g Release 2 (11.2) home.
Note: If you are upgrading a cluster database, then perform these checks on all nodes on which this cluster database has instances configured.
2. Modify the /etc/oratab entry to use automatic startup:
SID:ORACLE_HOME:Y
For instance,
orcl:/opt/oracle/product/11.2/db_1:Y
 
18. Spfile from Pfile:
Edit init.ora:
- If you changed the CLUSTER_DATABASE parameter prior to the upgrade, set it back to TRUE.
- Migrate your initialization parameter file to a server parameter file.
Create a server parameter file from the initialization parameter file:
SQL> create spfile from pfile;
This will create an spfile as a copy of the init.ora file located in $ORACLE_HOME/dbs (UNIX) or %ORACLE_HOME%\database (Windows).
19. Start the Database with the spfile:
Shutdown the database:
SQL> shutdown immediate
Now start the database; it will use the spfile:
SQL> startup
Check the alert log file for any errors.
The database is now ready to use.
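To confirm the instance actually came up on the spfile rather than a pfile, you can query v$parameter; a non-null value is the path of the spfile in use:

```sql
SELECT value FROM v$parameter WHERE name = 'spfile';
```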

Upgrade Oracle 10g Release 2 from 10.2.0.1 to 10.2.0.4

 
This post demonstrates a step-by-step guide to applying Oracle patchset 10.2.0.4 (patch number 6810189) on a 10.2.0.1 database. My current environment is Oracle 10gR2 (10.2.0.1) installed on Red Hat Enterprise Linux 5 update 4 32-bit.

Preparing for the upgrade to 10.2.0.4

Stop all running Oracle components: the listener, Enterprise Manager, iSQL*Plus, the database itself, etc.
/* Stop the isqlplus if running */
$ isqlplusctl stop
iSQL*Plus 10.2.0.1.0
Copyright (c) 2003, 2005, Oracle.  All rights reserved.
Stopping iSQL*Plus ...
iSQL*Plus stopped.

/* Stop the EM dbconsole */

$ emctl stop dbconsole
TZ set to US/Eastern
Oracle Enterprise Manager 10g Database Control Release 10.2.0.1.0  
Copyright (c) 1996, 2005 Oracle Corporation.  All rights reserved.

http://ora10.home.com:1158/em/console/aboutApplication

Stopping Oracle Enterprise Manager 10g Database Control ... 
 ...  Stopped. 

/* Stop the listener */

$ lsnrctl stop

Copyright (c) 1991, 2005, Oracle.  All rights reserved.

Connecting to (DESCRIPTION=(ADDRESS=(PROTOCOL=IPC)(KEY=EXTPROC1)))
The command completed successfully

/* Shutdown the database itself */

$ sqlplus / as sysdba

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> 

Take a backup of Oracle Home and Database

Once the oracle database is shutdown, take a cold backup of your database and a backup of your ORACLE_HOME.
$cd /u01/apps/oracle/oradata/

/*
 All my data files, control files and log files are in a directory ora10g
 at the location /u01/apps/oracle/oradata/.
 I am going to make a tar archive of the ora10g directory. If these files
 are at separate locations, add all those locations into the tar archive.
 Since this is just a test database and is very small, a tar archive
 works well enough. For a large production database I would consider
 other, faster ways to take a cold backup of my data files.
*/

$ tar czf /home/oracle/ora10g.tar.gz ora10g

$ echo $ORACLE_HOME
/u01/apps/oracle/product/10.2.0/db_1

/*
 This is my ORACLE_HOME so I would make a tar archive of "db_1" directory.
*/

$ cd /u01/apps/oracle/product/10.2.0/
$ tar czf /home/oracle/oraHomeBackup.tar.gz db_1
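The backup-and-restore round trip above can be sanity-checked on a throwaway directory before trusting it with a real ORACLE_HOME. A minimal sketch (all paths here are examples):

```shell
# Create a dummy "home" with one file standing in for real contents
mkdir -p /tmp/demo_home/dbs
echo "spfile placeholder" > /tmp/demo_home/dbs/spfileorcl.ora

cd /tmp
tar czf /tmp/demo_home.tar.gz demo_home   # backup
rm -rf /tmp/demo_home                     # simulate loss of the directory
tar xzf /tmp/demo_home.tar.gz             # restore from the archive

cat /tmp/demo_home/dbs/spfileorcl.ora     # prints "spfile placeholder"
```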

Manage your TimeZone data before the upgrade

(Only perform this step if you have data or Scheduler jobs with TZ info.)
From 9i onwards Oracle has two datatypes whose stored data may be affected by an update of the RDBMS DST (Daylight Saving Time) definitions: TIMESTAMP WITH LOCAL TIME ZONE (TSLTZ) and TIMESTAMP WITH TIME ZONE (TSTZ). If you have TZ data stored in your database, go through the following steps to ensure the integrity of your data during the database upgrade.
Check which TIMEZONE version file you are currently using.
SQL> select version from v$timezone_file;

   VERSION
----------
         2
If this gives 4, you may simply proceed with the upgrade even if you have TZ data.
If this gives higher than 4, look at My Oracle Support Note 553812.1.
If this gives lower than 4, perform the following steps:
Download utltzpv4.sql and run it.
SQL> @utltzpv4.sql    
DROP TABLE sys.sys_tzuv2_temptab CASCADE CONSTRAINTS
               *
ERROR at line 1:
ORA-00942: table or view does not exist



Table created.

DROP TABLE sys.sys_tzuv2_affected_regions CASCADE CONSTRAINTS
               *
ERROR at line 1:
ORA-00942: table or view does not exist



Table created.

Your current timezone version is 2!
.
Do a select * from sys.sys_tzuv2_temptab; to see if any TIMEZONE
data is affected by version 4 transition rules.
.
Any table with YES in the nested_tab column (last column) needs
a manual check as these are nested tables.

PL/SQL procedure successfully completed.


Commit complete.

SQL> 

/* Once the script finishes successfully execute the following query */

column table_owner format a4
column column_name format a18
select * from sys_tzuv2_temptab;

TABL TABLE_NAME                     COLUMN_NAME          ROWCOUNT NES
---- ------------------------------ ------------------ ---------- ---
SYS  SCHEDULER$_JOB                 LAST_ENABLED_TIME           3
SYS  SCHEDULER$_JOB                 LAST_END_DATE               1
SYS  SCHEDULER$_JOB                 LAST_START_DATE             1
SYS  SCHEDULER$_JOB                 NEXT_RUN_DATE               1
SYS  SCHEDULER$_JOB                 START_DATE                  1
SYS  SCHEDULER$_JOB_RUN_DETAILS     REQ_START_DATE              1
SYS  SCHEDULER$_JOB_RUN_DETAILS     START_DATE                  1
SYS  SCHEDULER$_WINDOW              LAST_START_DATE             2
SYS  SCHEDULER$_WINDOW              NEXT_START_DATE             2

9 rows selected.
If it returns no rows, there is nothing that needs to be done. Just proceed with the upgrade.
If it returns details of columns that contain TZ data which may be affected by the upgrade, see My Oracle Support Note 553812.1.
Note 553812.1 states that if you see SYS-owned SCHEDULER objects, it is safe to ignore them and proceed with the upgrade. But if you see user data or user-created jobs here, you need to take a backup of the data before the upgrade and restore it afterwards. Remove any user-created jobs and re-create them after the upgrade.
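For user tables flagged by the script, one common way to carry TSTZ data across the upgrade is to stash it as text and convert it back afterwards. A hedged sketch with illustrative table and column names (not from the output above):

```sql
-- Before the upgrade: preserve the time zone data as text.
CREATE TABLE scott.tz_backup AS
  SELECT pk_id,
         TO_CHAR(event_ts, 'YYYY-MM-DD HH24:MI:SS.FF TZR') AS event_ts_txt
    FROM scott.events;

-- After the upgrade: convert the text back into the TSTZ column.
UPDATE scott.events e
   SET e.event_ts = (SELECT TO_TIMESTAMP_TZ(b.event_ts_txt,
                              'YYYY-MM-DD HH24:MI:SS.FF TZR')
                       FROM scott.tz_backup b
                      WHERE b.pk_id = e.pk_id);
```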

Download Oracle Patchset 10.2.0.4 (6810189)

Now is the time to prepare the Oracle 10.2.0.4 patchset installer.
You can download the patchset from:

For Linux x86 (32-bit):

10.2.0.4 for Linux x86

For Linux x86-64 (64-bit):

10.2.0.4 for Linux x86-64
Once downloaded login as root in another console and execute following:
# xhost +SI:localuser:oracle

Install the patchset 10.2.0.4

Now come back to the oracle user console, move to the directory where you downloaded the patch, and unzip the file.
$ cd /home/oracle
$ unzip p6810189_10204_Linux-x86.zip
$ cd Disk1/
$ ./runInstaller
The first screen is welcome screen.
Oracle patchset 10.2.0.4 Installer Welcome page
Provide the Oracle home details here (The oracle 10.2.0.1 home).
Oracle patchset 10.2.0.4 Installer Oracle Home Details
The installer will perform prerequisite checks on this screen. Make sure you see the message “The overall result of this check is passed” in the output.
Oracle patchset 10.2.0.4 Installer Prerequisites Checks
Oracle Configuration Manager allows you to associate your configuration with your Metalink support account. You may skip this.
Oracle patchset 10.2.0.4 Installer Configuration Manager
Installation Summary.
Oracle patchset 10.2.0.4 Installer Summary
Installation progress.
Oracle patchset 10.2.0.4 Installer Progress
Once progress shows 100%, you will be asked to perform some root specific actions.
Oracle patchset 10.2.0.4 Installer Root specific
Login as root
# which dbhome
/usr/local/bin/dbhome
/* 
   This shows the location of the dbhome, oraenv and coraenv files.
   Rename the 10.2.0.1 versions, as root.sh creates new ones for 
   10.2.0.4.
*/
# cd /usr/local/bin/
# mv dbhome dbhome_10201
# mv oraenv oraenv_10201
# mv coraenv coraenv_10201

/* Now execute the script suggested by the installer. */

# /u01/apps/oracle/product/10.2.0/db_1/root.sh
Running Oracle10 root.sh script...

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/apps/oracle/product/10.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
When root.sh finishes successfully come back to installer and press ok. Then you should see the End of Installation page as below.
Welcome Oracle patchset 10.2.0.4 Installer End of Installation
Press exit, and your ORACLE_HOME is patched with the 10.2.0.4 patchset. All databases running from this ORACLE_HOME will be unusable until you upgrade them to 10.2.0.4 as well.
$ sqlplus / as sysdba

Connected to an idle instance.

SQL> startup
ORACLE instance started.

Total System Global Area  608174080 bytes
Fixed Size                  1268896 bytes
Variable Size             171967328 bytes
Database Buffers          427819008 bytes
Redo Buffers                7118848 bytes
Database mounted.
ORA-01092: ORACLE instance terminated. Disconnection forced

SQL> exit
As you can see, the database is unable to open and throws the error ORA-01092. Let's look at the alert log file to see what actually happened.
$ tail -f /u01/apps/oracle/admin/ora10g/bdump/alert_ora10g.log 
SMON: enabling cache recovery
Fri Jul  2 15:30:15 2010
Errors in file /u01/apps/oracle/admin/ora10g/udump/ora10g_ora_12856.trc:
ORA-00704: bootstrap process failure
ORA-39700: database must be opened with UPGRADE option
Fri Jul  2 15:30:15 2010
Error 704 happened during db open, shutting down database
USER: terminating instance due to error 704
Instance terminated by USER, pid = 12856
ORA-1092 signalled during: ALTER DATABASE OPEN...
$
The alert log states that the database has to be upgraded first using UPGRADE option to be able to OPEN normally.

Upgrade the database from 10.2.0.1 to 10.2.0.4

Now start the database with the upgrade option and run the pre-upgrade information tool to see if the database is okay for the upgrade and whether anything must be changed before starting the upgrade.
$ cd $ORACLE_HOME/rdbms/admin
$ sqlplus / as sysdba

SQL> startup upgrade

SQL> spool pre_upgrade.log

SQL> @utlu102i.sql
Oracle Database 10.2 Upgrade Information Utility    07-02-2010 17:39:25
.
**********************************************************************
Database:
**********************************************************************
--> name:       ORA10G
--> version:    10.2.0.1.0
--> compatible: 10.2.0.1.0
--> blocksize:  8192
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 488 MB
.... AUTOEXTEND additional space required: 8 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 400 MB
.... AUTOEXTEND additional space required: 370 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 245 MB
.... AUTOEXTEND additional space required: 15 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 58 MB
.... AUTOEXTEND additional space required: 38 MB
--> EXAMPLE tablespace is adequate for the upgrade.
.... minimum required size: 69 MB
.
**********************************************************************
Update Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
-- No update parameter changes are required.
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 10.2 init.ora or spfile]
**********************************************************************
-- No obsolete parameters found. No changes are required
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views         [upgrade]  VALID
--> Oracle Packages and Types    [upgrade]  VALID
--> JServer JAVA Virtual Machine [upgrade]  VALID
--> Oracle XDK for Java          [upgrade]  VALID
--> Oracle Java Packages         [upgrade]  VALID
--> Oracle Text                  [upgrade]  VALID
--> Oracle XML Database          [upgrade]  VALID
--> Oracle Workspace Manager     [upgrade]  VALID
--> Oracle Data Mining           [upgrade]  VALID
--> OLAP Analytic Workspace      [upgrade]  VALID
--> OLAP Catalog                 [upgrade]  VALID
--> Oracle OLAP API              [upgrade]  VALID
--> Oracle interMedia            [upgrade]  VALID
--> Spatial                      [upgrade]  VALID
--> Expression Filter            [upgrade]  VALID
--> EM Repository                [upgrade]  VALID
--> Rule Manager                 [upgrade]  VALID
.

PL/SQL procedure successfully completed.

SQL> spool off
The output from utlu102i.sql shows that everything is fine, no changes are required, and the database is ready for the upgrade.
Let's start the upgrade process.
SQL> spool upgrade.log
SQL> @catupgrd.sql
.
.
.
[output trimmed]
.
.
.
Oracle Database 10.2 Upgrade Status Utility           07-02-2010 18:13:40
.
Component                                Status         Version  HH:MM:SS
Oracle Database Server                    VALID      10.2.0.4.0  00:09:32
JServer JAVA Virtual Machine              VALID      10.2.0.4.0  00:03:34
Oracle XDK                                VALID      10.2.0.4.0  00:00:29
Oracle Database Java Packages             VALID      10.2.0.4.0  00:00:39
Oracle Text                               VALID      10.2.0.4.0  00:00:20
Oracle XML Database                       VALID      10.2.0.4.0  00:01:29
Oracle Workspace Manager                  VALID      10.2.0.4.3  00:00:39
Oracle Data Mining                        VALID      10.2.0.4.0  00:00:18
OLAP Analytic Workspace                   VALID      10.2.0.4.0  00:00:19
OLAP Catalog                              VALID      10.2.0.4.0  00:00:50
Oracle OLAP API                           VALID      10.2.0.4.0  00:00:45
Oracle interMedia                         VALID      10.2.0.4.0  00:03:48
Spatial                                   VALID      10.2.0.4.0  00:01:29
Oracle Expression Filter                  VALID      10.2.0.4.0  00:00:09
Oracle Enterprise Manager                 VALID      10.2.0.4.0  00:01:16
Oracle Rule Manager                       VALID      10.2.0.4.0  00:00:07
.
Total Upgrade Time: 00:25:52
DOC>#######################################################################
DOC>#######################################################################
DOC>
DOC>   The above PL/SQL lists the SERVER components in the upgraded
DOC>   database, along with their current version and status.
DOC>
DOC>   Please review the status and version columns and look for
DOC>   any errors in the spool log file.  If there are errors in the spool
DOC>   file, or any components are not VALID or not the current version,
DOC>   consult the Oracle Database Upgrade Guide for troubleshooting
DOC>   recommendations.
DOC>
DOC>   Next shutdown immediate, restart for normal operation, and then
DOC>   run utlrp.sql to recompile any invalid application objects.
DOC>
DOC>#######################################################################
DOC>#######################################################################
DOC>#
SQL> spool off
SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> 
When the upgrade script catupgrd.sql finishes, shut down the database and review the spool file of the upgrade process to find out whether anything failed. If something failed, fix it and re-run the upgrade process.
The upgrade process may leave many objects invalid in the database. Perform a normal startup and run the utlrp.sql script to recompile any invalid objects.
$ sqlplus / as sysdba

Connected to an idle instance.

SQL> startup

SQL> spool recompile.log

SQL> @utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN  2010-07-02 18:23:16

DOC>   The following PL/SQL block invokes UTL_RECOMP to recompile invalid
DOC>   objects in the database. Recompilation time is proportional to the
DOC>   number of invalid objects in the database, so this command may take
DOC>   a long time to execute on a database with a large number of invalid
DOC>   objects.
DOC>
DOC>   Use the following queries to track recompilation progress:
DOC>
DOC>   1. Query returning the number of invalid objects remaining. This
DOC>      number should decrease with time.
DOC>         SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);
DOC>
DOC>   2. Query returning the number of objects compiled so far. This number
DOC>      should increase with time.
DOC>         SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
DOC>
DOC>   This script automatically chooses serial or parallel recompilation
DOC>   based on the number of CPUs available (parameter cpu_count) multiplied
DOC>   by the number of threads per CPU (parameter parallel_threads_per_cpu).
DOC>   On RAC, this number is added across all RAC nodes.
DOC>
DOC>   UTL_RECOMP uses DBMS_SCHEDULER to create jobs for parallel
DOC>   recompilation. Jobs are created without instance affinity so that they
DOC>   can migrate across RAC nodes. Use the following queries to verify
DOC>   whether UTL_RECOMP jobs are being created and run correctly:
DOC>
DOC>   1. Query showing jobs created by UTL_RECOMP
DOC>         SELECT job_name FROM dba_scheduler_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>
DOC>   2. Query showing UTL_RECOMP jobs that are running
DOC>         SELECT job_name FROM dba_scheduler_running_jobs
DOC>            WHERE job_name like 'UTL_RECOMP_SLAVE_%';
DOC>#

PL/SQL procedure successfully completed.


TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END  2010-07-02 18:23:50


PL/SQL procedure successfully completed.

DOC> The following query reports the number of objects that have compiled
DOC> with errors (objects that compile with errors have status set to 3 in
DOC> obj$). If the number is higher than expected, please examine the error
DOC> messages reported with each object (using SHOW ERRORS) to see if they
DOC> point to system misconfiguration or resource constraints that must be
DOC> fixed before attempting to recompile these objects.
DOC>#

OBJECTS WITH ERRORS
-------------------
                  0

DOC> The following query reports the number of errors caught during
DOC> recompilation. If this number is non-zero, please query the error
DOC> messages in the table UTL_RECOMP_ERRORS to see if any of these errors
DOC> are due to misconfiguration or resource constraints that must be
DOC> fixed before objects can compile successfully.
DOC>#

ERRORS DURING RECOMPILATION
---------------------------
                          0


PL/SQL procedure successfully completed.

SQL> spool off

/*
  When the script utlrp.sql completes, go ahead and verify that all the components
  are upgraded to 10.2.0.4.
*/
set lines 10000
set pages 1000
column comp_name format a40
column version format a12
column status format a6
select comp_name, version, status from sys.dba_registry;

COMP_NAME                                VERSION      STATUS
---------------------------------------- ------------ ------
Oracle Database Catalog Views            10.2.0.4.0   VALID
Oracle Database Packages and Types       10.2.0.4.0   VALID
Oracle Workspace Manager                 10.2.0.4.3   VALID
JServer JAVA Virtual Machine             10.2.0.4.0   VALID
Oracle XDK                               10.2.0.4.0   VALID
Oracle Database Java Packages            10.2.0.4.0   VALID
Oracle Expression Filter                 10.2.0.4.0   VALID
Oracle Data Mining                       10.2.0.4.0   VALID
Oracle Text                              10.2.0.4.0   VALID
Oracle XML Database                      10.2.0.4.0   VALID
Oracle Rule Manager                      10.2.0.4.0   VALID
Oracle interMedia                        10.2.0.4.0   VALID
OLAP Analytic Workspace                  10.2.0.4.0   VALID
Oracle OLAP API                          10.2.0.4.0   VALID
OLAP Catalog                             10.2.0.4.0   VALID
Spatial                                  10.2.0.4.0   VALID
Oracle Enterprise Manager                10.2.0.4.0   VALID

17 rows selected.
The above query shows that the database components are at 10.2.0.4 version now.

Restore the database back to 10.2.0.1 if anything failed

Suppose the upgrade process fails; you can always go back to the 10.2.0.1 level by restoring the backup we took at the beginning of the process and starting the upgrade over.
Just shutdown the database and restore the backups as follows:
$ echo $ORACLE_HOME
/u01/apps/oracle/product/10.2.0/db_1

$ cd /u01/apps/oracle/product/10.2.0/
$ rm -fr db_1
$ tar xzf /home/oracle/oraHomeBackup.tar.gz

/* This will restore the old ORACLE_HOME */

$ cd /u01/apps/oracle/oradata/
$ rm -fr ora10g
$ tar xzf /home/oracle/ora10g.tar.gz

/* This will restore a consistent copy of datafiles, controlfiles and redo log files */
Now start the database and check which version it is:
$ sqlplus / as sysdba

Connected to an idle instance.

SQL> startup

set lines 10000
set pages 1000
column comp_name format a40
column version format a12
column status format a6
select comp_name, version, status from sys.dba_registry;

COMP_NAME                                VERSION      STATUS
---------------------------------------- ------------ ------
Oracle Database Catalog Views            10.2.0.1.0   VALID
Oracle Database Packages and Types       10.2.0.1.0   VALID
Oracle Workspace Manager                 10.2.0.1.0   VALID
JServer JAVA Virtual Machine             10.2.0.1.0   VALID
Oracle XDK                               10.2.0.1.0   VALID
Oracle Database Java Packages            10.2.0.1.0   VALID
Oracle Expression Filter                 10.2.0.1.0   VALID
Oracle Data Mining                       10.2.0.1.0   VALID
Oracle Text                              10.2.0.1.0   VALID
Oracle XML Database                      10.2.0.1.0   VALID
Oracle Rules Manager                     10.2.0.1.0   VALID
Oracle interMedia                        10.2.0.1.0   VALID
OLAP Analytic Workspace                  10.2.0.1.0   VALID
Oracle OLAP API                          10.2.0.1.0   VALID
OLAP Catalog                             10.2.0.1.0   VALID
Spatial                                  10.2.0.1.0   VALID
Oracle Enterprise Manager                10.2.0.1.0   VALID

17 rows selected.
We are back to where we started.

Saturday, December 1, 2012

Step By Step Process To Create Physical Standby Database Using RMAN


Hi,
I got many mails asking me to post steps for implementing a standby database in a real environment. I completed this exercise a few years back but didn't post it on the blog. To implement a standby database using RMAN, follow the steps below:

Applies to: Oracle Server - Enterprise Edition - Version: 8.1.7.4 to 11.2.0.1.0
Reference:
http://download.oracle.com/docs/cd/B10501_01/server.920/a96566/rcmstand.htm

Summary of Steps:
Step 1: Backup the database that includes backup of datafiles, archivelogs and controlfile for standby
Step 2: Move the backups to the standby server
Step 3: Make proper changes in the parameter files of both primary and standby database
Step 4: Do the restore and recover on standby database
Step 5: Put the standby database in recover managed mode

As a primary responsibility of the DBA, please make sure you back up the controlfile, datafiles and archivelogs. In the example below we will make use of this backup for creating the standby database.
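Once the backups are on the standby server, Steps 4 and 5 of the summary reduce to a few commands. A minimal sketch (the standby instance is assumed to be started with its own pfile; connect strings and the exact DUPLICATE options depend on your setup):

```
-- Step 4: restore and recover on the standby (RMAN, connected to both
-- the primary as TARGET and the standby instance as AUXILIARY)
RMAN> DUPLICATE TARGET DATABASE FOR STANDBY DORECOVER;

-- Step 5: put the standby in managed recovery mode (SQL*Plus on the standby)
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```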


Step 1: Backup the database that includes backup of datafiles, archivelogs and controlfile for standby

C:\>rman target /

Recovery Manager: Release 10.2.0.3.0 - Production on Wed Feb 4 12:28:47 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

connected to target database: TESTDB (DBID=2606456859)

RMAN> run
2> {
3> allocate channel c1 type disk format '\\node1.in.Test.com\TESTDB_Backup\%d_DATA_%U_%T';
4> allocate channel c2 type disk format '\\node1.in.Test.com\TESTDB_Backup\%d_DATA_%U_%T';
5> allocate channel c3 type disk format '\\node1.in.Test.com\TESTDB_Backup\%d_DATA_%U_%T';
6> backup database plus archivelog;
7> }

released channel: ORA_DISK_1
allocated channel: c1
channel c1: sid=615 instance=TESTDB1 devtype =DISK

allocated channel: c2
channel c2: sid=596 instance=TESTDB1 devtype =DISK

allocated channel: c3
channel c3: sid=619 instance=TESTDB1 devtype =DISK


Starting backup at 04-FEB-09
current log archived
channel c1: starting archive log backupset
channel c1: specifying archive log(s) in backup set
input archive log thread=2 sequence=2855 recid=7974 stamp=677037473
input archive log thread=2 sequence=2856 recid=7978 stamp=677070799

…………………………….
…………………………….
…………………………….
…………………………….

channel c3: backup set complete, elapsed time: 00:01:06
Finished backup at 04-FEB-09

Starting backup at 04-FEB-09
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
input datafile fno=00004 name=H:\ORADATA\TESTDB\UNDOTBS02.DBF
input datafile fno=00008 name=H:\ORADATA\TESTDB\COMET01.DBF
input datafile fno=00006 name=H:\ORADATA\TESTDB\PHASER01.DBF
input datafile fno=00001 name=H:\ORADATA\TESTDB\SYSTEM01.DBF
input datafile fno=00016 name=H:\ORADATA\TESTDB\HDK_TBS_01.DBF
input datafile fno=00014 name=H:\ORADATA\TESTDB\LCS_TBS_01
input datafile fno=00011 name=H:\ORADATA\TESTDB\COMET04.DBF
channel c1: starting piece 1 at 04-FEB-09
channel c2: starting full datafile backupset
…………………………….
…………………………….
…………………………….
…………………………….
input archive log thread=1 sequence=4872 recid=8154 stamp=677957862
channel c2: starting piece 1 at 04-FEB-09
channel c1: finished piece 1 at 04-FEB-09
piece handle=\\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_D1K6HK7D_1_1_20090204 tag=TAG20090204T173748 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:36
channel c2: finished piece 1 at 04-FEB-09
piece handle=\\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_D2K6HK7D_1_1_20090204 tag=TAG20090204T173748 comment=NONE
channel c2: backup set complete, elapsed time: 00:00:36
Finished backup at 04-FEB-09

Starting Control File and SPFILE Autobackup at 04-FEB-09
piece handle=\\NODE1.IN.TEST.COM\TESTDB_BACKUP\C-2606456859-20090204-00 comment=NONE
Finished Control File and SPFILE Autobackup at 04-FEB-09
released channel: c1
released channel: c2
released channel: c3

RMAN>
RMAN> run
2> {
3> allocate channel c1 type disk format '\\node1.in.Test.com\TESTDB_Backup\%d_CTRL_%U_%T';
4> backup current controlfile for standby;
5> }

allocated channel: c1
channel c1: sid=615 instance=TESTDB1 devtype=DISK

Starting backup at 04-FEB-09
channel c1: starting full datafile backupset
channel c1: specifying datafile(s) in backupset
including standby control file in backupset
channel c1: starting piece 1 at 04-FEB-09
channel c1: finished piece 1 at 04-FEB-09
piece handle=\\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_CTRL_D4K6HOIR_1_1_20090204 tag=TAG20090204T185211 comment=NONE
channel c1: backup set complete, elapsed time: 00:00:08
Finished backup at 04-FEB-09

Starting Control File and SPFILE Autobackup at 04-FEB-09
piece handle=\\NODE1.IN.TEST.COM\TESTDB_BACKUP\C-2606456859-20090204-01 comment=NONE
Finished Control File and SPFILE Autobackup at 04-FEB-09
released channel: c1

RMAN> exit

Step 2: Move the backups to the standby server
+ FTP or SCP the backup pieces.
+ If the backups are on an NFS mount, mount the NFS share on the standby server with the same path as on the primary server.
+ If the backups are on a tape device, make sure the media management software is configured on the standby server so that the backups can be restored there.
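If the copied backup pieces end up at a different path on the standby server than where they were written on the primary, one option (not shown in the original run, so treat it as a sketch) is to register them with RMAN on the standby before restoring:

```sql
RMAN> catalog start with '\\node1.in.Test.com\TESTDB_Backup\';
```

RMAN will then find the cataloged copies during RESTORE instead of looking for the original piece handles.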

Note 1: NFS configuration is a relatively straightforward process; the required daemons can all be started at boot time with a few modifications to your /etc/rc.conf file.
Note 2: On Linux, add the export entry to the /etc/exports file (study the link below for details). On Solaris the same is done by adding an entry to /etc/dfs/dfstab, and on Windows Server you share the folder through its Properties dialog.
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/network-nfs.html
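For reference, a minimal Linux-side export for the backup share might look like the following; the directory and client host name are placeholders, not values from this setup:

```
# /etc/exports on the server holding the backups
/backup/TESTDB_Backup    standby-host(ro,sync,no_subtree_check)
```

After editing the file, run `exportfs -ra` on the server to publish the export.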


Step 3: Make proper changes in the parameter files of both primary and standby database
Add the below parameter in primary database parameter file :
log_archive_dest_2='SERVICE=STANDBY'
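For the SERVICE=STANDBY destination to resolve, the primary's tnsnames.ora needs a matching alias pointing at the standby listener. A sketch (host name and port are assumptions):

```
STANDBY =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = standby-host)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = TESTDB))
  )
```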

Add the below parameters in the standby database parameter file :
Create the pfile as follows (copy the pfile from the primary and change db_unique_name, instance_name, db_file_name_convert, log_file_name_convert and standby_archive_dest.)

db_name = TESTDB
instance_name = TESTcluster
db_files = 1024
db_file_multiblock_read_count = 8
db_block_size = 8192
control_files = F:\ORADATA\TESTDB\STDCONTROL.CTL
sga_max_size = 1288490188
sga_target = 1073741824
log_checkpoint_interval = 10000
log_checkpoint_timeout = 1800
processes = 250
parallel_max_servers = 5
log_archive_format = ARC%S_%R.%T
global_names = true
undo_management = AUTO
undo_tablespace = UNDOTBS1
thread = 1
remote_login_passwordfile = exclusive
compatible = 10.2.0.3
sort_area_size = 66560
open_cursors = 300
audit_file_dest = C:\oracle10g\product\10.2.0\db_1\admin\TESTDB\ADUMP
background_dump_dest = C:\oracle10g\product\10.2.0\db_1\admin\TESTDB\BDUMP
user_dump_dest = C:\oracle10g\product\10.2.0\db_1\admin\TESTDB\UDUMP
core_dump_dest = C:\oracle10g\product\10.2.0\db_1\admin\TESTDB\CDUMP
LOG_ARCHIVE_DEST_1 = 'LOCATION=F:\oradata\TESTDB\arch'
DB_FILE_NAME_CONVERT = ('H:\ORADATA\TESTDB','F:\ORADATA\TESTDB')
LOG_FILE_NAME_CONVERT = ('I:\ORADATA\TESTDB','F:\oradata\TESTDB', 'J:\ORADATA\TESTDB','F:\oradata\TESTDB')
DB_UNIQUE_NAME = TESTDB
STANDBY_FILE_MANAGEMENT = AUTO
STANDBY_ARCHIVE_DEST ='F:\oradata\TESTDB\arch'


Step 4: Create instance

On Windows, use the following command to create the instance:
D:\>oradim -NEW -SID TESTcluster -STARTMODE auto -SYSPWD sys

On Linux or UNIX, just set the ORACLE_SID.

Step 5: Create password file

Use the following command to create the password file:
D:\>orapwd file=<path to password file> password=<sys password>

Step 6: Do the restore and recover on standby database
Connect to the target (standby) database and start it up in nomount mode using the pfile.

C:\WINDOWS\system32>set ORACLE_SID=TESTCLUSTER
C:\WINDOWS\system32>rman target sys

Recovery Manager: Release 10.2.0.3.0 - Production on Wed Feb 4 18:54:41 2009

Copyright (c) 1982, 2005, Oracle. All rights reserved.

target database Password:
connected to target database (not started)

RMAN> set dbid=2606456859;

executing command: SET DBID

RMAN> startup nomount;

Oracle instance started

Total System Global Area 1291845632 bytes

Fixed Size 1292276 bytes
Variable Size 494929932 bytes
Database Buffers 788529152 bytes
Redo Buffers 7094272 bytes

RMAN>
RMAN> restore standby controlfile from '\\node1\TESTDB_Backup\TESTDB_CTRL_D4K6HOIR_1_1_20090204';

Starting restore at 04-FEB-09
using target database control file instead of recovery catalog
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=267 device type=DISK

channel ORA_DISK_1: restoring control file
channel ORA_DISK_1: restore complete, elapsed time: 00:00:06
output filename=F:\ORADATA\TESTDB\STDCONTROL.CTL
Finished restore at 04-FEB-09

RMAN> sql 'alter database mount standby database';

sql statement: alter database mount
released channel: ORA_DISK_1

RMAN> restore database;

Starting restore at 04-FEB-09
allocated channel: ORA_DISK_1
channel ORA_DISK_1: sid=263 device type=DISK

channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00002 to F:\ORADATA\TESTDB\UNDOTBS01.DBF
restoring datafile 00007 to F:\ORADATA\TESTDB\RMSUSR01.DBF
restoring datafile 00010 to F:\ORADATA\TESTDB\COMET03.DBF
restoring datafile 00013 to F:\ORADATA\TESTDB\SCADA_TBS01.DBF
restoring datafile 00015 to F:\ORADATA\TESTDB\TEST_TRACE01.DBF
restoring datafile 00017 to F:\ORADATA\TESTDB\INDX_PHASE_01.DBF
restoring datafile 00018 to F:\ORADATA\TESTDB\INDX_EDITOR_01.DBF
channel ORA_DISK_1: reading from backup piece \\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_CVK6H488_1_1_20090204
channel ORA_DISK_1: restored backup piece 1
piece handle=\\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_CVK6H488_1_1_20090204 tag=TAG20090204T130511
channel ORA_DISK_1: restore complete, elapsed time: 00:23:57
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00003 to F:\ORADATA\TESTDB\SYSAUX01.DBF
restoring datafile 00005 to F:\ORADATA\TESTDB\USERS01.DBF
restoring datafile 00009 to F:\ORADATA\TESTDB\COMET02.DBF
restoring datafile 00012 to F:\ORADATA\TESTDB\EDITOR_TBS01.DBF
restoring datafile 00019 to F:\ORADATA\TESTDB\INDX_COMET_01.DBF
restoring datafile 00020 to F:\ORADATA\TESTDB\AUTOCALPF.DBF
restoring datafile 00021 to F:\ORADATA\TESTDB\AUTOCALPF_INDEX.DBF
channel ORA_DISK_1: reading from backup piece \\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_D0K6H488_1_1_20090204
channel ORA_DISK_1: restored backup piece 1
piece handle=\\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_D0K6H488_1_1_20090204 tag=TAG20090204T130511
channel ORA_DISK_1: restore complete, elapsed time: 00:28:45
channel ORA_DISK_1: starting datafile backupset restore
channel ORA_DISK_1: specifying datafile(s) to restore from backup set
restoring datafile 00001 to F:\ORADATA\TESTDB\SYSTEM01.DBF
restoring datafile 00004 to F:\ORADATA\TESTDB\UNDOTBS02.DBF
restoring datafile 00006 to F:\ORADATA\TESTDB\PHASER01.DBF
restoring datafile 00008 to F:\ORADATA\TESTDB\COMET01.DBF
restoring datafile 00011 to F:\ORADATA\TESTDB\COMET04.DBF
restoring datafile 00014 to F:\ORADATA\TESTDB\LCS_TBS_01
restoring datafile 00016 to F:\ORADATA\TESTDB\HDK_TBS_01.DBF
channel ORA_DISK_1: reading from backup piece \\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_CUK6H488_1_1_20090204
channel ORA_DISK_1: restored backup piece 1
piece handle=\\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_CUK6H488_1_1_20090204 tag=TAG20090204T130511
channel ORA_DISK_1: restore complete, elapsed time: 00:37:46
Finished restore at 04-FEB-09

Step 7: Recover till the current SCN

Check the backed-up archivelogs and recover the standby database up to that point.

RMAN> list backup of archivelog all;
……………………………………………………………
……………………………………………………………
……………………………………………………………
……………………………………………………………

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
2389 38.81M DISK 00:00:34 04-FEB-09
BP Key: 3706 Status: AVAILABLE Compressed: NO Tag: TAG20090204T173748
Piece Name: \\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_D2K6HK7D_1_1_20090204

List of Archived Logs in backup set 2389
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
1 4872 220387955 04-FEB-09 220524445 04-FEB-09

BS Key Size Device Type Elapsed Time Completion Time
------- ---------- ----------- ------------ ---------------
2390 46.58M DISK 00:00:35 04-FEB-09
BP Key: 3707 Status: AVAILABLE Compressed: NO Tag: TAG20090204T173748
Piece Name: \\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_D1K6HK7D_1_1_20090204

List of Archived Logs in backup set 2390
Thrd Seq Low SCN Low Time Next SCN Next Time
---- ------- ---------- --------- ---------- ---------
2 2907 220387972 04-FEB-09 220524442 04-FEB-09



This lists all the backed-up archivelogs. Check the highest sequence that was backed up and recover until the next sequence number, as follows:
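If you would rather query than scan the LIST output, the highest backed-up sequence per thread can also be read from v$archived_log on the primary. This is a convenience check of my own, not part of the original steps:

```sql
SQL> SELECT thread#, MAX(sequence#) AS max_backed_up
     FROM   v$archived_log
     WHERE  backup_count > 0
     GROUP  BY thread#;
```

The UNTIL SEQUENCE for recovery is then one higher than the largest backed-up sequence of the recovery thread.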

RMAN> recover database until sequence 4873;

Starting recover at 05-FEB-09
using channel ORA_DISK_1

starting media recovery

archive log thread 2 sequence 2908 is already on disk as file F:\ORADATA\TESTDB\ARCH\ARC02908_0643721627.002
channel ORA_DISK_1: starting archive log restore to default destination
channel ORA_DISK_1: restoring archive log
archive log thread=1 sequence=4872
channel ORA_DISK_1: reading from backup piece \\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_D2K6HK7D_1_1_20090204
ORA-19870: error reading backup piece \\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_D2K6HK7D_1_1_20090204
ORA-19505: failed to identify file "\\NODE1.IN.TEST.COM\TESTDB_BACKUP\TESTDB_DATA_D2K6HK7D_1_1_20090204"
ORA-27041: unable to open file
OSD-04002: unable to open file
O/S-Error: (OS 2) The system cannot find the file specified.
failover to previous backup
Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'F:\ORADATA\TESTDB\SYSTEM01.DBF'

RMAN-00571: ===========================================================
RMAN-00569: =============== ERROR MESSAGE STACK FOLLOWS ===============
RMAN-00571: ===========================================================
RMAN-03002: failure of recover command at 02/05/2009 11:00:30
RMAN-20506: no backup of archivelog found
RMAN-06053: unable to perform media recovery because of missing log
RMAN-06025: no backup of log thread 1 seq 4872 lowscn 220387955 found to restore

RMAN> exit


Note: No need to worry about these errors; you can safely ignore them and move on to Step 8.
Oracle Error:
ORA-01547: warning: RECOVER succeeded but OPEN RESETLOGS would get error below
ORA-01194: file 1 needs more recovery to be consistent
ORA-01110: data file 1: 'F:\ORADATA\TESTDB\SYSTEM01.DBF'



Step 8: Put the standby database in recover managed mode

C:\WINDOWS\system32>sqlplus sys as sysdba

SQL*Plus: Release 10.2.0.3.0 - Production on Thu Feb 5 11:05:21 2009

Copyright (c) 1982, 2006, Oracle. All Rights Reserved.

Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.3.0 - Production
With the Partitioning, OLAP and Data Mining options

SQL> alter database recover managed standby database disconnect from session;

Database altered.
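To confirm that the managed recovery process (MRP) is actually running and applying logs, you can query v$managed_standby on the standby. This verification step is my addition, not part of the original walkthrough:

```sql
SQL> SELECT process, status, thread#, sequence#
     FROM   v$managed_standby
     WHERE  process LIKE 'MRP%';
```

A STATUS of APPLYING_LOG (or WAIT_FOR_LOG between shipments) indicates the standby is keeping up with the primary.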

Here is your standby database, ready to use whenever you need to switch roles: when there is a requirement to make critical changes on the primary (production) database, you can make the standby the primary; and with fast-start failover implemented, if something happens to the primary, the standby will automatically become the primary.

In my upcoming posts, I will explain a few failover and switchover scenarios and share useful information for working with Data Guard and standby databases.

Hope it helps.