Tuesday, December 25, 2012



When implementing a backup and recovery strategy, you have the following solutions available:
- Recovery Manager (RMAN)
This tool integrates with sessions running on an Oracle database to perform a
range of backup and recovery activities, including maintaining an RMAN
repository of historical data about backups. You can access RMAN through the command line or through Oracle Enterprise Manager.
- User-managed backup and recovery
In this solution, you perform backup and recovery with a mixture of host
operating system commands and SQL*Plus recovery commands.
Both of the preceding solutions are supported by Oracle and are fully documented, but RMAN is the preferred solution for database backup and recovery.
RMAN performs the same types of backup and recovery available through user-managed techniques more easily, provides a common interface for backup tasks across different host operating systems, and offers a number of backup techniques not available through user-managed methods.
Most of this manual focuses on RMAN-based backup and recovery.
User-managed backup and recovery techniques are covered in Section VIII, “Performing User-Managed Backup and Recovery.” RMAN gives you access to several backup and recovery techniques and features not available with user-managed backup and recovery. The most noteworthy are the following:
- Incremental backups
An incremental backup stores only blocks changed since a previous backup.
Thus, incremental backups are more compact and speed up recovery by reducing the need to apply redo during datafile media recovery. If you enable block change tracking, then you can improve performance by avoiding full scans of every input datafile. You use the BACKUP INCREMENTAL command to perform incremental backups (a brief RMAN sketch follows this list).
- Block media recovery
You can repair a datafile that has only a small number of corrupt data blocks without taking it offline or restoring it from backup. You use the RECOVER command to perform block media recovery.
- Unused block compression
In unused block compression, RMAN can skip data blocks that have never been used and, in some cases, used blocks that are currently unused.
- Binary compression
A binary compression mechanism integrated into Oracle Database reduces the size of backups.
- Encrypted backups
RMAN uses backup encryption capabilities integrated into Oracle Database to store backup sets in an encrypted format. To create encrypted backups on disk, the database must use the Advanced Security Option. To create encrypted backups directly on tape, RMAN must use the Oracle Secure Backup SBT interface, but does not require the Advanced Security Option.
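As an illustration only, here is a minimal sketch of the commands behind several of these features; the level-0/level-1 strategy, the datafile and block numbers, and the change tracking file path are hypothetical examples, not recommendations:
RMAN> BACKUP INCREMENTAL LEVEL 0 DATABASE;   # one-time baseline backup
RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;   # later backups copy only blocks changed since the baseline
RMAN> RECOVER DATAFILE 4 BLOCK 123;          # block media recovery of a single corrupt block
RMAN> CONFIGURE ENCRYPTION FOR DATABASE ON;  # subsequent backup sets are written in encrypted format
SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE '/u01/app/oracle/bct.f';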
Whether you use RMAN or user-managed methods, you can supplement physical backups with logical backups of schema objects made with the Data Pump Export utility.
You can later use Data Pump Import to re-create data after restore and recovery. Logical backups are for the most part beyond the scope of the backup and recovery documentation.
- Oracle Flashback Technology
As explained in Oracle Database Concepts, Oracle Flashback Technology complements your physical backup and recovery strategy. This set of features provides an additional layer of data protection. Specifically, you can use flashback features to view past states of data and rewind your database without restoring backups or performing point-in-time recovery. In general, flashback features are more efficient and less disruptive than media recovery in most situations in which they apply.
- Logical Flashback Features
Most of the flashback features of Oracle operate at the logical level, enabling you to view and manipulate database objects. The logical-level flashback features of Oracle do not depend on RMAN and are available whether or not RMAN is part of your backup strategy. With the exception of Flashback Drop, the logical flashback features rely on undo data, which are records of the effects of each database update and the values overwritten in the update.
Oracle Database includes the following logical flashback features:
- Oracle Flashback Query
You can specify a target time and run queries against a database, viewing results as they would have appeared at the target time. To recover from an unwanted change like an update to a table, you could choose a target time before the error and run a query to retrieve the contents of the lost rows. (A brief SQL sketch of Flashback Query and Flashback Table follows this list.)
Oracle Database Advanced Application Developer’s Guide explains how to use this feature.
- Oracle Flashback Version Query
You can view all versions of all rows that ever existed in one or more tables in a specified time interval. You can also retrieve metadata about the differing versions of the rows, including start and end time, operation, and transaction ID of the transaction that created the version. You can use this feature to recover lost data values and to audit changes to the tables queried.
Oracle Database Advanced Application Developer’s Guide explains how to use this feature.
- Oracle Flashback Transaction Query
You can view changes made by a single transaction, or by all the transactions during a period of time. Oracle Database Advanced Application Developer’s Guide explains how to use this feature.
- Oracle Flashback Transaction
You can reverse a transaction. Oracle Database determines the dependencies between transactions and in effect creates a compensating transaction that reverses the unwanted changes. The database rewinds to a state as if the transaction, and any transactions that could be dependent on it, had never happened. Oracle
Database Advanced Application Developer’s Guide explains how to use this feature.
- Oracle Flashback Table
You can recover a table or set of tables to a specified point in time in the past
without taking any part of the database offline. In many cases, Flashback Table eliminates the need to perform more complicated point-in-time recovery operations. Flashback Table restores tables while automatically maintaining
associated attributes such as current indexes, triggers, and constraints, in this way enabling you to avoid finding and restoring database-specific properties.
- Oracle Flashback Drop
You can reverse the effects of a DROP TABLE statement. “Rewinding a DROP
TABLE Operation with Flashback Drop” on page 16-7 explains how to use this
feature.
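As an illustration only, here is a minimal sketch of Flashback Query, Flashback Table, and Flashback Drop against a hypothetical hr.employees table; the 15-minute interval is arbitrary, and Flashback Table requires row movement to be enabled first:
SQL> SELECT * FROM hr.employees AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);
SQL> ALTER TABLE hr.employees ENABLE ROW MOVEMENT;
SQL> FLASHBACK TABLE hr.employees TO TIMESTAMP (SYSTIMESTAMP - INTERVAL '15' MINUTE);
SQL> FLASHBACK TABLE hr.employees TO BEFORE DROP;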
A flashback data archive enables you to use some of the logical flashback features to access data from far back in the past. A flashback data archive consists of one or more tablespaces or parts of tablespaces. When you create a flashback data archive, you specify the name, retention period, and tablespace. You can also specify a default flashback data archive. The database automatically purges old historical data the day after the retention period expires.
You can turn flashback archiving on and off for individual tables. By default, flashback archiving is turned off for every table.
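For example (a minimal sketch; the archive name fla1, the tablespace fda_ts, the one-year retention, and the hr.employees table are hypothetical choices):
SQL> CREATE FLASHBACK ARCHIVE fla1 TABLESPACE fda_ts RETENTION 1 YEAR;
SQL> ALTER TABLE hr.employees FLASHBACK ARCHIVE fla1;   -- turn flashback archiving on for one table
SQL> ALTER TABLE hr.employees NO FLASHBACK ARCHIVE;     -- turn it off again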
- Flashback Database
At the physical level, Oracle Flashback Database provides a more efficient data protection alternative to database point-in-time recovery (DBPITR). If the current datafiles have unwanted changes, then you can use the RMAN command FLASHBACK DATABASE to revert the datafiles to their contents at a past time. The end product is much like the result of a DBPITR, but is generally much faster because it does not require restoring datafiles from backup and requires less redo than media recovery.
Flashback Database uses flashback logs to access past versions of data blocks and some information from archived redo logs. Flashback Database requires that you configure a flash recovery area for a database because the flashback logs can only be stored there. Flashback logging is not enabled by default. Space used for flashback logs is managed automatically by the database and balanced against space required for other files in the flash recovery area.
Oracle Database also supports restore points in conjunction with Flashback Database and backup and recovery. A restore point is an alias corresponding to a system change number (SCN). You can create a restore point at any time if you anticipate needing to return part or all of a database to its contents at that time. A guaranteed restore point ensures that you can use Flashback Database to return a database to the time of the restore point.
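For example (a minimal sketch; the restore point name is hypothetical, and FLASHBACK DATABASE must be run with the database mounted rather than open):
SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
RMAN> SHUTDOWN IMMEDIATE;
RMAN> STARTUP MOUNT;
RMAN> FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
RMAN> ALTER DATABASE OPEN RESETLOGS;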
- Data Recovery Advisor
Oracle Database includes a Data Recovery Advisor tool that automatically diagnoses persistent data failures, presents appropriate repair options, and executes repairs at your request. Data Recovery Advisor provides a single point of entry for Oracle backup and recovery solutions. You can use Data Recovery Advisor through the Enterprise Manager Database Control or Grid Control console or through the RMAN command-line client.
A database failure usually manifests itself as a set of symptoms: error messages, alerts, trace files and dumps, and failed data integrity checks. Data Recovery Advisor automatically diagnoses and informs you of these failures. Within the context of Data Recovery Advisor, a failure is a persistent data corruption that can be directly mapped to a set of repair actions. Each failure has a status of open or closed. Each failure also has a priority of critical, high, or low.
Failures are detected by data integrity checks, which are diagnostic procedures executed to assess the health of the database or its components. If a data integrity check reveals a failure, then Data Recovery Advisor automatically assesses the effect of a set of failures and maps it to a set of repair options. In most cases, Data Recovery Advisor presents both automated and manual repair options.
Data Recovery Advisor determines the best automated repair option and its effect on the database. The repair option may include repairs such as datafile restore and recovery, media recovery, Flashback Database, and so on. Before presenting an automated repair option, Data Recovery Advisor validates it with respect to the specific environment and the availability of media components required to complete the proposed repair.
If you choose an automated repair option, then RMAN coordinates sessions on the Oracle database to perform the repair for you. The Data Recovery Advisor tool verifies the repair success and closes the appropriate failures.
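From the RMAN command-line client, the typical Data Recovery Advisor flow looks like this (a minimal sketch; the actual output depends on the failures detected in your database):
RMAN> LIST FAILURE;             # show diagnosed failures and their priorities
RMAN> ADVISE FAILURE;           # map open failures to manual and automated repair options
RMAN> REPAIR FAILURE PREVIEW;   # display the generated repair script without running it
RMAN> REPAIR FAILURE;           # execute the recommended automated repair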

Happy Valentine’s Day

Posted in Backup and Recovery on February 14, 2011
Don’t wait until it’s too late
to tell someone how much you love,
how much you care.
Because when they’re gone,
no matter how loud you shout and cry,
they won’t hear you anymore.
Happy Valentine’s Day
This was the first question asked to me during an interview.
As you probably already know from the Oracle documentation, during an Oracle tablespace hot backup a script, program, or command puts a tablespace into backup mode, then copies the datafiles to disk or tape, and then takes the tablespace out of backup mode. You can check the backup mode from the V$BACKUP view. In a user-managed backup, the backup starts after you issue ALTER TABLESPACE tbs_name BEGIN BACKUP; or ALTER DATABASE BEGIN BACKUP; and it ends with ALTER TABLESPACE tbs_name END BACKUP; or ALTER DATABASE END BACKUP;.
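For example, a quick way to see which datafiles are currently in backup mode (a minimal sketch; STATUS shows ACTIVE while a file is in backup mode and NOT ACTIVE otherwise):
SQL> SELECT d.name, b.status, b.change#, b.time
     FROM v$backup b, v$datafile d
     WHERE b.file# = d.file#;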
Hot backup is demonstrated in the post http://samadhandba.wordpress.com/2011/02/10/user-managed-hot-backup-of-oracle-database/
Although the process is clear and well understood, there are many misconceptions around hot backup. The misconceptions start with what is actually done during a hot backup: does the datafile remain open and writable during the backup process, or are changes stored somewhere in the SGA, the redo logs, the rollback/undo segments, or some combination thereof, and then written back into the datafile when the tablespace is taken out of backup mode?
Around this writability question there is another common misconception: that the hot backup process generates a huge amount of redo data, which supposedly slows down the database dramatically if the database is in archivelog mode.
Now let’s look at what actually happens during a hot backup. The hot backup steps are:
1) The corresponding tablespace is checkpointed.
2) The checkpoint SCN marker in the datafile headers ceases to increment with checkpoints.
3) Full images of changed DB blocks are written to the redo logs.
Whenever you issue the command
ALTER TABLESPACE tbs_name BEGIN BACKUP;
a checkpoint is performed against the target tablespace and the datafile header is frozen, so no more updates are allowed on it (the datafile header). This is how the database knows the last time the tablespace had a consistent image of the data.
But during the backup process, the corresponding datafiles in the tablespace continue to allow normal read/write operations; that is, I/O activity is not frozen.
As for redo log generation, each block is recorded into the redo log files the first time it is changed. So if a row is modified for the first time inside a data block since the hot backup started, the complete block image is recorded in the redo log files, but subsequent transactions on the block record only the change, just as normal.
The above three steps are required to guarantee consistency while the file is being restored and recovered. By freezing the checkpoint SCN in the file headers, any subsequent recovery on that backup copy of the file will know that it must commence at that SCN. Having an old SCN in the file header tells recovery that the file is an old one, and that it should look for the redolog file containing that SCN and apply recovery starting there. Note that checkpoints to datafiles in hot backup mode are not suppressed during the backup, only the incrementing of the main checkpoint SCN flag. A “hot backup checkpoint” SCN marker in the file header continues to increment as periodic or incremental checkpoints progress normally.
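You can watch this behavior yourself (a minimal sketch; while a tablespace is in backup mode, the CHECKPOINT_CHANGE# reported for its datafiles stops advancing even though the files continue to accept writes):
SQL> SELECT file#, tablespace_name, checkpoint_change#
     FROM v$datafile_header;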
By initially checkpointing the datafiles that comprise the tablespace and logging full block images to redo, Oracle guarantees that any blocks changed in the datafile while in hot backup mode will also be available in the redologs in case they are ever used for a recovery.
Many people claim that the hot backup process generates excessive redo compared with normal operation. It actually depends on how many blocks change during the hot backup, because a full image of a changed block is recorded in the redo logs only the first time the block is changed. Normally, Oracle logs an entry in the redo logs for every change in the database, but it does not log the whole image of the database block. By logging full images of changed DB blocks to the redo logs during the hot backup, Oracle eliminates the possibility of the backup containing irresolvable split blocks. To understand this reasoning, you must first understand what a split block is.
Typically, Oracle database blocks are a multiple of O/S blocks. For instance, many filesystems use a default block size of 512 bytes to a few kilobytes, while Oracle’s default block size is 8k. This means that the filesystem stores data in these smaller chunks, while Oracle performs reads and writes in Oracle-block-sized chunks, or multiples thereof. While backing up a datafile, your backup script makes a copy of the datafile from the filesystem, using O/S utilities such as copy, dd, cpio, or OCOPY. As it makes this copy, it reads in O/S-block-sized increments. If the database writer happens to be writing a DB block into the datafile at the same time that your script is reading that block’s constituent O/S blocks, your backup copy of the DB block could contain some O/S blocks from before the database performed the write and some from after. This would be a split block.
By logging the full image of the changed block to the redologs, Oracle guarantees that, in the event of a recovery, any split blocks that might be in the backup copy of the datafile will be resolved by overlaying them with the full, legitimate image of the block from the redologs. Upon completion of a recovery, any blocks that were copied in a split state into the backup will have been resolved through application of full block images from the redologs.
User-managed backup means you take the backup without using Oracle’s backup tools; for example, if you copy the files with operating system commands, that is a user-managed backup. The term hot backup means taking your backup while the database is in the open state.
To take a full database backup, follow these steps.
1) Before you proceed, remember that you can take an online/hot backup only when your database is in ARCHIVELOG mode. If your database runs in NOARCHIVELOG mode, then you must take a consistent backup, that is, a backup taken after a clean shutdown. To determine the archival mode, issue the query:
SQL> SELECT LOG_MODE FROM V$DATABASE;
LOG_MODE
————
ARCHIVELOG
If you see ARCHIVELOG, then you can proceed further. To take a backup while you are in NOARCHIVELOG mode, follow the other post on my blog.
2) Determine the files that you need to back up.
Whenever you decide to take your database backup, back up the data files, online redo log files, control files, and spfile.
To decide which files you need to back up, issue the following query:
SQL> SELECT NAME "File Need Backup" FROM V$DATAFILE
UNION ALL
SELECT MEMBER FROM V$LOGFILE
UNION ALL
SELECT NAME FROM V$CONTROLFILE
UNION ALL
SELECT VALUE FROM V$PARAMETER WHERE NAME='spfile';
File Need Backup
——————————————————————————–
/oradata2/data1/dbase/system01.dbf
/oradata2/data1/dbase/undotbs01.dbf
/oradata2/data1/dbase/sysaux01.dbf
/oradata2/data1/dbase/users01.dbf
/oradata2/data.dbf
/oradata2/data1/data02.dbf
/oradata2/6.dbf
/oradata2/DBASE/datafile/o1_mf_permanen_42l31vg0_.dbf
/oradata2/data_test.dbf
/oradata2/data1/dbase/redo03.log
/oradata2/data1/dbase/redo02.log
/oradata2/data1/dbase/redo01.log
/oracle/app/oracle/product/10.2.0/db_1/dbs/cntrldupbase.dbf
/oracle/app/oracle/product/10.2.0/db_1/dbs/spfiledupbase.ora
13 rows selected.
So after running the above query, I can say I need to back up 13 files.
3) Place the tablespaces in backup mode. This applies only to online, read/write tablespaces; you do not have to place offline and read-only tablespaces in backup mode because the database does not permit changes to their datafiles.
You can check the status, tablespace name, and associated datafile names with the following query:
SELECT t.STATUS, t.TABLESPACE_NAME "Tablespace", f.FILE_NAME "Datafile"
FROM DBA_TABLESPACES t, DBA_DATA_FILES f
WHERE t.TABLESPACE_NAME = f.TABLESPACE_NAME
ORDER BY t.TABLESPACE_NAME;
You can easily generate a script that places the online tablespaces in backup mode with the following query:
SQL> SELECT 'ALTER TABLESPACE ' || TABLESPACE_NAME || ' BEGIN BACKUP;' "Script" FROM DBA_TABLESPACES WHERE STATUS NOT IN ('READ ONLY','OFFLINE');
Script
————————————————————-
ALTER TABLESPACE SYSTEM BEGIN BACKUP;
ALTER TABLESPACE UNDOTBS1 BEGIN BACKUP;
ALTER TABLESPACE SYSAUX BEGIN BACKUP;
ALTER TABLESPACE TEMP BEGIN BACKUP;
ALTER TABLESPACE USERS BEGIN BACKUP;
ALTER TABLESPACE TEMP_T BEGIN BACKUP;
6 rows selected.
Alternatively, you can issue
SQL>ALTER DATABASE BEGIN BACKUP;
4) Copy the datafiles to the backup location.
After placing a tablespace in backup mode, take a backup/copy of its associated datafiles. Here you can also generate a script to copy the datafiles to another location.
An online tablespace must first be placed in backup mode. You can check whether backup mode is currently active by issuing the following query:
SQL> SELECT t.name AS "TB_NAME", d.file# AS "DF#", d.name AS "DF_NAME", b.status
FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';
SQL> SELECT 'host scp '|| FILE_NAME || ' &backup_location ' "Backup Command" FROM DBA_DATA_FILES;
Enter value for backup_location: /backup
old 1: SELECT 'host scp '|| FILE_NAME || ' &backup_location ' "Backup Command" FROM DBA_DATA_FILES
new 1: SELECT 'host scp '|| FILE_NAME || ' /backup ' "Backup Command" FROM DBA_DATA_FILES
Backup Command
——————————————————————————————
host scp /oradata2/data1/dbase/system01.dbf /backup
host scp /oradata2/data1/dbase/undotbs01.dbf /backup
host scp /oradata2/data1/dbase/sysaux01.dbf /backup
host scp /oradata2/data1/dbase/users01.dbf /backup
host scp /oradata2/data.dbf /backup
host scp /oradata2/data1/data02.dbf /backup
host scp /oradata2/6.dbf /backup
host scp /oradata2/DBASE/datafile/o1_mf_permanen_42l31vg0_.dbf /backup
host scp /oradata2/data_test.dbf /backup
9 rows selected.
You can also back up the network configuration files. Do a recursive search for *.ora files starting in your Oracle home directory and below it, for example as shown below.
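A minimal sketch, assuming a Unix-like host with ORACLE_HOME set in the environment (on Windows, use a graphical search instead):
SQL> host find $ORACLE_HOME -name "*.ora" -print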
To generate a script that copies the datafiles only for those tablespaces currently in backup mode, issue:
SQL> SELECT 'host scp '|| d.name ||' &backup_location' FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';
Run the script that you generated.
On Windows or other operating systems you can use a graphical file browser or the appropriate copy command.
5) Whenever the copy is finished, take the tablespace out of backup mode. You can issue BEGIN BACKUP .. SCP serially (place one tablespace in backup mode, copy its associated datafiles, and then take that tablespace out of backup mode) or in parallel (place all tablespaces in backup mode, copy the associated datafiles of all tablespaces, and then take the tablespaces out of backup mode).
Here you can also make a script, like:
SQL> SELECT 'ALTER TABLESPACE ' || t.name || ' END BACKUP;' "End Backup Script"
FROM V$DATAFILE d, V$TABLESPACE t, V$BACKUP b
WHERE d.TS#=t.TS#
AND b.FILE#=d.FILE#
AND b.STATUS='ACTIVE';
End Backup Script
———————————————————-
ALTER TABLESPACE SYSTEM END BACKUP;
If you have placed the whole database in backup mode, then issue
SQL>ALTER DATABASE END BACKUP;
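As a quick sanity check afterwards, the following should return no rows once every datafile is out of backup mode (a minimal sketch):
SQL> SELECT FILE#, STATUS FROM V$BACKUP WHERE STATUS='ACTIVE';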
Error Description:
The scenario: I created a materialized view through a database link over a table residing on a remote database.
I used the ON COMMIT option and it failed with ORA-12054, as below.
SQL> create materialized view scott.mv_phones
2 TABLESPACE users
3 REFRESH FAST on commit
4 WITH rowid
5 AS select ID, PASS,
6 CONN_EXPIRY_DATE, CONN_STATUS
7 from PHONES@lnxdb where upper(IS_SOLD)='Y';
from PHONES@lnxdb where upper(IS_SOLD)='Y'
*
ERROR at line 7:
ORA-12054: cannot set the ON COMMIT refresh attribute for the materialized view
Cause of the Problem
The ON COMMIT clause of CREATE MATERIALIZED VIEW has some restrictions, such as:
•The ON COMMIT clause is not supported for materialized views containing object types.
•The ON COMMIT clause is not supported for materialized views whose defining query selects rows from remote tables (that is, tables accessed over a database link).
In our case the materialized view falls under the second restriction, hence the error.
Solution of the Problem
To implement the solution, use the ON DEMAND clause instead. Create the materialized view as below.
SQL> create materialized view scott.mv_phones
2 TABLESPACE users
3 REFRESH FAST on demand
4 WITH rowid
5 AS select ID, PASS,
6 CONN_EXPIRY_DATE, CONN_STATUS
7 from PHONES@lnxdb where upper(IS_SOLD)='Y';
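Because the materialized view is now refreshed ON DEMAND, you refresh it explicitly when needed; a minimal sketch using the supplied DBMS_MVIEW package (the 'F' argument requests a fast refresh):
SQL> EXEC DBMS_MVIEW.REFRESH('SCOTT.MV_PHONES', 'F');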
