Friday, September 28, 2012

An Introduction to Linux Shell Scripting for DBAs

by Casimir Saternos
Learn some basic bash shell scripts for installing, running, and maintaining Oracle databases on Linux.

Published November 2005
About seven years ago, Oracle released the first commercial database on Linux. Since then, Oracle, Red Hat, and Novell/SUSE have been steadily collaborating on changes to the Linux kernel as they relate to database and application performance. For that reason, Oracle Database 10g for Linux includes enhancements that are closely related to the operating system. Now more than ever, DBAs need knowledge of and experience on this platform to best administer the systems under their watch.
There is a traditional division of responsibilities between sysadmins and DBAs. However, in practice, the distinction is not always clear. Many IT shops employ individuals who address concerns at the database as well as the operating system levels. And of course the Oracle Database itself uses operating system resources and is designed to interact closely with its environment.
Furthermore, many sysadmins and DBAs find it necessary or convenient to automate tasks related to their work. The installation of software, the monitoring of system resources, and the management of systems involve repetitive and error-prone tasks that are better addressed through automated processes than manual procedures.
One method that is used to automate such tasks is shell scripting. Shell scripts play a significant role in the Linux System from the time it is installed. Various scripts are called when the system is started up and shut down. Utilities by Oracle and other third-party vendors are invoked through shell scripts. Because they can be developed quickly, they have historically been used for prototyping applications. System Administrators have taken advantage of the functionality available through shell scripting to provide solutions that are tailored for the particular requirements and idiosyncrasies of the systems under their watch.
In this article, I will introduce functionality available through "bash" shell scripting that is relevant to installing, running, and maintaining Oracle databases on a Linux platform. Note that this article is designed for Linux scripting beginners or DBAs who are relatively new to Linux; most experienced Linux sysadmins will not find it helpful.
What Is a Shell Script?
A shell script is simply a text file containing a sequence of commands. When you run the file—or script—it executes the commands contained in the file. The term shell simply refers to the particular command-line user interface you use to communicate with the Linux kernel. Several different shells are available, including the C shell (csh), Korn shell (ksh), Bourne shell (sh), and Bourne-Again shell (bash). The shell itself is a command that reads lines from either a file or the terminal, interprets them, and generally executes other commands. The Bourne-Again shell incorporates features of the other shells mentioned and is the one that was used for the scripts this article demonstrates.
The first line in the script file can be used to dictate which shell will be used to run the script. This is the meaning of the first line found in all of the script examples:
#!/bin/bash
       
Why Use Shell Scripts?
Depending on your background, you may not see any immediate value in shell scripting as it relates to the DBA's work. If you do not have experience with UNIX or UNIX-like systems, the myriad of cryptic commands might be cause for concern. In addition to being a relational database, Oracle 10g provides a robust platform for processing data within the database as well as several methods of interacting with the OS outside of it.
However, there are several reasons you might find yourself delving into the world of shell scripting, including the following:
  • You are in a situation in which you must support already existing scripts.
  • You want to automate system setup that occurs prior to the installation of the Oracle software. For instance, you could write a script to check the initial state of the OS and report any prerequisites that must be met before installation of the software. The script might also create relevant OS users and groups and set environmental variables for the users.
  • A running Oracle database can be used to execute manual or scheduled tasks. However, some tasks need to be run when the database is not running. You can use a script to stop or start a database (as well as a listener or a related database process). Such an action cannot be initiated from within the database itself.
  • You need a mechanism for monitoring the state of a database (i.e. if it is running and available to process queries). Such a script could also monitor other processes and resources that are not Oracle-specific to provide a fuller picture of what is occurring in the system.
  • You need to automate backups. Oracle Recovery Manager (RMAN) is a utility that allows you to develop backup scripts that can run on any platform. You can call Oracle Recovery Manager from a shell script and use it to perform a wide range of backup and recovery activities.
  • You might have a requirement that is not specific to a single database. Perhaps you have several databases on a single machine. It may not be advisable to fulfill the requirement by using a single database, and it introduces potential security issues as well. Under such circumstances, shell scripting provides a means for fulfilling the requirement in a manner that does not associate the process with a single database.
When Not to Use Shell Scripts
Oracle Database includes functionality that extends beyond the traditional definition of an RDBMS. Like any other piece of software, it uses resources provided by the operating system, but it can "see" and "change" its environment to a much greater degree than most other software. SQL and Oracle's fixed views provide a picture of the system from inside the database, whereas shell scripting provides a view of the system from outside of the database. Shell scripting is not the solution for every problem.
It is important to recognize that many aspects of the operating system can be monitored and modified from within the database. Oracle's fixed views (the views with a v$ prefix) can be used to determine the host name of the machine (v$instance) or the platform name on which the database is running (v$database). The location and other attributes of datafiles (v$datafile, dba_data_files), temp files (v$tempfile, dba_temp_files), redo logs (v$logfile), archive logs (v$archived_log), and control files (v$controlfile) can be queried directly from the database. You can determine information about the flash recovery area from the v$recovery_file_dest view, as well as by looking at some init.ora parameters (db_recovery_file_dest, db_recovery_file_dest_size). The status of processes (v$process) and memory (v$sga, v$sgastat, and so on) can be queried as well. There are various built-in PL/SQL packages, as well as the ability to create Java and C database objects, that allow for additional access to the underlying OS.
If you are considering scripting for a task that requires a good deal of database access, shell scripting is probably not your best option. Later in this article, there is a description of how to access the database by using SQL*Plus, but in many cases you are better off approaching the problem in another language.
The charts below summarize information accessible from within the database itself:
Server/OS Information

Host name where the instance is running
  Query: select host_name from v$instance;
  Notes: You can also obtain this information by running hostname or uname -n from bash.

Operating system platform
  Query: select platform_name from v$database; -- (10g)
  Notes: Similar information is returned if you run uname -s.

File Information

Control files
  Query: select name from v$controlfile;
  Notes: Location of the database control files. The init.ora parameter control_files also contains this information.

Datafiles
  Query: select file_name from dba_data_files;
  Notes: Location of the database datafiles.

Temp files
  Query: select file_name from dba_temp_files;
  Notes: Location of database temporary files.

Log files
  Query: select member from v$logfile;
  Notes: Location of redo logs.

Archived logs
  Query: select name from v$archived_log;
  Notes: Location of archived redo logs. The init.ora parameters log_archive_dest_n also contain this information. This query will not return results if your database is not in ARCHIVELOG mode.

Flash recovery area
  Query: select name from v$recovery_file_dest;
  Notes: The directory being used in an Oracle 10g installation for the flash recovery area. The init.ora parameter db_recovery_file_dest also contains this information.

Other points of access on the file system indicated by parameters
  Query: select * from v$parameter where value like '%/%' or value like '%\%';
  Notes: The results of this query can vary significantly, depending on your installation and version of Oracle Database. Parameters that may be returned include spfile, standby_archive_dest, utl_file_dir, background_dump_dest, user_dump_dest, core_dump_dest, audit_file_dest, dg_broker_config_file1, and dg_broker_config_file2.

Programmatic access to the file system
  Query: select directory_path from dba_directories;
  Notes: The Oracle UTL_FILE_DIR parameter and DIRECTORY database objects can be used to access files that are not a part of standard database functioning.

Process Information

Session processes
  Query: select p.spid, s.username, s.program from v$process p, v$session s where p.addr = s.paddr order by 2, 3, 1;
  Notes: The spid can be correlated with ps -ef results to compare the information available within the database with OS information for a given process.

Processes related to parallelism
  Query: select slave_name, status from v$pq_slave;
  Notes: Many aspects of Oracle Database, such as loading, querying, object creation, recovery, and replication, can take advantage of parallelism to speed up activities that can be broken down. The parameter parallel_threads_per_cpu sets an instance's default degree of parallelism.

Memory Information

Program Global Area
  Query: select * from v$pgastat;
  Notes: The parameter pga_aggregate_target is used to configure memory for all dedicated server connections. Linux utilities, including vmstat and top, can be used to monitor memory usage.

System Global Area
  Query: select * from v$sga;
  Notes: The parameters sga_max_size and sga_target are used to configure the dynamic memory allocation features of Oracle Database 10g. Other parameters can be used to manually allocate memory for particular purposes. Again, various Linux utilities are available to monitor memory allocation.
BASH Scripts
Scripts are either called as part of an automated process (with no human intervention) or run interactively, with a user responding to prompts. As long as you have executable permission for a file, you can run it from the command line by typing its name. If you do not have executable permission for the file but do have read permission, you can run the script by preceding it with sh.
If a script is designed to be run without user input, several options are available for calling it. You can run a script in the background and continue even if you disconnect, by entering a command in the following form:
nohup /path_to_dir/myscript_here.sh &
This can be useful for running scripts that take a long time to complete. The at command can be used to execute a script in the future, and cron can be used to schedule scripts to execute on a recurring basis.
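For example (a sketch reusing the placeholder path above): a crontab entry that runs the script every night at 2:00 a.m., and an at command that runs it once at 10:00 p.m. today:

# crontab entry (add with crontab -e): minute hour day-of-month month weekday command
0 2 * * * /path_to_dir/myscript_here.sh

# one-time execution with at
at -f /path_to_dir/myscript_here.sh 22:00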
The following examples cover the essential aspects of providing output to view (using echo), looping, conditional logic, and variable assignment.

print_args.sh. Arguments are the words to the right of the command name that are passed into the script. To access the first argument, you use the $1 variable. The $0 variable contains the name of the script itself, and the $# variable contains the number of arguments passed to the script. A handy way to iterate through all of the arguments involves a while loop and the shift command, which is what lets you advance through the argument list (rather than remaining in an infinite loop).
while [ $# -ne 0 ]      # loop until no arguments remain
do
        echo $1         # print the current first argument
        shift           # shift the remaining arguments left by one
done
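For example, if the loop above is saved as print_args.sh, running it with three arguments echoes each one on its own line:

sh print_args.sh alpha beta gamma
alpha
beta
gamma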
If a script takes a filename as an argument (or prompts a user for a filename) and the file will be read later in the script, it is advisable to check whether it is accessible and readable. For example, a recovery script that involves the selection of a backed-up control file might prompt the user to make a selection that will be used later in the script to restore the file.
if [ ! -r $1 ]; then   # true if $1 does not exist or is not readable
        echo "File $1 does not exist or is not readable."
        exit;
fi
The sequence of characters
   
if [ ! -r $1 ];
is what actually performs the test. If the test between the brackets evaluates to true, the commands that appear between if and fi will be executed. The exclamation point negates the test, and the -r option checks whether the file is readable; what is being tested in this particular case is the first argument passed to the script. By using a different test (-d), you can check whether a given entry is a directory (see is_a_directory.sh, sketched below).
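A minimal sketch of such a directory test (the sample is_a_directory.sh is not reproduced in this article, so this is an assumption about its shape):

#!/bin/bash
# is_a_directory.sh (sketch): report whether the first argument is a directory
if [ -d $1 ]; then
        echo "$1 is a directory"
else
        echo "$1 is not a directory"
fi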

do_continue.sh. This example is a simple, representative sequence of commands that can be used to read user input for various purposes. Before running a process that could cause data loss or other undesirable results under conditions the script itself cannot determine, it is advisable to prompt the user to confirm that the script should execute the next command or commands. The following example asks whether the user wants to continue, reads a variable named doContinue from the command line, and evaluates what the user entered. If the user enters anything other than "y," that person is informed that the script is "quitting" and it exits without executing the remainder of the script following the end of the if block (fi).
doContinue=n
echo -n "Do you really want to continue? (y/n) " 
read doContinue

if [ "$doContinue" != "y" ]; then
      echo "Quitting..."
      exit
fi
It is imperative that only users with the correct permissions and environment run a given script. A useful check in a script tests the user who is attempting to run it. If you enclose a command within backquote (`) characters, the result of the command is returned to the script. The following example retrieves the currently logged-on user, by using whoami, and displays the date, by using the date command later in the script.
       
echo "You are logged in as ‘whoami‘";

if [ ‘whoami‘ != "oracle" ]; then
  echo "Must be logged on as oracle to run this script."
  exit
fi

echo "Running script at ‘date‘"
Scripts written to interact with Oracle Database sometimes require the entry of sensitive information such as a database password. The stty -echo command turns off the screen echo, so that the information entered for the subsequent read command will not appear on the screen. After the sensitive information has been read and stored in a variable (pw in the example below), the display can be turned back on with stty echo.
       
stty -echo
echo -n "Enter the database system password: "
read pw
stty echo
Oracle Scripts
Some files reside in a fixed location for a given Oracle install. You can determine the Oracle inventory by viewing the /etc/oraInst.loc file. The /etc/oratab file identifies databases (and other Oracle programs) installed on the server.
get_inv_location.sh. This script is a bit less intuitive than the previous examples. By breaking down the script into its component commands, you will get a better understanding of what is being accomplished.
To determine the inventory location, you pipe the results of the cat command (which displays the contents of the file) to grep (a utility that prints lines matching a given pattern). You are searching for lines that contain the literal inventory_loc.
cat /etc/oraInst.loc | grep inventory_loc
If there is more than one inventory location due to multiple installs, you want to exclude lines commented out with a #. The -v option excludes lines that contain the given pattern.
       
cat /etc/oraInst.loc |grep -v "#"|grep inventory_loc

The result of this command will look something like this:
inventory_loc=/u01/oraInventory
You can redirect standard out to a file, using the > redirection. If the file does not exist, it is created. If it does exist, it is overwritten.

       
cat /etc/oraInst.loc|grep -v "#"|grep inventory_loc > tmp
Once you have the record that indicates the inventory location, you want to remove the portion of the record prior to the equal sign. This time you pipe the results of the cat command to awk (a pattern-scanning and processing language often used to split up variable-length fields), which essentially tokenizes the string. The -F option directs awk to use the equal sign as the delimiter. You then print the second token of this string ($2), which is everything to the right of the equal sign. The result is the inventory location (/u01/oraInventory).
cat tmp | awk -F= '{print $2}'
There is no particular reason to allow the temporary file (tmp) to remain, so it can be removed.

rm tmp
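Putting the pieces together, the same result can be produced as a single pipeline with no temporary file to clean up (a sketch; grep can read the file directly, so the initial cat is unnecessary):

grep -v "#" /etc/oraInst.loc | grep inventory_loc | awk -F= '{print $2}'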
list_oracle_homes.sh. If you would like to determine the ORACLE_HOME for a given database, you have several options. You can log on as the database user and echo the $ORACLE_HOME variable. You can also search through the /etc/oratab file and select the name associated with a given instance. Database entries in this file are of the form
   
$ORACLE_SID:$ORACLE_HOME:<N|Y>:
The following one-liner prints out the ORACLE_HOME of an entry with an ORACLE_SID of TESTDB:
cat /etc/oratab | awk -F: '{if ($1=="TESTDB") print $2 }'

However, what if you have a requirement that an operation needs to be performed on each ORACLE_HOME listed in the /etc/oratab file? You can iterate through such a list by utilizing the following code snippet.
dblist=`cat /etc/oratab | grep -v "#" | awk -F: '{print $2 }'`

for ohome in $dblist ; do
  echo $ohome
done
The dblist variable holds the whole list of ORACLE_HOME paths as a whitespace-separated string. A for loop iterates through this list, assigning each entry to the variable ohome and echoing it to standard out.

search_log.sh. A variety of logs are generated by Oracle products, and you might be interested in monitoring them. The database alert log contains messages that are critical to database operations. Log files are also generated when products are installed or deinstalled and when patches are applied. The following script iterates over a file passed to it as an argument. If any lines are found that contain ORA-, an e-mail message is sent to a designated recipient.
   
cat $1 | grep ORA- > alert.err

if [ `cat alert.err | wc -l` -gt 0 ]
then
        mail -s "$0 $1 Errors" administrator@yourcompany.com < alert.err
fi      
The specific test being performed is a count of the number of lines in the file alert.err, which is written when you redirect the grep output to alert.err. If the line count (wc -l) is greater than (-gt) zero, the contents of the if block will execute. In this case, you are using mail (sendmail might also be used) to send a message. The subject of the message contains the script being executed ($0) and the name of the log being searched ($1); the lines that matched the initial search (ORA-) form the body of the message.
Environmental variables such as ORACLE_HOME, ORACLE_BASE, and ORACLE_SID can be used to locate resources that are not in a fixed location in the Linux environment. If you are administering an Oracle E-Business Suite 11i application instance, you have numerous other environmental variables that can be used to locate resources. These include APPL_TOP, TWO_TASK, CONTEXT_NAME, and CONTEXT_FILE, to name a few. To see a complete list in your environment, execute the following command and examine the resulting file (myenv.txt):
env > myenv.txt
Various combinations of these environmental variables can be used as the location of a file being searched. For example, an alert log location might be designated as
   
$ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log

Based on the principles introduced in this script, a larger one can be written and scheduled to execute at periodic intervals that will search the contents of the alert log (or another file of interest) and send an e-mail if any errors exist. Then the contents of the log can be moved to another file, so that only the most recent error messages will be sent via e-mail.
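A minimal sketch of such a script, assuming the 10g-style bdump location shown above and a hypothetical recipient address; after mailing, the scanned contents are appended to an archive file and the alert log is truncated so that only new messages are reported on the next run:

#!/bin/bash
# check_alert.sh (sketch): mail new ORA- errors from the alert log, then rotate it
ALERT=$ORACLE_BASE/admin/$ORACLE_SID/bdump/alert_$ORACLE_SID.log

grep ORA- $ALERT > alert.err
if [ `cat alert.err | wc -l` -gt 0 ]
then
        mail -s "$ORACLE_SID alert log errors" administrator@yourcompany.com < alert.err
fi
# Preserve what was scanned and start the alert log afresh
cat $ALERT >> ${ALERT}.archived
> $ALERT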
Oracle Recovery Manager Scripts. Oracle Recovery Manager (RMAN) is a utility that can be used to administer database backup and recovery. This greatly simplifies administration on multiple platforms, because all backup scripts can be written to be run by RMAN, reducing the amount of platform-specific code. RMAN can be called by the underlying operating system and passed a script. For example, a cold (cold.sh) backup might consist of the following script:
   
#!/bin/bash
rman target / <<EOF
shutdown immediate;
startup mount;
backup spfile;
backup database;
alter database open;
delete noprompt obsolete;
quit;
EOF     
Line 1 indicates that you are using the bash shell. Line 2 invokes Oracle Recovery Manager and connects to the target database (specified in the environmental variable $ORACLE_SID) using OS authentication. The <<EOF following this indicates that the subsequent commands will be passed into RMAN to be processed; the EOF on the last line indicates that you have reached the end of the series of commands. RMAN then shuts down the database, starts up and mounts it, and proceeds to back up the server parameter file and the contents of the database. The database is then opened, and any backups older than those specified in the retention policy are deleted. See the RMAN documentation to construct a backup that is relevant for your situation. Nightly backups are commonly scheduled and run automatically; the script above could be called and the contents of standard out sent to an e-mail address with the following command:
sh cold.sh | mail -s "Backup `date`" administrator@yourcompany.com

Other Oracle utilities can be run from within shell scripts as well. The tnsping utility can be used to see if a given Oracle connection identifier can contact the listener. You might run this utility to check for connection problems:
tnsping ptch04 |grep TNS- 
Database exports and imports (traditional and data pump) are also good candidates for the scripting of repeating processes.
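For instance, a minimal Data Pump export sketch, assuming a DIRECTORY object named dp_dir already exists and using placeholder credentials:

#!/bin/bash
# nightly_export.sh (sketch): full Data Pump export with a dated dumpfile name
expdp system/manager FULL=y DIRECTORY=dp_dir \
      DUMPFILE=full_`date +%Y%m%d`.dmp LOGFILE=full_export.log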

Database Installation. Many of the steps involved in the setup of a database can be automated. Before you install Oracle 10g on Linux, you need to run various tests to verify the minimum required versions of packages and the settings of kernel parameters. You can query the version of a package by using the rpm command with the -q option.
   
rpm -q compat-libstdc++

You can determine various aspects of the system by looking at the /proc “virtual” or “pseudo” file system. It contains not real files but, rather, runtime system information that can be viewed as if it resided in files. For instance, /proc/meminfo contains memory information for the system, and grep MemTotal /proc/meminfo displays the total memory of the system. By using awk as you did earlier, you could isolate the amount of memory in kilobytes, by using:
grep MemTotal /proc/meminfo | awk '{print $2}'
Such a command could be used in the context of a script that would do comparisons and respond accordingly (even updating the system itself). The sample scripts 10gchecks_kernel.sh and 10gchecks.sh simply display current and recommended versions and settings based on the Oracle documentation.
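For instance, a sketch of such a comparison, assuming a hypothetical minimum of 1024MB of physical memory:

#!/bin/bash
# check_memory.sh (sketch): compare physical memory against a required minimum
MIN_KB=1048576   # 1024MB, expressed in KB to match /proc/meminfo

mem_kb=`grep MemTotal /proc/meminfo | awk '{print $2}'`
if [ $mem_kb -lt $MIN_KB ]; then
        echo "Insufficient memory: ${mem_kb}KB found, ${MIN_KB}KB required."
else
        echo "Memory check passed: ${mem_kb}KB found."
fi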

Database Monitoring. The ps command can be used to report process status and to check to see if a database, listener, script, or any other process of interest is running. If you want to list all the databases that are currently running on a server, you can run the following command:
   
echo "‘ps -ef | grep smon|grep -v grep|awk '{print $8}'| awk -F \"_\" 
'{print$3}'‘"
Although this is functional, it is a bit difficult to understand at first glance. The first command, ps (using the -ef options for a full listing of all processes), finds all of the processes running on the server. The next, grep, searches for SMON (the Oracle System Monitor background process), which indicates that the database is running. You want to remove entries that refer to the grep command itself, which is being run. You then use awk to locate the eighth column in the listing, which contains the system monitor process name in the form ora_smon_<oracle_sid>. The last instance of awk then uses the underscore character as the delimiter to search for and print the database name that owns the SMON process. The underscore character needs to appear in quotes, and the backslash is used before each of the quotes to escape those quotes (because the entire string appears within a set of double quotes).

exec_sql.sh. As mentioned earlier, it is possible to query the database from a shell script provided the user has access to sqlplus. The following example returns a (space-delimited) list of machines that are currently maintaining sessions in the database:
   
#!/bin/bash

output=`sqlplus -s "/ as sysdba" <<EOF
       set heading off feedback off verify off
       select distinct machine from v\\$session;
       exit
EOF
`

echo $output    
This script is similar to the previous RMAN script in that you are inputting commands into another program. A local OS authenticated connection is made to the database as sysdba. To prevent extraneous messages from being returned, this script turns off the heading, feedback, and verify options of SQL*Plus. The query is executed and SQL*Plus is exited.
Note the double backslash before the dollar sign in the view name. These are required escape sequences within the string: the first backslash escapes the second backslash, which in turn escapes the dollar sign. Again, not pretty, but functional.
As mentioned previously, if you are going to be writing something that requires extensive database access, shell scripting is not the best option. It might be better to rewrite the script in PL/SQL, Perl (which uses syntax similar to that used in shell scripting), Python, Java, or another language of your choice.

Conclusion
Shell scripting can be an effective tool for quickly automating repetitive and error-prone administration tasks. The examples in this article provide an introduction to the possibilities available but are far from comprehensive. Every system has distinct quirks and foibles and a unique configuration. An administrator will develop unique solutions to meet the needs of the particular system.

Database Tuning

There are basically two forms of tuning:
  • Speed: a short response time
  • High throughput/scalability: a higher load at a comparable response time or throughput

During this course, methods to identify and resolve bottlenecks will be discussed. The result of tuning should be visible to users, either as a decrease in the time it takes to perform a task or as an increase in the number of concurrent sessions. Tuning is performed either because a problem already exists or because the DBA wishes to prevent problems from occurring. Some examples of items to be monitored are critical table growth, changes in the statements users execute, and I/O distribution across devices. I will explain where waits and bottlenecks exist, and how to resolve these problems.

Tuning can be divided into different phases:
  • Application design and programming
  • Database configuration
  • Adding a new application
  • Troubleshooting and tuning

Proactive Tuning Considerations During Development
  • Tune the design.
  • Tune the application.
  • Tune memory.
  • Tune I/O.
  • Tune contention.
  • Tune the operating system.

Tuning Steps During Production
The tuning methodology for production systems works by resolving problems when they occur:
  1. Locate a bottleneck by using tools such as STATSPACK, UTLBSTAT and UTLESTAT, or Oracle Enterprise Manager.
  2. Determine the reason for the bottleneck; it usually manifests itself as a wait event.
  3. Resolve the cause of the wait. This could mean changing the size of a component of the System Global Area.
  4. Check that the change has produced a positive effect on the system by running the application again and then using the tools from step 1.
  5. Repeat the process if your goals have not yet been achieved.

Diagnostic and Tuning Tools

Alert.log file: You can check the time between log switches in the alert.log file, which consists of a chronological log of messages and errors. The following information is logged in the alert log:
  • Internal errors (ORA-600) and block corruption errors (ORA-1578 or ORA-1498)
  • Operations that affect database structures and parameters, and statements such as CREATE DATABASE, STARTUP, SHUTDOWN, ARCHIVE LOG, and RECOVER
  • The values of all nondefault initialization parameters at the time the instance starts
The location of the alert.log file is given by the parameter BACKGROUND_DUMP_DEST.

Background trace files: These files are created by the background processes. In general they contain diagnostic information, not information regarding performance tuning; by using events, Oracle Support can have performance information written to these files.

User trace files: User trace files can be generated by server processes at the user's or DBA's request.

Emergency Performance Techniques

In an emergency situation, a component of the system has changed, transforming it from a reliable, predictable system into one that is unpredictable and not satisfying user requests. In this case, the role of the performance engineer is to rapidly determine what has changed and take appropriate actions to resume normal service as quickly as possible.
In many cases it is necessary to take immediate action, and a rigorous performance improvement project is unrealistic. After addressing the immediate performance problem, the performance engineer must collect sufficient debugging information either to get better clarity on the performance problem or at least to ensure that it does not happen again. Shortcuts are taken in various stages because of the urgent nature of the problem, so keeping detailed notes and records of facts found as the debugging process progresses is essential for later analysis and justification of any remedial actions. This is analogous to a doctor keeping good patient notes for future reference.

Steps in the Emergency Performance Method

The Emergency Performance Method is as follows:
  1. Survey the performance problem and collect its symptoms. This should include user feedback on how the system is underperforming. Is the problem throughput or response time? Ask the question, "What has changed since we last had good performance?" The answer can give clues to the problem; however, getting unbiased answers in an escalated situation can be difficult.
  2. Sanity-check the hardware utilization of all components of the application system. Check where the highest CPU utilization is, and check the disk, memory usage, and network performance on all the system components. This quick process identifies which tier is causing the problem. If the problem is in the application, then shift analysis to application debugging. Otherwise, move on to database server analysis.
  3. Determine whether the database server is constrained on CPU or is spending time waiting on wait events. If the database server is CPU-constrained, then investigate the following:
     • Sessions that are consuming large amounts of CPU at the operating system level
     • Sessions or statements that perform many buffer gets at the database level (check V$SESSTAT, V$SQL)
     • Execution plan changes causing suboptimal SQL execution (these can be difficult to locate)
     • Incorrect settings of initialization parameters
     • Algorithmic issues as a result of code changes or upgrades of any component
     If the database sessions are waiting on events, then follow the wait events listed in V$SESSION_WAIT to determine what is causing serialization. In cases of massive contention for the library cache, it might not be possible to log on or submit SQL to the database; in this case, use historical data to determine why there is suddenly contention on this latch. If most waits are for I/O, then sample the SQL being run by the sessions that are performing all of the I/Os.
  4. Apply emergency action to stabilize the system. This could involve actions that take parts of the application off-line or restrict the workload that can be applied to the system. It could also involve a system restart or the termination of jobs in progress. These actions naturally have service-level implications.
  5. Validate that the system is stable. Having made changes and restrictions to the system, validate that the system is now stable, and collect a reference set of statistics for the database.
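Where step 3 refers to wait events, a minimal sketch of a query (run from SQL*Plus) that shows what non-idle sessions are currently waiting on:

select sid, event, state, seconds_in_wait
from v$session_wait
where event not like 'SQL*Net%'
order by seconds_in_wait desc;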
Concurrent Manager 
As the name suggests, the Concurrent Manager is the manager that handles your concurrent requests in Oracle Apps 11i. You can assign a single request or a batch job and let the Concurrent Manager manage these requests. Without going into the technical definition and explanation, I will go straight to the points you might be interested in as a sysadmin or Oracle Apps DBA: from startup/shutdown to configuration and, finally, tuning. So tie up your belt and get ready for a ride through the CM.

How to Start/Stop Concurrent Manager?
In Oracle Applications 11i, as you might already be aware, all startup/shutdown scripts in 11i (11.5.10) are in $OAD_TOP/admin/scripts/$CONTEXT_NAME, where $OAD_TOP is also called $COMMON_TOP and $CONTEXT_NAME is your SID_hostname. Let's say your SID or instance name is VISION and it runs on a machine named machine1; your CONTEXT_NAME will then be VISION_machine1. The script to start/stop the Concurrent Manager is adcmctl.sh, where ad (in my view) stands for application DBA, cm for Concurrent Manager, and ctl for control. If you really want to explore more, I suggest you open this file and go through it; you will see the syntax for running it in start or stop mode, so I am not going to spoon-feed you the syntax here (though a sketch appears after the parameter list below). When you run adcmctl.sh, it records the execution of the command in a text file named adcmctl.txt in the directory $COMMON_TOP/admin/log/$CONTEXT_NAME. adcmctl.sh calls another script, startmgr, which in turn calls batchmgr in $FND_TOP/bin; this is the actual file that fires up the Internal Concurrent Manager process. (Open these two files, as they are plain text, and read through them.) Let's check the important parameters in this file:
  • DISPLAY: Set it to a valid X server; if this is not set properly, your reports display will fail (especially reports with PDF or bar code display).
  • DIAG_PARAM: Diagnostic parameter; it takes the value Y or N, with a default of N. If you encounter an error while starting the Concurrent Manager, change this to Y to enable extra logging, then check the Concurrent Manager log file.
  • CONCSUB: Stands for CONC (Concurrent) SUB (Submit); used to submit a concurrent request, such as shutting down the Concurrent Manager.
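For reference, a sketch of typical start/stop usage (the APPS credentials are placeholders; confirm the exact syntax the script itself prints on your system):

# Start the Concurrent Manager (run as the applications OS user)
$COMMON_TOP/admin/scripts/$CONTEXT_NAME/adcmctl.sh start apps/<apps_password>

# Stop the Concurrent Manager
$COMMON_TOP/admin/scripts/$CONTEXT_NAME/adcmctl.sh stop apps/<apps_password>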
      RMAN Incremental Backups

      RMAN incremental backups back up only datafile blocks that have changed since a specified previous backup. You can make incremental backups of databases, individual tablespaces or datafiles.
      The goal of an incremental backup is to back up only those data blocks that have changed since a previous backup.
      The primary reasons for making incremental backups part of your strategy are:
      • For use in a strategy based on incrementally updated backups, where these incremental backups are used to periodically roll forward an image copy of the database
      • To reduce the amount of time needed for daily backups
      • To save network bandwidth when backing up over a network
      • To get adequate backup performance when the aggregate tape bandwidth available for tape write I/Os is much less than the aggregate disk bandwidth for disk read I/Os
      • To be able to recover changes to objects created with the NOLOGGING option. For example, direct load inserts do not create redo log entries and their changes cannot be reproduced with media recovery. They do, however, change data blocks and so are captured by incremental backups.
      • To reduce backup sizes for NOARCHIVELOG databases. Instead of making a whole database backup every time, you can make incremental backups.
        As with full backups, if you are in ARCHIVELOG mode, you can make incremental backups if the database is open; if the database is in NOARCHIVELOG mode, then you can only make incremental backups after a consistent shutdown.
        See Also:
        Oracle Database Concepts for more information about NOLOGGING mode
      One effective strategy is to make incremental backups to disk, and then back up the resulting backup sets to a media manager with BACKUP AS BACKUPSET. The incremental backups are generally smaller than full backups, which limits the space required to store them until they are moved to tape. Then, when the incremental backups on disk are backed up to tape, it is more likely that tape streaming can be sustained because all blocks of the incremental backup are copied to tape. There is no possibility of delay due to time required for RMAN to locate changed blocks in the datafiles.

      4.4.1 Incremental Backup Algorithm

      Each data block in a datafile contains a system change number (SCN), which is the SCN at which the most recent change was made to the block. During an incremental backup, RMAN reads the SCN of each data block in the input file and compares it to the checkpoint SCN of the parent incremental backup. If the SCN in the input data block is greater than or equal to the checkpoint SCN of the parent, then RMAN copies the block.
      Note that if you enable the block change tracking feature, RMAN can refer to the change tracking file to identify changed blocks in datafiles without scanning the full contents of the datafile. Once enabled, block change tracking does not alter how you take or use incremental backups, other than offering increased performance. See "Improving Incremental Backup Performance: Change Tracking" for more details about enabling block change tracking.

      4.4.1.1 Level 0 and Level 1 Incremental Backups

      Incremental backups can be either level 0 or level 1. A level 0 incremental backup, which is the base for subsequent incremental backups, copies all blocks containing data, backing the datafile up into a backup set just as a full backup would. The only difference between a level 0 incremental backup and a full backup is that a full backup is never included in an incremental strategy.
      A level 1 incremental backup can be either of the following types:
      • A differential backup, which backs up all blocks changed after the most recent incremental backup at level 1 or 0
      • A cumulative backup, which backs up all blocks changed after the most recent incremental backup at level 0
      Incremental backups are differential by default.
      Note:
      Cumulative backups are preferable to differential backups when recovery time is more important than disk space, because during recovery each differential backup must be applied in succession. Use cumulative incremental backups instead of differential, if enough disk space is available to store cumulative incremental backups.
      The size of the backup file depends solely upon the number of blocks modified and the incremental backup level.

      4.4.1.2 Differential Incremental Backups

In a differential level 1 backup, RMAN backs up all blocks that have changed since the most recent level 1 incremental backup (cumulative or differential) or level 0 backup. RMAN determines which level 1 backup occurred most recently and backs up all blocks modified after that backup. If no level 1 is available, RMAN copies all blocks changed since the level 0 backup.
      The following command performs a level 1 differential incremental backup of the database:
      RMAN> BACKUP INCREMENTAL LEVEL 1 DATABASE;
      
      
      If no level 0 backup is available, then the behavior depends upon the compatibility mode setting. If compatibility is >=10.0.0, RMAN copies all blocks changed since the file was created, and stores the results as a level 1 backup. In other words, the SCN at the time the incremental backup is taken is the file creation SCN. If compatibility <10.0.0, RMAN generates a level 0 backup of the file contents at the time of the backup, to be consistent with the behavior in previous releases.
      [Figure 4-1: Differential Incremental Backups (Default)]
      In the example shown in Figure 4-1, the following occurs:
      • Sunday
        An incremental level 0 backup backs up all blocks that have ever been in use in this database.
      • Monday - Saturday
        On each day from Monday through Saturday, a differential incremental level 1 backup backs up all blocks that have changed since the most recent incremental backup at level 1 or 0. So, the Monday backup copies blocks changed since the Sunday level 0 backup, the Tuesday backup copies blocks changed since the Monday level 1 backup, and so forth.
      • The cycle is repeated for the next week.

      4.4.1.3 Cumulative Incremental Backups

      In a cumulative level 1 backup, RMAN backs up all the blocks used since the most recent level 0 incremental backup. Cumulative incremental backups reduce the work needed for a restore by ensuring that you only need one incremental backup from any particular level. Cumulative backups require more space and time than differential backups, however, because they duplicate the work done by previous backups at the same level.
      The following command performs a cumulative level 1 incremental backup of the database:
      BACKUP INCREMENTAL LEVEL 1 CUMULATIVE DATABASE; # blocks changed since level 0
      
      
      [Figure 4-2: Cumulative Incremental Backups]
      In the example shown in Figure 4-2, the following occurs:
      • Sunday
        An incremental level 0 backup backs up all blocks that have ever been in use in this database.
      • Monday - Saturday
        A cumulative incremental level 1 backup copies all blocks changed since the most recent level 0 backup. Because the most recent level 0 backup was created on Sunday, the level 1 backup on each day Monday through Saturday backs up all blocks changed since the Sunday backup.
      • The cycle is repeated for the next week.

      4.4.1.4 Basic Incremental Backup Strategy

      Choose a backup scheme according to an acceptable MTTR (mean time to recover). For example, you can implement a three-level backup scheme so that a full or level 0 backup is taken monthly, a cumulative level 1 is taken weekly, and a differential level 1 is taken daily. In this scheme, you never have to apply more than a day's worth of redo for complete recovery.
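      As a sketch, such a three-level scheme could be scheduled with cron, assuming hypothetical wrapper scripts that issue the corresponding RMAN BACKUP INCREMENTAL commands:

      # Level 0 backup on the first day of each month, 1:00 a.m.
      0 1 1 * * /u01/scripts/backup_level0.sh
      # Cumulative level 1 every Sunday, 1:00 a.m.
      0 1 * * 0 /u01/scripts/backup_level1_cumulative.sh
      # Differential level 1 Monday through Saturday, 1:00 a.m.
      0 1 * * 1-6 /u01/scripts/backup_level1_differential.sh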
      When deciding how often to take full or level 0 backups, a good rule of thumb is to take a new level 0 whenever 50% or more of the data has changed. If the rate of change to your database is predictable, then you can observe the size of your incremental backups to determine when a new level 0 is appropriate. The following query displays the number of blocks written to a backup set for each datafile with at least 50% of its blocks backed up:
      SELECT FILE#, INCREMENTAL_LEVEL, COMPLETION_TIME, BLOCKS, DATAFILE_BLOCKS 
        FROM V$BACKUP_DATAFILE 
        WHERE INCREMENTAL_LEVEL > 0 
        AND BLOCKS / DATAFILE_BLOCKS > .5 
        ORDER BY COMPLETION_TIME;
      
      
      Compare the number of blocks in differential or cumulative backups to a base level 0 backup. For example, if you only create level 1 cumulative backups, then when the most recent level 1 backup is about half of the size of the base level 0 backup, take a new level 0.

      4.4.2 Making Incremental Backups: BACKUP INCREMENTAL

      After starting RMAN, run the BACKUP INCREMENTAL command at the RMAN prompt. This example makes a level 0 incremental backup of the database:
      BACKUP INCREMENTAL LEVEL 0 DATABASE;
      
      
      This example makes a differential level 1 backup of the SYSTEM tablespace and datafile tools01.dbf. It will only back up those data blocks changed since the most recent level 1 or level 0 backup:
      BACKUP INCREMENTAL LEVEL 1
        TABLESPACE SYSTEM
        DATAFILE 'ora_home/oradata/trgt/tools01.dbf';
      
      
      This example makes a cumulative level 1 backup of the tablespace users, backing up all blocks changed since the most recent level 0 backup.
      BACKUP INCREMENTAL LEVEL = 1 CUMULATIVE
        TABLESPACE users;
      

      4.4.3 Incrementally Updated Backups: Rolling Forward Image Copy Backups

      Oracle's Incrementally Updated Backups feature lets you avoid the overhead of taking full image copy backups of datafiles, while providing the same recovery advantages as image copy backups.
      At the beginning of a backup strategy, RMAN creates an image copy backup of the datafile. Then, at regular intervals, such as daily, level 1 incremental backups are taken, and applied to the image copy backup, rolling it forward to the point in time when the level 1 incremental was created.
      During restore and recovery of the database, RMAN can restore from this incrementally updated copy and then apply changes from the redo log, with the same results as restoring the database from a full backup taken at the SCN of the most recently applied incremental level 1 backup.
      A backup strategy based on incrementally updated backups can help minimize time required for media recovery of your database. For example, if you run scripts to implement this strategy daily, then at recovery time, you never have more than one day of redo to apply.

      4.4.3.1 Incrementally Updated Backups: A Basic Example

      To create incremental backups for use in an incrementally updated backups strategy, you must use the BACKUP... FOR RECOVER OF COPY WITH TAG form of the BACKUP command. How the command works is best understood in the context of an example script that would implement the strategy.
      This script, run on a regular basis, is all that is required to implement a strategy based on incrementally updated backups:
      RUN {
         RECOVER COPY OF DATABASE WITH TAG 'incr_update';
         BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_update'
             DATABASE;
         }
      
      
      The syntax used in the script does not, however, make it clear how the strategy works. To understand the script and the strategy, it is necessary to understand the effects of these two commands when no datafile copies or incremental backups exist.
      • The BACKUP INCREMENTAL LEVEL 1... FOR RECOVER OF COPY WITH TAG... command does not always actually create a level 1 incremental backup. If there is no level 0 image copy backup of a particular datafile, then executing this command creates an image copy backup of the datafile on disk with the specified tag instead of creating the level 1 backup.
        Note:
        Even when the BACKUP INCREMENTAL LEVEL 1 ... FOR RECOVER OF COPY command is used with DEVICE TYPE SBT to create a backup on tape, the first time it is used it creates the image copy on disk, and does not write any backup on tape. Subsequent incremental level 1 backups can be created on tape once the image copy is on disk.
        Thus, the first time the script runs, it creates the image copy of the datafile needed to begin the cycle of incremental updates. In the second run and all subsequent runs, it produces level 1 incremental backups of the datafile.
      • The RECOVER COPY OF DATABASE WITH TAG... command causes RMAN to apply any available incremental level 1 backups to a set of datafile copies with the specified tag.
        If there is no incremental backup or no datafile copy, the command generates a message but does not generate an error.
        The first time the script runs, this command has no effect, because there is neither a datafile copy nor a level 1 incremental backup.
        The second time the script runs, there is a datafile copy (created by the first BACKUP command), but no incremental level 1 backup, so again, the command has no effect.
        On the third run and all subsequent runs, there is a datafile copy and a level 1 incremental from the previous run, so the level 1 incremental is applied to the datafile copy, bringing the datafile copy up to the checkpoint SCN of the level 1 incremental.
      Note also the following details about how this example works:
      • Each time a datafile is added to the database, an image copy of the new datafile is created the next time the script runs. The time after that, the first level 1 incremental for that datafile is created, and on all subsequent runs the new datafile is processed like any other datafile.
      • Tags must be used to identify the incremental level 0 datafile copies created for use in this strategy, so that they do not interfere with other backup strategies you implement. If you have multiple incremental backup strategies in effect, RMAN cannot unambiguously create incremental level 1 backups unless you tag level 0 backups.
        The incremental level 1 backups to apply to those image copies are selected based upon the checkpoint SCNs of the image copy datafiles and the available incremental level 1 backups. (The tag used on the image copy being recovered is not a factor in the selection of the incremental level backups.)
      In practice, you would schedule the example script to run once each day, possibly at midnight. On a typical night (that is, after the first two nights), when the script completed, the following files would be available for a point-in-time recovery:
      • An image copy of the database, as of the checkpoint SCN of the preceding run of the script, 24 hours earlier
      • An incremental backup for the changes since the checkpoint SCN of the preceding run
      • Archived redo logs including all changes between the checkpoint SCN of the image copy and the current time
      If, at some point during the following 24 hours, you need to restore and recover your database from this backup, for either complete or point-in-time recovery, you can restore the datafiles from the incrementally updated datafile copies, and apply changes from the most recent incremental level 1 and the redo logs to reach the desired SCN. At most, you will have 24 hours of redo to apply, which limits how long point-in-time recovery will take.
      See Also:
      Oracle Database 2 Day DBA to see how this technique is used in the Oracle-suggested backup strategy in Enterprise Manager.

      4.4.3.2 Incrementally Updated Backups: A One Week Example

      The basic example can be extended to provide fast recoverability to a window greater than 24 hours. Alter the RECOVER COPY... WITH TAG to perform incomplete recovery of the datafile copies to the point in time in the past where you want your window of recoverability to begin. This example shows how to maintain a seven day window:
      RUN {
         RECOVER COPY OF DATABASE WITH TAG 'incr_update' 
             UNTIL TIME 'SYSDATE - 7';
         BACKUP INCREMENTAL LEVEL 1 FOR RECOVER OF COPY WITH TAG 'incr_update'
             DATABASE;
         }
      
      
      The effect of the script is as follows:
      • On the first night the RECOVER COPY... UNTIL TIME statement has no effect, and the BACKUP INCREMENTAL... FOR RECOVER OF COPY statement creates the incremental level 0 copy.
      • On the second through seventh nights, the RECOVER COPY... UNTIL TIME statement has no effect because TIME 'SYSDATE - 7' is still a time in the future. The BACKUP INCREMENTAL... FOR RECOVER OF COPY statement creates differential incremental level 1 backups containing the block changes for the previous day.
      • On the eighth and all subsequent nights, the RECOVER COPY... UNTIL TIME statement applies the level 1 incremental from seven days ago to the copy of the database. The BACKUP INCREMENTAL... FOR RECOVER OF COPY statement creates an incremental backup containing the changes for the previous day.
      As with the basic example, you have fast recoverability to any point in time between the SCN of the datafile copies and the present, using block changes from the incremental backups and individual changes from the redo logs. Because you have the daily level 1 incrementals, you still never need to apply more than one day of redo.

      4.4.4 Improving Incremental Backup Performance: Change Tracking

      RMAN's change tracking feature for incremental backups improves incremental backup performance by recording changed blocks in each datafile in a change tracking file. If change tracking is enabled, RMAN uses the change tracking file to identify changed blocks for incremental backup, thus avoiding the need to scan every block in the datafile.
      After enabling change tracking, the first level 0 incremental backup still has to scan the entire datafile, as the change tracking file does not yet reflect the status of the blocks. Subsequent incremental backups that use this level 0 backup as their parent will take advantage of the change tracking file.
      Using change tracking in no way changes the commands used to perform incremental backups, and the change tracking files themselves generally require little maintenance after initial configuration.
      Change tracking is disabled by default, because it does introduce some minimal performance overhead on your database during normal operations. However, the benefits of avoiding full datafile scans during backup are considerable, especially if only a small percentage of data blocks are changed between backups. If your backup strategy involves incremental backups, then you should enable change tracking.
      One change tracking file is created for the whole database. By default, the change tracking file is created as an Oracle managed file in DB_CREATE_FILE_DEST. You can also specify the name of the block change tracking file, placing it in any location you choose.
      Note:
      In a Real Application Clusters (RAC) environment, the change tracking file must be located on shared storage accessible from all nodes in the cluster.
      Oracle saves enough change-tracking information to enable incremental backups to be taken using any of the 8 most recent incremental backups as its parent.
      Although RMAN does not support backup and recovery of the change-tracking file itself, if the whole database or a subset needs to be restored and recovered, then recovery has no user-visible effect on change tracking. After the restore and recovery, the change tracking file is cleared, and starts recording block changes again. The next incremental backup after any recovery is able to use change-tracking data.

      4.4.4.1 Enabling and Disabling Change Tracking

      You can enable or disable change tracking when the database is either open or mounted. To alter the change tracking setting, you must use SQL*Plus to connect to the target database with administrator privileges.
      To store the change tracking file in the database area, set DB_CREATE_FILE_DEST in the target database. Then issue the following SQL statement to enable change tracking:
      SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;
      
      
      You can also create the change tracking file in a location you choose yourself, using the following SQL statement:
      SQL> ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
           USING FILE '/mydir/rman_change_track.f' REUSE;
      
      
      The REUSE option tells Oracle to overwrite any existing file with the specified name.
      To disable change tracking, use this SQL statement:
      SQL> ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
      
      
      If the change tracking file was stored in the database area, then it is deleted when you disable change tracking.

      4.4.4.2 Checking Whether Change Tracking is Enabled

      From SQL*Plus, you can query V$BLOCK_CHANGE_TRACKING.STATUS to determine whether change tracking is enabled, and if it is, query V$BLOCK_CHANGE_TRACKING.FILENAME to display the filename.
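      For example, both values can be retrieved with a single query:

      SQL> SELECT STATUS, FILENAME
           FROM V$BLOCK_CHANGE_TRACKING;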

      4.4.4.3 Moving the Change Tracking File

      If you need to move the change tracking file, the ALTER DATABASE RENAME FILE command updates the control file to refer to the new location. The process outlined in this section describes how to change the location of the change tracking file while preserving its contents.
      To relocate the change tracking file:
      1. If necessary, determine the current name of the change tracking file:
        SELECT filename 
        FROM V$BLOCK_CHANGE_TRACKING;
        
        
      2. Shut down the database. For example:
        SHUTDOWN IMMEDIATE
        
        
      3. Using host operating system commands, move the change tracking file to its new location.
      4. Mount the database and update the control file with the file's new location. For example:
        ALTER DATABASE RENAME FILE 'ora_home/dbs/change_trk.f' TO '/new_disk/change_trk.f';
        
        
      5. Open the database:
        ALTER DATABASE OPEN;
        
        
      If you cannot shut down the database, then you must disable change tracking and re-enable it at the new location, as in the following example:
      ALTER DATABASE DISABLE BLOCK CHANGE TRACKING;
      ALTER DATABASE ENABLE BLOCK CHANGE TRACKING USING FILE 'new_location';
      
      
      If you choose this method, you will lose the contents of the change tracking file. Until the next time you complete a level 0 incremental backup, RMAN will have to scan the entire file.

      4.4.4.4 Estimating Size of the Change Tracking File on Disk

      The size of the change tracking file is proportional to the size of the database and the number of enabled threads of redo. The size is not related to the frequency of updates to the database. Typically, the space required for block change tracking is approximately 1/30,000 the size of the data blocks to be tracked. Note, however, the following two factors that may cause the file to be larger than this estimate suggests:
      • To avoid the overhead of allocating space as your database grows, the change tracking file size starts at 10MB, and new space is allocated in 10MB increments. Thus, for any database up to approximately 300GB the file size is no smaller than 10MB, for up to approximately 600GB the file size is no smaller than 20MB, and so on.
      • For each datafile, a minimum of 320K of space is allocated in the change tracking file, regardless of the size of the file. Thus, if you have a large number of relatively small datafiles, the change tracking file is larger than for databases with a smaller number of larger datafiles containing the same data.
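      As a rough worked example of this estimate (an illustration, not a measured figure): for a 600GB database, 600GB / 30,000 is about 20MB, matching the 20MB floor noted above; a 100GB database works out to roughly 3.4MB but would still receive the minimum 10MB file.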