Friday, October 17, 2014

Oracle 11g Rman Backup Committed Undo. Why?

We already know what undo data is used for. When a transaction changes a block, the past image of the block is kept in the undo segments. The data is kept there even after the transaction commits, because a long-running query that started before the block was changed may still ask for that block. Such a query should get the past image of the block, the pre-commit image, not the current one. Therefore undo data is retained in the undo segments even after the commit. Over time the data is flushed out of the undo segment to make room for newly inserted undo data.

When an RMAN backup runs, it backs up all the data from the undo tablespace. But during recovery, the undo data for committed transactions is no longer needed, since it is already in the redo log stream, or even in the datafiles (provided the dirty blocks have been flushed from the buffer cache and written to disk), and can be recovered from there. So, why bother backing up the committed undo data?

In Oracle Database 11g, RMAN does the smart thing: it bypasses backing up committed undo data that is not required for recovery. The uncommitted undo data that is important for recovery is backed up as usual. This reduces the size and duration of the backup (and of the recovery as well). In many databases, especially OLTP ones where transactions are committed frequently, most of the undo data in the undo segments is actually committed, so RMAN has to back up only a few blocks from the undo tablespace.

The best part is that we needn't do anything to enable this optimization; Oracle does it by itself.
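To see the optimization in practice, you can compare the on-disk size of the undo tablespace with the size of its backup piece. This is only a sketch; the tablespace name UNDOTBS1 is an assumption, so substitute your own.

```sql
-- Size of the undo tablespace on disk (tablespace name is an assumption)
SELECT tablespace_name, SUM(bytes)/1024/1024 AS mb
FROM   dba_data_files
WHERE  tablespace_name = 'UNDOTBS1'
GROUP  BY tablespace_name;

-- Back up just the undo tablespace, then check the backup piece size
RMAN> BACKUP TABLESPACE undotbs1;
RMAN> LIST BACKUP OF TABLESPACE undotbs1;
```

On a busy OLTP system the backup piece is typically far smaller than the tablespace itself, since the committed undo blocks are skipped.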


How To Identify Database Idle Sessions

The script below identifies idle database sessions. When run, it prompts for the number of minutes a session must have been idle.

SQL> select sid, username, status,
            to_char(logon_time,'dd-mm-yy hh:mi:ss') "LOGON",
            floor(last_call_et/3600)||':'||
            floor(mod(last_call_et,3600)/60)||':'||
            mod(mod(last_call_et,3600),60) "IDLE",
            program
     from   v$session
     where  type = 'USER'
     and    (last_call_et / 60) > &minutes
     order by last_call_et;
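Once an idle session has been identified, it can be terminated if required. The SID and SERIAL# below are placeholders; take the real values from v$session for the session you chose.

```sql
-- serial# comes from v$session for the chosen sid (values here are hypothetical)
SELECT sid, serial# FROM v$session WHERE sid = 123;

-- Terminate the session using the 'sid,serial#' pair
ALTER SYSTEM KILL SESSION '123,4567';
```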

Difference Between OBSOLETE AND EXPIRED Backup

RMAN considers backups of datafiles and control files obsolete, that is, no longer needed for recovery, according to the criteria we specify in the CONFIGURE command. We can then use the REPORT OBSOLETE command to view obsolete files and DELETE OBSOLETE to delete them.
For example, suppose we set our retention policy to REDUNDANCY 2. This means we always want to keep at least two backups; when a third backup is taken, the oldest one becomes obsolete, because three backups now exist and we only want to keep two. If the flash recovery area is full, obsolete backups can be overwritten.
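A minimal sketch of that retention-policy workflow in RMAN:

```sql
RMAN> CONFIGURE RETENTION POLICY TO REDUNDANCY 2;
RMAN> BACKUP DATABASE;      -- after the third run, the oldest backup becomes obsolete
RMAN> REPORT OBSOLETE;      -- list backups no longer needed for recovery
RMAN> DELETE OBSOLETE;      -- remove them (prompts for confirmation)
```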

A status of "expired" means that the backup piece or backup set is not found in the backup destination. Backup information is held in the controlfile and in the recovery catalog, so the controlfile may believe a backup exists under a certain directory even though someone has deleted the file at the operating system level. We can run the CROSSCHECK command to verify whether those files exist; if RMAN finds that a file is missing, it marks that backup record as expired, meaning it no longer exists.
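The crosscheck workflow described above can be sketched as:

```sql
RMAN> CROSSCHECK BACKUP;        -- marks records for missing pieces as EXPIRED
RMAN> LIST EXPIRED BACKUP;      -- show the expired records
RMAN> DELETE EXPIRED BACKUP;    -- purge them from the controlfile/catalog
```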

Sunday, October 12, 2014

Tuning individual Oracle SQL statements


The acronym SQL stands for Structured Query Language. SQL is an industry standard database query language that was adopted in the mid-1980s. It should not be confused with commercial products such as Microsoft SQL Server or open source products such as MySQL, both of which use the acronym as part of the title of their products.

Do this before you start individual SQL statement tuning

This broad-brush approach can save thousands of hours of tedious SQL tuning because you can tune hundreds of queries at once. Remember, you must ALWAYS start with system-level SQL tuning, or later changes to the optimizer parameters or statistics may un-tune your SQL:
  • Optimize the server kernel - You must always tune your disk and network I/O subsystem (RAID, DASD bandwidth, network) to optimize the I/O time, network packet size and dispatching frequency.
     
  • Adjust your optimizer statistics - You must always collect and store optimizer statistics so the optimizer can learn more about the distribution of your data and build more intelligent execution plans.  Also, histograms can hypercharge SQL when determining optimal table join order, and when making access decisions on skewed WHERE clause predicates.
     
  • Adjust optimizer parameters - Key optimizer parameters include optimizer_mode, optimizer_index_caching, and optimizer_index_cost_adj.
     
  • Optimize your instance - Your choice of db_block_size, db_cache_size, and OS parameters (db_file_multiblock_read_count, cpu_count, etc.) can influence SQL performance.
     
  • Tune your SQL Access workload with physical indexes and materialized views - Just as the 10g SQLAccess advisor recommends missing indexes and missing materialized views, you should always optimize your SQL workload with indexes, especially function-based indexes, a Godsend for SQL tuning. 
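As a sketch of the statistics step above, statistics and histograms can be gathered with DBMS_STATS. The schema, table, and column names here are assumptions for illustration:

```sql
-- Gather table, column, and index statistics, with a histogram on a
-- skewed column (SCOTT.EMP and deptno are illustrative names)
BEGIN
  DBMS_STATS.gather_table_stats(
    ownname    => 'SCOTT',
    tabname    => 'EMP',
    method_opt => 'FOR COLUMNS SIZE 254 deptno',  -- histogram on the skewed column
    cascade    => TRUE);                          -- include the table's indexes
END;
/
```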
11g Note:  The Oracle 11g SQL Performance Analyzer (SPA), is primarily designed to speed up the holistic SQL tuning process.

Once you create a workload (called a SQL Tuning Set, or STS), Oracle will repeatedly execute the workload, using sophisticated predictive models (using a regression testing approach) to accurately identify the salient changes to SQL execution plans, based on your environmental changes. Using SPA, we can predict the impact of system changes on a workload, and we can forecast changes in response times for SQL after making any change, like parameter changes, schema changes, hardware changes, OS changes, or Oracle upgrades.  For details, see the book Oracle 11g New Features.
Once the environment, instance, and objects have been tuned, the Oracle administrator can focus on what is probably the single most important aspect of tuning an Oracle database: tuning the individual SQL statements. In this final article in my series on Oracle tuning, I will share some general guidelines for tuning individual SQL statements to improve Oracle performance.

Oracle SQL tuning goals

Oracle SQL tuning is a phenomenally complex subject. Entire books have been written about the nuances of Oracle SQL tuning; however, there are some general guidelines that every Oracle DBA follows in order to improve the performance of their systems.  Again, see the book "Oracle Tuning: The Definitive Reference", for complete details.

The goals of SQL tuning focus on improving the execution plan to fetch the rows with the smallest number of database "touches" (LIO buffer gets and PIO physical reads). 

  • Remove unnecessary large-table full-table scans—Unnecessary full-table scans cause a huge amount of unnecessary I/O and can drag-down an entire database. The tuning expert first evaluates the SQL based on the number of rows returned by the query. The most common tuning remedy for unnecessary full-table scans is adding indexes. Standard b-tree indexes can be added to tables, and bitmapped and function-based indexes can also eliminate full-table scans. In some cases, an unnecessary full-table scan can be forced to use an index by adding an index hint to the SQL statement.
     
  • Cache small-table full-table scans—In cases where a full-table scan is the fastest access method, the administrator should ensure that a dedicated data buffer is available for the rows.  In Oracle8 and beyond, a small table can be cached by forcing it into the KEEP pool.
     
  • Verify optimal index usage—Oracle sometimes has a choice of indexes, and the tuning professional must examine each index and ensure that Oracle is using the proper index.
     
  • Materialize your aggregations and summaries for static tables - One feature of the Oracle 10g SQLAccess advisor is recommendations for new indexes and suggestions for materialized views.  Materialized views pre-join tables and pre-summarize data, a real silver bullet for data mart reporting databases where the data is only updated daily. Again, see the book "Oracle Tuning: The Definitive Reference", for complete details on SQL tuning with materialized views.
These are the goals of SQL tuning in a nutshell. However, they are deceptively simple, and to meet them effectively we need a thorough understanding of the internals of Oracle SQL. Let's begin with an overview of the Oracle SQL optimizers.
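The small-table caching mentioned above can be sketched as follows; the table name and pool size are assumptions for illustration:

```sql
-- Reserve a KEEP pool in the buffer cache (the 64M size is an assumption)
ALTER SYSTEM SET db_keep_cache_size = 64M SCOPE=BOTH;

-- Pin a small, frequently scanned lookup table into the KEEP pool
ALTER TABLE scott.lookup_codes STORAGE (BUFFER_POOL KEEP);
```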

Oracle SQL optimizers

One of the first things the Oracle DBA looks at is the default optimizer mode for the database. The Oracle initialization parameters offer many cost-based optimizer modes as well as the deprecated yet useful rule-based hint:
The cost-based optimizer uses 'statistics' that are collected from the table using the 'analyze table' command. Oracle uses these metrics about the tables in order to intelligently determine the most efficient way of servicing the SQL query. It is important to recognize that in many cases, the cost-based optimizer may not make the proper decision in terms of the speed of the query. The cost-based optimizer is constantly being improved, but there are still many cases in which the rule-based optimizer will result in faster Oracle queries.
Prior to Oracle 10g, Oracle's default optimizer mode was called 'choose.' In the choose optimizer mode, Oracle will execute the rule-based optimizer if there are no statistics present for the table; it will execute the cost-based optimizer if statistics are present. The danger with using the choose optimizer mode is that problems can occur in cases where one Oracle table in a complex query has statistics and the other tables do not. 
Starting in Oracle 10g, the default optimizer mode is all_rows, favoring full-table scans over index access.  The all_rows optimizer mode is designed to minimize computing resources and it favors full-table scans.  Index access (first_rows_n) adds additional I/O overhead, but it returns rows sooner, back to the originating query:

Full-table scans touch all data blocks
Hence, many OLTP shops will choose first_rows, first_rows_100 or first_rows_10, asking Oracle to use indexes to reduce block touches:

Index scans return rows fast by doing additional I/O
Note:  Starting in Oracle9i release 2, the Oracle performance tuning guide says that the first_rows optimizer mode has been deprecated; use first_rows_n instead.
When only some tables contain CBO statistics, Oracle will use the cost-based optimization and estimate statistics for the other tables in the query at runtime. This can cause significant slowdown in the performance of the individual query.
In sum, the Oracle database administrator will always try changing the optimizer mode for queries as the very first step in Oracle tuning. The foremost tenet of Oracle SQL tuning is avoiding the dreaded full-table scan. One of the hallmarks of an inefficient SQL statement is the failure of the SQL statement to use all of the indexes that are present within the Oracle database in order to speed up the query.
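The optimizer mode can be tested at the session or statement level before it is made a system-wide default; the emp/deptno query below is illustrative:

```sql
-- Session level: only this session is affected
ALTER SESSION SET optimizer_mode = first_rows_10;

-- Statement level, via a hint (table and column are assumptions)
SELECT /*+ first_rows(10) */ empno, ename
FROM   emp
WHERE  deptno = 10;

-- System-wide, once the change is proven
ALTER SYSTEM SET optimizer_mode = first_rows_10 SCOPE=BOTH;
```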

Of course, there are times when a full-table scan is appropriate for a query, such as when you are doing aggregate operations such as a sum or an average, and the majority of the rows within the Oracle table must be read to get the query results. The task of the SQL tuning expert is to evaluate each full-table scan and see if the performance can be improved by adding an index.

In most Oracle systems, a SQL statement will be retrieving only a small subset of the rows within the table. The Oracle optimizers are programmed to check for indexes and to use them whenever possible to avoid excessive I/O. However, if the formulation of a query is inefficient, the cost-based optimizer becomes confused about the best access path to the data, and the cost-based optimizer will sometimes choose to do a full-table scan against the table. Again, the general rule is for the Oracle database administrator to interrogate the SQL and always look for full-table scans. 
For the full story, see my book "Oracle Tuning: The Definitive Reference" for details on choosing the right optimizer mode.
A strategic plan for Oracle SQL tuning

Many people ask where they should start when tuning Oracle SQL. Tuning Oracle SQL is like fishing. You must first fish in the Oracle library cache to extract SQL statements and rank the statements by their amount of activity.

Step 1:  Identify high-impact SQL

The SQL statements will be ranked according to the number of executions and tuned in that order. The executions column of the v$sqlarea view and the stats$sql_summary or dba_hist_sql_summary tables can be used to locate the most frequently used SQL.  Note that we can rank SQL statements by:
  • Rows processed:  Queries that process a large number of rows will have high I/O and may also have impact on the TEMP tablespace.
     
  • Buffer gets:  High buffer gets may indicate a resource-intensive query.
     
  • Disk reads:  High disk reads indicate a query that is causing excessive I/O.
     
  • Memory KB:  The memory allocation of a SQL statement is useful for identifying statements that are doing in-memory table joins.
     
  • CPU secs:  This identifies the SQL statements that use the most processor resources.
     
  • Sorts:  Sorts can be a huge slowdown, especially if they're being done on a disk in the TEMP tablespace.
     
  • Executions:  The more frequently executed SQL statements should be tuned first, since they will have the greatest impact on overall performance.
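A simple sketch of ranking high-impact SQL from v$sqlarea by executions; swap the ORDER BY column to rank by any of the other criteria above:

```sql
-- Top 10 statements by executions; use buffer_gets, disk_reads, etc.
-- in the ORDER BY to rank by other criteria
SELECT *
FROM  (SELECT sql_id, executions, buffer_gets, disk_reads,
              substr(sql_text, 1, 60) AS sql_text
       FROM   v$sqlarea
       ORDER  BY executions DESC)
WHERE  rownum <= 10;
```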

Step 2:  Determine the execution plan for SQL

As each SQL statement is identified, it will be 'explained' to determine its existing execution plan. There are a host of third-party tools on the market that show the execution plan for SQL statements. The most common way of determining the execution plan for a SQL statement is to use Oracle's explain plan utility. By using explain plan, the Oracle DBA can ask Oracle to parse the statement and display the execution path without actually executing the SQL statement.

To see the output of an explain plan, you must first create a 'plan table.' Oracle provides a script in $ORACLE_HOME/rdbms/admin called utlxplan.sql. Execute utlxplan.sql and create a public synonym for the plan_table:

 
sqlplus > @utlxplan
Table created.
 
sqlplus > create public synonym plan_table for sys.plan_table;
Synonym created.

Most relational databases use an explain utility that takes the SQL statement as input, runs the SQL optimizer, and outputs the access path information into a plan_table, which can then be interrogated to see the access methods. Listing 1 runs a complex query against a database.

EXPLAIN PLAN
SET STATEMENT_ID = 'RUN1'
INTO plan_table
FOR
SELECT 'T'||plansnet.terr_code, 'P'||detplan.pac1
 || detplan.pac2 || detplan.pac3, 'P1',
 sum(plansnet.ytd_d_ly_tm),
 sum(plansnet.ytd_d_ty_tm),
 sum(plansnet.jan_d_ly),
 sum(plansnet.jan_d_ty)
FROM plansnet, detplan
WHERE
    plansnet.mgc = detplan.mktgpm
AND
    detplan.pac1 in ('N33','192','195','201','BAI',
    'P51','Q27','180','181','183','184','186','188',
    '198','204','207','209','211')
GROUP BY 'T'||plansnet.terr_code, 'P'||detplan.pac1 || detplan.pac2 || detplan.pac3;
This syntax is piped into the SQL optimizer, which will analyze the query and store the plan information in rows in the plan table identified by RUN1. Please note that the query will not execute; it will only create the internal access information in the plan table. The plan table contains the following fields:
  • operation:  The type of access being performed. Usually table access, table merge, sort, or index operation
     
  • options:  Modifiers to the operation, specifying a full table, a range table, or a join
     
  • object_name:  The name of the table being used by the query component
     
  • Process ID:  The identifier for the query component
     
  • Parent_ID:  The parent of the query component. Note that several query components may have the same parent.
Now that the plan_table has been created and populated, you may interrogate it to see your output by running the following query in Listing 2.
plan.sql - displays contents of the explain plan table
SET PAGES 9999;
SELECT  lpad(' ',2*(level-1))||operation operation,
        options,
        object_name,
        position
FROM plan_table
START WITH id=0
AND
statement_id = 'RUN1'
CONNECT BY prior id = parent_id
AND
statement_id = 'RUN1';
Listing 3 shows the output from the plan table shown in Listing 1. This is the execution plan for the statement and shows the steps and the order in which they will be executed.
SQL> @list_explain_plan
 
OPERATION
--------------------------------------------------------------------------------
OPTIONS                        OBJECT_NAME                    POSITION
------------------------------ ------------------------------ --------
SELECT STATEMENT
  SORT
   GROUP BY                                                          1
    CONCATENATION                                                    1
     NESTED LOOPS                                                    1
      TABLE ACCESS FULL        PLANSNET                              1
      TABLE ACCESS BY ROWID    DETPLAN                               2
        INDEX RANGE SCAN       DETPLAN_INDEX5                        1
     NESTED LOOPS

From this output, we can see the dreaded TABLE ACCESS FULL on the PLANSNET table. To diagnose the reason for this full-table scan, we return to the SQL and look for any plansnet columns in the WHERE clause. There, we see that the plansnet column called 'mgc' is being used as a join column in the query, indicating that an index is necessary on plansnet.mgc to alleviate the full-table scan.

While the plan table is useful for determining the access path to the data, it does not tell the entire story. The configuration of the data is also a consideration. The SQL optimizer is aware of the number of rows in each table (the cardinality) and the presence of indexes on fields, but it is not aware of data distribution factors such as the number of expected rows returned from each query component.
 

Step 3:  Tune the SQL statement

For those SQL statements that possess a sub-optimal execution plan, the SQL will be tuned by one of the following methods:
 

  • Adding SQL 'hints' to modify the execution plan
     
  • Rewriting the SQL in PL/SQL. For certain queries this can result in more than a 20x performance improvement. The SQL would be replaced with a call to a PL/SQL package that contained a stored procedure to perform the query.
Using hints to tune Oracle SQL

Among the most common tools for tuning SQL statements are hints. A hint is a directive that is added to the SQL statement to modify the access path for a SQL query.
Troubleshooting tip!  For testing, you can quickly test the effect of another optimizer parameter value at the query level without using an 'alter session' command, using the new opt_param SQL hint:
select /*+ opt_param('optimizer_mode','first_rows_10') */ col1, col2 . . .
select /*+ opt_param('optimizer_index_cost_adj',20) */ col1, col2 . .
Oracle publishes many dozens of SQL hints, and hints have become increasingly complicated through the various releases of Oracle.
Note:  Hints are only used for debugging, and you should adjust your optimizer statistics to make the CBO replicate the hinted SQL.  Let's look at the most common hints to improve tuning:
  • Self-order the table joins - If you find that Oracle is joining the tables together in a sub-optimal order, you can use the ORDERED hint to force the tables to be joined in the order that they appear in the FROM clause.
     
  • Try a first_rows_n hint. Oracle has two cost-based optimizer modes, first_rows_n and all_rows. The first_rows mode will execute to begin returning rows as soon as possible, whereas the all_rows mode is designed to optimize the resources on the entire query before returning rows.  
    SELECT /*+ first_rows */
     

    A case study in SQL tuning

    One of the historic problems with SQL involves formulating SQL queries. Simple queries can be written in many different ways, each variant of the query producing the same result, but with widely different access methods and query speeds.

    For example, a simple query such as "Which students received an A last semester?" can be written in three ways, as shown below, each returning an identical result.
    A standard join:
     
    SELECT *
    FROM STUDENT, REGISTRATION
    WHERE
        STUDENT.student_id = REGISTRATION.student_id
    AND
        REGISTRATION.grade = 'A';
     
    A nested query:

    SELECT *
    FROM STUDENT
    WHERE
        student_id IN
        (SELECT student_id
            FROM REGISTRATION
            WHERE
           grade = 'A'
        );

    A correlated subquery:

    SELECT *
    FROM STUDENT
    WHERE
        0 <
        (SELECT count(*)
            FROM REGISTRATION
            WHERE
            grade = 'A'
            AND
            student_id = STUDENT.student_id
        );
    Let's wind up with a review of the basic components of a SQL query and see how to optimize a query for remote execution.

    Tips for writing more efficient SQL
    Space doesn't permit me to discuss every detail of Oracle tuning, but I can share some general rules for writing efficient SQL in Oracle regardless of the optimizer that is chosen. These rules may seem simplistic but following them in a diligent manner will generally relieve more than half of the SQL tuning problems that are experienced:

    • Rewrite complex subqueries with temporary tables - Oracle created the global temporary table (GTT) and the SQL WITH operator to help divide-and-conquer complex SQL sub-queries (especially those with WHERE clause subqueries, SELECT clause scalar subqueries and FROM clause in-line views).  Tuning SQL with temporary tables (and materializations in the WITH clause) can result in amazing performance improvements.
       
    • Use MINUS instead of NOT IN subqueries - Some say that using the MINUS operator instead of NOT IN and NOT EXISTS will result in a faster execution plan.
       
    • Use SQL analytic functions - The Oracle analytic functions can do multiple aggregations (e.g. rollup by cube) with a single pass through the tables, making them very fast for reporting SQL.
       
    • Re-write NOT IN and NOT EXISTS subqueries as outer joins - In many cases of NOT queries (but ONLY where the join column is defined as NOT NULL), you can re-write the uncorrelated subquery into an outer join with an IS NULL test.  Note that the example below is a non-correlated sub-query, but it can be re-written as an outer join.
    select book_key from book
    where
    book_key NOT IN (select book_key from sales);
    Below we combine the outer join with a NULL test in the WHERE clause without using a sub-query, giving a faster execution plan.
    select b.book_key from book b, sales s
    where
       b.book_key = s.book_key(+)
    and
       s.book_key IS NULL;
    • Index your NULL values - If you have SQL that frequently tests for NULL, consider creating an index on NULL values.  To get around the optimizer's handling of queries that choose NULL column values (i.e. where emp_name IS NULL), we can create a function-based index that makes the NULL columns indexable.
       
    • Leave column names alone - Never do a calculation on an indexed column unless you have a matching function-based index (FBI).  Better yet, re-design the schema so that common WHERE clause predicates do not need transformation with a built-in function (BIF):
    where salary*5            > :myvalue
    where substr(ssn,7,4)     = "1234"
    where to_char(mydate,mon) = "january"
    • Avoid the use of NOT IN or HAVING. Instead, a NOT EXISTS subquery may run faster (when appropriate).
       
    • Avoid the LIKE predicate - Always replace a "like" with an equality, when appropriate.
       
    • Never mix data types - If a WHERE clause column predicate is numeric, do not use quotes. For char index columns, always use quotes. These are examples of mixed data type predicates:
    where cust_nbr = "123"
    where substr(ssn,7,4) = 1234
    • Use decode and case - Performing complex aggregations with the "decode" or "case" functions can minimize the number of times a table has to be selected.
       
    • Don't fear full-table scans - Not all OLTP queries are optimal when they use indexes.  If your query will return a large percentage of the table rows, a full-table scan may be faster than an index scan.  This depends on many factors, including your configuration (values for db_file_multiblock_read_count, db_block_size), query parallelism and the number of table/index blocks in the buffer cache.
       
    • Use those aliases - Always use table aliases when referencing columns.
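The subquery-to-temporary-table rewrite from the first bullet above can be sketched with the WITH clause; the emp table and the undocumented materialize hint are used here only for illustration:

```sql
-- Materialize an aggregate once, then join to it, instead of
-- re-running the subquery for every row
WITH dept_avg AS (
  SELECT /*+ materialize */ deptno, AVG(sal) AS avg_sal
  FROM   emp
  GROUP  BY deptno
)
SELECT e.ename, e.sal, d.avg_sal
FROM   emp e, dept_avg d
WHERE  e.deptno = d.deptno
AND    e.sal > d.avg_sal;
```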

    SQL Performance Analyzer in Oracle Database 11g Release 1

    The concept of SQL tuning sets, along with the DBMS_SQLTUNE package to manipulate them, was introduced in Oracle 10g as part of the Automatic SQL Tuning functionality. Oracle 11g makes further use of SQL tuning sets with the SQL Performance Analyzer, which compares the performance of the statements in a tuning set before and after a database change. The database change can be as major or minor as you like, such as:
    • Database, operating system, or hardware upgrades.
    • Database, operating system, or hardware configuration changes.
    • Database initialization parameter changes.
    • Schema changes, such as adding indexes or materialized views.
    • Refreshing optimizer statistics.
    • Creating or changing SQL profiles.
    Unlike Database Replay, the SQL Performance Analyzer does not try to replicate the workload on the system. It simply works through each statement, gathering performance statistics.
    The SQL Performance Analyzer can be run manually using the DBMS_SQLPA package or using Enterprise Manager. This article gives an overview of both methods.

    Setting Up the Test

    The SQL performance analyzer requires SQL tuning sets, and SQL tuning sets are pointless unless they contain SQL, so the first task should be to issue some SQL statements. We are only trying to demonstrate the technology, so the example can be really simple. The following code creates a test user called SPA_TEST_USER.
    CONN / AS SYSDBA
    
    CREATE USER spa_test_user IDENTIFIED BY spa_test_user
      QUOTA UNLIMITED ON users;
    
    GRANT CONNECT, CREATE TABLE TO spa_test_user;
    Next, connect to the test user and create a test table called MY_OBJECTS using a query from the ALL_OBJECTS view.
    CONN spa_test_user/spa_test_user
    
    CREATE TABLE my_objects AS
      SELECT * FROM all_objects;
    
    EXEC DBMS_STATS.gather_table_stats(USER, 'MY_OBJECTS', cascade => TRUE);
    This schema represents our "before" state. Still logged in as the test user, issue the following statements.
    SELECT COUNT(*) FROM my_objects WHERE object_id <= 100;
    SELECT object_name FROM my_objects WHERE object_id = 100;
    SELECT COUNT(*) FROM my_objects WHERE object_id <= 1000;
    SELECT object_name FROM my_objects WHERE object_id = 1000;
    SELECT COUNT(*) FROM my_objects WHERE object_id BETWEEN 100 AND 1000;
    Notice, all statements make reference to the currently unindexed OBJECT_ID column. Later we will be indexing this column to create our changed "after" state.
    The select statements are now in the shared pool, so we can start creating an SQL tuning set.

    Creating SQL Tuning Sets using the DBMS_SQLTUNE Package

    The DBMS_SQLTUNE package contains procedures and functions that allow us to create, manipulate and drop SQL tuning sets. The first step is to create an SQL tuning set called spa_test_sqlset using the CREATE_SQLSET procedure.
    CONN / AS SYSDBA
    
    EXEC DBMS_SQLTUNE.create_sqlset(sqlset_name => 'spa_test_sqlset');
    Next, the SELECT_CURSOR_CACHE table function is used to retrieve a cursor containing all SQL statements that were parsed by the SPA_TEST_USER schema and contain the word "my_objects". The resulting cursor is loaded into the tuning set using the LOAD_SQLSET procedure.
    DECLARE
      l_cursor  DBMS_SQLTUNE.sqlset_cursor;
    BEGIN
      OPEN l_cursor FOR
         SELECT VALUE(a)
         FROM   TABLE(
                  DBMS_SQLTUNE.select_cursor_cache(
                    basic_filter   => 'sql_text LIKE ''%my_objects%'' and parsing_schema_name = ''SPA_TEST_USER''',
                    attribute_list => 'ALL')
                ) a;
                                                   
     
      DBMS_SQLTUNE.load_sqlset(sqlset_name     => 'spa_test_sqlset',
                               populate_cursor => l_cursor);
    END;
    /
    The DBA_SQLSET_STATEMENTS view allows us to see which statements have been associated with the tuning set.
    SELECT sql_text
    FROM   dba_sqlset_statements
    WHERE  sqlset_name = 'spa_test_sqlset';
    
    SQL_TEXT
    --------------------------------------------------------------------------------
    SELECT object_name FROM my_objects WHERE object_id = 100
    SELECT COUNT(*) FROM my_objects WHERE object_id <= 100
    SELECT COUNT(*) FROM my_objects WHERE object_id BETWEEN 100 AND 1000
    SELECT COUNT(*) FROM my_objects WHERE object_id <= 1000
    SELECT object_name FROM my_objects WHERE object_id = 1000
    
    5 rows selected.
    
    SQL>
    Now we have an SQL tuning set, we can start using the SQL performance analyzer.

    Running the SQL Performance Analyzer using the DBMS_SQLPA Package

    The DBMS_SQLPA package is the PL/SQL API used to manage the SQL performance analyzer. The first step is to create an analysis task using the CREATE_ANALYSIS_TASK function, passing in the SQL tuning set name and making a note of the resulting task name.
    CONN / AS SYSDBA
    
    VARIABLE v_task VARCHAR2(64);
    EXEC :v_task :=  DBMS_SQLPA.create_analysis_task(sqlset_name => 'spa_test_sqlset');
    
    PL/SQL procedure successfully completed.
    
    SQL> PRINT :v_task
     
    V_TASK
    --------------------------------------------------------------------------------
    TASK_122
    
    SQL>
    Next, use the EXECUTE_ANALYSIS_TASK procedure to execute the contents of the SQL tuning set against the current state of the database to gather information about the performance before any modifications are made. This analysis run is named before_change.
    BEGIN
      DBMS_SQLPA.execute_analysis_task(
        task_name       => :v_task,
        execution_type  => 'test execute',
        execution_name  => 'before_change');
    END;
    /
    Now we have the "before" performance information, we need to make a change so we can test the "after" performance. For this example we will simply add an index to the test table on the OBJECT_ID column. In a new SQL*Plus session create the index using the following statements.
    CONN spa_test_user/spa_test_user
    
    CREATE INDEX my_objects_index_01 ON my_objects(object_id);
    
    EXEC DBMS_STATS.gather_table_stats(USER, 'MY_OBJECTS', cascade => TRUE);
    Now, we can return to our original session and test the performance after the database change. Once again use the EXECUTE_ANALYSIS_TASK procedure, naming the analysis task "after_change".
    BEGIN
      DBMS_SQLPA.execute_analysis_task(
        task_name       => :v_task,
        execution_type  => 'test execute',
        execution_name  => 'after_change');
    END;
    /
    Once the before and after analysis tasks are complete, we must run a comparison analysis task. The following code explicitly names the analysis tasks to compare using name-value pairs in the EXECUTION_PARAMS parameter. If this is omitted, the latest two analysis runs are compared.
    BEGIN
      DBMS_SQLPA.execute_analysis_task(
        task_name        => :v_task,
        execution_type   => 'compare performance', 
        execution_params => dbms_advisor.arglist(
                              'execution_name1', 
                              'before_change', 
                              'execution_name2', 
                              'after_change')
        );
    END;
    /
    With this final analysis run complete, we can check out the comparison report using the REPORT_ANALYSIS_TASK function. The function returns a CLOB containing the report in 'TEXT', 'XML' or 'HTML' format. Its usage is shown below.
    Note: Oracle 11gR2 also includes an 'ACTIVE' format that looks more like the Enterprise Manager output.
    SET PAGESIZE 0
    SET LINESIZE 1000
    SET LONG 1000000
    SET LONGCHUNKSIZE 1000000
    SET TRIMSPOOL ON
    SET TRIM ON
    
    SPOOL /tmp/execute_comparison_report.htm
    
    SELECT DBMS_SQLPA.report_analysis_task(:v_task, 'HTML', 'ALL')
    FROM   dual;
    
    SPOOL OFF
    An example of this file for each available type is shown below.
    • TEXT
    • HTML
    • XML
    • ACTIVE - Active HTML, available in 11gR2, requires a download of JavaScript libraries from an Oracle website, so it must be used on a PC connected to the internet.

    Creating SQL Tuning Sets using Enterprise Manager

    Click on the "SQL Tuning Sets" link towards the bottom of the "Performance" tab.
    SQL Tuning Sets Link
    On the "SQL Tuning Sets" screen, click the "Create" button.
    SQL Tuning Sets
    Enter a name for the SQL tuning set and click the "Next" button.
    Options
    Select the "Load SQL statements one time only" option, select the "Cursor Cache" as the data source, then click the "Next" button.
    Load Methods
    Set the appropriate values for the "Parsing Schema Name" and "SQL Text" filter attributes, remove any extra attributes by clicking their remove icons, then click the "Next" button.
    Filter Options
    Accept the immediate schedule by clicking the "Next" button.
    Schedule
    Assuming the review information looks correct, click the "Submit" button.
    Review
    The "SQL Tuning Sets" screen shows the confirmation of the tuning set creation and the scheduled job to populate it.
    Confirmation
    Once the population job completes, clicking on the SQL tuning set displays its contents.
    SQL Tuning Set Contents
    Now we have an SQL tuning set, we can start using the SQL performance analyzer.

    Running the SQL Performance Analyzer using Enterprise Manager

    Click the "SQL Performance Analayzer" link on the "Software and Support" tab.
    SQL Performance Analyzer Link
    Click the "Guided Workflow" link on the "SQL Performance Analayzer" screen.
    Guided Workflow
    Click the execute icon on the first step to create the SQL Performance Analyzer task.
    Create SPA Task Icon
    Enter a name for the SPA task, select the SQL tuning set to associate with it, then click the "Create" button.
    Create SPA Task
    When the status of the previous step becomes a green tick, click the execute icon on the second step to capture the SQL tuning set performance information of the "before" state.
    Replay Before Icon
    Enter a "Replay Trial Name" of "before_change", check the "Trial environment established" checkbox, then click the "Submit" button.
    Replay Before
    When the status of the previous step becomes a green tick, click the execute icon on the third step to capture the SQL tuning set performance information of the "after" state.
    Replay After Icon
    Alter the state of the database by creating an index on the OBJECT_ID column of the test table.
    CONN spa_test_user/spa_test_user@prod
    
    CREATE INDEX my_objects_index_01 ON my_objects(object_id);
    
    EXEC DBMS_STATS.gather_table_stats(USER, 'MY_OBJECTS', cascade => TRUE);
    Enter a "Replay Trial Name" of "after_change", check the "Trial environment established" checkbox, then click the "Submit" button.
    Replay After
    When the status of the previous step becomes a green tick, click the execute icon on the fourth step to run a comparison analysis task.
    Compare Icon
    Accept the default "Trial 1 Name" and "Trial 2 Name" settings by clicking the "Submit" button.
    Compare
    When the status of the previous step becomes a green tick, click the execute icon on the fifth step to view the comparison report.
    Report Icon
    The resulting page contains the comparison report for the SQL Performance Analyzer task.
    Report
    Clicking on a specific SQL ID displays the statement specific results, along with the before and after execution plans.
    SQL Report

    Optimizer Upgrade Simulation

    The SQL Performance Analyzer allows you to test the effects of optimizer version changes on SQL tuning sets. Click the "Optimizer Upgrade Simulation" link on the "SQL Performance Analyzer" page.
    Optimizer Upgrade Simulation Link
    Enter a task name, select the two optimizer versions to compare, then click the "Submit" button.
    Optimizer Upgrade Simulation
    The task is listed in the "SQL Performance Analyzer Tasks" section. Refresh the page intermittently until the task status becomes a green tick, then click on the task name.
    Task List
    The resulting screen shows details of the selected task. Clicking the "Comparison Report" glasses icon allows you to view the comparison report.
    Task Details

    Parameter Change

    The SQL Performance Analyzer provides a shortcut for setting up tests of initialization parameter changes on SQL tuning sets. Click the "Parameter" link on the "SQL Performance Analyzer" page.
    Parameter Change Link
    Enter a task name and the parameter you wish to test. Enter the base and changed value, then click the "Submit" button.
    Parameter Change
    The task is listed in the "SQL Performance Analyzer Tasks" section. Refresh the page intermittently until the task status becomes a green tick, then click on the task name.
    Task List
    The resulting screen shows details of the selected task. Clicking the "Comparison Report" glasses icon allows you to view the comparison report.
    Task Details

    Transferring SQL Tuning Sets

    In the examples listed above, the tests have been performed on the same system. In reality you are more likely to want to create a tuning set on your production system, then run the SQL Performance Analyzer against it on a test system. Fortunately, the DBMS_SQLTUNE package allows you to transport SQL tuning sets by storing them in a staging table.
    First, create the staging table using the CREATE_STGTAB_SQLSET procedure.
    CONN sys/password@prod AS SYSDBA
    
    BEGIN
      DBMS_SQLTUNE.create_stgtab_sqlset(table_name      => 'SQLSET_TAB',
                                        schema_name     => 'SPA_TEST_USER',
                                        tablespace_name => 'USERS');
    END;
    /
    Next, use the PACK_STGTAB_SQLSET procedure to export the SQL tuning set into the staging table.
    BEGIN
      DBMS_SQLTUNE.pack_stgtab_sqlset(sqlset_name          => 'SPA_TEST_SQLSET',
                                      sqlset_owner         => 'SYS',
                                      staging_table_name   => 'SQLSET_TAB',
                                      staging_schema_owner => 'SPA_TEST_USER');
    END;
    /
    Once the SQL tuning set is packed into the staging table, the table can be transferred to the test system using Data Pump, export/import, or a database link. Once on the test system, the SQL tuning set can be imported using the UNPACK_STGTAB_SQLSET procedure.
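    As a sketch, the database-link route might look like this when run on the test system; the link name prod_link is an assumption, not something created earlier in this article:

```sql
-- Pull the packed staging table across a database link from
-- production (prod_link is a hypothetical link name).
CREATE TABLE sqlset_tab AS
  SELECT * FROM spa_test_user.sqlset_tab@prod_link;
```

    The copied table can then be passed to UNPACK_STGTAB_SQLSET as shown next.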
    BEGIN
      DBMS_SQLTUNE.unpack_stgtab_sqlset(sqlset_name          => '%',
                                        sqlset_owner         => 'SYS',
                                        replace              => TRUE,
                                        staging_table_name   => 'SQLSET_TAB',
                                        staging_schema_owner => 'SPA_TEST_USER');
    END;
    /
    The SQL tuning set can now be used with the SQL Performance Analyzer on the test system.
    For more information see:

    Oracle SQL tuning - Tune individual SQL statements


    Tuning individual Oracle SQL statements

    The acronym SQL stands for Structured Query Language. SQL is an industry standard database query language that was adopted in the mid-1980s. It should not be confused with commercial products such as Microsoft SQL Server or open source products such as MySQL, both of which use the acronym as part of the title of their products.

    Do this before you start individual SQL statement tuning

    This broad-brush approach can save thousands of hours of tedious SQL tuning because you can tune hundreds of queries at once.  Remember, you MUST do this first, else later changes to the optimizer parameters or statistics may un-tune your SQL.
     
    Remember, you must ALWAYS start with system-level SQL tuning, else later changes might undo your tuned execution plans:
    • Optimize the server kernel - You must always tune your disk and network I/O subsystem (RAID, DASD bandwidth, network) to optimize the I/O time, network packet size and dispatching frequency.
       
    • Adjust your optimizer statistics - You must always collect and store optimizer statistics to allow the optimizer to learn more about the distribution of your data and make more intelligent execution plans.  Also, histograms can hypercharge SQL performance when determining optimal table join order, and when making access decisions on skewed WHERE clause predicates.
       
    • Adjust optimizer parameters - optimizer_mode, optimizer_index_caching, and optimizer_index_cost_adj.
       
    • Optimize your instance - Your choice of db_block_size, db_cache_size, and OS parameters (db_file_multiblock_read_count, cpu_count, &c), can influence SQL performance.
       
    • Tune your SQL Access workload with physical indexes and materialized views - Just as the 10g SQLAccess advisor recommends missing indexes and missing materialized views, you should always optimize your SQL workload with indexes, especially function-based indexes, a Godsend for SQL tuning. 
    11g Note:  The Oracle 11g SQL Performance Analyzer (SPA), is primarily designed to speed up the holistic SQL tuning process.

    Once you create a workload (called a SQL Tuning Set, or STS), Oracle will repeatedly execute the workload, using sophisticated predictive models (using a regression testing approach) to accurately identify the salient changes to SQL execution plans, based on your environmental changes. Using SPA, we can predict the impact of system changes on a workload, and we can forecast changes in response times for SQL after making any change, like parameter changes, schema changes, hardware changes, OS changes, or Oracle upgrades.  For details, see the book Oracle 11g New Features.

    Once the environment, instance, and objects have been tuned, the Oracle administrator can focus on what is probably the single most important aspect of tuning an Oracle database: tuning the individual SQL statements. In this final article in my series on Oracle tuning, I will share some general guidelines for tuning individual SQL statements to improve Oracle performance.

    Oracle SQL tuning goals

    Oracle SQL tuning is a phenomenally complex subject. Entire books have been written about the nuances of Oracle SQL tuning; however, there are some general guidelines that every Oracle DBA follows in order to improve the performance of their systems.  Again, see the book "Oracle Tuning: The Definitive Reference", for complete details.

    The goals of SQL tuning focus on improving the execution plan to fetch the rows with the smallest number of database "touches" (LIO buffer gets and PIO physical reads). 

    • Remove unnecessary large-table full-table scans—Unnecessary full-table scans cause a huge amount of unnecessary I/O and can drag-down an entire database. The tuning expert first evaluates the SQL based on the number of rows returned by the query. The most common tuning remedy for unnecessary full-table scans is adding indexes. Standard b-tree indexes can be added to tables, and bitmapped and function-based indexes can also eliminate full-table scans. In some cases, an unnecessary full-table scan can be forced to use an index by adding an index hint to the SQL statement.
       
    • Cache small-table full-table scans—In cases where a full-table scan is the fastest access method, the administrator should ensure that a dedicated data buffer is available for the rows.  In Oracle8 and beyond, a small table can be cached by forcing it into the KEEP pool.
       
    • Verify optimal index usage—Oracle sometimes has a choice of indexes, and the tuning professional must examine each index and ensure that Oracle is using the proper index.
       
    • Materialize your aggregations and summaries for static tables - One feature of the Oracle 10g SQLAccess advisor is recommendations for new indexes and suggestions for materialized views.  Materialized views pre-join tables and pre-summarize data, a real silver bullet for data mart reporting databases where the data is only updated daily. Again, see the book "Oracle Tuning: The Definitive Reference", for complete details on SQL tuning with materialized views.
    These are the goals of SQL tuning in a nutshell. However, they are deceptively simple, and to effectively meet them, we need to have a thorough understanding of the internals of Oracle SQL. Let's begin with an overview of the Oracle SQL optimizers.

    Oracle SQL optimizers

    One of the first things the Oracle DBA looks at is the default optimizer mode for the database. The Oracle initialization parameters offer many cost-based optimizer modes as well as the deprecated yet useful rule-based hint:
    The cost-based optimizer uses 'statistics' that are collected from the table using the 'analyze table' command. Oracle uses these metrics about the tables in order to intelligently determine the most efficient way of servicing the SQL query. It is important to recognize that in many cases, the cost-based optimizer may not make the proper decision in terms of the speed of the query. The cost-based optimizer is constantly being improved, but there are still many cases in which the rule-based optimizer will result in faster Oracle queries.
    Prior to Oracle 10g, Oracle's default optimizer mode was called 'choose.' In the choose optimizer mode, Oracle will execute the rule-based optimizer if there are no statistics present for the table; it will execute the cost-based optimizer if statistics are present. The danger with using the choose optimizer mode is that problems can occur in cases where one Oracle table in a complex query has statistics and the other tables do not. 
    Starting in Oracle 10g, the default optimizer mode is all_rows, favoring full-table scans over index access.  The all_rows optimizer mode is designed to minimize computing resources and it favors full-table scans.  Index access (first_rows_n) adds additional I/O overhead, but it returns rows sooner to the originating query:

    Full-table scans touch all data blocks
    Hence, many OLTP shops will choose first_rows, first_rows_100 or first_rows_10, asking Oracle to use indexes to reduce block touches:

    Index scans return rows fast by doing additional I/O
    Note:  Starting in Oracle9i release 2, the Oracle performance tuning guide says that the first_rows optimizer mode has been deprecated; use first_rows_n instead.
    When only some tables contain CBO statistics, Oracle will use cost-based optimization and estimate statistics for the other tables in the query at runtime. This can cause a significant slowdown in the performance of the individual query.
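    As a sketch, a different optimizer mode can be trialled at the session level before any instance-wide change is considered (the values shown are examples, not recommendations):

```sql
-- Try an index-friendly mode for the current session only,
-- re-run the problem query, and compare its execution plan.
ALTER SESSION SET optimizer_mode = FIRST_ROWS_10;

-- Revert to the 10g default when the test is done.
ALTER SESSION SET optimizer_mode = ALL_ROWS;
```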
    In sum, the Oracle database administrator will always try changing the optimizer mode for queries as the very first step in Oracle tuning. The foremost tenet of Oracle SQL tuning is avoiding the dreaded full-table scan. One of the hallmarks of an inefficient SQL statement is the failure of the SQL statement to use all of the indexes that are present within the Oracle database in order to speed up the query.

    Of course, there are times when a full-table scan is appropriate for a query, such as when you are doing aggregate operations such as a sum or an average, and the majority of the rows within the Oracle table must be read to get the query results. The task of the SQL tuning expert is to evaluate each full-table scan and see if the performance can be improved by adding an index.

    In most Oracle systems, a SQL statement will be retrieving only a small subset of the rows within the table. The Oracle optimizers are programmed to check for indexes and to use them whenever possible to avoid excessive I/O. However, if the formulation of a query is inefficient, the cost-based optimizer becomes confused about the best access path to the data, and the cost-based optimizer will sometimes choose to do a full-table scan against the table. Again, the general rule is for the Oracle database administrator to interrogate the SQL and always look for full-table scans. 
    For the full story, see my book "Oracle Tuning: The Definitive Reference" for details on choosing the right optimizer mode.
    A strategic plan for Oracle SQL tuning

    Many people ask where they should start when tuning Oracle SQL. Tuning Oracle SQL is like fishing. You must first fish in the Oracle library cache to extract SQL statements and rank the statements by their amount of activity.

    Step 1:  Identify high-impact SQL

    The SQL statements will be ranked according to the number of executions and will be tuned in this order. The executions column of the v$sqlarea view and the stats$sql_summary or the dba_hist_sql_summary table can be used to locate the most frequently used SQL.  Note that we can display SQL statements by:
    • Rows processed:  Queries that process a large number of rows will have high I/O and may also have impact on the TEMP tablespace.
       
    • Buffer gets:  High buffer gets may indicate a resource-intensive query.
       
    • Disk reads:  High disk reads indicate a query that is causing excessive I/O.
       
    • Memory KB:  The memory allocation of a SQL statement is useful for identifying statements that are doing in-memory table joins.
       
    • CPU secs:  This identifies the SQL statements that use the most processor resources.
       
    • Sorts:  Sorts can be a huge slowdown, especially if they're being done on a disk in the TEMP tablespace.
       
    • Executions:  The more frequently executed SQL statements should be tuned first, since they will have the greatest impact on overall performance.
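    A query along these lines, run as a privileged user, ranks the library cache contents by executions; ordering by buffer_gets or disk_reads instead ranks by those criteria:

```sql
-- Top 10 statements by executions from v$sqlarea.
SELECT *
FROM  (SELECT sql_text, executions, buffer_gets, disk_reads
       FROM   v$sqlarea
       ORDER  BY executions DESC)
WHERE  ROWNUM <= 10;
```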

    Step 2:  Determine the execution plan for SQL

    As each SQL statement is identified, it will be 'explained' to determine its existing execution plan. There are a host of third-party tools on the market that show the execution plan for SQL statements. The most common way of determining the execution plan for a SQL statement is to use Oracle's explain plan utility. By using explain plan, the Oracle DBA can ask Oracle to parse the statement and display the execution path without actually executing the SQL statement.

    To see the output of an explain plan, you must first create a 'plan table.' Oracle provides a script in $ORACLE_HOME/rdbms/admin called utlxplan.sql. Execute utlxplan.sql and create a public synonym for the plan_table:

     
    sqlplus > @utlxplan
    Table created.
     
    sqlplus > create public synonym plan_table for sys.plan_table;
    Synonym created.

    Most relational databases use an explain utility that takes the SQL statement as input, runs the SQL optimizer, and outputs the access path information into a plan_table, which can then be interrogated to see the access methods. Listing 1 runs a complex query against a database.

    EXPLAIN PLAN
    SET STATEMENT_ID = 'RUN1'
    INTO plan_table
    FOR
    SELECT   'T'||plansnet.terr_code, 'P'||detplan.pac1
    || detplan.pac2 || detplan.pac3, 'P1', sum(plansnet.ytd_d_ly_tm),
     sum(plansnet.ytd_d_ty_tm),
     sum(plansnet.jan_d_ly),
     sum(plansnet.jan_d_ty)
    FROM plansnet, detplan
    WHERE
        plansnet.mgc = detplan.mktgpm
    AND
        detplan.pac1 in ('N33','192','195','201','BAI',
        'P51','Q27','180','181','183','184','186','188',
        '198','204','207','209','211')
    GROUP BY 'T'||plansnet.terr_code, 'P'||detplan.pac1 || detplan.pac2 || detplan.pac3;
    This syntax is piped into the SQL optimizer, which will analyze the query and store the plan information in a row in the plan table identified by RUN1. Please note that the query will not execute; it will only create the internal access information in the plan table. The plan table contains the following fields:
    • operation:  The type of access being performed. Usually table access, table merge, sort, or index operation
       
    • options:  Modifiers to the operation, specifying a full table, a range table, or a join
       
    • object_name:  The name of the table being used by the query component
       
    • id:  The identifier for the query component
       
    • Parent_ID:  The parent of the query component. Note that several query components may have the same parent.
    Now that the plan_table has been created and populated, you may interrogate it to see your output by running the following query in Listing 2.
    plan.sql - displays contents of the explain plan table
    SET PAGES 9999;
    SELECT  lpad(' ',2*(level-1))||operation operation,
            options,
            object_name,
            position
    FROM plan_table
    START WITH id=0
    AND
    statement_id = 'RUN1'
    CONNECT BY prior id = parent_id
    AND
    statement_id = 'RUN1';
    Listing 3 shows the output from the plan table for the statement shown in Listing 1. This is the execution plan for the statement and shows the steps and the order in which they will be executed.
    SQL> @list_explain_plan
     
    OPERATION
    -------------------------------------------------------------------------------------
    OPTIONS                           OBJECT_NAME                    POSITION
    ------------------------------ -------------------------------------------------------
    SELECT STATEMENT
    SORT
    GROUP BY                                                      1
           CONCATENATION                                   1
    NESTED LOOPS                                    1
    TABLE ACCESS FULL         PLANSNET                   1
    TABLE ACCESS BY ROWID     DETPLAN                    2
              INDEX RANGE SCAN       DETPLAN_INDEX5             1
    NESTED LOOPS

    From this output, we can see the dreaded TABLE ACCESS FULL on the PLANSNET table. To diagnose the reason for this full-table scan, we return to the SQL and look for any plansnet columns in the WHERE clause. There, we see that the plansnet column called 'mgc' is being used as a join column in the query, indicating that an index is necessary on plansnet.mgc to alleviate the full-table scan.

    While the plan table is useful for determining the access path to the data, it does not tell the entire story. The configuration of the data is also a consideration. The SQL optimizer is aware of the number of rows in each table (the cardinality) and the presence of indexes on fields, but it is not aware of data distribution factors such as the number of expected rows returned from each query component.
     

    Step 3:  Tune the SQL statement

    For those SQL statements that possess a sub-optimal execution plan, the SQL will be tuned by one of the following methods:
     

    • Adding SQL 'hints' to modify the execution plan
       
    • Re-write SQL with Global Temporary Tables

    • Rewriting the SQL in PL/SQL. For certain queries this can result in more than a 20x performance improvement. The SQL would be replaced with a call to a PL/SQL package that contained a stored procedure to perform the query.
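    The global temporary table rewrite can be sketched as follows; the sales and customers tables and the threshold value are hypothetical:

```sql
-- Materialize the expensive subquery once, then join to the result.
CREATE GLOBAL TEMPORARY TABLE top_custs (cust_id NUMBER)
  ON COMMIT PRESERVE ROWS;

INSERT INTO top_custs
  SELECT cust_id
  FROM   sales
  GROUP  BY cust_id
  HAVING SUM(amount) > 10000;

SELECT c.*
FROM   customers c,
       top_custs t
WHERE  c.cust_id = t.cust_id;
```

    The subquery is evaluated exactly once, rather than re-evaluated inside a larger statement.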
    Using hints to tune Oracle SQL

    Among the most common tools for tuning SQL statements are hints. A hint is a directive that is added to the SQL statement to modify the access path for a SQL query.
    Troubleshooting tip!  For testing, you can quickly check the effect of another optimizer parameter value at the query level without using an 'alter session' command, using the new opt_param SQL hint:
    select /*+ opt_param('optimizer_mode','first_rows_10') */ col1, col2 . . .
    select /*+ opt_param('optimizer_index_cost_adj',20) */ col1, col2 . .
    Oracle publishes many dozens of SQL hints, and hints have become increasingly complicated with each release of Oracle.
    Note:  Hints should only be used for debugging; you should adjust your optimizer statistics to make the CBO replicate the hinted SQL.  Let's look at the most common hints to improve tuning:
    • Mode hints:  first_rows_10, first_rows_100
    • Oracle leading and ordered hints - Also see how to tune table join order with histograms
       
    • Dynamic sampling: dynamic_sampling
       
    • Oracle SQL undocumented tuning hints - Guru's only

    • The cardinality hint
       
  • Self-order the table joins - If you find that Oracle is joining the tables together in a sub-optimal order, you can use the ORDERED hint to force the tables to be joined in the order that they appear in the FROM clause.
     
  • Try a first_rows_n hint. Oracle has two cost-based optimizer modes, first_rows_n and all_rows. The first_rows_n mode begins returning rows as soon as possible, whereas the all_rows mode is designed to optimize the resources of the entire query before returning rows.
    SELECT /*+ first_rows */
     

    A case study in SQL tuning

    One of the historic problems with SQL involves formulating SQL queries. Simple queries can be written in many different ways, each variant of the query producing the same result, but with widely different access methods and query speeds.

    For example, a simple query such as 'What students received an A last semester?' can be written in three ways, as shown below, each returning an identical result.
    A standard join:
     
    SELECT *
    FROM STUDENT, REGISTRATION
    WHERE
        STUDENT.student_id = REGISTRATION.student_id
    AND
        REGISTRATION.grade = 'A';
     
    A nested query:

    SELECT *
    FROM STUDENT
    WHERE
        student_id IN
        (SELECT student_id
            FROM REGISTRATION
            WHERE
            grade = 'A'
        );

    A correlated subquery:

    SELECT *
    FROM STUDENT
    WHERE
        0 <
        (SELECT count(*)
            FROM REGISTRATION
            WHERE
            grade = 'A'
            AND
            student_id = STUDENT.student_id
        );
    Let's wind up with some general rules for writing more efficient SQL.

    Tips for writing more efficient SQL
    Space doesn't permit me to discuss every detail of Oracle tuning, but I can share some general rules for writing efficient SQL in Oracle, regardless of the optimizer that is chosen. These rules may seem simplistic, but following them diligently will generally resolve more than half of the SQL tuning problems you encounter:

    • Rewrite complex subqueries with temporary tables - Oracle created the global temporary table (GTT) and the SQL WITH clause to help divide-and-conquer complex SQL subqueries (especially those with WHERE clause subqueries, SELECT clause scalar subqueries and FROM clause in-line views).  Tuning SQL with temporary tables (and materializations in the WITH clause) can result in amazing performance improvements.
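    As a sketch (the store and sales tables are hypothetical), an aggregate subquery that would otherwise be repeated can be materialized once in a WITH clause:

    with store_sales as (
       select store_id, sum(sale_amount) total_sales
       from   sales
       group by store_id
    )
    select s.store_name, ss.total_sales
    from   store s, store_sales ss
    where  s.store_id = ss.store_id;

    The store_sales result set is built a single time and can then be referenced as often as needed in the main query.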
       
    • Use MINUS instead of NOT IN subqueries - Some say that using the MINUS set operator instead of NOT IN and NOT EXISTS subqueries will result in a faster execution plan.
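    A sketch of this rewrite, assuming book and sales tables that share a book_key column:

    select book_key from book
    minus
    select book_key from sales;

    This returns the book_key values present in book but absent from sales, the same result set as the NOT IN form shown further below.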
       
    • Use SQL analytic functions - The Oracle analytic functions can perform multiple aggregations (e.g. ROLLUP, CUBE) with a single pass through the tables, making them very fast for reporting SQL.
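    For instance (assuming the classic emp table with deptno and sal columns), an analytic function can attach a departmental total to every row in one pass, with no self-join:

    select
       deptno,
       ename,
       sal,
       sum(sal) over (partition by deptno) dept_total_sal  -- aggregate without collapsing rows
    from emp;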
       
    • Re-write NOT IN and NOT EXISTS subqueries as outer joins - In many cases of NOT queries (but ONLY where the subquery column is defined as NOT NULL), you can re-write the uncorrelated subqueries into outer joins with IS NULL tests.  Note that this is a non-correlated subquery, but it can be re-written as an outer join.
    select book_key from book
    where
    book_key NOT IN (select book_key from sales);

    Below we combine the outer join with a NULL test in the WHERE clause without using a sub-query, giving a faster execution plan.
    select b.book_key from book b, sales s
    where
       b.book_key = s.book_key(+)
    and
       s.book_key IS NULL;

    • Index your NULL values - If you have SQL that frequently tests for NULL, consider creating an index on NULL values.  To get around the optimization of SQL queries that choose NULL column values (i.e. where emp_name IS NULL), we can create a function-based index using the NVL built-in SQL function to index the NULL columns.
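    A sketch of this technique, assuming an emp table whose emp_name column allows NULLs (the placeholder string 'null' is an arbitrary choice):

    create index emp_null_name_idx
       on emp (nvl(emp_name, 'null'));

    select * from emp where nvl(emp_name, 'null') = 'null';

    Note that the query must use the same NVL expression as the index definition for the function-based index to be eligible.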
       
    • Leave column names alone - Never apply a calculation or function to an indexed column in a WHERE clause unless you have a matching function-based index (a.k.a. FBI).  Better yet, re-design the schema so that common WHERE clause predicates do not need transformation with a built-in function (BIF).  Each of the following predicates disables a standard index on its column:
    where salary*5              > :myvalue
    where substr(ssn,7,4)       = '1234'
    where to_char(mydate,'mon') = 'jan'
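    When the predicate cannot be rewritten, a matching function-based index restores index access (table and column names are hypothetical):

    create index emp_ssn_suffix_idx
       on emp (substr(ssn,7,4));

    select * from emp where substr(ssn,7,4) = '1234';

    Because the indexed expression matches the WHERE clause expression exactly, the optimizer can use the index instead of a full-table scan.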

    • Avoid the use of NOT IN or HAVING. Instead, a NOT EXISTS subquery may run faster (when appropriate).
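    For example (reusing the book and sales tables from above), the NOT IN form can be rewritten as a correlated NOT EXISTS subquery:

    select b.book_key
    from   book b
    where  not exists
       (select 1
        from   sales s
        where  s.book_key = b.book_key);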
       
    • Avoid the LIKE predicate - Always replace a LIKE with an equality test, when appropriate.
       
    • Never mix data types - If a WHERE clause column predicate is numeric, do not use quotes. For character index columns, always use quotes. These are examples of mixed data type predicates:
    where cust_nbr = '123'
    where substr(ssn,7,4) = 1234

    • Use DECODE and CASE - Performing complex aggregations with the DECODE or CASE functions can minimize the number of times a table has to be scanned.
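    A sketch (hypothetical sales table with region and sale_amount columns) that replaces two separate scans with a single pass using CASE:

    select
       sum(case when region = 'EAST' then sale_amount else 0 end) east_sales,
       sum(case when region = 'WEST' then sale_amount else 0 end) west_sales
    from sales;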
       
    • Don't fear full-table scans - Not all OLTP queries are optimal when they use indexes.  If your query will return a large percentage of the table rows, a full-table scan may be faster than an index scan.  This depends on many factors, including your configuration (values for db_file_multiblock_read_count and db_block_size), query parallelism and the number of table/index blocks in the buffer cache.
       
    • Use those aliases - Always use table aliases when referencing columns.
    Also, see these related SQL tuning notes: