Tuesday, April 29, 2008

Oracle Database Cloning using DBCA

The "Template Management" section of the Database Configuration Assistant (DBCA) can be used to clone databases. The following method creates a clone of an existing database including both the structure and the data:

  1. Start the Database Configuration Assistant (DBCA).
  2. On the "Welcome" screen click the "Next" button.
  3. On the "Operations" screen select the "Manage Templates" option and click the "Next" button.
  4. On the "Template Management" screen select the "Create a database template" option and select the "From and existing database (structure as well as data. you can also chose structure only )" sub-option then click the "Next" button.
  5. On the "Source database" screen select the relevant database instance and click the "Next" button.
  6. On the "Template properties" screen enter a suitable name and description for the template, confirm the location for the template files and click the "Next" button.
  7. On the "Location of database related files" screen choose either to maintain the file locations or to convert to OFA structure (recommended) and click the "Finish" button.
  8. On the "Confirmation" screen click the "OK" button.
  9. Wait while the Database Configuration Assistant progress screen gathers information about the source database, backs up the database and creates the template.

By default the template files are located in the "ORACLE_HOME/assistants/dbca/templates" directory.

Now copy the template to the destination machine, start DBCA, and create a new database using that template.

Force logging/nologging mode

You can create tables and indexes specifying that the database create them with the NOLOGGING option. When you create a table or index as NOLOGGING, the database does not generate redo log records for the operation. Thus, you cannot recover objects created with NOLOGGING, even if you are running in ARCHIVELOG mode. The NOLOGGING option provides three benefits:

1. Space is saved in the redo log files
2. The time it takes to create the table is decreased
3. Performance improves for parallel creation of large tables


Note: NOLOGGING can be overridden at the tablespace level using ALTER TABLESPACE ... FORCE LOGGING. NOLOGGING also has no effect if the database is in force logging mode, which is controlled with ALTER DATABASE [NO] FORCE LOGGING.
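As a quick sketch (the tablespace name USERS is only an example), these are the commands involved:

SQL> SELECT force_logging FROM v$database;
SQL> ALTER DATABASE FORCE LOGGING;
SQL> ALTER DATABASE NO FORCE LOGGING;
SQL> ALTER TABLESPACE users FORCE LOGGING;
SQL> ALTER TABLESPACE users NO FORCE LOGGING;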

SQL> set timing on;
SQL> create table sales_logging as select * from sales;
Table created.
Elapsed: 00:00:25.24
SQL> create table sales_nologging NOLOGGING as select * from sales;
Table created.
Elapsed: 00:00:06.59

For just over 900,000 rows, the time difference is around 18 seconds.


Let's suppose a table is created using the NOLOGGING option, regardless of how NOLOGGING is being invoked (in a CREATE statement using NOLOGGING, or in a tablespace with NOLOGGING set). What is the end result of creating a table, inserting data, committing, followed by a delete statement and a rollback statement? Does NOLOGGING mean the DML is not recorded and that you cannot rollback because there was nothing logged in the redo logs?

SQL> create table test (id number) nologging;
Table created.
SQL> insert into test values (1);
1 row created.
SQL> commit;
Commit complete.
SQL> delete from test;
1 row deleted.
SQL> rollback;
Rollback complete.
SQL> select * from test;
ID
----------
1

The answer is no, that is not what NOLOGGING means.

What NOLOGGING Means

"The NOLOGGING clause also specifies that subsequent direct loads using SQL*Loader and direct load INSERT operations are not logged. Subsequent DML statements (UPDATE, DELETE, and conventional path insert) are unaffected by the NOLOGGING attribute of the table and generate redo."


*** As demonstrated, the NOLOGGING option can be a time saver, but it can also put you at risk if you do not use it wisely. If you create a table with NOLOGGING but cannot afford to lose the data, the first step after the data load completes is to take a backup. If a good part of your data-loading work revolves around using SQL*Loader to load data into staging tables, make the tables (or their tablespace) NOLOGGING and save yourself some time.
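For instance, a minimal RMAN sketch (assuming the staging data lives in a tablespace named STAGE_TS) of the backup to take immediately after a NOLOGGING load:

$ rman target /
RMAN> BACKUP TABLESPACE stage_ts;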

Monday, April 28, 2008

Changing Archive Log Destination

In this article you will learn how to change the destination for archived redo log files. Sometimes the location where archived redo logs are written fills up and you cannot access the database. There are two ways to do this:

1. Temporarily Changing the Destination Using SQL*Plus

If you are automatically archiving, you can use the following command to override the destination specified by the LOG_ARCHIVE_DEST initialization parameter. This command does not change the value in the initialization parameter file; the change is only valid until you restart the instance.

>sqlplus / as sysdba

To see the current location:

sql> archive log list;

Database log mode Archive Mode
Automatic archival Enabled
Archive destination /oracle/app/oracle/product/10.2.0/db_1/dbs/arch
Oldest online log sequence 9285
Next log sequence to archive 9287
Current log sequence 9287

To change the location:

sql> ARCHIVE LOG START '/oracle2/arch';

To verify your changes:

sql> archive log list;

Database log mode Archive Mode
Automatic archival Enabled
Archive destination /oracle2/arch
Oldest online log sequence 9285
Next log sequence to archive 9287
Current log sequence 9287


2. Permanently Changing the Destination Using SQL*Plus


To permanently change the destination, you must change the initialization parameter. You can change it dynamically with the ALTER SYSTEM command as shown below:

Note: LOG_ARCHIVE_DEST has been deprecated in favor of LOG_ARCHIVE_DEST_n for Enterprise Edition users. If you do not have Enterprise Edition or you have not specified any LOG_ARCHIVE_DEST_n parameters, LOG_ARCHIVE_DEST is valid.
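For example, on Enterprise Edition the equivalent of the command shown below would be (a sketch; the path is illustrative):

SQL> ALTER SYSTEM SET log_archive_dest_1 = 'LOCATION=/oracle2/arch' SCOPE=BOTH;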


> sqlplus / as sysdba

Issue the ALTER SYSTEM command to update the value of the LOG_ARCHIVE_DEST parameter in memory and in your SPFILE:

sql> ALTER SYSTEM SET log_archive_dest = '/oracle2/arch' scope=both;

To verify your changes:

sql> archive log list;

Database log mode Archive Mode
Automatic archival Enabled
Archive destination /oracle2/arch
Oldest online log sequence 9285
Next log sequence to archive 9287
Current log sequence 9287

In Oracle 10g

To see the archive log status:
>sqlplus / as sysdba

SQL> archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 132
Next log sequence to archive 134
Current log sequence 134

To see the physical archive location:

SQL> show parameter DB_RECOVERY_FILE_DEST

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /backup/oracle/oradata/flash_r
ecovery_area/
db_recovery_file_dest_size big integer 2G


To change the size of the flash recovery area (which holds the archived logs):

SQL> alter system SET DB_RECOVERY_FILE_DEST_SIZE = 10G SCOPE=BOTH SID='orca';

System altered.

SQL> show parameter DB_RECOVERY_FILE_DEST

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /backup/oracle/oradata/flash_r
ecovery_area/
db_recovery_file_dest_size big integer 10G
SQL>

To change the Physical Location:

SQL> ALTER SYSTEM SET DB_RECOVERY_FILE_DEST ='/backup/oracle/flash_recovery_area/' SCOPE=spfile;

System altered.

SQL> shutdown immediate
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
SQL> startup
ORACLE instance started.

Total System Global Area 1258291200 bytes
Fixed Size 2020448 bytes
Variable Size 218106784 bytes
Database Buffers 1023410176 bytes
Redo Buffers 14753792 bytes
Database mounted.
Database opened.
SQL>
SQL> show parameter DB_RECOVERY_FILE_DEST

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_recovery_file_dest string /backup/oracle/flash_recovery_
area/
db_recovery_file_dest_size big integer 10G



Flashback Part #2 Setup and Maintenance

Limitations of Flashback Database
  1. Flashback Database can only undo changes to a datafile made by an Oracle database. It cannot be used to repair media failures, or to recover from a deletion of datafiles.
  2. You cannot use Flashback Database to undo a shrink datafile operation.
  3. If the database control file is restored from backup or re-created, all accumulated flashback log information is discarded. You cannot use FLASHBACK DATABASE to return to a point in time before the restore or re-creation of a control file.
  4. When using Flashback Database with a target time at which a NOLOGGING operation was in progress, block corruption is likely in the database objects and datafiles affected by the NOLOGGING operation. For example, if you perform a direct-path INSERT operation in NOLOGGING mode, and that operation runs from 9:00 to 9:15 on April 3, 2005, and you later need to use Flashback Database to return to the target time 09:07 on that date, the objects and datafiles updated by the direct-path INSERT may be left with block corruption after the Flashback Database operation completes. If possible, avoid using Flashback Database with a target time or SCN that coincides with a NOLOGGING operation. Also, perform a full or incremental backup of the affected datafiles immediately after any NOLOGGING operation to ensure recoverability to points in time after the operation. If you expect to use Flashback Database to return to a point in time during an operation such as a direct-path INSERT, consider performing the operation in LOGGING mode.
Requirements for Enabling Flashback Database

Your database must be running in ARCHIVELOG mode, because archived logs are used in the Flashback Database operation. You must have a flash recovery area enabled, because flashback logs can only be stored in the flash recovery area. For Real Application Clusters databases, the flash recovery area must be stored in a clustered file system or in ASM.
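Before proceeding, both requirements can be confirmed quickly (a sketch; your output will vary):

SQL> SELECT log_mode, flashback_on FROM v$database;
SQL> SHOW PARAMETER db_recovery_file_dest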

Enabling Logging for Flashback Database

To enable logging for Flashback Database, set the DB_FLASHBACK_RETENTION_TARGET initialization parameter and issue the ALTER DATABASE FLASHBACK ON statement.

1. Start SQL*Plus and ensure that the database is mounted, but not open.
SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;

2. Optionally, set the DB_FLASHBACK_RETENTION_TARGET to the length of the desired flashback window in minutes:
SQL> ALTER SYSTEM SET DB_FLASHBACK_RETENTION_TARGET=4320;
# 4320 = 3 days (By default DB_FLASHBACK_RETENTION_TARGET is set to one day (1440 minutes)).

3. Enable the Flashback Database feature for the whole database:
SQL> ALTER DATABASE FLASHBACK ON;

By default, flashback logs are generated for all permanent tablespaces. If you wish, you can reduce overhead by disabling flashback logging for specific tablespaces:
SQL> ALTER TABLESPACE tbs_3 FLASHBACK OFF;

You can re-enable flashback logging for a tablespace later with this command: SQL> ALTER TABLESPACE tbs_3 FLASHBACK ON;

Note: if you disable Flashback Database for a tablespace, then you must take its datafiles offline before running FLASHBACK DATABASE.

You can disable flashback logging for the entire database with this command: SQL> ALTER DATABASE FLASHBACK OFF;

Creating Normal and Guaranteed Restore Points

To create normal or guaranteed restore points, use the CREATE RESTORE POINT statement in SQL*Plus, providing a name for the restore point and specifying whether it is to be a guaranteed restore point or a normal one (the default). The database can be open or mounted when creating restore points. If it is mounted, then it must have been shut down cleanly.
SQL> CREATE RESTORE POINT before_upgrade;
Restore point created.
SQL> CREATE RESTORE POINT before_upgrade GUARANTEE FLASHBACK DATABASE;
Restore point created.

Listing Restore Points

To see a list of the currently defined restore points, use the V$RESTORE_POINT control file view, by means of the following query:

SQL> SELECT NAME, SCN, TIME, DATABASE_INCARNATION#, GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE FROM V$RESTORE_POINT;

You can also use the following query to view only the guaranteed restore points:

SQL> SELECT NAME, SCN, TIME, DATABASE_INCARNATION#, GUARANTEE_FLASHBACK_DATABASE, STORAGE_SIZE FROM V$RESTORE_POINT WHERE GUARANTEE_FLASHBACK_DATABASE='YES';

For normal restore points, STORAGE_SIZE is zero. For guaranteed restore points, STORAGE_SIZE indicates the amount of disk space in the flash recovery area used to retain logs required to guarantee FLASHBACK DATABASE to that restore point.

Dropping Restore Points

When you are satisfied that you do not need an existing restore point, or when you want to create a new restore point with the name of an existing restore point, you can drop the restore point, using the DROP RESTORE POINT SQL*Plus statement.

SQL> DROP RESTORE POINT before_app_upgrade;
Restore point dropped.

The same statement is used to drop both normal and guaranteed restore points. Guaranteed restore points never age out of the control file. They remain until they are explicitly dropped.

Monitoring Space Usage For Guaranteed Restore Points

When guaranteed restore points are defined on your database, you should monitor the amount of space used in your flash recovery area for files required to meet the guarantee. Use the query for viewing guaranteed restore points in "Listing Restore Points" and refer to the STORAGE_SIZE column to determine the space required for files related to each guaranteed restore point. To see the total usage of your flash recovery area, use V$RECOVERY_FILE_DEST and V$FLASH_RECOVERY_AREA_USAGE.
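For example, the following queries (a sketch based on the 10g columns of these views) report the overall space limit, current usage and the per-file-type breakdown:

SQL> SELECT space_limit, space_used, space_reclaimable, number_of_files FROM v$recovery_file_dest;
SQL> SELECT file_type, percent_space_used, percent_space_reclaimable FROM v$flash_recovery_area_usage;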

Sizing the Flash Recovery Area to Include Flashback Logs

When using Flashback Database, you must add extra space to the flash recovery area to hold the flashback logs, along with the other files stored in the flash recovery area. The setting of the DB_FLASHBACK_RETENTION_TARGET initialization parameter determines, indirectly, how much flashback log data the database retains. The size of flashback logs generated by the database for a given time period can vary considerably, however, depending on the specific database workload. If more blocks are affected by database updates during a given interval, then more disk space is used by the flashback log data generated for that interval.


Estimating Disk Space Requirements for Flashback Database Logs

The V$FLASHBACK_DATABASE_LOG view can help you estimate how much space to add to your flash recovery area for flashback logs. After you have enabled logging for Flashback Database and set a flashback retention target, allow the database to run under a normal workload for a while, to generate a representative sample of flashback logs.
SQL> SELECT ESTIMATED_FLASHBACK_SIZE FROM V$FLASHBACK_DATABASE_LOG;

The result is an estimate of the disk space needed to meet the current flashback retention target, based on the database workload since Flashback Database was enabled. Add the amount of disk space specified in V$FLASHBACK_DATABASE_LOG.ESTIMATED_FLASHBACK_SIZE to your flash recovery area size, to hold the expected database flashback logs.

Managing Space For Flashback Logs in the Flash Recovery Area

You cannot manage the flashback logs in the flash recovery area directly, other than by setting the flashback retention target or using guaranteed restore points. You can, however, manage flash recovery area space as a whole, in order to maximize space available for retention of flashback database logs.



Determining the Current Window for Flashback Database

When flashback logging is enabled, the earliest SCN in the flashback database window can be determined by querying
V$FLASHBACK_DATABASE_LOG.OLDEST_FLASHBACK_SCN and V$FLASHBACK_DATABASE_LOG.OLDEST_FLASHBACK_TIME.

SQL> SELECT OLDEST_FLASHBACK_SCN, OLDEST_FLASHBACK_TIME FROM V$FLASHBACK_DATABASE_LOG;

Related Topics:

  1. Flashback Part #1 Basics
  2. Features of Oracle Flashback

Flashback Part #1 Basics

Flashback Database

Oracle Flashback Database, accessible from both RMAN and SQL*Plus, lets you quickly recover the entire database from logical data corruptions or user errors. It is similar to conventional point-in-time recovery in its effects, allowing you to return a database to its state at a time in the recent past. Flashback Database is, however, much faster than point-in-time recovery, because it does not require restoring datafiles from backup and it requires applying fewer changes from the archived redo logs.

Flashback Database uses its own logging mechanism, creating flashback logs which are stored in the flash recovery area. You can only use Flashback Database if flashback logs are available. Therefore, you must set up your database in advance to create flashback logs if you want to take advantage of the Flashback Database feature.

To enable Flashback Database, you set up a flash recovery area, and set a flashback retention target, to specify how far back into the past you want to be able to restore your database with Flashback Database.

From that time on, at regular intervals, the database copies images of each altered block in every datafile into the flashback logs. These block images can later be reused to reconstruct the datafile contents as of any moment at which logs were captured. When a database is restored to its state at some past target time using Flashback Database, each block changed since that time is restored from the copy of the block in the flashback logs most immediately prior to the desired target time. The redo log is then used to re-apply changes since the time that block was copied to the flashback logs.

Flashback Database Window

The range of SCNs for which there is currently enough flashback log data to support the FLASHBACK DATABASE command is called the flashback database window. If space in the flash recovery area is low, then flashback logs may be deleted to free space for files required by the configured retention policy. The result is that the flashback database window can be shorter than the flashback retention target, depending upon the size of the flash recovery area, other backups that must be retained and how much flashback logging data is needed.


Normal Restore Points

Creating a normal restore point assigns the restore point name to a specific point in time or SCN, as a kind of alias you can use with commands that recognize a RESTORE POINT clause as a shorthand for specifying an SCN. Before performing any operation that you may have to reverse, you can create a normal restore point. The name of the restore point and the SCN are recorded in the control file. Then, if you later need to use Flashback Database, Flashback Table, or point-in-time recovery, you can refer to the target time using the name of the restore point instead of a time expression or SCN.

Normal restore points are very lightweight. The control file can maintain a record of thousands of normal restore points with no significant impact upon database performance. Normal restore points eventually age out of the control file if not manually deleted, so they require no ongoing maintenance.
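As an illustration (the restore point and table names here are hypothetical), a normal restore point can later be referenced directly in a FLASHBACK TABLE command; note that FLASHBACK TABLE requires row movement to be enabled on the table:

SQL> ALTER TABLE scott.emp ENABLE ROW MOVEMENT;
SQL> CREATE RESTORE POINT before_app_change;

(perform the risky change)

SQL> FLASHBACK TABLE scott.emp TO RESTORE POINT before_app_change;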


Guaranteed Restore Points

Like normal restore points, guaranteed restore points can be used as aliases for SCNs in recovery operations. However, they also provide specific functionality related to the use of the Flashback Database feature.

Creating a guaranteed restore point at a particular SCN enforces the requirement that you can perform a Flashback Database operation to return your database to its state at that SCN, even if flashback logging is not enabled for your database. If flashback logging is enabled, creating a guaranteed restore point enforces the retention of flashback logs required for Flashback Database back to any point in time after the creation of the earliest guaranteed restore point.

A guaranteed restore point can be used to revert a whole database to a known good state days or weeks ago, as long as there is enough disk space in flash recovery area to store the needed logs.
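A sketch of the typical revert sequence, assuming a guaranteed restore point named before_upgrade already exists:

SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP MOUNT
SQL> FLASHBACK DATABASE TO RESTORE POINT before_upgrade;
SQL> ALTER DATABASE OPEN RESETLOGS;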

Logging for Flashback Database and Guaranteed Restore Points

The logging for Flashback Database and guaranteed restore points is based upon capturing images of datafile blocks before changes are applied, so that these images can be used to return the datafiles to their previous state when a FLASHBACK DATABASE command is executed. The chief differences between normal flashback logging and logging for guaranteed restore points are related to when blocks are logged and whether the logs can be deleted in response to space pressure in the flash recovery area. These differences affect space usage for logs and database performance.


Requirements for Using Guaranteed Restore Points

  1. The COMPATIBLE initialization parameter must be set to 10.2 or greater.
  2. The database must be running in ARCHIVELOG mode. The FLASHBACK DATABASE operation used to return your database to a guaranteed restore point requires the use of archived redo logs from around the time of the restore point.
  3. A flash recovery area must be configured. Guaranteed restore points use a mechanism similar to flashback logging, and as with flashback logging, Oracle must store the required logs in the flash recovery area.
  4. If flashback database is not enabled, then the database must be mounted, not open, when creating the first guaranteed restore point (or if all previously created guaranteed restore points have been dropped).

Related Topics:

  1. Flashback Part #2 Setup and Maintenance
  2. Features of Oracle Flashback

Sunday, April 27, 2008

Automatic Shared Memory Management

Automatic Shared Memory Management puts Oracle in control of allocating memory within the SGA. The SGA_TARGET parameter sets the amount of memory available to the SGA. This parameter can be altered dynamically up to a maximum of the SGA_MAX_SIZE parameter value. Provided the STATISTICS_LEVEL is set to TYPICAL or ALL and the SGA_TARGET is set to a value other than "0", Oracle will control the memory pools which would otherwise be controlled by the following parameters:

  1. DB_CACHE_SIZE (default block size)
  2. SHARED_POOL_SIZE
  3. LARGE_POOL_SIZE
  4. JAVA_POOL_SIZE

If these parameters are set to a non-zero value they represent the minimum size for the pool. These minimum values may be necessary if you experience application errors when certain pool sizes drop below a specific threshold.
The following parameters must be set manually and take memory from the quota allocated by the SGA_TARGET parameter:
  1. DB_KEEP_CACHE_SIZE
  2. DB_RECYCLE_CACHE_SIZE
  3. DB_nK_CACHE_SIZE (non-default block size)
  4. STREAMS_POOL_SIZE
  5. LOG_BUFFER

Enable Automatic Shared Memory Management

Suppose we want to set SGA_MAX_SIZE = 1.5 GB and SGA_TARGET = 1.2 GB:

sql> shutdown immediate;
ORACLE instance shut down.
sql> startup mount;
Database mounted.
sql> alter system set SGA_MAX_SIZE = 1500M scope=spfile;
System altered.
sql> shutdown immediate;
ORACLE instance shut down.
sql> startup;
Database opened.
sql> alter system set SGA_TARGET = 1200M scope=both;
System altered.


From now on, Oracle dynamically manages the sizes of the shared pool, buffer cache, large pool and Java pool, and if needed you can raise SGA_TARGET dynamically up to the 1.5 GB limit set by SGA_MAX_SIZE.
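To verify what Oracle has actually allocated to each automatically tuned component, you can query V$SGA_DYNAMIC_COMPONENTS (a sketch; the values will differ on your system):

SQL> SELECT component, current_size/1024/1024 AS current_mb FROM v$sga_dynamic_components;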

Wednesday, April 23, 2008

Use Multiple blocksizes in Oracle

The introduction of Oracle 9i brought an amazing amount of complexity to the Oracle database engine. Oracle introduced many new internal features, including bitmap free lists, redo log based replication, dynamic SGA, and perhaps the most important feature of all, the ability to support multiple block sizes.

Multiple blocksizes are indeed beneficial in large databases to eliminate superfluous I/O and isolate critical objects into a separate data buffer cache.
This allows large tables and indexes to have a larger block size than smaller objects. This is especially useful in hybrid databases where DSS transactions benefit from large block sizes, whilst OLTP operations are best suited to smaller block sizes. It also means that tablespaces can be transported between databases with differing block sizes. The database is created with a standard block size and up to 5 non-standard block sizes.

The DB_BLOCK_SIZE is used to specify the standard block size which is used for the SYSTEM and TEMPORARY tablespaces. All subsequent tablespaces will also be assigned this block size unless specified otherwise. To see the default block size of the database:
SQL> conn system/password as sysdba;

SQL> show parameter DB_BLOCK_SIZE

NAME TYPE VALUE
------------------------------------ ----------- ------------------------------
db_block_size integer 8192


The DB_CACHE_SIZE parameter, rather than DB_BLOCK_BUFFERS, is used to define the size of the buffer cache for the standard block size. This can be set to any size, but it will be rounded to the nearest whole granule. If the total SGA is less than 128M the granule size is 4M; above 128M it becomes 16M. The DB_CACHE_SIZE must be at least 1 granule in size and defaults to 48M. An individual buffer cache must be defined for each non-standard block size used. These are set up with the following parameters, which default to a size of 0:


DB_2K_CACHE_SIZE = 0M
DB_4K_CACHE_SIZE = 0M
DB_8K_CACHE_SIZE = 0M
DB_16K_CACHE_SIZE = 0M

DB_32K_CACHE_SIZE = 0M


Each cache that is used must have at least 1 granule assigned to it. The instance must be restarted before changes to these parameters take effect. The DB_nK_CACHE_SIZE parameters can only be set to zero if there are no online tablespaces with an nK block size.

The following example shows the whole process of setting up a 16K buffer cache and defining a tablespace to use it:

SQL> CONN sys/password AS SYSDBA
Connected.
SQL> ALTER SYSTEM SET DB_16K_CACHE_SIZE=50M SCOPE=SPFILE;
System altered.

SQL> SHUTDOWN IMMEDIATE
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> STARTUP
ORACLE instance started.
Total System Global Area 167772160 bytes
Fixed Size 787968 bytes
Variable Size 78641664 bytes
Database Buffers 88080384 bytes
Redo Buffers 262144 bytes
Database mounted.
Database opened.

SQL> CREATE TABLESPACE test_16k_ts DATAFILE '/u01/oradata/TSH1/test_16k_ts01.dbf' SIZE 100M BLOCKSIZE 16K;
Tablespace created.
SQL>

The KEEP and RECYCLE buffer caches can only be defined for the standard block size. In 8i these were defined using the BUFFER_POOL_KEEP and BUFFER_POOL_RECYCLE parameters, with their memory taken from the total DB_BLOCK_BUFFERS. In 9i the parameters have been changed to DB_KEEP_CACHE_SIZE and DB_RECYCLE_CACHE_SIZE, with their memory allocations being totally separate from the main buffer cache.

A number of rules apply to the use of variable block sizes:

  1. All partitions of a partitioned object must reside in tablespaces with the same block size.
  2. All temporary tablespaces must be of the standard block size.
  3. Index Organized Table overflow and out-of-line LOB segments can be stored in a tablespace with a different block size than that of the base table.
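As a sketch (SCOTT.BIG_TABLE and its index are hypothetical names), an existing segment can be relocated into the 16K tablespace created above; any indexes must be rebuilt afterwards because MOVE changes rowids:

SQL> ALTER TABLE scott.big_table MOVE TABLESPACE test_16k_ts;
SQL> ALTER INDEX scott.big_table_pk REBUILD;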


Sunday, April 20, 2008

Gather Database Statistics

Whenever a valid SQL statement is processed Oracle has to decide how to retrieve the necessary data. This decision can be made using one of two methods:
  1. Rule Based Optimizer (RBO) - This method is used if the server has no internal statistics relating to the objects referenced by the statement. This method is no longer favoured by Oracle and will be desupported in future releases.
  2. Cost Based Optimizer (CBO) - This method is used if internal statistics are present. The CBO checks several possible execution plans and selects the one with the lowest cost, where cost relates to system resources.

If new objects are created, or the amount of data in the database changes, the statistics will no longer represent the real state of the database, so the CBO decision process may be seriously impaired. The mechanisms and issues relating to maintenance of internal statistics are explained below:
  1. Analyze Statement
  2. DBMS_UTILITY
  3. DBMS_STATS
  4. Scheduling Stats
  5. Transferring Stats

Analyze Statement

The ANALYZE statement can be used to gather statistics for a specific table, index or cluster. The statistics can be computed exactly, or estimated based on a specific number of rows, or a percentage of rows:

ANALYZE TABLE employees COMPUTE STATISTICS;
ANALYZE INDEX employees_pk COMPUTE STATISTICS;

ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 100 ROWS;
ANALYZE TABLE employees ESTIMATE STATISTICS SAMPLE 15 PERCENT;


DBMS_UTILITY

The DBMS_UTILITY package can be used to gather statistics for a whole schema or database. Both methods follow the same format as the analyze statement:

EXEC DBMS_UTILITY.analyze_schema('SCOTT','COMPUTE');
EXEC DBMS_UTILITY.analyze_schema('SCOTT','ESTIMATE', estimate_rows => 100);
EXEC DBMS_UTILITY.analyze_schema('SCOTT','ESTIMATE', estimate_percent => 15);

EXEC DBMS_UTILITY.analyze_database('COMPUTE');
EXEC DBMS_UTILITY.analyze_database('ESTIMATE', estimate_rows => 100);
EXEC DBMS_UTILITY.analyze_database('ESTIMATE', estimate_percent => 15);



DBMS_STATS

The DBMS_STATS package was introduced in Oracle 8i and is Oracle's preferred method of gathering object statistics. Oracle lists a number of benefits to using it, including parallel execution, long-term storage of statistics and transfer of statistics between servers. Once again, it follows a similar format to the other methods:

EXEC DBMS_STATS.gather_database_stats;
EXEC DBMS_STATS.gather_database_stats(estimate_percent => 15);

EXEC DBMS_STATS.gather_schema_stats('SCOTT');
EXEC DBMS_STATS.gather_schema_stats('SCOTT', estimate_percent => 15);
EXEC DBMS_STATS.gather_schema_stats( -
  ownname          => 'SCOTT', -
  options          => 'GATHER', -
  cascade          => true, -
  estimate_percent => dbms_stats.auto_sample_size);

EXEC DBMS_STATS.gather_table_stats( -
  ownname => 'SCOTT', -
  tabname => 'EMP', -
  cascade => true);

EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.gather_table_stats('SCOTT', 'EMPLOYEES', estimate_percent => 15);

EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK');
EXEC DBMS_STATS.gather_index_stats('SCOTT', 'EMPLOYEES_PK', estimate_percent => 15);

This package also gives you the ability to delete statistics:

EXEC DBMS_STATS.delete_database_stats;
EXEC DBMS_STATS.delete_schema_stats('SCOTT');
EXEC DBMS_STATS.delete_table_stats('SCOTT', 'EMPLOYEES');
EXEC DBMS_STATS.delete_index_stats('SCOTT', 'EMPLOYEES_PK');

For details, see the document: Oracle® Database PL/SQL Packages and Types Reference 10g Release 2 (10.2).


Scheduling Stats

Scheduling the gathering of statistics using DBMS_JOB is the easiest way to make sure they are always up to date:

SET SERVEROUTPUT ON
DECLARE
  l_job NUMBER;
BEGIN
  DBMS_JOB.submit(l_job,
                  'BEGIN DBMS_STATS.gather_schema_stats(''SCOTT''); END;',
                  SYSDATE,
                  'SYSDATE + 1');
  COMMIT;
  DBMS_OUTPUT.put_line('Job: ' || l_job);
END;
/

The above code sets up a job to gather statistics for SCOTT at the current time every day. You can list the current jobs on the server using the DBA_JOBS and DBA_JOBS_RUNNING views.
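For example (a sketch; job numbers and schedules will differ on your system):

SQL> SELECT job, what, next_date, interval FROM dba_jobs;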

Existing jobs can be removed using:

EXEC DBMS_JOB.remove(X);
COMMIT;

Where 'X' is the number of the job to be removed.



Transferring Stats

It is possible to transfer statistics between servers, allowing consistent execution plans between servers with varying amounts of data. First the statistics must be collected into a statistics table. In the following examples, the statistics for the APPSCHEMA user are collected into a new table, STATS_TABLE, which is owned by DBASCHEMA:

SQL> EXEC DBMS_STATS.create_stat_table('DBASCHEMA','STATS_TABLE');
SQL> EXEC DBMS_STATS.export_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');

This table can then be transferred to another server using your preferred method (Export/Import, SQL*Plus COPY etc.) and the stats imported into the data dictionary as follows:

SQL> EXEC DBMS_STATS.import_schema_stats('APPSCHEMA','STATS_TABLE',NULL,'DBASCHEMA');
SQL> EXEC DBMS_STATS.drop_stat_table('DBASCHEMA','STATS_TABLE');

Issues
  1. Exclude dataload tables from your regular stats gathering, unless you know they will be full at the time that stats are gathered.
  2. Gathering statistics can be very resource intensive for the server, so avoid peak workload times or gather stale stats only (see the sketch after this list).
  3. Even if scheduled, it may be necessary to gather fresh statistics after database maintenance or large data loads.
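A minimal sketch of the stale-only approach mentioned in point 2 (it relies on table monitoring, which is enabled by default in 10g):

EXEC DBMS_STATS.gather_schema_stats('SCOTT', options => 'GATHER STALE');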

Thursday, April 17, 2008

Enabling ARCHIVELOG Mode

OVERVIEW

Most of the High Availability features of Oracle require you to enable ARCHIVELOG mode for your database. When you enable this mode, redo logs are archived instead of overwritten. The archived logs are stored in a separate place and can usually be backed up regularly by your standard filesystem backup system. Archived logs are utilized by RMAN, Data Guard, Flashback and many others.

If you are going to enable archivelog mode on a production database, I recommend shutting down the database and taking a cold backup first (keeping a "final NOARCHIVELOG mode backup" is a good and accepted practice).


Enabling Archive Mode

Enabling archive mode is simple: set the parameter LOG_ARCHIVE_DEST, then connect to your database in mounted but closed mode (STARTUP MOUNT) and alter the database.

Let's start by checking the current archive mode of your DB.

SQL> SELECT LOG_MODE FROM SYS.V$DATABASE;

LOG_MODE
------------
NOARCHIVELOG

So we're in NOARCHIVELOG mode and we need to change.

SQL> CONN / AS SYSDBA;
SQL> ALTER SYSTEM SET LOG_ARCHIVE_DEST = "c:\oracle\admin\test\archive" scope = both;
System altered.

SQL> archive log list
Database log mode No Archive Mode
Automatic archival Disabled
Archive destination c:\oracle\admin\test\archive
Oldest online log sequence 161
Current log sequence 163

You can specify as many as 10 different archive log destinations by using the parameters LOG_ARCHIVE_DEST_1 through LOG_ARCHIVE_DEST_10. Remember, if you run out of space in your archive log destination, the database will hang!

SQL> shutdown immediate;
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 184549376 bytes
Fixed Size 1300928 bytes
Variable Size 157820480 bytes
Database Buffers 25165824 bytes
Redo Buffers 262144 bytes
Database mounted.

SQL> alter database archivelog;
Database altered.

SQL> alter database open;
Database altered.

There are several system views that can provide you with information regarding archives, such as:

V$DATABASE: Identifies whether the database is in ARCHIVELOG or NOARCHIVELOG mode and whether MANUAL (archiving mode) has been specified.
V$ARCHIVED_LOG: Displays historical archived log information from the control file. If you use a recovery catalog, the RC_ARCHIVED_LOG view contains similar information.
V$ARCHIVE_DEST: Describes the current instance, all archive destinations, and the current value, mode, and status of these destinations.
V$ARCHIVE_PROCESSES: Displays information about the state of the various archive processes for an instance.
V$BACKUP_REDOLOG: Contains information about any backups of archived logs. If you use a recovery catalog, the RC_BACKUP_REDOLOG contains similar information.
V$LOG: Displays all redo log groups for the database and indicates which need to be archived.
V$LOG_HISTORY: Contains log history information such as which logs have been archived and the SCN range for each archived log.
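For example, to see which online log groups still need archiving (a sketch; your output will differ):

SQL> SELECT group#, sequence#, archived, status FROM v$log;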
Disable ARCHIVELOG Mode

Disabling archive mode is simple: connect to your database in mounted but closed mode (STARTUP MOUNT) and alter the database.

SQL> shutdown immediate;
ORACLE instance shut down.
SQL> startup mount
ORACLE instance started.

Total System Global Area 184549376 bytes
Fixed Size 1300928 bytes
Variable Size 157820480 bytes
Database Buffers 25165824 bytes
Redo Buffers 262144 bytes
Database mounted.

SQL> alter database noarchivelog;
Database altered.

SQL> alter database open;
Database altered.





Tuesday, April 15, 2008

Changing Character Sets Of Database

When computer systems process characters, they use numeric codes instead of the graphical representation of the character. For example, when the database stores the letter A, it actually stores a numeric code that is interpreted by software as the letter. These numeric codes are especially important in a global environment because of the potential need to convert data between different character sets.

Character set currently used

To see which character set currently used by the database, please do the following:

SQL> conn / as sysdba

SQL> SELECT view_name FROM dba_views WHERE view_name LIKE '%NLS%';

VIEW_NAME
------------------------------
V_$NLS_PARAMETERS
V_$NLS_VALID_VALUES
GV_$NLS_PARAMETERS
GV_$NLS_VALID_VALUES
NLS_SESSION_PARAMETERS
NLS_INSTANCE_PARAMETERS
NLS_DATABASE_PARAMETERS
EXU9NLS

8 rows selected.

To see details

SQL> select * from nls_database_parameters;

PARAMETER VALUE
------------------------------ ----------------------------------------
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA
NLS_NUMERIC_CHARACTERS .,
NLS_CHARACTERSET WE8ISO8859P1
NLS_CALENDAR GREGORIAN
NLS_DATE_FORMAT DD-MON-RR
NLS_DATE_LANGUAGE AMERICAN
NLS_SORT BINARY
NLS_TIME_FORMAT HH.MI.SSXFF AM
NLS_TIMESTAMP_FORMAT DD-MON-RR HH.MI.SSXFF AM
NLS_TIME_TZ_FORMAT HH.MI.SSXFF AM TZR
NLS_TIMESTAMP_TZ_FORMAT DD-MON-RR HH.MI.SSXFF AM TZR
NLS_DUAL_CURRENCY $
NLS_COMP BINARY
NLS_LENGTH_SEMANTICS BYTE
NLS_NCHAR_CONV_EXCP FALSE
NLS_NCHAR_CHARACTERSET AL16UTF16
NLS_RDBMS_VERSION 10.2.0.2.0

20 rows selected.
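If you only need the character set itself, a narrower query does the job:

SQL> SELECT value FROM nls_database_parameters WHERE parameter = 'NLS_CHARACTERSET';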

Changing Character Sets

Note: Depending on the character sets involved, this may result in data loss, so please try this in a test environment before proceeding in production.

SQL> CONN / AS SYSDBA

SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP RESTRICT;
SQL>ALTER SYSTEM SET JOB_QUEUE_PROCESSES=0;
SQL>ALTER SYSTEM SET AQ_TM_PROCESSES=0;
SQL>ALTER DATABASE CHARACTER SET WE8MSWIN1252;

If the above fails:

SQL>ALTER DATABASE CHARACTER SET INTERNAL_USE WE8MSWIN1252;
SQL>SHUTDOWN IMMEDIATE;
SQL>STARTUP;

Done! Now see what we have done:

SQL>SELECT * FROM gv_$nls_parameters;



Sunday, April 13, 2008

Manually configure the Oracle 10g EM dbconsole

Overview

When you choose to create a preconfigured database during the Oracle 10g installation, you can select the Oracle Enterprise Manager (OEM) interface that you want to use to manage the database. The following options are available:

Database Grid Control
This option is available only if an Oracle Management Agent is installed on the system. When the Installer detects an Oracle Management Agent on the system, it allows you to choose this option and specify the Oracle Management Service that you want to use to manage the database. If an Oracle Management Agent is not installed, you must choose to use Database Control to manage the database. However, if you install Oracle Management Agent after you install Oracle Database, you can then use Grid Control to manage this database.

Database Control

This option is selected by default if an Oracle Management Agent is not installed on the system. However, even if a Management Agent is installed, you can still choose to configure Database Control to manage the database.

Custom installation

If you choose the Custom installation type or the Advanced database configuration option during the installation, the Installer does not display the OEM setup screens. Instead, it runs the Database Configuration Assistant (DBCA) in interactive mode, which enables you to create a custom database.

DBCA also enables you to specify the Oracle Enterprise Manager interface that you want to use. Furthermore, you can also use DBCA after the installation to configure Database Control for a database that was not previously configured to use it.

However, if you decide to set up your own database, you must install the Database Control manually.

Set up your own Database and manually install the Database Control

The Database Control relies on various underlying technologies to discover, monitor, and administer the Oracle Database environment.

From the Database Control, you can monitor and administer a single Oracle Database instance.

The Database Control Framework consists of the Database Control and its underlying technologies:

A local version of the Oracle Management Service designed to work with the local database or clustered database.

A local Oracle Management Repository installed in the local database and designed to store management data for the Database Control.

The following steps have to be performed.



Create the Database Control Repository and setup the OC4J Application Server

Make sure that you can connect to the Repository Database (test it with SQL*Plus). Examples of setup files are shown below:

Windows
LISTENER.ORA
SQLNET.ORA
TNSNAMES.ORA

Linux
listener.ora
sqlnet.ora
tnsnames.ora

Now start the Oracle EM dbconsole Build Script ($ORACLE_HOME/bin/emca for Linux and $ORACLE_HOME\Bin\emca.bat for Windows).

$ emctl stop dbconsole -- stop EM
$ emca -repos drop -- drop EM repository
$ emca -repos create -- create EM repository
$ emca -config dbcontrol db -- configure dbcontrol
$ emca -r auto configure

Alternatively:

$ emctl stop dbconsole -- stop EM
$ emca -repos recreate -- recreate EM repository
$ emca -deconfig dbcontrol db -- deconfigure dbcontrol
$ emca -config dbcontrol db -- configure dbcontrol
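After configuration, the console can be started and its status checked (this assumes ORACLE_SID and ORACLE_HOME are set in your environment):

$ emctl start dbconsole
$ emctl status dbconsole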


Renaming / Moving Data Files, Control Files, and Online Redo Logs


*** Be sure that your database is in archivelog mode.
Overview



Once data files, online redo log files and control files have been created in the database, it may be necessary to move them in order to better manage size or I/O requirements. It may also be necessary when a file system (drive) has reached 85%-95% usage. There are several methods used by DBAs for moving datafiles, online redo log files and control files. In all of these methods, operating system commands are used to move the files, while the Oracle commands serve primarily to reset the pointers to those files.

There are two methods for moving / renaming physical database files within Oracle. The first is to shut the database down, move (or rename) the file(s) using O/S commands, and finally, use the ALTER DATABASE command to reset the pointers to those files within Oracle.

The second method can be done while the database is running and uses the ALTER TABLESPACE command. The tablespace will need to be taken offline during the time the file(s) are being moved or renamed. Once the files are moved (or renamed), use the ALTER TABLESPACE command to reset the pointers within Oracle and finally, bring the tablespace back online. This method only applies to datafiles whose tablespaces do not include SYSTEM, ROLLBACK or TEMPORARY segments.
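Before moving anything, it helps to list the datafiles and the tablespaces they belong to (a simple data dictionary query):

SQL> SELECT tablespace_name, file_name, bytes/1024/1024 AS size_mb FROM dba_data_files;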

Following is an example of how to manipulate datafiles in a tablespace using both the ALTER DATABASE method and the ALTER TABLESPACE method. All examples use an Oracle9i database (9.2.0.5.0) running on Sun Solaris 2.9.

Moving Datafiles while the Instance is Mounted

Moving or renaming a datafile while the database is in the MOUNT stage requires the use of the ALTER DATABASE command. When using the ALTER DATABASE method to move datafiles, the datafile is moved after the instance is shut down. A summary of the steps involved follows:
  1. Shut down the instance.
  2. Use operating system commands to move or rename the file(s).
  3. Mount the database and use ALTER DATABASE to rename the file within the database.
  4. Open the database.

% sqlplus / as sysdba

SQL> shutdown immediate;

SQL> !mv /u05/app/oradata/ORA920/indx01.dbf /u06/app/oradata/ORA920/indx01.dbf

SQL> startup mount;

SQL> alter database rename file '/u05/app/oradata/ORA920/indx01.dbf' to '/u06/app/oradata/ORA920/indx01.dbf';

Do not disconnect after this step. Stay logged in and proceed to open the database!

SQL> alter database open;

SQL> exit;


Moving Datafiles while the Instance is Open

Moving or renaming a datafile while the database is in the 'OPEN' stage requires the use of the ALTER TABLESPACE command. When using the ALTER TABLESPACE method to move datafiles, the datafile is moved while the instance is running. A summary of the steps involved follows:
  1. Take the tablespace OFFLINE.
  2. Use operating system commands to move or rename the file(s).
  3. Use the ALTER TABLESPACE command to rename the file within the database.
  4. Bring the tablespace back ONLINE.

NOTE: This method can only be used for non-SYSTEM tablespaces. It also cannot be used for tablespaces that contain active ROLLBACK segments or TEMPORARY segments.

% sqlplus "/ as sysdba"

SQL> alter tablespace INDX offline;

SQL> !mv /u05/app/oradata/ORA920/indx01.dbf /u06/app/oradata/ORA920/indx01.dbf

SQL> alter tablespace INDX
2 rename datafile '/u05/app/oradata/ORA920/indx01.dbf' to '/u06/app/oradata/ORA920/indx01.dbf';

Do not disconnect after this step. Stay logged in and proceed to bring the tablespace back online!

SQL> alter tablespace INDX online;

SQL> exit


Moving Online Redo Log Files

Online redo log files may be moved while the database is shut down. Once renamed (or moved), the DBA should use the ALTER DATABASE command to update the control file. A summary of the steps involved follows:
  1. Shut down the instance.
  2. Use operating system commands to move the log file(s).
  3. Mount the database and use ALTER DATABASE to rename the log file within the database.
  4. Open the database.

% sqlplus "/ as sysdba"

SQL> shutdown immediate;
SQL> !mv /u06/app/oradata/ORA920/redo_g03a.log /u03/app/oradata/ORA920/redo_g03a.log
SQL> !mv /u06/app/oradata/ORA920/redo_g03b.log /u04/app/oradata/ORA920/redo_g03b.log
SQL> !mv /u06/app/oradata/ORA920/redo_g03c.log /u05/app/oradata/ORA920/redo_g03c.log

SQL> startup mount;

SQL> alter database rename file '/u06/app/oradata/ORA920/redo_g03a.log' to
2 '/u03/app/oradata/ORA920/redo_g03a.log';
SQL> alter database rename file '/u06/app/oradata/ORA920/redo_g03b.log' to
2 '/u04/app/oradata/ORA920/redo_g03b.log';
SQL> alter database rename file '/u06/app/oradata/ORA920/redo_g03c.log' to
2 '/u05/app/oradata/ORA920/redo_g03c.log';

Do not disconnect after this step. Stay logged in and proceed to open the database!

SQL> alter database open;
SQL> exit


Moving Control Files

The following method can be used to move or rename control file(s). A summary of the steps involved follows:
  1. Shut down the instance.
  2. Move the control file(s).
  3. Edit the init.ora.
  4. Start up the instance.

% sqlplus "/ as sysdba"

SQL> shutdown immediate

SQL> !mv /u06/app/oradata/ORA920/control01.ctl /u03/app/oradata/ORA920/control01.ctl
SQL> !mv /u06/app/oradata/ORA920/control02.ctl /u04/app/oradata/ORA920/control02.ctl
SQL> !mv /u06/app/oradata/ORA920/control03.ctl /u05/app/oradata/ORA920/control03.ctl

Within the init.ora file, there will be an entry for the "control_files" parameter. Edit this entry to reflect the change(s) made to the physical control file(s) moved in the previous example.

...
control_files = (/u03/app/oradata/ORA920/control01.ctl,
/u04/app/oradata/ORA920/control02.ctl,
/u05/app/oradata/ORA920/control03.ctl)
...
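If the instance uses an SPFILE rather than a PFILE, the same change can be made with ALTER SYSTEM before the shutdown instead of editing init.ora (a sketch; adjust the paths to your environment):

SQL> ALTER SYSTEM SET control_files = '/u03/app/oradata/ORA920/control01.ctl', '/u04/app/oradata/ORA920/control02.ctl', '/u05/app/oradata/ORA920/control03.ctl' SCOPE=SPFILE;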


SQL> startup open
SQL> exit

Thursday, April 10, 2008

Avoid Sorting done by 'ORDER BY' clause

Sorting is usually expensive. If you have an index, you can use it to avoid sorting! Is a NOT NULL constraint required? The following test cases will answer that question.

TESTCASE 1:

Step 1: Create a table EMP:

SQL> conn system/password as sysdba;
Connected.
SQL> CREATE TABLE HASAN.EMP AS SELECT * FROM SCOTT.EMP;
Table created.
SQL> conn hasan/password
Connected.

Step 2: Query EMP:

SQL> set autotrace on explain;
SQL> select * from EMP ORDER BY EMPNO;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10


14 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 2007178810

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14 | 1218 | 4 (25) | 00:00:01 |
| 1 | SORT ORDER BY | | 14 | 1218 | 4 (25) | 00:00:01 |
| 2 | TABLE ACCESS FULL| EMP | 14 | 1218 | 3 (0) | 00:00:01 |
---------------------------------------------------------------------------

Observation: full table scan of EMP, with sorting (SORT ORDER BY) performed on EMPNO.

Step 3: Create an index EMP_EMPNO_INDX on the EMPNO column of EMP:

SQL> create index emp_empno_indx on emp(empno);
Index created.
SQL> select * from emp order by empno;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10

14 rows selected.

Execution Plan
----------------------------------------------------------
Plan hash value: 150391907

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14 | 1218 | 4 (25)| 00:00:01 |
| 1 | SORT ORDER BY | | 14 | 1218 | 4 (25)| 00:00:01 |
| 2 | TABLE ACCESS FULL| EMP | 14 | 1218 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------

Observation: still a full table scan of EMP with sorting (SORT ORDER BY); the index is not used because EMPNO has no NOT NULL constraint.

Step 4: Create a NOT NULL constraint on the EMPNO column of EMP:

SQL> ALTER TABLE EMP MODIFY (EMPNO CONSTRAINT NOT_NULL_CONS NOT NULL);
Table altered.
SQL> desc emp;

Name Null? Type
----------------- -------- ------------
EMPNO NOT NULL NUMBER(4)
ENAME VARCHAR2(10)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(7,2)
DEPTNO NUMBER(2)


SQL> select * from emp order by empno;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10

14 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1969959806

----------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14 | 1218 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMP | 14 | 1218 | 2 (0)| 00:00:01 |
| 2 | INDEX FULL SCAN | EMP_EMPNO_INDX | 14 | | 1 (0)| 00:00:01 |
----------------------------------------------------------------------------------------------


Observation: the index EMP_EMPNO_INDX is used (INDEX FULL SCAN followed by TABLE ACCESS BY INDEX ROWID) and no sorting is performed.

Learning:

1) The ORDER BY column should have a NOT NULL constraint.
2) The indexed column should be in the ORDER BY clause.

TESTCASE 2:

SQL> desc emp;

Name Null? Type
----------------- -------- ------------
EMPNO NOT NULL NUMBER(4)
ENAME VARCHAR2(10)
JOB VARCHAR2(9)
MGR NUMBER(4)
HIREDATE DATE
SAL NUMBER(7,2)
COMM NUMBER(7,2)
DEPTNO NUMBER(2)

SQL> select index_name,column_name from user_ind_columns where table_name='EMP';

INDEX_NAME COLUMN_NAME
------------- --------------
EMP_EMPNO_INDX EMPNO


Step 1: Query EMP, ordering by ENAME:

SQL> SELECT * FROM EMP ORDER BY ENAME;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7934 MILLER CLERK 7782 23-JAN-82 1300 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7369 SMITH CLERK 7902 17-DEC-80 800 20
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30

14 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 150391907

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14 | 1218 | 4 (25)| 00:00:01 |
| 1 | SORT ORDER BY | | 14 | 1218 | 4 (25)| 00:00:01 |
| 2 | TABLE ACCESS FULL| EMP | 14 | 1218 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------


Observation: after ordering by ENAME, the index EMP_EMPNO_INDX is not used.

Step 2: Create a composite index on (EMPNO, ENAME) and run the query again:

SQL> CREATE INDEX TESTEMPIDX2 ON EMP(EMPNO,ENAME);
Index created.
SQL> SELECT * FROM EMP ORDER BY EMPNO,ENAME;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7369 SMITH CLERK 7902 17-DEC-80 800 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7934 MILLER CLERK 7782 23-JAN-82 1300 10

14 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 1356242066

-------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
-------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14 | 1218 | 2 (0)| 00:00:01 |
| 1 | TABLE ACCESS BY INDEX ROWID| EMP | 14 | 1218 | 2 (0)| 00:00:01 |
| 2 | INDEX FULL SCAN | TESTEMPIDX2 | 14 | | 1 (0)| 00:00:01 |
-------------------------------------------------------------------------------------------

Observation: the composite index TESTEMPIDX2 is used (INDEX FULL SCAN followed by TABLE ACCESS BY INDEX ROWID) and no sorting is performed.

Step 3: Reverse the order of EMPNO and ENAME in the ORDER BY clause and run the query again:

SQL> SELECT * FROM EMP ORDER BY ENAME,EMPNO;

EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7876 ADAMS CLERK 7788 23-MAY-87 1100 20
7499 ALLEN SALESMAN 7698 20-FEB-81 1600 300 30
7698 BLAKE MANAGER 7839 01-MAY-81 2850 30
7782 CLARK MANAGER 7839 09-JUN-81 2450 10
7902 FORD ANALYST 7566 03-DEC-81 3000 20
7900 JAMES CLERK 7698 03-DEC-81 950 30
7566 JONES MANAGER 7839 02-APR-81 2975 20
7839 KING PRESIDENT 17-NOV-81 5000 10
7654 MARTIN SALESMAN 7698 28-SEP-81 1250 1400 30
7934 MILLER CLERK 7782 23-JAN-82 1300 10
7788 SCOTT ANALYST 7566 19-APR-87 3000 20
7369 SMITH CLERK 7902 17-DEC-80 800 20
7844 TURNER SALESMAN 7698 08-SEP-81 1500 0 30
7521 WARD SALESMAN 7698 22-FEB-81 1250 500 30

14 rows selected.


Execution Plan
----------------------------------------------------------
Plan hash value: 150391907

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14 | 1218 | 4 (25)| 00:00:01 |
| 1 | SORT ORDER BY | | 14 | 1218 | 4 (25)| 00:00:01 |
| 2 | TABLE ACCESS FULL| EMP | 14 | 1218 | 3 (0)| 00:00:01 |
---------------------------------------------------------------------------

Observation: full table scan of EMP and sorting (SORT ORDER BY); the index TESTEMPIDX2 is not used because the ORDER BY columns do not match the index's column order.

Conclusion:

1. All the ORDER BY columns should be in the composite index.

2. The ORDER BY columns should be in the same order as in the composite index.

3. At least one of the composite index columns should have a NOT NULL constraint.