Wednesday, December 28, 2011

SPM - SQL PLAN MANAGEMENT

As a DBA, sometimes I feel that Oracle is a living thing! Take query performance, for example. Out of nowhere, a gentle-looking SQL statement may pop up in your list of top time-consuming queries. A SQL query that performed efficiently three months ago (or even a week ago) is now performing horribly, threatening your cool database performance :)

Yes, we DBAs all know that a gentle statement can get screwed up due to:
- Re-gathering of optimizer statistics
- Changes in optimizer parameters
- Changes in schema/object structure (i.e., metadata definition)


Though the optimizer always tries to choose the least expensive plan for each SQL statement execution, sometimes it chooses an expensive plan and raises annoying situations for peace-loving DBAs :)

Well, Oracle 11g comes with a new feature that can handle such weird situations. The simplest solution we can think of is a mechanism that allows only trusted SQL plans to be executed and rejects untrusted plans. Oh yes, Oracle 11g gives us a manager, named SPM, that does exactly that.

SQL Plan Management (SPM) has three major components:

  • SQL Plan Baseline Capture - Creates SQL plan baselines for trusted (accepted) SQL plans.
  • SQL Plan Baseline Selection - Ensures that only accepted SQL plans are used when a SQL statement is executed.
  • SQL Plan Baseline Evolution - Evaluates all SQL plans (old and new) for each SQL statement. A new plan is accepted only if it gives better or equal performance compared with an existing trusted plan; otherwise it is rejected. (If no trusted plan exists, the current plan is accepted.)


To enable this cool feature, we just do the following:

ALTER SYSTEM SET optimizer_capture_sql_plan_baselines = TRUE SCOPE=BOTH; [Default value is FALSE]
ALTER SYSTEM SET optimizer_use_sql_plan_baselines = TRUE SCOPE=BOTH; [Default value is TRUE, so just check the parameter value]

You are done! Keep an eye on the DBA_SQL_PLAN_BASELINES view and enjoy SPM's magic.
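For example, you can watch what lands in the baseline view and, if you wish, evolve unaccepted plans manually. Here is a minimal sketch; the SQL handle below is a hypothetical placeholder, so take a real one from the first query:

SELECT sql_handle, plan_name, enabled, accepted, fixed
FROM   dba_sql_plan_baselines;

-- Ask SPM to verify an unaccepted plan and accept it if it performs better
SET SERVEROUTPUT ON
DECLARE
  report CLOB;
BEGIN
  report := DBMS_SPM.EVOLVE_SQL_PLAN_BASELINE(
              sql_handle => 'SQL_abcdef1234567890',  -- hypothetical handle
              verify     => 'YES',
              commit     => 'YES');
  DBMS_OUTPUT.PUT_LINE(report);  -- prints the verification report
END;
/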

Tuesday, December 27, 2011

TABLE_EXISTS_ACTION

Those who are familiar with Oracle Data Pump probably know the TABLE_EXISTS_ACTION import (impdp) parameter very well.

TABLE_EXISTS_ACTION = {SKIP | APPEND | TRUNCATE | REPLACE}

This parameter is used when you import a table that already exists in the import schema. The default value is SKIP, so if you do not use this parameter and impdp finds that a table to be imported already exists, impdp skips that table.

Now you may be interested in the other three values:

APPEND - New rows are loaded on top of the existing data. The import will go through smoothly if the table does not have any primary key or unique key constraints. If such a constraint exists, you need to ensure that the append operation does not violate it (that is, that it does not introduce duplicate data).

TRUNCATE - If the table is not a parent table (i.e., no other table references it through a foreign key constraint), the existing data is truncated and data is loaded from the dump file. Otherwise, data will not be loaded.

REPLACE - This is the trickiest value of the TABLE_EXISTS_ACTION parameter. The existing table is dropped and recreated from the dump. If the table being imported is a parent table (i.e., another table references it through a foreign key constraint), the foreign key will be deleted from the child table. All existing data is replaced with the imported data.
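A minimal usage sketch (the directory, dump file, and table names here are hypothetical placeholders):

impdp scott/tiger DIRECTORY=dump_dir DUMPFILE=scott_emp.dmp TABLES=EMPLOYEE TABLE_EXISTS_ACTION=APPEND LOGFILE=imp_employee.log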

Wednesday, October 19, 2011

ORA-08102

ORA-08102: index key not found, obj# 290, file 1, block 4353


Cause: Internal error: possible inconsistency in index.

Action: Send trace file to your customer support representative, along with information on reproducing the error


Well, I do not have access to Oracle Support (if you need it, you have to buy that service). So, first of all, I tried to identify which index caused the problem.


SQL> SELECT owner, object_name, object_id, object_type FROM dba_objects x WHERE x.object_id = 290;

OWNER       OBJECT_NAME      OBJECT_ID   OBJECT_TYPE
-------- ------------------- ---------- ------------
SYS         I_JOB_NEXT            290    INDEX

Guess what? It's an index owned by the SYS user. So I issued an index rebuild command, and it didn't work. As the index lies in the SYSTEM tablespace, I didn't want to move it to another tablespace. Thus I dropped and recreated the index. First, gather the index DDL from the database:

SQL> set long 1000
SQL> set pagesize 0
SQL> select DBMS_METADATA.GET_DDL('INDEX','I_JOB_NEXT') from DUAL;
output:
CREATE INDEX "SYS"."I_JOB_NEXT" ON "SYS"."JOB$" ("NEXT_DATE")
PCTFREE 10 INITRANS 2 MAXTRANS 255 COMPUTE STATISTICS
STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1
BUFFER_POOL DEFAULT FLASH_CACHE DEFAULT CELL_FLASH_CACHE DEFAULT)
TABLESPACE "SYSTEM"

SQL> DROP INDEX SYS.I_JOB_NEXT;
SQL> CREATE INDEX "SYS"."I_JOB_NEXT" ON "SYS"."JOB$" ("NEXT_DATE") TABLESPACE "SYSTEM";

It worked. Previously, I thought that the REBUILD command internally drops an object and recreates it. But now I need to explore how the rebuild command really works.
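As an optional sanity check, you can validate the recreated index's structure; INDEX_STATS holds the result of the most recent VALIDATE STRUCTURE in the current session:

SQL> ANALYZE INDEX SYS.I_JOB_NEXT VALIDATE STRUCTURE;
SQL> SELECT name, height, lf_rows, del_lf_rows FROM index_stats;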

Sunday, October 16, 2011

ORA-01591: lock held by in-doubt distributed transaction string

Cause: An attempt was made to access a resource that is locked by a dead two-phase commit transaction in the prepared state.

Action: The DBA should query DBA_PENDING_TRANSACTIONS and DBA_2PC_PENDING and attempt to repair the network connection(s) to the coordinator and commit point. If a timely repair is not possible, the DBA should contact the end user and roll back or commit the transaction.

sqlplus / as sysdba

SQL> SELECT * FROM DBA_2PC_PENDING;

SQL> ROLLBACK FORCE 'LOCAL_TRAN_ID';

SQL> commit;

SQL> alter system enable distributed recovery;

SQL> execute dbms_transaction.purge_lost_db_entry('LOCAL_TRAN_ID');

SQL> commit;

For more details, please go through the Oracle documentation on distributed transactions.

Monday, July 25, 2011

Table Comparison and Synchronisation

If you are managing more than one database for the same application, it is very common to need to compare one table (say scott.employee) across two different databases. I think we have all had to do such a job before :). The simplest way is just to MINUS the two tables, like:

1. Two different tables in the same DB - SELECT * FROM table1 MINUS SELECT * FROM table2;
2. The same table in two different DBs - SELECT * FROM table1 MINUS SELECT * FROM table2@remote_db;

But this simple task can be time consuming if your tables are very big (say 1,000,000+ records). If the targeted tables do not contain BLOB, CLOB, or LONG columns, you can easily use DBMS_COMPARISON, a new package introduced in Oracle 11g, to compare them. Moreover, this package also provides synchronisation to remove mismatched records in the targeted tables. For details, please go through the Oracle documentation.

Here we are going to compare and synchronise the 'employee' table in two different DBs.

Requirement: a database link between the host (i.e., where these scripts will be run) and the remote DB.

Step One: Create the comparison task

BEGIN
  DBMS_COMPARISON.CREATE_COMPARISON(
      comparison_name    => 'compare_employee'
    , schema_name        => 'SCOTT'
    , object_name        => 'EMPLOYEE'
    , dblink_name        => 'db_link_to_remote'
    , remote_schema_name => 'SCOTT'
    , remote_object_name => 'EMPLOYEE');
END;
/

If you wish to compare two different tables in the same DB, set dblink_name => NULL, object_name => 'table1', and remote_object_name => 'table2'.

Step Two: Execute the comparison task

SET SERVEROUTPUT ON
DECLARE
  consistent BOOLEAN;
  scan_info  DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  consistent := DBMS_COMPARISON.COMPARE(
                    comparison_name => 'compare_employee'
                  , scan_info       => scan_info
                  , perform_row_dif => TRUE);
  DBMS_OUTPUT.PUT_LINE('Scan ID: '||scan_info.scan_id);
  IF consistent THEN
    DBMS_OUTPUT.PUT_LINE('No differences found.');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Differences were found.');
  END IF;
END;
/

Step Three: See how many differences you got

col COMPARISON_NAME format a15;
col SCHEMA_NAME format a15;
col OBJECT_NAME format a15;


SELECT s.scan_id, c.comparison_name, c.schema_name, c.object_name, s.current_dif_count
FROM   user_comparison c, user_comparison_scan_summary s
WHERE  c.comparison_name = s.comparison_name
AND    s.scan_id = 514;  -- this number is the scan ID output by step two
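If you want to inspect the individual differing rows, the row-level results land in USER_COMPARISON_ROW_DIF. A quick sketch, assuming the comparison name is stored uppercase in the dictionary; again, plug in your own scan ID:

SELECT local_rowid, remote_rowid, status
FROM   user_comparison_row_dif
WHERE  comparison_name = 'COMPARE_EMPLOYEE'
AND    scan_id = 514;  -- hypothetical scan ID from step two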


Step Four: Synchronise the remote table with the host table

DECLARE
  scan_info DBMS_COMPARISON.COMPARISON_TYPE;
BEGIN
  DBMS_COMPARISON.CONVERGE(
      comparison_name  => 'compare_employee'
    , scan_id          => 506  -- this number is the scan ID output by step two
    , scan_info        => scan_info
    , converge_options => DBMS_COMPARISON.CMP_CONVERGE_LOCAL_WINS);
  DBMS_OUTPUT.PUT_LINE('Local Rows Merged: '||scan_info.loc_rows_merged);
  DBMS_OUTPUT.PUT_LINE('Remote Rows Merged: '||scan_info.rmt_rows_merged);
  DBMS_OUTPUT.PUT_LINE('Local Rows Deleted: '||scan_info.loc_rows_deleted);
  DBMS_OUTPUT.PUT_LINE('Remote Rows Deleted: '||scan_info.rmt_rows_deleted);
END;
/

You can also synchronise the host table from the remote table by passing converge_options => DBMS_COMPARISON.CMP_CONVERGE_REMOTE_WINS; please see the Oracle documentation for details.

I hope you all enjoy this exciting new package in Oracle 11g.

Monday, July 11, 2011

Traveling to the past with Log Miner

Sometimes you may wish to see what happened in the recent past. One beautiful morning, you may discover that a mishap occurred in your database :(. Unfortunately, you had not enabled any auditing options earlier.

If your database is in archivelog mode and supplemental logging is enabled, then Log Miner can save the day! Oracle Log Miner is also capable of mining the redo logs and archive logs of a remote DB. Please see the Oracle utilities documentation for details.

In this post, we only consider the scenario in which the source DB and the mining DB are the same. Let's describe how you can do this.


Step 1: Create a Log Miner User

CREATE USER logminer_adm identified by logminer_adm;

GRANT resource,connect to logminer_adm;
GRANT EXECUTE_CATALOG_ROLE,DBA to logminer_adm;


Step 2: Enable Supplemental Logging

Check whether supplemental logging is enabled:

SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;

If not, then you can't mine all transactions. So enable it for the future, and follow the remaining steps to mine the subset of transactions that is available.

ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;



Step 3: Add Archive Log Files to Log Miner

ALTER SESSION SET NLS_DATE_FORMAT = 'DD-MON-YYYY HH24:MI:SS';


EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>'/u/archivelog/2011_07_10/o1_mf_1_470_71m4jv90_.arc', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME =>'/u/archivelog/2011_07_10/o1_mf_1_471_71m4kn09_.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);


You can add as many files as you need.

Step 4: Start Log Miner

EXECUTE DBMS_LOGMNR.START_LOGMNR(STARTTIME => '10-JUL-2011 19:50:00', ENDTIME => '10-JUL-2011 19:55:00', OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.CONTINUOUS_MINE);

Step 5: View What Happened Last Night :)

SELECT SEG_OWNER ,OPERATION, SQL_REDO, SQL_UNDO
FROM V$LOGMNR_CONTENTS
WHERE SEG_OWNER = 'SYS';
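You can filter V$LOGMNR_CONTENTS further to zero in on the suspect activity; for example (the owner and table names here are hypothetical):

SELECT username, timestamp, operation, sql_redo, sql_undo
FROM   v$logmnr_contents
WHERE  seg_owner = 'SCOTT'
AND    seg_name  = 'EMPLOYEE'
AND    operation IN ('INSERT', 'UPDATE', 'DELETE');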


Step 6: Stop Log Miner Session

EXECUTE DBMS_LOGMNR.END_LOGMNR;

Saturday, July 2, 2011

DATA PUMP: Network mode import

Sometimes we need to copy a schema from one DB machine to another, or copy one schema as another schema (i.e., remap the schema) in the same DB. The simplest ways are to:

(a) export the required schema, move the dump to the remote DB, and import it [copy between 2 DBs]

(b) export the required schema and import the dump with the schema remap option [copy within the same DB]


Scenario ONE

Suppose the source schema is quite small (say 1-3 GB); the above process is time consuming and laborious compared with the task requirements.

Scenario TWO

Suppose the source schema is very big (say 100 GB+) and there is not enough disk space to store a 100 GB dump.

In the above cases you can use the network import option. But you need to consider that a network import is a slower process, because all the data must travel over the network connection when you are copying a schema between two different DB machines.

Let's describe how we can achieve this. We will copy the user_info schema from the source (remote) DB to the destination DB.



STEP 1: [IN SOURCE (REMOTE) DB] Grant the EXP_FULL_DATABASE role to the source schema
conn system/pass;

GRANT EXP_FULL_DATABASE to user_info;

STEP 2: [IN DESTINATION DB] Create destination user and grant necessary roles

conn system/pass;
CREATE USER user_info
IDENTIFIED BY pass
DEFAULT TABLESPACE USERS
TEMPORARY TABLESPACE TEMP;

GRANT CONNECT , RESOURCE TO user_info;

STEP 3: [IN DESTINATION DB] Grant read/write on the dump directory

conn system/pass;

GRANT read,write on DIRECTORY dump_directory to user_info;

A network import does not require any dump file. This directory is only required to write the import log file.

STEP 4: [IN DESTINATION DB] Create a public DB link

conn system/pass;
CREATE PUBLIC DATABASE LINK SATURN
CONNECT TO user_info IDENTIFIED BY pass
USING '(DESCRIPTION=
         (ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=remote_db)(PORT=1521)))
         (CONNECT_DATA=(SERVICE_NAME=orcl.oracle.com)(SERVER=DEDICATED)))';
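Before running impdp, it is worth a quick sanity check that the new link actually resolves and logs in:

SELECT sysdate FROM dual@SATURN;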

STEP 5: [IN DESTINATION DB MACHINE] Execute impdp

impdp user_info/pass directory=dump_dir network_link=SATURN logfile=net_import_proddev.log EXCLUDE=GRANT,STATISTICS,SYNONYM,DB_LINK REMAP_SCHEMA=USER_INFO:USER_INFO
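Once the import finishes, a simple count by object type helps confirm everything arrived (a rough check, not a full verification):

SELECT object_type, COUNT(*)
FROM   dba_objects
WHERE  owner = 'USER_INFO'
GROUP  BY object_type;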

Wednesday, May 4, 2011

Multiplex online Redo Logfile

It is important to have multiple (at least two) copies of each online redo logfile. To avoid disaster in the event of a disk failure, it is recommended to place the members of a specific online redo logfile group on different disks.

To view the current online redo logfile group members:

SELECT GROUP#, STATUS, MEMBER FROM V$LOGFILE;

Suppose we have disk01 (mount point /u01) that contains the current online redo logfiles. To add members to redo logfile groups 1, 2 and 3 on disk02 (mount point /u02):

ALTER DATABASE ADD LOGFILE MEMBER '/u02/redo01.log' TO GROUP 1;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/redo02.log' TO GROUP 2;
ALTER DATABASE ADD LOGFILE MEMBER '/u02/redo03.log' TO GROUP 3;
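Newly added members show a status of INVALID in V$LOGFILE until the database has written to them; after a log switch or two the status clears. A quick way to check:

ALTER SYSTEM SWITCH LOGFILE;
SELECT GROUP#, STATUS, MEMBER FROM V$LOGFILE;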

Monday, March 21, 2011

Track Database Growth

Sometimes it becomes very important to monitor your database growth, especially on test database machines. Usually these machines have very limited disk space.
Thus, tracking disk space consumption is one of the frequent tasks on the administrative checklist. Here is a SQL script that gives you the current size of the database and the average disk space consumed each day.


SELECT b.tsname tablespace_name
     , MAX(b.used_size_mb) cur_used_size_mb
     , ROUND(AVG(inc_used_size_mb), 2) avg_increase_mb
FROM (
  SELECT a.days, a.tsname
       , used_size_mb
       , used_size_mb - LAG(used_size_mb, 1) OVER (PARTITION BY a.tsname ORDER BY a.tsname, a.days) inc_used_size_mb
  FROM (
    SELECT TO_CHAR(sp.begin_interval_time, 'MM-DD-YYYY') days
         , ts.tsname
         , MAX(ROUND((tsu.tablespace_usedsize * dt.block_size) / (1024 * 1024), 2)) used_size_mb
    FROM   DBA_HIST_TBSPC_SPACE_USAGE tsu
         , DBA_HIST_TABLESPACE_STAT ts
         , DBA_HIST_SNAPSHOT sp
         , DBA_TABLESPACES dt
    WHERE  tsu.tablespace_id = ts.ts#
    AND    tsu.snap_id = sp.snap_id
    AND    ts.tsname = dt.tablespace_name
    AND    sp.begin_interval_time > SYSDATE - 7
    GROUP  BY TO_CHAR(sp.begin_interval_time, 'MM-DD-YYYY'), ts.tsname
    ORDER  BY ts.tsname, days
  ) a
) b
GROUP BY b.tsname
ORDER BY b.tsname;
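Note that this script reads AWR history, so it can only look back as far as your snapshot retention allows. You can check the snapshot interval and retention like this:

SELECT snap_interval, retention FROM dba_hist_wr_control;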

Wednesday, January 26, 2011

ORA-14519 Conflicting tablespace blocksizes while importing partition tables

ORA-14519: Conflicting tablespace blocksizes for table : Tablespace T2 block size 16384 [partition specification] conflicts with previously specified/implied tablespace T1 block size 8192 [object-level default]

Well :) the error message clearly indicates that I am trying to create a partitioned table in more than one tablespace, where the blocksizes of the tablespaces are different. We all know that table partitions cannot be placed in tablespaces with different blocksizes. Though I know this, and had not made this mistake, I got this error while trying to make a replica schema in a local test database.


I took the dump using EXPDP without any error, and got the above error when trying to import it using the REMAP_SCHEMA option of IMPDP. It is really annoying to get an error that seems inexplicable. All of my table partitions are placed in a single tablespace, but the error message says that I am trying to place table partitions in two tablespaces with different blocksizes! It sounds simply horrible.

But at the end of a long investigation :) I found the mistake. The user's default tablespace is T1 [8K blocksize] and I created the table in tablespace T2 [16K blocksize], just like this:

CREATE TABLE TEST (ID NUMBER(10), TEXT VARCHAR2(50))
PARTITION BY RANGE (ID) (
  PARTITION TEST_PART_1 VALUES LESS THAN (100001) TABLESPACE T2,
  PARTITION "TEST_PART_2" VALUES LESS THAN (MAXVALUE) TABLESPACE T2
);


This script creates the two partitions of the TEST table in the T2 tablespace, but unfortunately the default tablespace attribute of the TEST table is set to T1 (as I did not mention a default tablespace for table TEST when I created it, Oracle set this attribute to the user's default tablespace). See! Sometimes Oracle does something "smart" that causes a stupid scenario. If you want to see this unbelievable thing, just generate the table's DDL. You will see this:


CREATE TABLE TEST (ID NUMBER(10), TEXT VARCHAR2(50))
TABLESPACE T1
PARTITION BY RANGE (ID) (
  PARTITION TEST_PART_1 VALUES LESS THAN (100001) TABLESPACE T2,
  PARTITION "TEST_PART_2" VALUES LESS THAN (MAXVALUE) TABLESPACE T2
);


This stubbornness of Oracle caused ORA-14519 when I tried to import the TEST table, because Oracle can't create a partitioned table across tablespaces with different blocksizes.

So the solution is simple: mention the default tablespace when you create a partitioned table. Just like this:


CREATE TABLE TEST (
  ID NUMBER(10),
  TEXT VARCHAR2(50)
)
TABLESPACE T2
PARTITION BY RANGE (ID) (
  PARTITION TEST_PART_1 VALUES LESS THAN (100001) TABLESPACE T2,
  PARTITION "TEST_PART_2" VALUES LESS THAN (MAXVALUE) TABLESPACE T2
);

[It is not mandatory that all the tablespaces be the same (T2 in this case). You can use three different tablespaces, but their blocksizes must be the same.]


But what is the solution for partitioned tables that have already been created? Well, the solution is very simple: just change the default tablespace attribute of the partitioned tables you have already created. Here is a script to identify the problematic tables.


SELECT * FROM (
  SELECT x.table_name
       , x.def_tablespace_name
       , (SELECT block_size FROM user_tablespaces WHERE tablespace_name = x.def_tablespace_name) df
       , x.tablespace_name
       , (SELECT block_size FROM user_tablespaces WHERE tablespace_name = x.tablespace_name) tb
  FROM (
    SELECT DISTINCT t.table_name, t.def_tablespace_name, p.tablespace_name
    FROM   user_part_tables t, user_tab_partitions p
    WHERE  t.table_name = p.table_name
  ) x
) k
WHERE k.tb <> k.df;


Then reset the default tablespace attribute:

ALTER TABLE TEST MODIFY DEFAULT ATTRIBUTES TABLESPACE T2;
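You can confirm the attribute change took effect by re-checking USER_PART_TABLES:

SELECT table_name, def_tablespace_name
FROM   user_part_tables
WHERE  table_name = 'TEST';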

Now take the dump and apply it without facing ORA-14519!