Rank dense_rank

In this article I will be looking into the rank and dense_rank analytic functions. I will be using the same setup as in the last article, except that this time I will not be inserting any extra rows having sal as null ;)

drop table emp purge;

create table emp 
(empno number(4) not null, 
 ename varchar2(10), 
 job varchar2(9), 
 mgr number(4), 
 hiredate date, 
 sal number(7, 2), 
 comm number(7, 2), 
 deptno number(2) 
); 

insert into emp values (7369, 'smith', 'clerk', 7902, 
to_date('17-dec-1980', 'dd-mon-yyyy'), 800, null, 20); 
insert into emp values (7499, 'allen', 'salesman', 7698, 
to_date('20-feb-1981', 'dd-mon-yyyy'), 1600, 300, 30); 
insert into emp values (7521, 'ward', 'salesman', 7698, 
to_date('22-feb-1981', 'dd-mon-yyyy'), 1250, 500, 30); 
insert into emp values (7566, 'jones', 'manager', 7839, 
to_date('2-apr-1981', 'dd-mon-yyyy'), 2975, null, 20); 
insert into emp values (7654, 'martin', 'salesman', 7698, 
to_date('28-sep-1981', 'dd-mon-yyyy'), 1250, 1400, 30); 
insert into emp values (7698, 'blake', 'manager', 7839, 
to_date('1-may-1981', 'dd-mon-yyyy'), 2850, null, 30); 
insert into emp values (7782, 'clark', 'manager', 7839, 
to_date('9-jun-1981', 'dd-mon-yyyy'), 2450, null, 10); 
insert into emp values (7788, 'scott', 'analyst', 7566, 
to_date('09-dec-1982', 'dd-mon-yyyy'), 3000, null, 20); 
insert into emp values (7839, 'king', 'president', null, 
to_date('17-nov-1981', 'dd-mon-yyyy'), 5000, null, 10); 
insert into emp values (7844, 'turner', 'salesman', 7698, 
to_date('8-sep-1981', 'dd-mon-yyyy'), 1500, 0, 30); 
insert into emp values (7876, 'adams', 'clerk', 7788, 
to_date('12-jan-1983', 'dd-mon-yyyy'), 1100, null, 20); 
insert into emp values (7900, 'james', 'clerk', 7698, 
to_date('3-dec-1981', 'dd-mon-yyyy'), 950, null, 30); 
insert into emp values (7902, 'ford', 'analyst', 7566, 
to_date('3-dec-1981', 'dd-mon-yyyy'), 3000, null, 20); 
insert into emp values (7934, 'miller', 'clerk', 7782, 
to_date('23-jan-1982', 'dd-mon-yyyy'), 1300, null, 10);

commit;

Now that we have our setup in place, let's first try the rank function:

select empno
	, ename
	, deptno
	, sal
	, rank() over (partition by deptno order by sal) "rank"
from   emp
where
	deptno  = 30
;

     EMPNO ENAME          DEPTNO        SAL       rank
---------- ---------- ---------- ---------- ----------
      7900 JAMES              30        950          1
      7521 WARD               30       1250          2
      7654 MARTIN             30       1250          2
      7844 TURNER             30       1500          4
      7499 ALLEN              30       1600          5
      7698 BLAKE              30       2850          6

In this output we can see that WARD and MARTIN have the same sal and are therefore both given rank 2, while the next employee, TURNER, is given rank 4. If we instead want TURNER to get rank 3, i.e. no gaps after ties, we can use dense_rank as follows:

select empno
	, ename
	, deptno
	, sal
	, dense_rank() over (partition by deptno order by sal) "rank"
from   emp
where
	deptno  = 30
;

     EMPNO ENAME          DEPTNO        SAL       rank
---------- ---------- ---------- ---------- ----------
      7900 JAMES              30        950          1
      7521 WARD               30       1250          2
      7654 MARTIN             30       1250          2
      7844 TURNER             30       1500          3
      7499 ALLEN              30       1600          4
      7698 BLAKE              30       2850          5
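
To see the difference side by side, here is a minimal sketch comparing rank, dense_rank and row_number in one query (row_number is not part of the original examples; I am adding it only for contrast, it simply numbers the rows and breaks ties arbitrarily):

select empno
	, ename
	, deptno
	, sal
	, rank() over (partition by deptno order by sal) rnk
	, dense_rank() over (partition by deptno order by sal) drnk
	, row_number() over (partition by deptno order by sal) rn
from   emp
where
	deptno  = 30
;

rank leaves a gap after the tie, dense_rank does not, and row_number gives distinct numbers even to the tied rows.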

Cheers!!!
Jagdeep Sangwan

first_value last_value

In this post I will be looking at two analytic functions, first_value and last_value, which are useful for finding the smallest and the highest value in a partitioned window.

First let's create the setup. This is the same emp table as found in the SCOTT schema, except that I have added three more rows with a null salary in deptno 10.

drop table emp purge;

create table emp 
(empno number(4) not null, 
 ename varchar2(10), 
 job varchar2(9), 
 mgr number(4), 
 hiredate date, 
 sal number(7, 2), 
 comm number(7, 2), 
 deptno number(2) 
); 

insert into emp values (7369, 'smith', 'clerk', 7902, 
to_date('17-dec-1980', 'dd-mon-yyyy'), 800, null, 20); 
insert into emp values (7499, 'allen', 'salesman', 7698, 
to_date('20-feb-1981', 'dd-mon-yyyy'), 1600, 300, 30); 
insert into emp values (7521, 'ward', 'salesman', 7698, 
to_date('22-feb-1981', 'dd-mon-yyyy'), 1250, 500, 30); 
insert into emp values (7566, 'jones', 'manager', 7839, 
to_date('2-apr-1981', 'dd-mon-yyyy'), 2975, null, 20); 
insert into emp values (7654, 'martin', 'salesman', 7698, 
to_date('28-sep-1981', 'dd-mon-yyyy'), 1250, 1400, 30); 
insert into emp values (7698, 'blake', 'manager', 7839, 
to_date('1-may-1981', 'dd-mon-yyyy'), 2850, null, 30); 
insert into emp values (7782, 'clark', 'manager', 7839, 
to_date('9-jun-1981', 'dd-mon-yyyy'), 2450, null, 10); 
insert into emp values (7788, 'scott', 'analyst', 7566, 
to_date('09-dec-1982', 'dd-mon-yyyy'), 3000, null, 20); 
insert into emp values (7839, 'king', 'president', null, 
to_date('17-nov-1981', 'dd-mon-yyyy'), 5000, null, 10); 
insert into emp values (7844, 'turner', 'salesman', 7698, 
to_date('8-sep-1981', 'dd-mon-yyyy'), 1500, 0, 30); 
insert into emp values (7876, 'adams', 'clerk', 7788, 
to_date('12-jan-1983', 'dd-mon-yyyy'), 1100, null, 20); 
insert into emp values (7900, 'james', 'clerk', 7698, 
to_date('3-dec-1981', 'dd-mon-yyyy'), 950, null, 30); 
insert into emp values (7902, 'ford', 'analyst', 7566, 
to_date('3-dec-1981', 'dd-mon-yyyy'), 3000, null, 20); 
insert into emp values (7934, 'miller', 'clerk', 7782, 
to_date('23-jan-1982', 'dd-mon-yyyy'), 1300, null, 10);
insert into emp values (9001, 'aaaa', null, 7782, 
to_date('23-jan-1983', 'dd-mon-yyyy'), null, null, 10);
insert into emp values (9002, 'bbbb', null, 7782, 
to_date('23-jan-1984', 'dd-mon-yyyy'), null, null, 10);
insert into emp values (9003, 'bbbb', null, 7782, 
to_date('23-jan-1985', 'dd-mon-yyyy'), null, null, 10);

commit;

Now let's get the lowest salary for each department:

select
  ename
  , deptno
  , first_value(sal) over (partition by deptno order by sal) first_sal
from
  emp
;

ENAME          DEPTNO  FIRST_SAL
---------- ---------- ----------
MILLER             10       1300
CLARK              10       1300
KING               10       1300
BBBB               10       1300
BBBB               10       1300
AAAA               10       1300
SMITH              20        800
ADAMS              20        800
JONES              20        800
SCOTT              20        800
FORD               20        800
JAMES              30        950
WARD               30        950
MARTIN             30        950
TURNER             30        950
ALLEN              30        950
BLAKE              30        950

Here Oracle has given us the lowest salary for each department, even though there are some null values in deptno 10 (with an ascending order by, nulls sort last in Oracle, so they never become the first value).
Now let's say I want to retrieve the highest salary for each department:

select
  ename
  , deptno
  , last_value(sal) over (partition by deptno order by sal) first_sal
from
  emp
;

ENAME          DEPTNO  FIRST_SAL
---------- ---------- ----------
MILLER             10       1300
CLARK              10       2450
KING               10       5000
BBBB               10
BBBB               10
AAAA               10
SMITH              20        800
ADAMS              20       1100
JONES              20       2975
SCOTT              20       3000
FORD               20       3000
JAMES              30        950
WARD               30       1250
MARTIN             30       1250
TURNER             30       1500
ALLEN              30       1600
BLAKE              30       2850

So this time we didn’t get the output we wanted, because the default windowing clause is “RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW”, which means the current row (together with any rows tied with it on sal) always supplies the last value. Let's alter the window clause to “RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING”:

select
  ename
  , deptno
  , last_value(sal) over (partition by deptno order by sal
	range between unbounded preceding and unbounded following) highest_sal
from
  emp
;

ENAME          DEPTNO HIGHEST_SAL
---------- ---------- -----------
MILLER             10
CLARK              10
KING               10
BBBB               10
BBBB               10
AAAA               10
SMITH              20        3000
ADAMS              20        3000
JONES              20        3000
SCOTT              20        3000
FORD               20        3000
JAMES              30        2850
WARD               30        2850
MARTIN             30        2850
TURNER             30        2850
ALLEN              30        2850
BLAKE              30        2850

So now we have the required results for the departments where sal has no nulls. For deptno 10, which has rows with sal as null, we can use the ignore nulls clause:

select
  ename
  , deptno
  , last_value(sal ignore nulls) over (partition by deptno order by sal
	range between unbounded preceding and unbounded following) highest_sal
from
  emp
;

ENAME          DEPTNO HIGHEST_SAL
---------- ---------- -----------
MILLER             10        5000
CLARK              10        5000
KING               10        5000
BBBB               10        5000
BBBB               10        5000
AAAA               10        5000
SMITH              20        3000
ADAMS              20        3000
JONES              20        3000
SCOTT              20        3000
FORD               20        3000
JAMES              30        2850
WARD               30        2850
MARTIN             30        2850
TURNER             30        2850
ALLEN              30        2850
BLAKE              30        2850

So now we have the required results. Instead of all these last_value workarounds, we can also simply use the first_value function, ordering by sal descending with nulls last, to retrieve the highest salary in each deptno:

select
  ename
  , deptno
  , first_value(sal) over (partition by deptno order by sal desc nulls last) highest_sal
from
  emp
;

ENAME          DEPTNO HIGHEST_SAL
---------- ---------- -----------
KING               10        5000
CLARK              10        5000
MILLER             10        5000
BBBB               10        5000
BBBB               10        5000
AAAA               10        5000
FORD               20        3000
SCOTT              20        3000
JONES              20        3000
ADAMS              20        3000
SMITH              20        3000
BLAKE              30        2850
ALLEN              30        2850
TURNER             30        2850
MARTIN             30        2850
WARD               30        2850
JAMES              30        2850
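
As a side note, when all we need is the lowest or highest salary per department (rather than the full first_value/last_value semantics), the min and max analytic functions give the same answer with no windowing concerns, since aggregates ignore nulls anyway; a minimal sketch:

select
  ename
  , deptno
  , min(sal) over (partition by deptno) lowest_sal
  , max(sal) over (partition by deptno) highest_sal
from
  emp
;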

HTH

Cheers!!!
Jagdeep Sangwan

Virtual Columns

From 11g onwards we can create a virtual column. A virtual column is not stored on disk.
Rather, Oracle derives the values of a virtual column on demand, by evaluating the expression
defined for that column.

Syntax for virtual column definition is:

column_name [data_type] [generated always] as (expression) [virtual]

If the datatype is omitted, it is determined from the result of the expression. The
GENERATED ALWAYS and VIRTUAL keywords are provided for clarity only and are optional.
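
For example, a minimal sketch of the shortened form, with the datatype and the GENERATED ALWAYS and VIRTUAL keywords all omitted (the table name t_short is just my choice for illustration):

drop table t_short purge;
create table t_short(
n1	number
, n2	number
, vc1	as (n1*n2)
)
;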

In the example below I am creating a virtual column defined by the expression n1*n2:

drop table t1 purge;
create table t1(
n1		number
, n2	number
, v1	varchar2(30)
, d1	date
, vc1	number generated always as (n1*n2) virtual
)
;

Now let's try to insert some data into this table using the following:

insert into t1
select
	rownum		n1
	, mod(rownum, 100)	n2
	, lpad(rownum, '30', '*')	v1
	, sysdate + rownum /24/60 d1
	, rownum * mod(rownum, 100)	vc1
from
	dual
where
	rownum <= 1e5
;

insert into t1
            *
ERROR at line 1:
ORA-54013: INSERT operation disallowed on virtual columns

Here we get an error, as we cannot insert a value explicitly into a virtual column. So
for inserting data into a table having a virtual column, we have to list the column names
explicitly, excluding the virtual columns.

insert into t1 (
	n1
	, n2
	, v1
	, d1
)
select
	rownum		n1
	, mod(rownum, 100)	n2
	, lpad(rownum, '30', '*')	v1
	, sysdate + rownum /24/60
from
	all_objects
where
	rownum <= 1e5
;

commit;

Querying the data from this table we can see the value of our virtual column:

select
	*
from
	t1
where
	rownum <= 2
;
        N1         N2 V1                             D1               VC1
---------- ---------- ------------------------------ --------- ----------
         1          1 *****************************1 02-JUL-14          1
         2          2 *****************************2 02-JUL-14          4

We can also check the expression used for generating our virtual column in the DATA_DEFAULT
column of [dba|all|user]_tab_columns:


col data_default format a40
select
	column_name
	, data_default
from
	user_tab_columns
where
	table_name = 'T1'
;

COLUMN_NAME                    DATA_DEFAULT
------------------------------ ----------------------------------------
N1
N2
V1
D1
VC1                            "N1"*"N2"

Now, to check whether Oracle actually stores any data for our virtual columns, I will be
creating two tables as follows:

pro *****************************************************
pro creating and populating table without virtual columns
pro *****************************************************
drop table t1 purge;
create table t1(
n1		number
, n2	number
, v1	varchar2(30)
, d1	date
)
;

insert into t1 (
	n1
	, n2
	, v1
	, d1
)
with d as(
	select
		rownum n1
	from
		all_objects
	where	
		rownum <= 1e3
)
select
	rownum		n1
	, mod(rownum, 100)	n2
	, lpad(rownum, '30', '*')	v1
	, sysdate + rownum /24/60
from
	d d1
	, d d2
where
	rownum <= 1e5
;

pro *****************************************************
pro creating and populating table with virtual columns
pro *****************************************************
drop table t2 purge;
create table t2(
n1		number
, n2	number
, v1	varchar2(30)
, d1	date
, vc1	number generated always as (n1*n2) virtual
, vc2	number as (n1+n2)
, vc3	number as (n1-n2)
)
;

insert into t2 (
	n1
	, n2
	, v1
	, d1
)
with d as(
	select
		rownum n1
	from
		all_objects
	where	
		rownum <= 1e3
)
select
	rownum		n1
	, mod(rownum, 100)	n2
	, lpad(rownum, '30', '*')	v1
	, sysdate + rownum /24/60
from
	d d1
	, d d2
where
	rownum <= 1e5
;

commit;

Using the query below we can check the data defaults for our virtual columns:

select
	table_name
	, column_name
	, data_default
from
	user_tab_columns
where
	table_name in ('T1', 'T2')
;

TABLE_NAME                     COLUMN_NAME                    DATA_DEFAULT
------------------------------ ------------------------------ ------------
T1                             N1
T1                             N2
T1                             V1
T1                             D1
T2                             N1
T2                             N2
T2                             V1
T2                             D1
T2                             VC1                            "N1"*"N2"
T2                             VC2                            "N1"+"N2"
T2                             VC3                            "N1"-"N2"

We can also use the view user_tab_cols to see the virtual column information as

col column_name format a10
col table_name format a11

select
	table_name
	, column_name
	, hidden_column
	, virtual_column
	, segment_column_id
	, internal_column_id
from
	user_tab_cols
where
	table_name in ('T1', 'T2')
;

TABLE_NAME  COLUMN_NAM HID VIR SEGMENT_COLUMN_ID INTERNAL_COLUMN_ID
----------- ---------- --- --- ----------------- ------------------
T1          N1         NO  NO                  1                  1
T1          N2         NO  NO                  2                  2
T1          V1         NO  NO                  3                  3
T1          D1         NO  NO                  4                  4
T2          N1         NO  NO                  1                  1
T2          N2         NO  NO                  2                  2
T2          V1         NO  NO                  3                  3
T2          D1         NO  NO                  4                  4
T2          VC1        NO  YES                                    5
T2          VC2        NO  YES                                    6
T2          VC3        NO  YES                                    7

Now let's check the physical storage size of these two tables:

col segment_name format a12
select
	segment_name
	, bytes
	, blocks
	, extents
from
	user_segments
where
	segment_name in ('T1', 'T2')
;

SEGMENT_NAME      BYTES     BLOCKS    EXTENTS
------------ ---------- ---------- ----------
T1              6291456        768         21
T2              6291456        768         21

Here Oracle has used the same physical storage for both tables, although T2 has three
more columns than T1; those extra columns are defined as VIRTUAL and take no space in the table segment.

We can create indexes on a virtual column; these are similar to function-based indexes and,
unlike the virtual column itself, they do use disk space to store their data.

drop index t2_vc1;
create index t2_vc1 on t2(vc1);

begin
	dbms_stats.gather_table_stats(
		user
		, 't2'
	);
end;
/

select
	segment_name
	, bytes
	, blocks
	, extents
from
	user_segments
where
	segment_name = 'T2_VC1'
;

SEGMENT_NAME      BYTES     BLOCKS    EXTENTS
------------ ---------- ---------- ----------
T2_VC1          2097152        256         17

Now let's query the data from this table and check whether the index is used or not:

select
	*
from
	t2
where
	vc1 = 4
;

select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------

SQL_ID  g8kc5x3j8wgzg, child number 1
-------------------------------------
select  * from  t2 where  vc1 = 4

Plan hash value: 1966060445

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |       |       |     2 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T2     |     1 |    62 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T2_VC1 |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("VC1"=4)

From the above we can see that the optimizer has used our index to retrieve the data. Now
let's try another query that uses the same expression as the virtual column definition and see
whether it can use the index defined on the virtual column:

select
	*
from
	t2
where
	n1*n2 = 4
;

select * from table(dbms_xplan.display_cursor());

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------

SQL_ID  fxc0fdrmp7ugy, child number 0
-------------------------------------
select  * from  t2 where  n1*n2 = 4

Plan hash value: 1966060445

--------------------------------------------------------------------------------------
| Id  | Operation                   | Name   | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |        |       |       |     3 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID| T2     |     1 |    62 |     3   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN          | T2_VC1 |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T2"."VC1"=4)

Here too the optimizer has chosen the index internally to retrieve our data, as indexes on
virtual columns are similar to function-based indexes.
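
A virtual column can also be added to an existing table with ALTER TABLE; a minimal sketch against the t2 table above (the column name vc4 and the expression are just my choices for illustration):

alter table t2 add (
	vc4	number generated always as (n1 + 2*n2) virtual
);

Nothing is stored for the new column; the expression is simply evaluated whenever vc4 is selected.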

Invisible Indexes

From 11g onwards there is a quite useful feature called “invisible indexes”. These indexes are maintained like any normal index, but they are not visible to the optimizer. We can use this while tuning a query, i.e. comparing the plans and costs when the optimizer can use the index and when it cannot.

So first let's create a setup:

drop table t1 purge;

create table t1
as
select
	rownum	n1
	, dbms_random.string('l', 20) v1
	, dbms_random.string('l', 100) v2
	, sysdate + rownum/24/60 d1
from
	all_objects
where
	rownum <= 2e4
;

alter table t1 add constraint t1_pk primary key (n1);

begin
	dbms_stats.gather_table_stats(
		user
		, 't1'
	);
end;
/

Now let's just run a query against this table and look at the actual execution plan:

select
	/*+
		gather_plan_statistics
	*/
	n1
	, v1
from
	t1
where
	v1 = 'abc'
;

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------

SQL_ID  bmxarkqfcbctd, child number 0
-------------------------------------
select  /*+   gather_plan_statistics  */  n1  , v1 from  t1 where  v1 =
'abc'

Plan hash value: 3617692013

------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |      0 |00:00:00.01 |     354 |
|*  1 |  TABLE ACCESS FULL| T1   |      1 |      1 |      0 |00:00:00.01 |     354 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("V1"='abc')

Here the optimizer is going for a full table scan.
Now let's create an invisible index on the v1 column of table t1:

create index t1_v1 on t1(v1) invisible;

If we run the above query again, the optimizer will still not use the index:

select
	/*+
		gather_plan_statistics
	*/
	n1
	, v1
from
	t1
where
	v1 = 'abc'
;

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------

SQL_ID  bmxarkqfcbctd, child number 0
-------------------------------------
select  /*+   gather_plan_statistics  */  n1  , v1 from  t1 where  v1 =
'abc'

Plan hash value: 3617692013

------------------------------------------------------------------------------------
| Id  | Operation         | Name | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |      |      1 |        |      0 |00:00:00.01 |     354 |
|*  1 |  TABLE ACCESS FULL| T1   |      1 |      1 |      0 |00:00:00.01 |     354 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("V1"='abc')

Here, although the index exists, it is invisible, so the optimizer is not using it for our query.
Now, to make use of this invisible index, we can either use the hint “use_invisible_indexes”:

select
	/*+
		gather_plan_statistics
		use_invisible_indexes
	*/
	n1
	, v1
from
	t1
where
	v1 = 'abc'
;

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
--------------------------------------------------------------------------------------------------------

SQL_ID  aqu1thxzkmypf, child number 0
-------------------------------------
select  /*+   gather_plan_statistics   use_invisible_indexes  */  n1  ,
v1 from  t1 where  v1 = 'abc'

Plan hash value: 2645666169

--------------------------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
--------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |      1 |        |      0 |00:00:00.02 |       2 |      1 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |      1 |      1 |      0 |00:00:00.02 |       2 |      1 |
|*  2 |   INDEX RANGE SCAN          | T1_V1 |      1 |      1 |      0 |00:00:00.02 |       2 |      1 |
--------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("V1"='abc')

Or we can set the parameter at session or system level as

alter session set optimizer_use_invisible_indexes=true;
select
	/*+
		gather_plan_statistics
	*/
	n1
	, v1
from
	t1
where
	v1 = 'abc'
;

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------

SQL_ID  bmxarkqfcbctd, child number 1
-------------------------------------
select  /*+   gather_plan_statistics  */  n1  , v1 from  t1 where  v1 =
'abc'

Plan hash value: 2645666169

-----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |      1 |        |      0 |00:00:00.01 |       2 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |      1 |      1 |      0 |00:00:00.01 |       2 |
|*  2 |   INDEX RANGE SCAN          | T1_V1 |      1 |      1 |      0 |00:00:00.01 |       2 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("V1"='abc')

Or we can pass the hint opt_param(optimizer_use_invisible_indexes true):

select
	/*+
		gather_plan_statistics
		opt_param(optimizer_use_invisible_indexes true)
	*/
	n1
	, v1
from
	t1
where
	v1 = 'abc'
;

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------

SQL_ID  8u01q9zy0c0u6, child number 0
-------------------------------------
select  /*+   gather_plan_statistics
opt_param(optimizer_use_invisible_indexes true)  */  n1  , v1 from  t1
where  v1 = 'abc'

Plan hash value: 2645666169

-----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |      1 |        |      0 |00:00:00.01 |       2 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |      1 |      1 |      0 |00:00:00.01 |       2 |
|*  2 |   INDEX RANGE SCAN          | T1_V1 |      1 |      1 |      0 |00:00:00.01 |       2 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("V1"='abc')

And once we know that this index enhances the performance of our query, we can make the index visible:

alter index t1_v1 visible;
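
We can confirm the current state of the index from the data dictionary; the VISIBILITY column of user_indexes shows VISIBLE or INVISIBLE:

select
	index_name
	, visibility
from
	user_indexes
where
	index_name = 'T1_V1'
;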

Now, without any hint, our query will use the index:

select
	/*+
		gather_plan_statistics
	*/
	n1
	, v1
from
	t1
where
	v1 = 'abc'
;

select * from table(dbms_xplan.display_cursor(null, null, 'allstats last'));

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------------------------

SQL_ID  bmxarkqfcbctd, child number 1
-------------------------------------
select  /*+   gather_plan_statistics  */  n1  , v1 from  t1 where  v1 =
'abc'

Plan hash value: 2645666169

-----------------------------------------------------------------------------------------------
| Id  | Operation                   | Name  | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT            |       |      1 |        |      0 |00:00:00.01 |       2 |
|   1 |  TABLE ACCESS BY INDEX ROWID| T1    |      1 |      1 |      0 |00:00:00.01 |       2 |
|*  2 |   INDEX RANGE SCAN          | T1_V1 |      1 |      1 |      0 |00:00:00.01 |       2 |
-----------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("V1"='abc')

Hash Joins

In the last nested loop join example we created a hash clustered table (HASHED_TABLE) and then
joined it with a normal table, and the plan we got from it was the following:

---------------------------------------------------------------------------------------------
| Id  | Operation                    | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |              |     1 |   191 |     3   (0)| 00:00:01 |
|   1 |  NESTED LOOPS                |              |     1 |   191 |     3   (0)| 00:00:01 |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1           |     2 |   152 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_PK        |     2 |       |     2   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS HASH          | HASHED_TABLE |     1 |   115 |            |          |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T1"."N1">=1 AND "T1"."N1"<=2)
   4 - access("HT"."N1"="T1"."N2")
       filter("HT"."V1"='a')

Now, what if the hashed table is a big table with fairly long rows (say 600 bytes each) while we
are only interested in 20 bytes of each row when retrieving our data? In this scenario the table may
be too big to fit in memory, and as the query runs Oracle has to keep repeating random I/Os to
fetch its blocks. Even if we can keep the blocks in memory, other users may be modifying and
retrieving data from the same table and blocks, and we can find ourselves competing for the
“cache buffers chains” latches that protect the blocks of that table.

So what can we do to tune our query? We can build, in memory, a copy of just the bits we need from
the big table before starting the join; this is what a hash join does, and compared with the nested
loop the tables appear to be “back to front”.

Below is the setup that I have used to test on 10.2.0.1:

drop table hash_target purge;
create table hash_target(
	hash_value	number
	, status	varchar2(2)
	, v1		varchar2(10)
	, v2		varchar2(100)
)
;

insert into hash_target
select
	rownum
	, dbms_random.string('u', 2)
	, lpad(rownum, 10)
	, rpad('*', 100)
from
	all_objects
where
	rownum <= 1e3
;

create index ht_i1 on hash_target(status);
create index ht_i2 on hash_target(hash_value);

drop table t1 purge;
create table t1(
	join_value	number
	, status	varchar2(1)
	, v1		varchar2(10)
	, v2		varchar2(100)
)
;

insert into t1
select
	mod(rownum, 1000)
	, dbms_random.string('u', 1)
	, lpad(rownum, 10)
	, rpad('*', 100)
from
	all_objects
where
	rownum <= 2e4
;

create index t1_i1 on t1(status);

begin
	dbms_stats.gather_table_stats(
		user
		, 't1'
		, method_opt => 'for all columns size 1'
	);
	dbms_stats.gather_table_stats(
		user
		, 'hash_target'
		, method_opt => 'for all columns size 1'
	);
end;
/

set autotrace traceonly explain

select
	t1.v1
	, ht.v1
from
	t1
	, hash_target ht
where
	t1.status = 'A'
and	ht.hash_value = t1.join_value
and	ht.status = 'AA'
;

set autotrace off

Below is the execution plan that we got from the above:

Execution Plan
----------------------------------------------------------
Plan hash value: 227304121

--------------------------------------------------------------------------------------------
| Id  | Operation                    | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |             |     3 |   105 |   106   (1)| 00:00:02 |
|*  1 |  HASH JOIN                   |             |     3 |   105 |   106   (1)| 00:00:02 |
|   2 |   TABLE ACCESS BY INDEX ROWID| HASH_TARGET |     2 |    36 |     3   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | HT_I1       |     2 |       |     1   (0)| 00:00:01 |
|*  4 |   TABLE ACCESS FULL          | T1          |   769 | 13073 |   103   (1)| 00:00:02 |
--------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("HT"."HASH_VALUE"="T1"."JOIN_VALUE")
   3 - access("HT"."STATUS"='AA')
   4 - filter("T1"."STATUS"='A')

Here we can see that Oracle has first pulled the rowsource from the hash_target table and built an
in-memory hash table from it, and has then scanned the t1 table, probing that hash table with each
row to find matches; because the hash table is built first, Oracle calls this a hash join. The first
table, hash_target, is called the build table and t1 is called the probe table, as Oracle first
builds a hash table and then probes it with rows from the second table to retrieve the matching records.

When looking at a hash join we must also consider whether the build rowsource fits in memory;
if it does not, Oracle has to spill it to disk and the amount of physical I/O goes up.

We can also use the hint use_hash(table_or_alias_name) to guide the optimizer towards a hash join.
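
For example, a minimal sketch against the tables above, hinting the join method explicitly (the leading hint is my addition here, just to fix hash_target as the build rowsource):

select
	/*+
		leading(ht)
		use_hash(t1)
	*/
	t1.v1
	, ht.v1
from
	t1
	, hash_target ht
where
	t1.status = 'A'
and	ht.hash_value = t1.join_value
and	ht.status = 'AA'
;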

For more information check out Jonathan Lewis-Joins – HJ

Cheers!!!
Jagdeep Sangwan

Nulls in Bitmap

While reading a blog post by Jonathan Lewis, Bitmap Nulls, about bitmap indexes, I decided to test the null handling myself.

Just to mention: a bitmap index in Oracle also stores entries for rows where the indexed column is null, so the index can be used for accessing those rows from the table. This is unlike B-tree indexes, where Oracle does not store an index entry when all of the indexed columns are null.

drop table t1 purge;
create table t1(
n1	number
, n2	number
, v1	varchar2(100)
)
;

insert into t1
select
	decode(rownum
		, 1
		, to_number(null)
		, rownum
	)
	, rownum
	, rpad('*', 100)
from
	all_objects
where
	rownum <= 1e4
;

insert into t1
select * from t1;

insert into t1
select * from t1;

insert into t1
select * from t1;

insert into t1
select * from t1;

insert into t1
select * from t1;

commit;

create bitmap index t1_bmi1 on t1(n1);

begin
	dbms_stats.gather_table_stats(
		user
		, 't1'
		, method_opt => 'for all columns size 1'
	);
end;
/

set autotrace traceonly explain

select * from t1 where n1 is null;

set autotrace off

On my 10g version (10.2.0.1), the plan I got is:

Plan hash value: 132275637

----------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |       |       |     3 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID | T1      |     8 |   864 |     3   (0)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS|         |       |       |            |          |
|*  3 |    BITMAP INDEX SINGLE VALUE | T1_BMI1 |       |       |            |          |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("N1" IS NULL)

Here the table access is TABLE ACCESS BY INDEX ROWID instead of TABLE ACCESS BY INDEX ROWID
BATCHED, which might be a 12c improvement for table access, but the surprising thing for me is
the Rows value of 8 instead of 16 (I have omitted the last insert from the original script). And this
plan was from the first run of that script. Below are the actual null rows in the table:

JAGDEEP@css>select * from t1 where n1 is null;

        N1         N2 V1
---------- ---------- -
                    1 *
                    1 *
                    1 *
                    1 *
                    1 *
                    1 *
                    1 *
                    1 *
                    1 *
                    1 *
                    1 *
                    1 *
                    1 *
                    1 *
                    1 *
                    1 *

After some interaction with Jonathan Lewis about this, he pointed out that the reason Oracle gets the cardinality (Rows) wrong in the plan is the stats collection method in 10g. In 10g statistics are gathered by sampling, and the sampled figures were not accurate, as we can see below:

BMTC_TEST@CSS>select num_rows from user_tables where table_name = 'T1';

  NUM_ROWS
----------
    160528

BMTC_TEST@CSS>select column_name, num_nulls from user_tab_columns where table_name = 'T1';

COLUMN_NAME                     NUM_NULLS
------------------------------ ----------
N1                                      8
N2                                      0
V1                                      0

From this we can see that it has got num_rows nearly correct, but num_nulls is way off, and num_nulls
is exactly the figure the optimizer reports as the cardinality for an “n1 is null” predicate. That is why
we are getting the wrong cardinality in the plan.
Afterwards I re-gathered the stats with the parameter estimate_percent => 100 added:

begin
        dbms_stats.gather_table_stats(
                user
                , 't1'
                , method_opt => 'for all columns size 1'
                , estimate_percent => 100
        );
end;
/

The plan we now get is:

Plan hash value: 132275637

----------------------------------------------------------------------------------------
| Id  | Operation                    | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |         |       |       |     5 (100)|          |
|   1 |  TABLE ACCESS BY INDEX ROWID | T1      |    16 |  1728 |     5   (0)| 00:00:01 |
|   2 |   BITMAP CONVERSION TO ROWIDS|         |       |       |            |          |
|*  3 |    BITMAP INDEX SINGLE VALUE | T1_BMI1 |       |       |            |          |
----------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("N1" IS NULL)

Now Oracle has got the right cardinality in the plan. Let's also check num_rows and num_nulls:

BMTC_TEST@CSS>select num_rows from user_tables where table_name = 'T1';

  NUM_ROWS
----------
    160000

BMTC_TEST@CSS>select column_name, num_nulls from user_tab_columns where table_name ='T1';

COLUMN_NAME                     NUM_NULLS
------------------------------ ----------
N1                                     16
N2                                      0
V1                                      0

We can now see that it has the correct values. So the lesson learnt from this is that when a plan
is not getting the cardinality right, we should also go back and check the statistics we collected.
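
As a related side check (a sketch, just to contrast with the bitmap behaviour described at the start), the same predicate cannot be served by a plain single-column B-tree index on n1, since no entry is stored when the only indexed column is null, so we would expect a full table scan here (the index name t1_btree1 is my choice):

drop index t1_bmi1;
create index t1_btree1 on t1(n1);

set autotrace traceonly explain

select * from t1 where n1 is null;

set autotrace off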

Cheers!!!
Jagdeep Sangwan

Change SQL Prompt

This is again just a quick note, this time about changing the SQL prompt. I am writing it because I keep
installing databases again and again on different machines, and I have the practice of having
my sql prompt show as

username@sid>

And I constantly forget the actual syntax to put in the glogin.sql file. I do remember the file though, ha ;)

So let's start from a state where we do not have any specific sql prompt, just the default one:

login as: jagdeep
jagdeep@10.97.156.51's password:
Last login: Tue Jun  3 17:08:15 2014 from 10.97.157.186
[jagdeep@inggnh089lux ~]$ sqlplus jagdeep@css

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jun 4 17:02:31 2014

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

SQL>

Now, to change it to the format I prefer,

username@sid>

just edit the site profile

vi $ORACLE_HOME/sqlplus/admin/glogin.sql

and append the following at the end

set sqlprompt "_user'@'_connect_identifier>"
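
As a variation (not something from my usual setup, just an assumption worth trying), other predefined SQL*Plus substitution variables such as _date and _privilege can be mixed in as well, for example:

set sqlprompt "_user'@'_connect_identifier _date> "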

Just save the file, exit, and now try to connect again:

[jagdeep@inggnh089lux admin]$ sqlplus jagdeep@css

SQL*Plus: Release 10.2.0.1.0 - Production on Wed Jun 4 17:09:34 2014

Copyright (c) 1982, 2005, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.1.0 - 64bit Production
With the Partitioning, OLAP and Data Mining options

jagdeep@CSS>

That’s it. Now I can see from the prompt itself which username I am connected as and to which database.

Cheers!!!
Jagdeep Sangwan