Saturday, October 10, 2009

Using UNIX aliases with Oracle

A UNIX alias is a short command that is replaced with a longer command to save typing time. For example, we could create an alias that counts the number of connected users on our Oracle server:

root> alias numuse='who|wc -l'

root> numuse

463

The whole point of UNIX aliases is to save typing. Here is a list of common UNIX aliases that can be added to the UNIX logon file for the Oracle user. These aliases perform common Oracle functions such as checking the Oracle alert log and moving quickly between directories.

#*******************************
# UNIX aliases for Oracle DBAs
#*******************************
alias alert='tail -100 $DBA/$ORACLE_SID/bdump/alert_$ORACLE_SID.log|more'
alias errors='tail -100 $DBA/$ORACLE_SID/bdump/alert_$ORACLE_SID.log|grep ORA-'
alias arch='cd $DBA/$ORACLE_SID/arch'
alias bdump='cd $DBA/$ORACLE_SID/bdump'
alias cdump='cd $DBA/$ORACLE_SID/cdump'
alias pfile='cd $DBA/$ORACLE_SID/pfile'
alias rm='rm -i'
alias sid='env|grep ORACLE_SID'
alias admin='cd $DBA/admin'

To illustrate how useful aliases are in Oracle administration, in the example below we can get to our pfile directory in a single command so we can view the contents of our init.ora file:

cheops*CPRO-/home/oracle
> pfile
cheops*CPRO-/u01/app/oracle/CPRO/pfile
>ls
initCPRO.ora

Aliases can also be used for sophisticated Oracle commands. For example, the following alias can be used to display all Oracle errors in the last 100 lines of the alert log:

cheops*testsid-/u01/app/oracle/admin/envtest/pfile
>alias errors='tail -100 $DBA/$ORACLE_SID/bdump/alert_$ORACLE_SID.log|grep ORA-'

cheops*testsid-/u01/app/oracle/admin/envtest/pfile
>errors
ORA-00604: error occurred at recursive SQL level 1
ORA-01089: immediate shutdown in progress - no operations are permitted
ORA-00604: error occurred at recursive SQL level 3
ORA-01089: immediate shutdown in progress - no operations are permitted

Enabling Your Database to perform hash joins

In cases where a very small table is being joined to a large table, the Oracle hash join will often dramatically speed up the query. Hash joins are far faster than nested loop joins in some cases, but it is not easy to get your database to use hash joins.



Unfortunately, the hash join is more memory intensive than a nested loop join. To be faster than a nested loop join, we must set the hash_area_size large enough to hold the entire hash table in memory (about 1.6 times the size of the driving table in bytes). If the hash join overflows the hash_area_size memory, the hash join will page into the TEMP tablespace, severely degrading the performance of the hash join. You can use the following script, hash_area.sql, to dynamically allocate the proper hash_area_size for your SQL query in terms of the size of your target table.



hash_area.sql
set heading off;
set feedback off;
set verify off;
set pages 999;

spool run_hash.sql

select
'alter session set hash_area_size='||trunc(sum(bytes)*1.6)||';'
from
dba_segments
where
segment_name = upper('&1');

spool off;

@run_hash


Here is the output from this script. As you see, we pass the driving table name, and the script generates the appropriate alter session command to ensure that we have enough space in hash_area_size RAM to hold the driving table.



SQL> @hash_area customer

alter session set hash_area_size=3774873;

In addition to setting the hash_area_size, we must also be able to adjust the degree of parallelism in cases where we use a full-table scan to access the tables in a hash join.



This short tip does not have complete information about the hash join, but you can get complete information from Oracle High Performance SQL Tuning by Oracle Press. The hash join is very finicky, and there are many conditions that must be satisfied. It is not uncommon to find that a use_hash hint is ignored, and here are some common causes of this problem.



Check initialization parameters - Make sure that you have the proper settings for optimizer_index_cost_adj and optimizer_max_permutations to limit the number of table join evaluations. Also check your values for hash_area_size and hash_multiblock_io_count.


Verify the driving table - Make sure that the smaller table is the driving table (the first table in the from clause when using the ordered hint). This is because a hash join builds the memory array using the driving table.


Analyze CBO statistics - Check that the tables and/or columns of the join tables are appropriately analyzed.


Check for skewed columns - Histograms are recommended only for nonuniform column distributions. If necessary, you can override the join order chosen by the cost-based optimizer using the ordered hint.


Check RAM region - Ensure that hash_area_size is large enough to hold the smaller table in memory. Otherwise, Oracle must write to the TEMP tablespace, slowing down the hash join. Oracle recommends that the hash_area_size for the driving table be set at 1.6 times the sum of bytes for the driving table, and you can use the hash_area.sql script to set the hash_area_size for your query session.


If you have followed these steps, you should be able to add a use_hash hint to your SQL to invoke a hash join. Remember, it is always a good idea to run an explain plan on your SQL to ensure that the hash join is being invoked.
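
Putting these rules together, here is a minimal sketch of such a hinted query, reusing the hypothetical emp and bonus sample tables from the parallel hash join tip below. The ordered hint makes the smaller emp table the driving table, use_hash forces the hash join, and the parallel hints set the degree for both full-table scans:

select /*+ ordered use_hash(e,b) parallel(e, 4) parallel(b, 4) */
   e.ename,
   b.comm
from
   emp e,
   bonus b
where
   e.ename = b.ename;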

Searching for freelist and buffer busy waits

Whenever multiple insert or update tasks access a table, it is possible that Oracle may be forced to wait to access the first block in the table. The first block is called the segment header, and the segment header contains the freelist for the table.



The number of freelists for any table should be set to the high-water mark of concurrent inserts or updates.



The script below will tell you if you have waits for table or index freelists. If so, you need to identify the table and add additional freelists. In Oracle8i, you can add freelists with the ALTER table command, but in earlier versions of Oracle you must reorganize the table to add freelists.
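
For reference, here is a minimal sketch of that ALTER TABLE syntax, assuming Oracle 8.1.6 or later and a hypothetical customer table that takes heavy concurrent inserts:

alter table customer storage (freelists 10);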



The procedure for identifying the specific table associated with a freelist wait or a buffer busy wait is complex, but it is fully described in my Oracle Press book “Oracle High-Performance Tuning with STATSPACK.”



column s_v   format 999,999,999 heading 'Total Requests' new_value tnr
column count format 99999990    heading 'count'          new_value cnt
column proc  heading 'Ratio of waits'

PROMPT Current v$waitstat freelist waits...
PROMPT

set heading on;

prompt - This displays the total current waits on freelists

select
   class,
   count
from
   v$waitstat
where
   class = 'free list';

prompt - This displays the total gets in the database

select
   sum(value) s_v
from
   v$sysstat
where
   name IN ('db block gets', 'consistent gets');

PROMPT Here is the ratio

select
   &cnt/&tnr * 100 proc
from
   dual;





Here is the listing from this script. Whenever waits appear in v$waitstat, you need to investigate to see which table or index is experiencing multiple concurrent inserts or updates.





SQL> @cont

Current v$waitstat freelist waits...
- This displays the total current waits on freelists

CLASS              COUNT
------------------ ---------
free list                  0

- This displays the total gets in the database

Total Num of Requests
---------------------
            140318872

Here is the ratio

Ratio in %
----------
         0



Please note that freelist contention can also manifest itself as a buffer busy wait. This is because the block is already in the buffer, but it cannot be accessed because another task holds the segment header.
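
As a quick first check, a sketch like this lists every block class in v$waitstat that has registered waits; waits on the 'segment header' and 'data block' classes are the ones that typically point back to freelist shortages:

select
   class,
   count
from
   v$waitstat
where
   count > 0;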



The section below describes the process of identifying the block address associated with a wait.



As we discussed, Oracle does not keep an accumulator to track individual buffer busy waits. To see them, you must create a script to detect them and then schedule the task to run frequently on your database server.



get_busy.ksh

#!/bin/ksh

# First, we must set the environment . . . .
ORACLE_SID=proderp
export ORACLE_SID
ORACLE_HOME=`cat /var/opt/oracle/oratab|grep \^$ORACLE_SID:|cut -f2 -d':'`
export ORACLE_HOME
PATH=$ORACLE_HOME/bin:$PATH
export PATH

SERVER_NAME=`uname -a|awk '{print $2}'`
typeset -u SERVER_NAME
export SERVER_NAME

# sample every 10 seconds
SAMPLE_TIME=10

while true
do

#*************************************************************
# Test to see if Oracle is accepting connections
#*************************************************************
$ORACLE_HOME/bin/sqlplus -s / <<! > /tmp/check_$ORACLE_SID.ora
select * from v\$database;
exit
!

#*************************************************************
# If not, exit immediately . . .
#*************************************************************
check_stat=`cat /tmp/check_$ORACLE_SID.ora|grep -i error|wc -l`;
oracle_num=`expr $check_stat`
if [ $oracle_num -gt 0 ]
then
exit 0
fi

rm -f /export/home/oracle/statspack/busy.lst

$ORACLE_HOME/bin/sqlplus -s perfstat/perfstat <<! > /tmp/busy.lst

set feedback off;
select
sysdate,
event,
substr(tablespace_name,1,14),
p2
from
v\$session_wait a,
dba_data_files b
where
a.p1 = b.file_id
;
!

var=`cat /tmp/busy.lst|wc -l`

echo $var
if [[ $var -gt 1 ]];
then
echo "**********************************************************************"
echo "There are waits"
cat /tmp/busy.lst|mailx -s "Prod block wait found"\
info@remote-dba.net \
Larry_Ellison@oracle.com
echo "**********************************************************************"
exit
fi

sleep $SAMPLE_TIME
done



As we can see from this script, it probes the database for buffer busy waits every 10 seconds. When a buffer busy wait is found, it mails the date, tablespace name, and block number to the DBA. Here is an example of a block alert e-mail:



SYSDATE SUBSTR(TABLESP BLOCK
--------- -------------- ----------
28-DEC-00 APPLSYSD 25654



Here we see that we have a block wait condition at block 25654 in the applsysd tablespace. The procedure for locating this block is beyond the scope of this tip, but complete directions are in Chapter 10 of Oracle High Performance Tuning with STATSPACK.

Monitoring Oracle Parallel Query with V$ Views

Many Oracle professionals are not aware that Oracle provides system views to show the internal state of parallel queries.



To see how many parallel query servers are busy at any given time, the following query can be issued against the v$pq_sysstat view:





select
statistic,
value
from
v$pq_sysstat
where
statistic = 'Servers Busy';

STATISTIC VALUE
--------- -----
Servers Busy 30



In this case, we see that 30 parallel servers are busy at this moment. Do not be misled by this number. Parallel query servers are constantly accepting work or returning to idle status, so it is a good idea to issue the query many times over a one-hour period to get an accurate reading of parallel query activity. Only then will you receive a realistic measure of how many parallel query servers are being used.



There is one other method for observing parallel query from inside Oracle. If you are running Oracle on UNIX, you can use the ps command to see the parallel query background processes in action:



ps -ef|grep "ora_p"



In sum, Oracle has several tools that you can use to watch the Oracle parallel query processes in action. This information is very useful when you want to see if your parallelism is stressing your database server.

Monitoring Oracle Parallel Query with STATSPACK

In addition to monitoring parallel execution for individual queries, you can also monitor parallel query activity for your whole database. Using STATSPACK, you can query the stats$sysstat table to extract the number of parallelized queries that have been run during each time period between your STATSPACK snapshots. Here is the script that I use to monitor parallel queries:


rpt_parallel.sql
set pages 9999;

column nbr_pq format 999,999,999
column mydate heading 'yr. mo dy Hr.'

select
to_char(snap_time,'yyyy-mm-dd HH24') mydate,
new.value nbr_pq
from
perfstat.stats$sysstat old,
perfstat.stats$sysstat new,
perfstat.stats$snapshot sn
where
new.name = old.name
and
new.name = 'queries parallelized'
and
new.snap_id = sn.snap_id
and
old.snap_id = sn.snap_id-1
and
new.value > 1
order by
to_char(snap_time,'yyyy-mm-dd HH24')
;



Here is a sample of the output. This will quickly show the DBA the time periods when parallel full-table scans are being invoked:



SQL> @rpt_parallel

yr. mo dy hr. nbr_pq
------------- -------------
2001-03-12 20 3,521
2001-03-12 21 2,082
2001-03-12 22 2,832
2001-03-13 20 5,152
2001-03-13 21 1,835
2001-03-13 22 2,623
2001-03-14 20 4,274
2001-03-14 21 1,429
2001-03-14 22 2,313



In this example, we see that there appears to be a period each day between 8:00 p.m. and 10:00 p.m. when tasks are executing parallel queries against tables.

In practice, you may want to run the STATSPACK report to identify periods of high parallel query activity and then go to the stats$sql_summary table to examine and tune the individual parallel queries. Of course, you can also examine parallel query summaries since database start-up time by using the v$pq_sysstat view.
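
For the since-startup view, a minimal sketch against v$pq_sysstat is all you need; it lists every parallel query statistic, including the Servers Busy figure used earlier:

select
   statistic,
   value
from
   v$pq_sysstat
order by
   statistic;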



For additional details see the Oracle Press Book “Oracle High Performance Tuning with STATSPACK”.

Tuesday, October 6, 2009

Correlating UNIX processes with Oracle sessions

If we want to move outside of Oracle and see details for the session at the UNIX level, we must correlate the Oracle PID with the UNIX PID. To see details of these processes, you can write an Oracle script that filters the UNIX ps output to include only those processes:

#!/bin/ksh
sqlplus system/oracle@prodlive<<!
set pages 999
set feedback off
set echo off
set heading off

spool /tmp/run_pid.ksh

select
'ps -ef|grep '||spid||'|grep -v grep'
from
v\$process
where
spid is not NULL
;

spool off;
!

# Execute the UNIX commands . . . .
chmod +x /tmp/*.ksh

Here is the output from this script. As we see, the SQL*Plus script builds the UNIX ps command for the Oracle PIDs and then executes the command.

root> /tmp/run_pid.ksh

jjahn     3231  3129  0 08:12:17 ?      0:00 oraclePA
bbeckero  2482  2377  0 07:59:26 ?      0:00 oraclePA
scarf     2376   785  0 07:59:03 ttyp9  0:00 telnetd
brobins   2377  2376  0 07:59:04 ttyp9  0:01 runmenu50 pamenu
monte     1372     1  0   Sep 21 ?      5:58 /opt/hpnp/bin/hpnpd
imels     1886  1878  0   Sep 21 ttyp1  0:00 tee -a

This script allows us to see the start time for the UNIX connection and also see the cumulative CPU consumption for each task.
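
The reverse lookup is also handy: given a UNIX process ID from the ps listing, a sketch like the following (prompting for the PID as the hypothetical &unix_pid substitution variable) finds the matching Oracle session:

select
   s.sid,
   s.serial#,
   s.username,
   s.osuser,
   s.program
from
   v$session s,
   v$process p
where
   s.paddr = p.addr
and
   p.spid = '&unix_pid';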

Oracle9i and multiple block sizes

For the Oracle administrator, multiple blocksizes in Oracle9i are extremely important and exciting. For the first time, you will be able to customize your data buffer sizes according to the specific needs of your database. Prior to Oracle9i, your entire Oracle database had to have a single block size and this block size was determined at the time that the database was created.

With Oracle9i, we can define tablespaces with block sizes of 2K, 4K, 8K, 16K and 32K, and assign tables and indexes to the best block size to minimize I/O and best manage wasted space in our data buffers. When we combine the new data buffers for these block sizes, we get a total of seven separate and distinct data buffers to segregate our incoming table and index rows.

As we know, disk I/O is the single most expensive operation within an Oracle9i database, and multiple block sizes give us a powerful new tool to manage disk I/O with more power than ever before.

Until RAM memory becomes cheap enough that we can cache our whole database, we need to manage the RAM that we allocate to our data buffers. The allocation of tables and indexes according to block sizes is a balancing act.

If we allocate the data blocks too large, then we waste valuable data buffer space holding row data that Oracle will never reference. If we allocate the data blocks too small, Oracle will have to do more disk I/O to satisfy a request. Here are some general rules for allocating data block sizes:

Segregate large-table full-table scans - Tables that experience large-table full-table scans will benefit from the largest supported block size and should be placed in a tablespace with your largest block size.

Set db_recycle_cache_size carefully - If you are not setting db_cache_size to the largest supported block size for your server, you should not use the db_recycle_cache_size parameter. Instead, you will want to create a db_32k_cache_size (or whatever your max is), and assign all tables that experience frequent large-table full-table scans to the largest buffer cache in your database.

The Data Dictionary cache uses the default block size - You should ensure that the data dictionary (e.g. your SYSTEM tablespace) is always fully cached in a data buffer pool. Remember, the block size of the data dictionary is not as important as ensuring that the data buffer associated with the SYSTEM tablespace has enough RAM to fully-cache all data dictionary blocks.

Indexes want large block sizes - Indexes will always favor the largest supported blocksize. You want to be able to retrieve as many index nodes as possible in a single I/O, especially for SQL that performs index range scans. Hence, all indexes should reside in tablespaces with a 32k block size.

Average row length determines block size - The block size for a table's tablespace should always be greater than the average row length for the table (dba_tables.avg_row_len). If it is smaller than the average row length, row chaining occurs and excessive disk I/O is incurred.

Use large blocks for data sorting – Your TEMP tablespace will benefit from the largest supported blocksize. This allows disk sorting to happen in large blocks with a minimum of disk I/O.
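
To make this concrete, here is a minimal Oracle9i sketch. The cache size, datafile name, and index name are hypothetical, and it assumes your platform supports a 32K block size and that sga_max_size leaves room for the new buffer:

alter system set db_32k_cache_size = 100m;

create tablespace idx_32k
   datafile '/u01/oradata/mysid/idx_32k_01.dbf' size 500m
   blocksize 32k;

alter index customer_last_name_idx rebuild tablespace idx_32k;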

Useful UNIX scripts to clean-up old trace and log files

Here is an example of a UNIX script for keeping the archived redo log directory free of elderly files. As we know, it is important to keep room in this directory, because Oracle may “lock-up” if it cannot write a current redo log to the archived redo log filesystem. This script could be used in coordination with Oracle Recovery Manager (rman) to only remove files after a full backup has been taken.

#!/bin/ksh
# Cleanup archive logs more than 7 days old
find /u01/app/oracle/admin/mysid/arch/arch_mysid*.arc -ctime +7 -exec rm {} \;

Now that we see how to do the cleanup for an individual directory, we can easily expand this approach to loop through every Oracle database name on the server (by using the oratab file), and remove the files from each directory. If you are using Solaris, the oratab file is located in /var/opt/oracle, while HP/UX and AIX keep the oratab file in the /etc directory.

#!/bin/ksh
for ORACLE_SID in `cat /etc/oratab|egrep ':N|:Y'|grep -v \*|cut -f1 -d':'`
do
ORACLE_HOME=`cat /etc/oratab|grep ^$ORACLE_SID:|cut -d":" -f2`
DBA=`echo $ORACLE_HOME | sed -e 's:/product/.*::g'`/admin
find $DBA/$ORACLE_SID/bdump -name \*.trc -mtime +14 -exec rm {} \;
find $DBA/$ORACLE_SID/udump -name \*.trc -mtime +14 -exec rm {} \;
find $ORACLE_HOME/rdbms/audit -name \*.aud -mtime +14 -exec rm {} \;
done

The above script loops through each database, visiting the bdump, udump and audit directories, removing all files more than 2 weeks old.

A standard UNIX prompt for Oracle users

Placing the following code snippet in your oracle user login file will give you a standard UNIX prompt that identifies your current server name, the database your environment is set for (i.e., the value of your $ORACLE_SID UNIX variable), and your current working directory. This standard prompt makes it very easy to know where you are at all times when navigating UNIX.

#*****************************************************************
# Standard UNIX Prompt
#*****************************************************************
PS1="
`hostname`*\${ORACLE_SID}-\${PWD}
>"

This standardized Oracle UNIX prompt has the advantage of displaying the server name, the ORACLE_SID, and the current directory. The best feature of the standard prompt is that it places the command prompt on the next line, so you have a full 80 characters to type UNIX commands:

cheops*CCPRO-/home/oracle
>pwd

/home/oracle

cheops*CCPRO-/home/oracle
>cd /u01/oradata/CPRO

cheops*CCPRO-/u01/oradata/CPRO
>

Monday, October 5, 2009

Running hash joins in parallel

Oracle hash joins are notoriously hard to set up within Oracle, and the Oracle Metalink archives are full of problem reports regarding successful hash join invocation. I will discuss hash joins in detail in Chapter 16, but for now, just be aware that Oracle reads the driving table into a RAM array of hash_area_size and uses a special hashing technique to join the memory array with the larger table.



For equi-join operations, hash joins can outperform nested loop joins, especially in cases where the driving table is small enough to fit entirely into the hash_area_size. If the driving table is too large, the hash join will write temporary segments into the TEMP tablespace, slowing down the query. Since the reading of the table rows for a hash join is the most time-consuming operation in a hash join, setting parallelism on the table can dramatically improve the performance and throughput of the query.



Here is an example of a query that forces a parallel hash join. Note that the emp table is set as the driving table:



select /*+ use_hash(e,b) parallel(e, 4) parallel(b, 4) */
e.ename,
hiredate,
b.comm
from
emp e,
bonus b
where
e.ename = b.ename
;

Here is the execution plan for the hash join. Note that both tables in this join are using parallel query to obtain their rows:

OPERATION
----------------------------------------------------------------------
OPTIONS OBJECT_NAME POSITION
------------------------------ ---------------------------- ----------
OTHER_TAG
----------------------------------------------------------------------
SELECT STATEMENT
3
HASH JOIN
1
PARALLEL_TO_SERIAL
TABLE ACCESS
FULL EMP 1
PARALLEL_TO_PARALLEL
TABLE ACCESS
FULL BONUS 2


For equi-join SQL, hash joins are often faster than nested loop joins, especially in cases where the driving table is filtered into a small number of rows in the query’s where clause.

Using the parallel_automatic_tuning parameter

If you are using an SMP or MPP database server with many CPUs, you can issue a parallel request and leave it up to each Oracle instance to use its default degree of parallelism. For example, consider the following parallel query:



select /*+ FULL(employee_table)
PARALLEL(employee_table, DEFAULT, DEFAULT) */
employee_name
from
employee_table
where
emp_type = 'SALARIED';



Oracle also provides the parallel_automatic_tuning init.ora parameter to assist in setting the best degree of parallelism. When setting parallel_automatic_tuning, you only need to specify parallelism for a table, and Oracle will dynamically change the parallel_adaptive_multi_user parameter to override the execution plan in favor of maintaining an acceptable overall load on the database.
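
A minimal sketch of that setup, reusing the employee_table from the example above: first set parallel_automatic_tuning = true in the init.ora file (this requires an instance bounce), then simply mark the table parallel and let Oracle choose the degree:

alter table employee_table parallel;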



For additional details, see the Oracle Press Book “Oracle High Performance Tuning with STATSPACK”.

Show currently executing SQL

Whenever your database is experiencing a performance problem, it is very useful to have a script that displays all of the currently executing SQL statements. The following script joins the v$sql and v$session tables to show you all of the SQL that is executing at that moment in your database.



set pagesize 24
set newpage 1
set linesize 125

column sql_text format a100
column user_name format a12

select
u.sid,
substr(u.username,1,12) user_name,
s.sql_text
from
v$sql s,
v$session u
where
s.hash_value = u.sql_hash_value
and
sql_text not like '%from v$sql s, v$session u%'
order by
u.sid;


Here is the output from this script. As we see, this is extremely useful because a runaway SQL statement could cause degradation of your whole system. If you identify a runaway query, you can issue the ALTER SYSTEM KILL SESSION 'SID, SER#'; command to kill the query:





SQL> @cur_sql

   5         select f.file#, f.block#, f.ts#, f.length from fet$ f, ts$ t
             where t.ts#=f.ts# and t.dflextpct!=0 and t.bitmapped=0

   6         select local_tran_id, global_tran_fmt, global_oracle_id, global_foreign_id,
             state, status, heuristic_dflt, session_vector, reco_vector,
             3600*24*(sysdate-reco_time), 3600*24*(sysdate-nvl(heuristic_time,fail_time)),
             global_commit#, type# from pending_trans$ where session_vector != '00000000'

   7         BEGIN sys.dbms_ijob.remove(:job); END;

   9  READER SELECT TO_CHAR(page_unique_id), page_seq_nbr, book_unique_id, visual_page_nbr,
             page_text FROM page WHERE page_unique_id = TO_NUMBER('2380')

  14  READER SELECT TO_CHAR(page_unique_id), page_seq_nbr, book_unique_id, visual_page_nbr,
             page_text FROM page WHERE page_unique_id = TO_NUMBER('9975')
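
For reference, the kill command itself looks like this (the SID and serial number here are hypothetical):

alter system kill session '14,7188';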

Viewing complete session information

Most Oracle professionals know that they can query the v$session view to see the current sessions. However, you can also interrogate the v$sysstat view to see the current logons, cumulative logons since startup, and the high-water-mark of logons since startup time.



The following script provides a wealth of information about the number of sessions on your Oracle database.



rem session.sql - displays all connected sessions
set echo off;
set termout on;
set linesize 80;
set pagesize 60;
set newpage 0;

select
rpad(c.name||':',11)||rpad(' current logons='||
(to_number(b.sessions_current)),20)||'cumulative logons='||
rpad(substr(a.value,1,10),10)||'highwater mark='||
b.sessions_highwater Information
from
v$sysstat a,
v$license b,
v$database c
where
a.name = 'logons cumulative'
;

ttitle "dbname Database|UNIX/Oracle Sessions";

set heading off;
select 'Sessions on database '||substr(name,1,8) from v$database;
set heading on;
select
substr(a.spid,1,9) pid,
substr(b.sid,1,5) sid,
substr(b.serial#,1,5) ser#,
substr(b.machine,1,6) box,
substr(b.username,1,10) username,
-- b.server,
substr(b.osuser,1,8) os_user,
substr(b.program,1,30) program
from v$session b, v$process a
where
b.paddr = a.addr
and type='USER'
order by spid;
ttitle off;
set heading off;
select 'To kill, enter SQLPLUS> ALTER SYSTEM KILL SESSION',
''''||'SID, SER#'||''''||';' from dual;
spool off;




Here is the listing from this script. Note that we see complete information about the four current sessions on this database, and we see that there have been 212,199 logons since database startup, with a high-water mark of 13 sessions.



SQL> @session

INFORMATION
----------------------------------------------------------------------------
PRODC1: current logons=4 cumulative logons=212199 highwater mark=13


Wed Jul 04 page 1
dbname Database
UNIX/Oracle Sessions

Sessions on database PRODC1


Wed Jul 04 page 1
dbname Database
UNIX/Oracle Sessions

PID SID SER# BOX USERNAME OS_USER PROGRAM
--------- ----- ----- ------ ---------- -------- ------------------------------
12923 17 1843 roviad OPS$ORACLE oracle sqlplus@roviadb-01 (TNS V1-V3)
24499 14 7188 search READER root rdbqry_ora@search-02 (TNS V1-V
24846 9 291 search READER root rdbqry_ora@search-01 (TNS V1-V
To kill, enter SQLPLUS> ALTER SYSTEM KILL SESSION 'SID, SER#';

Querying the X$ structures

One of the most hidden areas of the Oracle database is the X$ structures. As we may know, the X$ structures are the innermost tables that are used to build the common v$ views that we use to query the data dictionary.
Here is an idea of the naming of these X$ structures. Of course, these internal structures change between Oracle releases, and this list is from Oracle 8.1.5.







X$ks – Kernel Services

x$ksmfs     kernel services, memory fixed SGA
x$ksmfsv    kernel services, memory fixed SGA vectors
x$ksmjs     kernel services, memory java_pool summary
x$ksmlru    kernel services, memory LRU
x$ksmls     kernel services, memory large_pool summary
x$ksmmem    kernel services, memory
x$ksmpp     kernel services, memory process pool
x$ksmsd     kernel services, memory SGA definition
x$ksmsp     kernel services, memory shared pool
x$ksmspr    kernel services, memory shared pool reserved
x$ksmss     kernel services, memory shared_pool summary
x$ksmup     kernel services, memory user pool
x$ksqst     kernel services, enqueue status
x$ksulop    kernel services, user long operation
x$ksulv     kernel services, user locale value
x$ksupr     kernel services, user process

X$kc – kernel cache

x$kcbfwait  kernel cache, block file wait
x$kcbwait   kernel cache, block wait
x$kcccp     kernel cache, controlfile checkpoint progress
x$kcfio     kernel cache, file I/O
x$kclfh     kernel cache, lock file header
x$kclfi     kernel cache, lock file index
x$kcluh     kernel cache, lock undo header
x$kclui     kernel cache, lock undo index

X$kq – kernel query

x$kqfco     kernel query, fixed table columns
x$kqfdt     kernel query, fixed table
x$kqfp      kernel query, fixed procedure
x$kqfsz     kernel query, fixed size
x$kqfta     kernel query, fixed table
x$kqfvi     kernel query, fixed view
x$kqfvt     kernel query, fixed view table

X$kg – kernel generic

x$kghlu     kernel generic, heap LRUs
x$kgllk     kernel generic, library cache lock
x$kglob     kernel generic, library cache object
x$kglpn     kernel generic, library cache pin
x$kglst     kernel generic, library cache status

X$kz – Kernel security

x$kzsro     kernel security, system role

X$le – lock element

x$le        lock element
x$le_stat   lock element status







The X$ views are completely undocumented, and Oracle does not provide any assistance for deriving the meaning of the contents of the X$ structures.



However, there are some queries that you can run to see internal operations within Oracle. Let’s begin by describing the X$ksqst structure, the structure that shows the kernel enqueue status:



SQL> desc x$ksqst;
Name Null? Type
----------------------------------------- -------- ------------------
ADDR RAW(8)
INDX NUMBER
INST_ID NUMBER
KSQSTTYP VARCHAR2(2)
KSQSTGET NUMBER
KSQSTWAT NUMBER


As you can see, the X$ structures are quite cryptic, but there are hints about their contents.



We can also further explore the X$ structures by viewing the X$ksqst table. This table shows the kernel status for all enqueues within your Oracle database:



-- See the gets and waits for different types of enqueues:
select
*
from
x$ksqst
where
ksqstget > 0;


Here is an actual listing from this script, showing the enqueue data from a real Oracle instance.



SQL> @x2
ADDR INDX INST_ID KS KSQSTGET KSQSTWAT
---------------- ---------- ---------- -- ---------- ----------
00000004269DBB10 49 1 BR 1045 0
00000004269DBBB0 69 1 CF 245954 0
00000004269DBBC8 72 1 CI 5780 0
00000004269DBC28 84 1 CU 1886 0
00000004269DBCE0 107 1 DL 34 0
00000004269DBD10 113 1 DR 3053 0
00000004269DBD30 117 1 DV 389 0
00000004269DBD40 119 1 DX 1255971 0
00000004269DC138 246 1 HW 11377 0
00000004269DC218 274 1 IS 2305 0
00000004269DC308 304 1 JQ 1514 0
00000004269DC610 401 1 MR 1083 0
00000004269DC8A8 484 1 PE 212238 0
00000004269DC8B0 485 1 PF 1 0
00000004269DCB20 563 1 RT 1 0
00000004269DCC08 592 1 SQ 1868 0
00000004269DCC20 595 1 ST 26909 0
00000004269DCC98 610 1 TC 220 0
00000004269DCCE8 620 1 TM 378751 0
00000004269DCD18 626 1 TS 6605 0
00000004269DCD20 627 1 TT 64173 0
00000004269DCD40 631 1 TX 392834 0
00000004269DCE18 658 1 US 134438 0
00000004269DCFE0 715 1 WL 192 0


It is great fun to attempt queries against the X$ structures and see if you can guess the contents of the structures. Below is a query that will display the largest number of blocks that can be written by DBWR at any time:



select
kviival write_batch_size
from
x$kvii
where
kviitag = 'kcbswc';


Here is the output from this query. This displays the write batch size for the Oracle database.



SQL> connect internal as sysdba;
Connected.
SQL> @x1
WRITE_BATCH_SIZE
----------------
4096


In sum, there is a wealth of interesting information in the X$ structures, and for those DBAs with a propensity to learn Oracle internals, these queries can be both fun and informative.

Sunday, October 4, 2009

Display the execution plan for parallel queries

It is not easy to view the execution plan for SQL that is using parallel query. For parallel queries, it is important to display the contents of the other_tag in the execution plan display from the plan_table.



Some SQL professionals keep a special version of plan.sql called pq_plan.sql for displaying details about parallel execution. Here is the script that I use:


pq_plan.sql
set echo off
set long 2000
set pagesize 10000

column query heading "Query Plan" format a80
column other heading "PQO/Remote Query" format a60 word_wrap
column x heading " " format a18

select distinct
object_node "TQs / Remote DBs"
from
plan_table
where
object_node is not null
order by
object_node;

select lpad(' ',2*(level-1))||operation||' '||options||' '
||object_name||' '
||decode(optimizer,'','','['||optimizer||'] ')
||decode(other_tag,'',decode(object_node,'','','['||object_node||']')
,'['||other_tag||' -> '||object_node||']')
||decode(id,0,'Cost = '||position) query
,null x
,other
from
plan_table
start with id = 0
connect by prior id = parent_id;



Here is a sample query with parallel full-table scans. Let’s examine the different display formats for the execution plans for plan.sql and pq_plan.sql.

select /*+ use_merge(e,b) parallel(e, 4) parallel(b, 4) */
   e.ename,
   hiredate,
   b.comm
from
   emp e,
   bonus b
where
   e.ename = b.ename
;



Here is the execution plan displayed with pq_plan.sql. Let’s take a look at this output and compare the display formats.

TQs / Remote DBs
----------------------------------------------------------------------
:Q36000
:Q36001
:Q36002


Query Plan
----------------------------------------------------------------------
PQO/Remote Query
------------------ ---------------------------------------------------
SELECT STATEMENT [FIRST_ROWS] Cost = 5


MERGE JOIN [PARALLEL_TO_SERIAL -> :Q36002]
SELECT /*+ ORDERED NO_EXPAND USE_MERGE(A2) */
A1.C0,A1.C1,A2.C1 FROM :Q36000 A1,:Q36001 A2 WHERE
A1.C0=A2.C0

SORT JOIN [PARALLEL_COMBINED_WITH_PARENT -> :Q36002]


TABLE ACCESS FULL EMP [PARALLEL_TO_PARALLEL -> :Q36000]
SELECT /*+ NO_EXPAND ROWID(A1) */ A1."ENAME"
C0,A1."HIREDATE" C1 FROM "EMP" A1
WHERE ROWID BETWEEN :B1
AND :B2

SORT JOIN [PARALLEL_COMBINED_WITH_PARENT -> :Q36002]


TABLE ACCESS FULL BONUS [PARALLEL_TO_PARALLEL -> :Q36001]
SELECT /*+ NO_EXPAND ROWID(A1) */
A1."ENAME" C0,A1."COMM" C1
FROM "BONUS" A1 WHERE ROWID BETWEEN :B1 AND :B2

Here you see more detail about the internals of the execution plan, including details about the parallel query execution modes (other_tag in plan_table) and details about the tables that participate in the query. A complete description of these table access methods is beyond the scope of this tip, but you can get complete information in the Oracle Press Book “Oracle High Performance SQL Tuning”.

Getting a PMON process dump from Oracle

Oracle has delivered a hidden tool within server manager (SQL*Plus in Oracle8 and beyond), that allows you to view specific internal Oracle structures.



The following procedure provides a process dump for the Oracle database and shows all statistics for the PMON background process.



1 – We start by issuing the oradebug setorapid command to process 2. Process number 2 is the Oracle process monitor (PMON)





SVRMGR> oradebug setorapid 2

Unix process pid: 25159, image: ora_pmon_test



2 – Next we issue the procstat command to generate the statistics



SVRMGR> oradebug procstat

Statement processed.



3 – Now we can use the TRACEFILE_NAME command to see the location of our trace file



SVRMGR> oradebug TRACEFILE_NAME

/app/oracle/admin/orcl/bdump/pmon_25159.trc



Below is the listing from this procedure. As you can see, this provides detailed information regarding important PMON activities.



SQL> !cat /u01/app/oracle/admin/prodc1/bdump/prodc1_pmon_25159.trc

Dump file /u01/app/oracle/admin/prodc1/bdump/prodc1_pmon_25159.trc
Oracle8i Enterprise Edition Release 8.1.6.1.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.6.1.0 - 64bit Production
ORACLE_HOME = /u01/app/oracle/product/8.1.6_64
System name:    SunOS
Node name:      burleson-01
Release:        5.8
Version:        Generic_108528-03
Machine:        sun4u
Instance name: prodc1
Redo thread mounted by this instance: 1
Oracle process number: 2
Unix process pid: 25159, image: oracle@burleson-01 (PMON)

*** 2001-07-04 12:13:25.042
*** SESSION ID:(1.1) 2001-07-04 12:13:24.979
----- Dump of Process Statistics -----
User level CPU time          = 121
System call CPU time         = 47
Other system trap CPU time   = 0
Text page fault sleep time   = 0
Data page fault sleep time   = 0
Kernel page fault sleep time = 0
User lock wait sleep time    = 0
All other sleep time         = 562367527
Wait-cpu (latency) time      = 0
Minor page faults            = 0
Major page faults            = 346
Swaps                        = 0
Input blocks                 = 251
Output blocks                = 0
Messages sent                = 0
Messages received            = 0
Signals received             = 1827660
Voluntary context switches   = 1828280
Involuntary context switches = 58531459
System calls                 = 141934171
Chars read and written       = 28650
Process heap size            = 1425416
Process stack size           = 565248

Gather detailed optimizer statistics for an SQL query

The TKPROF utility is also known as the SQL trace facility. In addition to the execution plan, TKPROF provides a detailed report that shows the execution details for the SQL statement. Let’s take a look at how TKPROF works.



Set the Environment for SQL Tracing


To enable TKPROF, you must set several Oracle initialization parameters and then turn on tracing with the alter session set sql_trace=true command. The following parameters need to be set up to get a SQL trace file.



1 - The sql_trace parameter can be set at the instance level or at the session level. To enable SQL trace for the whole instance, add sql_trace = true to your initialization file. This can create a huge amount of data in the Oracle user_dump_dest directory, and the instance-wide option is rarely used because of the large amount of data generated.



To enable SQL trace at the session level, the following command can be entered from SQL*Plus:



alter session set sql_trace=true;



2 - The timed statistics parameter allows for the computation of SQL statistics such as CPU usage and elapsed time. This can be set in three ways. For instance-wide statistics, the following initialization parameter can be set:

timed_statistics=true



For database-wide statistics after the instance is started, you can use the alter system command to turn on timed statistics:



alter system set timed_statistics=true;



At the session level, time statistics can be set in SQL*Plus with the following command:



alter session set timed_statistics=true;



3 - The user_dump_dest initialization parameter specifies the location of the trace files. You will need to know the location of the trace files to create your TKPROF report.



4 - The max_dump_file_size parameter must be set high enough to allow the complete trace file for a complex SQL statement.
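
Pulling the instance-wide settings together, a typical init.ora setup might look like the following sketch (the dump destination path is hypothetical, and remember that instance-wide sql_trace is rarely appropriate on a busy system):

sql_trace = true
timed_statistics = true
user_dump_dest = /u01/app/oracle/admin/mysid/udump
max_dump_file_size = unlimited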



Generating the SQL Trace File


Once the four steps above are complete, you will get a trace file for all SQL commands issued by your session. This trace file is known as a level-1 trace.



Oracle will generate trace files for every session where the value of sql_trace=true and write them to the user_dump_dest destination (the UDUMP directory).



Once you have generated the trace files, the next step is to format the trace file. This utilizes the TKPROF utility to format and make the trace file readable.



tkprof <trace_file.trc> <report_file.prf> explain=user/password


You can then view the detailed SQL execution for your query. This report shows the time spent in each state of the query and provides valuable information about the internal workings of any SQL query:



call count cpu elapsed disk query current rows
------- ------ -------- ---------- ---------- ---------- ---------- ---------
Parse 2 0.02 0.02 0 0 0 0
Execute 2 239.39 1003.16 274981 3792129 534 242
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ---------
total 4 239.41 1003.18 274981 3792129 534 242





For complete details on using TKPROF, see the Oracle Press Book “Oracle High Performance SQL Tuning”.

How to tell Oracle to take a dump

It is often useful to dump the internal contents of Oracle internal structures such as the control files, library cache, and redo log headers. The following undocumented Oracle commands can be used for this purpose.



For each of these commands, a trace file will be generated in your UDUMP directory. After issuing the command, you can go to your UDUMP directory to view the contents of the dump.



1 – SCN Dump - This command dumps the control file contents to see SCN information. The SCN information is especially useful when doing a database recovery.



alter session set events
'immediate trace name CONTROLF level 10';





2 – File Header Dump - Here is the command to dump file headers. This dump is also quite useful if you have a corrupt database or a failure from rman in doing a database recovery.



alter session set events
'immediate trace name FILE_HDRS level 10';



3 - Dump redo log headers – This command will dump all of the redo log header information.



alter session set events
'immediate trace name REDOHDR level 10';



4 – System state dump – This command will dump the system state to your trace file.



alter session set events
'immediate trace name SYSTEMSTATE level 10';



5 – Process state dump – This command will dump the process state for all active processes in your database:



alter session set events
'immediate trace name PROCESSSTATE level 10';



6 – Library Cache Dump – This command is very useful for examining library Cache details, especially when you are having high library cache reloads or SQL access contention:





alter session set events
'immediate trace name library_cache level 10';



Here is a partial listing from this dump. As you can see, Oracle provides a wealth of detailed information about the internals of the library cache:



Dump file /u01/app/oracle/admin/testc1/udump/testc1_ora_16167.trc
Oracle8i Enterprise Edition Release 8.1.6.1.0 - 64bit Production
With the Partitioning option
JServer Release 8.1.6.1.0 - 64bit Production
ORACLE_HOME = /u01/app/oracle/product/8.1.6_64
System name:    SunOS
Node name:      sting
Release:        5.8
Version:        Generic_108528-03
Machine:        sun4u
Instance name: testc1
Redo thread mounted by this instance: 1
Oracle process number: 10
Unix process pid: 16167, image: oracle@sting (TNS V1-V3)

*** SESSION ID:(10.6858) 2001-07-05 10:29:17.148
LIBRARY CACHE STATISTICS:

      gets hit ratio       pins hit ratio    reloads   invalids namespace
---------- --------- ---------- --------- ---------- ---------- ---------
     48475 0.9905518     273499 0.9975868         20         13 CRSR
      5043 0.9569700     151734 0.9973045          0          0 TABL/PRCD/TYPE
      4044 0.9943126       4051 0.9923476          0          0 BODY/TYBD
         1 0.0000000          1 0.0000000          0          0 TRGR
        29 0.0000000         29 0.0000000          0          0 INDX

Allowing Oracle to set the optimal degree of parallelism in a query

Oracle parallel query allows you to control the number of parallel query slave processes that service a table with the DEGREE clause.



Oracle states that the optimal degree of parallelism for a query is based on several factors. These factors are presented in their order of importance:



The number of CPUs on the server
The number of physical disks that the table resides upon
For parallelizing by partition, the number of partitions that will be accessed, based upon partition pruning (if appropriate)
For parallel DML operations with global index maintenance, the minimum number of transaction freelists among all the global indexes to be updated. The minimum number of transaction freelists for a partitioned global index is the minimum number across all index partitions. This is a requirement in order to prevent self-deadlock.


For example, if your system has 20 CPUs and you issue a parallel query on a table that is stored on 15 disk drives, the default degree of parallelism for your query is 15 query servers.



There has been a great deal of debate about what number of parallel processes results in the fastest response time. As a general rule, the optimal degree of parallelism can be safely set to N-1 where N is the number of processors in your SMP or MPP cluster. Remember, the proper degree of parallelism will always result in faster execution, provided you have a server with lots of CPUs.



We can also let Oracle compute the optimal degree of parallelism. For example, we can create a script to gather the optimal degree of parallelism and pass this argument to the SQL.



The following script uses the IBM-AIX lsdev command to compute the number of CPUs on the server, and pass this value to the Oracle parallel query:


parallel_query.ksh


#!/bin/ksh
# Get the number of CPUs
num_cpu=`lsdev -C|grep Process|wc -l`
optimal_parallelism=`expr $num_cpu - 1`

sqlplus system/manager<<!
select /*+ FULL(employee_table)
PARALLEL(employee_table, $optimal_parallelism)*/
employee_name
from
employee_table;
exit
!



Here is how to get the number of CPUs on your Oracle server:





Windows NT - If you are using Windows NT, you can find the number of CPUs by entering the Control Panel and choosing the System icon.



Linux - To see the number of CPUs on a Linux server, you can cat the /proc/cpuinfo file. In this example, we see that our Linux server has four CPUs:



>cat /proc/cpuinfo|grep processor|wc -l
4



Solaris - In Sun Solaris, the psrinfo command can be used to count the number of CPUs on the server.



>psrinfo -v|grep "Status of processor"|wc -l
24



IBM-AIX - The following example, taken from an AIX server, uses the lsdev command to count the processors reported by the system:



>lsdev -C|grep Process|wc -l

36



HP-UX - In HP UNIX, you can use the ioscan command to find the number of CPUs on your server.



>ioscan -C processor | grep processor | wc -l
6



Oracle parallel query processes can be seen on the server because background processes will start when the query is serviced. These factotum processes are generally numbered from P000 through Pnnn and you can issue a UNIX command to watch them get created and destroyed:



ps -ef|grep ora|grep -i p0

The dangers of Invoking Oracle parallel query

There are some global Oracle changes that can have a profound impact on your whole system, and turning on parallel query with the ALTER TABLE command can be very dangerous. Oracle parallel query can be turned on in several ways. You can turn it on permanently for a table, or you can isolate the parallel query to a single table.



The danger is setting parallel at the database level with the ALTER TABLE command like this:



alter table customer parallel (degree 35);



The recommended approach for using Parallel query is to add a parallel hint to all SQL statements that perform a full-table scan and would benefit from parallel query. If you cannot use hints and you are using Oracle8i, you can also use Optimizer Plan stability for this purpose.



select /*+ FULL(emp) PARALLEL(emp, 35) */
emp_name
from
emp;


Note the use of the double hints in the preceding query. Most Oracle DBAs always use the full hint with the parallel hint because they are both required to use Oracle parallel query. Smart Oracle professionals always use hints to invoke parallel query.



As we stated, it is very dangerous to enable Oracle parallel query for a table with the alter table command. Once a table is marked for parallel query, the Cost-Based Optimizer (CBO) will change the execution plan for existing queries to use parallel full-table scans instead of index scans. This well-intentioned mistake has crippled many databases, since queries that used to run fast with indexes will now use a full-table scan.



This happens because the CBO will have a lower cost for a parallel full-table scan than a single-process full-table scan. If you must set table parallelism at the database level, many DBAs will alter the optimizer_index_cost_adj parameter.



The default value for optimizer_index_cost_adj is 100, and any value less than 100 makes the CBO view index access as less expensive. If you do not like the propensity of the CBO first_rows mode to favor full-table scans, you can lower the value of optimizer_index_cost_adj to 10, thereby telling the CBO to always favor index scans over full-table scans.
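
For example, a session-level sketch that makes index access look ten times cheaper to the CBO:

alter session set optimizer_index_cost_adj = 10;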

Dumping Oracle database blocks

Oracle provides several undocumented ALTER SESSION commands that can be used to dump details from the Oracle internal control structures. These can be very useful for investigating Oracle corruptions and peeking into the Oracle internal structures.



To do this, you must first start by getting the data block address (DBA) for the block that you want to display. This is done by taking the file number and block number and running the dbms_utility.make_data_block_address function.



For example, you can use the dbms_utility package to get a data block address. Many Oracle scripts will provide you with the file number and block number, but you must then translate this information into the actual data block address (DBA) for the block.



For example, to dump file number 101, block 50, you could enter the following PL/SQL.



variable dba number;

exec :dba := dbms_utility.make_data_block_address(101,50);

print dba



Next, you take the resulting DBA and use it with the alter session command to dump the contents of the data block. In this example, we assume that we have returned data block address 10059



alter session set events 'immediate trace name blockdump level 10059';



We can then go to our trace file directory (usually UDUMP) and view the trace file with the detailed block contents.



While Oracle does not provide column descriptions for the data blocks, with a little work and intuition you can reckon the details of the block header and footer while seeing the actual contents of the data rows.

Re-setting PCTUSED based on row length

It is very important that the DBA understand how the row length affects setting the values for PCTFREE and PCTUSED. You want to set PCTFREE such that room is left on each block for row expansion, and you want to set PCTUSED so that newly linked blocks have enough room to accept rows.

Here we see the tradeoff between effective space usage and performance. If you set PCTUSED to a high value, say 80, then a block will quickly become available to accept new rows, but it will not have room for a lot of rows before it becomes logically full again.

Remember the rule for PCTUSED. The lower the value for PCTUSED, the more space will be available on each data block, and subsequent INSERTs will run faster. The downside is that a block must be nearly empty before it becomes eligible to accept new rows.



rem pctused.sql
set heading off;
set pages 9999;
set feedback off;

spool pctused.lst;
column db_block_size new_value blksz noprint
select value db_block_size from v$parameter where name='db_block_size';

define spare_rows = 2;

select
' alter table '||owner||'.'||table_name||
' pctused '||least(round(100-((&spare_rows*avg_row_len)/(&blksz/10))),95)||
' '||
' pctfree '||greatest(round((&spare_rows*avg_row_len)/(&blksz/10)),5)||
';'
from
dba_tables
where
avg_row_len > 1
and
avg_row_len < .5*&blksz
and
table_name not in
(select table_name from dba_tab_columns b
where
data_type in ('RAW','LONG RAW','BLOB','CLOB','NCLOB')
)
order by
owner,
table_name
;

spool off;

Oracle9i Index skip scans

The index skip scan is a new execution plan in Oracle9i whereby an Oracle query can bypass the leading column of a concatenated index and access the inner keys of a multi-column index. For example, consider the following concatenated index:



create index
sex_emp_id
on
emp (sex, emp_id)
;




Prior to Oracle9i, this index could only be used when both sex and emp_id were present in the SQL query, or when the sex column alone was specified. The following query would not be able to use the concatenated index:



select
emp_id
from
emp
where
emp_id = 123;


The Oracle9i skip scan execution plan allows the concatenated index to be used, even though sex is not specified in the SQL query. This feature promises that there is no need to provide a second index on the emp_id column. Oracle acknowledges that the index skip scan is not as fast as a direct index lookup, but states that the index skip scan is faster than a full-table scan.



What Oracle does not mention is that the cardinality of the leading column has a direct impact on the speed of the index skip scan. In our example, the first column, sex, has only two distinct values (three if you count eunuchs or those who have recently visited Sweden for surgical procedures).



While Oracle does not publish the internals of the index skip scan, we can infer from the execution plans that Oracle is internally generating multiple queries, thereby satisfying the query with multiple sub-queries:



SELECT STATEMENT Optimizer=CHOOSE (Cost=6 Card=1 Bytes=5)

0 SORT (AGGREGATE)

1 INDEX (SKIP SCAN) OF 'SEX_EMP_ID' (NON-UNIQUE)





Internally, Oracle is probably generating two queries and joining the resulting Row ID lists:



select emp_name from emp where sex = 'F' and emp_id = 123
UNION
select emp_name from emp where sex = 'M' and emp_id = 123;




The implications of using the index skip scan are clear:



1. Oracle skip scan execution plan performance will decrease according to the number of unique values in the high order key. If the leading column were “state” with 50 values, Oracle would be issuing 50 index probes to retrieve the result set.


2. The index skip scan is only useful in shops where disk space savings are critical. Shops that can afford the disk space to build a second index will always get faster performance.

Materialized views and query re-write

Oracle8i has a special feature called materialized views that can greatly speed up data warehouse queries. In a materialized view, a summary table is created from a base table, and all queries that perform a similar summation against the base table will be transparently re-written to reference the pre-built summary table.


Below is a simple example. We begin by creating a materialized view that sums sales data.


create materialized view
   sum_sales
build immediate
refresh complete
enable query rewrite
as
select
   product_nbr,
   sum(sales) sum_sales
from
   sales
group by
   product_nbr;
Now, when we have any query that summarizes sales, that query will be dynamically re-written to reference the summary table.


alter session set query_rewrite_enabled=true;
set autotrace on

select
sum(sales)
from
sales
;
In the execution plan for this query we see that the sum_sales table is being referenced.


Execution Plan
----------------------------------------------------------
0 SELECT STATEMENT Optimizer=CHOOSE (Cost=1 Card=1 Bytes=83)
1 0 SORT (AGGREGATE)
2 1 TABLE ACCESS (FULL) OF 'SUM_SALES' (Cost=1 Card=423 Bytes=5342)



If you use bind variables in a query, the query will not be rewritten to use materialized views even if you have enabled query rewrite.



Once the query re-write feature is enabled, you can use standard SQL hints to force the SQL parser to re-write the query.


select /*+ REWRITE(sum_sales) */
...
As Oracle SQL evolves and becomes more sophisticated, there will be more cases where the SQL parser will re-write queries into a more efficient form.