Installing Oracle Rdbms 12c Release 1 on Oracle Linux 6

In a previous post, I showed how to build a VirtualBox image that runs the Oracle Linux 6 Update 4 x86-64 operating system. I set the image up with all the prerequisites for installing the Oracle Rdbms 12c Release 1. In this blog post I'll describe the steps I executed to install a single instance, Oracle 12c Oracle Home on Oracle Linux 6.

Download the Oracle 12c Software


Download the above two files to your desktop.

scp them to the Oracle Linux 6 server.

I copied the zip files into a directory named /u02/sw (Owned by oracle) and unzipped both of them in that directory.
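The copy-and-unzip steps can be sketched as below; the zip file names shown are the usual 12c Release 1 Linux x86-64 names, so verify them against your actual downloads, and the hostname oel6 is a placeholder:

```shell
# On the desktop: copy both zip files to the server (hostname is a placeholder)
scp linuxamd64_12c_database_1of2.zip linuxamd64_12c_database_2of2.zip oracle@oel6:/u02/sw

# On the server, as oracle: unzip both files; they extract into ./database
cd /u02/sw
unzip linuxamd64_12c_database_1of2.zip
unzip linuxamd64_12c_database_2of2.zip
```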

Directory structure

Since this is a test install, I am not strictly following the oracle best practices on separating the Oracle Base, Oracle Inventory location and Oracle Home. In my installation all of these go under /u01/12cr1.

Setting up the X Display

Since the installer is a graphical user interface, we need to redirect the display to the client (unless you are installing using a VNC client, or are logged into the Oracle Linux 6 virtual machine using a GUI).

I am running this on ubuntu linux, so my ubuntu desktop provides the X display for the remote session.

I log in to the Oracle Linux 6 virtual machine using the following command, which redirects the display to my desktop.

ssh -X -l oracle
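For example (the hostname oel6 here is a placeholder; substitute your virtual machine's hostname or IP address):

```shell
# -X enables X11 forwarding; the installer windows display on the local desktop
ssh -X -l oracle oel6
```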

Install Oracle 12c Binaries

Login as the oracle user to the OEL6 virtual machine

cd /u02/sw/database

Invoke runInstaller as shown in the screen above
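With the display redirected, the invocation is just:

```shell
# Run as the oracle user, from the unzipped software directory
cd /u02/sw/database
./runInstaller
```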

The 12c Logo displays


Click Next on the above 2 screens.

In the screen shown above, choose “Install database software only”. This way we install just the rdbms binaries and then we’ll create the database afterwards.

In the screen above choose “Single Instance database installation”.

In the Screen above choose “English” as the language.

In the Screen above, choose "Enterprise Edition".

In the Screen above, choose the Oracle Base as “/u01/12cr1” and the Oracle Home as “/u01/12cr1/dbh1”

In the Screen above Choose the location of the Oracle Inventory as “/u01/12cr1/oraInventory”

In the above Screen, choose the defaults

In the screen above, the installer complains about the physical memory, shmall and shmmax.

In my case shmall and shmmax are set much higher than the values that oracle recommends.

If you have set the memory for the virtual machine to 2Gb as I mentioned in my previous post, the warning about the physical memory should not be present. So it is OK to ignore these warnings and proceed with the installation, as shown in the picture above.

The installer displays the installation locations. Click on “Install” to start the installation.

The installation proceeds as shown in the figure above.

After the binary installation, the installer prompts you to run two scripts.

Login as root, to the Oracle Linux 6 virtual machine, from another screen.
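These are the standard post-install root scripts; with the locations chosen earlier in this install, they would be run as root like this:

```shell
# Run as root, in a separate terminal session
/u01/12cr1/oraInventory/orainstRoot.sh
/u01/12cr1/dbh1/root.sh
```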

Then click Ok on the screen above.

Click “Close”  in the screen above to complete the rdbms binary installation.

In the next blog post, I will document the steps to create and start a Container database and a Pluggable database.

Creating an Oracle Linux Release 6 VirtualBox image

Oracle Database 12c was recently announced (on June 1 2013) to be Generally Available for customers to use. I created a new Oracle Linux 6 VirtualBox virtual machine and installed Oracle 12c Release 1 on it. So I'll blog about the steps I used, with screenshots, in a 3-part blog series. This first one is this post, where I'll document the steps I followed to create the Oracle Linux 6 64-bit virtual machine (in the next 2 posts I'll document the process used to install the rdbms binaries and create the database).

Download Oracle Linux 6 Update 4

I downloaded Oracle Linux from



Create a New VirtualBox Image

Start the Oracle VM VirtualBox Manager from your program Menu

Click on “New”, you get the screen Below

Choose the Name as "OEL6" (any name you want), Type "Linux" and Version "Oracle (64 bit)" as in the picture above, then click Next.

In the above screen, choose the file type to be “VDI”, Click Next.

Choose the file location and size in the screen above, Click Next.

Add the Oracle Linux ISO as a bootable Drive.

In the Oracle VM VirtualBox Manager, click and Highlight the new virtual Machine “OEL6” that we created.

In the right hand side of the window you will see the details regarding this new image.

Now would be a good time to click on "System" and increase the memory allocated to the virtual machine to 2Gb and the number of processors to 2.
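If you prefer the command line, the same settings can be applied with VBoxManage (OEL6 is the VM name created above):

```shell
# Allocate 2 GB of RAM and 2 virtual CPUs to the VM
VBoxManage modifyvm OEL6 --memory 2048 --cpus 2
```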

Then click on “Storage”, which brings you to the screen below.

Click on the icon pointed to by the Red Arrow in the Figure Above, to create a new Dvd Drive.

Choose the Oracle Linux 6 iso image that you just downloaded.

This action will result in a new DVD drive being displayed in the storage screen, as in the Figure above. Click OK and return to the “Oracle VM VirtualBox Manager” main screen.

Install Oracle Linux 6 Update 4

In the Left panel of the Main Menu, Highlight your new Virtual Box image “OEL6”, and click on “Start”. It brings up the screen below

Hit Enter, it will Bring you to the screen below

In this screen I chose to skip the media test: tab to "Skip" and hit Enter.

In this screen I chose "English", hit Enter.

In this screen I chose "U.S. English", hit Enter.

Choose “Basic Storage Devices” and in the next screen Choose “Yes Discard Any Data”

In the above screen, I left the hostname as "localhost.localdomain" and did not configure networking (I configured these manually later).

It autodetects your timezone, click Next.

Enter a password for the root user and click Next

Choose “Use All Space” and click Next, Confirm “Write Changes to Disk”, Click Next.

I chose “Database Server” in the screen above. Keep in mind that once you do this, you do not get a GUI to login to the machine. All access has to be via the command line. If you want a GUI, you can choose “Basic Server”.

The above two screens appear when the installation is in progress, and as soon as the installation is complete.

Congratulations, you now have an OEL6 Update 4, 64-bit VirtualBox image !!!

Configure Networking and Hostname

At this point it would be best to configure the VirtualBox image to use a "Bridged Network", so that it can communicate with a ULN server to get the updates and required packages.

Follow the instructions from Setting Up Oracle Linux 6 with public-yum for updates to set up your image to receive packages from the oracle public-yum repository.

Login as root

Edit /etc/sysconfig/network, and change

Edit /etc/sysconfig/network-scripts/ifcfg-eth0 and add


This sets up a static IP address and plumbs up eth0 after reboots.
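The exact lines were in the (now missing) screenshots; a typical static-IP configuration along those lines, with placeholder values you should adjust for your own network, looks like this:

```shell
# /etc/sysconfig/network - enable networking and set a hostname (placeholder)
NETWORKING=yes
HOSTNAME=oel6.localdomain

# /etc/sysconfig/network-scripts/ifcfg-eth0 - static IP, brought up at boot
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.56.101
NETMASK=255.255.255.0
```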

Install Pre-requisite RPM’s (For a 12cR1 oracle installation)

Login as root

Run "yum update" (this will bring in any new versions of installed packages that are available on public-yum).

Run "yum install oracle-rdbms-server-12cR1-preinstall" (this will install all the prerequisite RPMs, set up the oracle user, set up the oinstall and dba groups, and update the kernel parameter settings and security settings required for an oracle installation).
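As a transcript, the two commands above are run as root:

```shell
# Refresh all installed packages from the public-yum repository
yum -y update

# Pull in all 12cR1 prerequisites: packages, oracle user, groups, kernel settings
yum -y install oracle-rdbms-server-12cR1-preinstall
```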

Most of the work is done, you are almost there.

Create Additional Hard Disks

I like to separate my binary and database installations onto separate hard disks, so now I go ahead and create two additional hard disks.

Create two new hard disks in Virtual Box

From the Oracle VM VirtualBox Manager, click on your Virtual Box Image “OEL6”

Click on Storage in the right hand side panel.

Click on the Hard Disk Icon next to “Controller SATA” pointed to by the Red Arrow

Click on “Create New Disk”

Choose “VDI” for hard disk file type

Choose “Dynamically allocated” for allocation type

Choose the proper File Location and File size as 10GB

Follow the above procedure and add a second hard disk (this one 20Gb) as shown below.

Now you should have two additional hard disks in this Virtual Box Image.

I will be using the disk OEL6-u01 as the device for the /u01 (10Gb) file system and OEL6-u02 as the device for the /u02 (20Gb) file system.
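The same two disks can also be created and attached from the command line with VBoxManage; a sketch, where the .vdi file paths are placeholders and the controller name must match what your VM shows under Storage:

```shell
# Create the two dynamically allocated VDI disks (sizes in MB)
VBoxManage createhd --filename OEL6-u01.vdi --size 10240 --variant Standard
VBoxManage createhd --filename OEL6-u02.vdi --size 20480 --variant Standard

# Attach them to the OEL6 VM on the SATA controller, ports 1 and 2
VBoxManage storageattach OEL6 --storagectl "SATA" --port 1 --device 0 --type hdd --medium OEL6-u01.vdi
VBoxManage storageattach OEL6 --storagectl "SATA" --port 2 --device 0 --type hdd --medium OEL6-u02.vdi
```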

Now when you reboot the Linux virtual machine, OEL6-u01 will show up as /dev/sdb and OEL6-u02 will show up as /dev/sdc.

Partition, Format and Mount the hard disks

Once the devices are created and the VirtualBox Virtual machine has been rebooted, login as root.

Use the fdisk -l command, as shown above, to check that the disks /dev/sdb and /dev/sdc exist and are the correct sizes.

Partition the disks as shown above.

Format the disk /dev/sdb1 as shown above, then repeat the process for /dev/sdc1.

Add the entries shown above to /etc/fstab so that the filesystems are mounted every time you reboot the machine.
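Since the screenshots for these steps are not reproduced here, a sketch of the equivalent commands, assuming /dev/sdb and /dev/sdc as above and ext4 filesystems:

```shell
# Create one partition spanning each disk (at the fdisk prompts: n, p, 1, defaults, w)
fdisk /dev/sdb
fdisk /dev/sdc

# Format the new partitions
mkfs -t ext4 /dev/sdb1
mkfs -t ext4 /dev/sdc1

# Create the mount points and mount the filesystems
mkdir -p /u01 /u02
mount /dev/sdb1 /u01
mount /dev/sdc1 /u02

# /etc/fstab entries so the filesystems are mounted on every reboot:
# /dev/sdb1   /u01   ext4   defaults   1 2
# /dev/sdc1   /u02   ext4   defaults   1 2
```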

At this point, I shut down the virtual machine and switched it from using a "Bridged Network" to using a "Host Only" network.

Now we have an Oracle Linux 6 virtual machine that is ready for an Oracle Database 12c Release 1 installation.

My next post will walk through the steps to install a single instance Oracle 12c Release 1 Rdbms Environment.


Graph CPU usage on exadata using oswatcher files

On the oracle database machine, oswatcher is installed at setup time, both on the database nodes and the exadata cells. This utility collects linux operating system level statistics, which come in very handy when troubleshooting operating system level issues. The data is collected in text files. There is a Java based utility (OSWG) provided by oracle support to graph the contents of these files; however, that utility does not work on the oswatcher files generated on exadata.

Here is a python script that can graph the cpu usage from the mpstat information that oswatcher captures. It has been tested on new oswatcher files from an x3-2. You need to first install a python environment that has the "numpy" and "matplotlib" modules installed.

Install a Python Virtualenv.

If you create multiple applications using Python and end up using different versions, it is easier to maintain separate virtualenvs. You can create a python virtualenv as shown below (on ubuntu linux).

curl -O
tar -xzvf virtualenv-1.9.1.tar.gz
cd virtualenv-1.9.1
python virtualenv.py ../p273env2
. ../p273env2/bin/activate
pip install numpy
sudo apt-get install libfreetype6-dev
pip install matplotlib

Now that you have a python environment, with your required libraries, you can go ahead and execute the script as shown below.

The oswatcher files in /opt/oracle/oswatcher are .bz2 files, and there is one file per hour per day. Copy the mpstat .bz2 files into a directory and use bunzip2 to unzip them. In this example, let us say that the directory name is /u01/oswatcher/mpstat/tmp.
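A sketch of that preparation step; the archive subdirectory name under /opt/oracle/oswatcher is an assumption and may differ between oswatcher versions, so check your installation:

```shell
# Collect the compressed mpstat files into a working directory and expand them
mkdir -p /u01/oswatcher/mpstat/tmp
cp /opt/oracle/oswatcher/archive/oswmpstat/*.bz2 /u01/oswatcher/mpstat/tmp   # path is an assumption
bunzip2 /u01/oswatcher/mpstat/tmp/*.bz2
```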

You can now run the script as shown below

python  /u01/oswatcher/mpstat/tmp
python  /u01/oswatcher/mpstat/tmp '06/14/2013 05:00:00 AM' '06/14/2013 07:00:00 AM'

The first command will graph the cpu usage for the entire time range in all those files and the second command graphs the cpu information for the date and time range you have specified.

It creates a file in the current directory, named oswmpstat.png, which has the graph.

You can find the full script here.

You can find a sample output graph here.

Plotting AWR database metrics using R

In a previous post I showed how you can connect from R to the oracle database using the ROracle driver. In this post I will explain how we can run queries against the AWR history tables and gather data that can be plotted using ggplot.

When you install R on linux, as I outlined in the above post, you get an executable named Rscript. Rscript is a non-interactive variant of the R command, so you can run an R batch file from the linux shell (like running a bash shell script). I am using Rscript as the interpreter in my script (first line).
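For reference, the interpreter line at the top of such a script looks like this (assuming Rscript is installed in /usr/bin; adjust the path for your installation):

```shell
#!/usr/bin/Rscript
# The rest of the R script follows; after chmod +x plotstat.R,
# the script can be invoked directly from the shell as ./plotstat.R
```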

ggplot2 is an R library that can be used for plotting in R programs. There is native plotting capability in R, and there is another library named lattice. ggplot2 is much more robust and is based on the grammar of graphics. You have to install ggplot2 (install.packages("ggplot2")) in R before you can use it.



Process the command line arguments. This script expects 3 command line arguments; copy each argument to an R variable.

args <- commandArgs(TRUE)
l_dbid <- as.double(args[1])
l_bsnap <- as.double(args[2])
l_esnap <- as.double(args[3])

Connect to Oracle

drv <- dbDriver("Oracle")
con <- dbConnect(drv,username="system",password="manager",dbname="burl5vb1:1521/rk01")

Populate a data frame with the values you will need for bind variables in the query you will be submitting.

bind.data = data.frame(dbid = l_dbid, bsnap = l_bsnap, esnap = l_esnap)

Prepare and Execute the query

res <- dbSendQuery(con,"select dhss.instance_number,dhss.snap_id,dhs.end_interval_time et,
round(sum(decode(dhss.metric_name,'User Transaction Per Sec',dhss.average,0))) utps,
round(sum(decode(dhss.metric_name,'Average Active Sessions',dhss.average,0))) aas,
round(sum(decode(dhss.metric_name,'Host CPU Utilization (%)',dhss.average,0))) hcpu,
round(sum(decode(dhss.metric_name,'Buffer Cache Hit Ratio',dhss.average,0))) bchr,
round(sum(decode(dhss.metric_name,'Logical Reads Per Sec',dhss.average,0))) lr,
round(sum(decode(dhss.metric_name,'I/O Megabytes per Second',dhss.average,0))) iombps,
round(sum(decode(dhss.metric_name,'I/O Requests per Second',dhss.average,0))) iops,
round(sum(decode(dhss.metric_name,'Redo Generated Per Sec',dhss.average,0))) rg,
round(sum(decode(dhss.metric_name,'Temp Space Used',dhss.average,0))) ts,
round(sum(decode(dhss.metric_name,'Physical Write Total IO Requests Per Sec',dhss.average,0))) pw,
round(sum(decode(dhss.metric_name,'Physical Read Total IO Requests Per Sec',dhss.average,0))) pr
from dba_hist_sysmetric_summary dhss,dba_hist_snapshot dhs
where dhss.dbid = :1
and dhss.snap_id between :2 and :3
and dhss.metric_name in (
'User Transaction Per Sec',
'Average Active Sessions',
'Host CPU Utilization (%)',
'Buffer Cache Hit Ratio',
'Logical Reads Per Sec',
'I/O Megabytes per Second',
'I/O Requests per Second',
'Redo Generated Per Sec',
'Temp Space Used',
'Physical Write Total IO Requests Per Sec',
'Physical Read Total IO Requests Per Sec')
and dhss.dbid = dhs.dbid
and dhs.instance_number=1
and dhss.snap_id = dhs.snap_id
group by dhss.instance_number,dhss.snap_id,dhs.end_interval_time
order by 1,2",
data = bind.data)

Fetch the rows, and disconnect from the db.

data <- fetch(res)
dbDisconnect(con)

Open a pdf file to save the graphs to.
Generate the graphs using ggplot.
print the graphs to the pdf file
Close the pdf file.

In the ggplot function call, ET and INSTANCE_NUMBER represent the End Snap Time and Instance Number columns output from the query, and AAS, UTPS, HCPU, PW and PR represent the AverageActiveSessions, UserTransactionPerSecond, HostCpu, PhysicalWrites and PhysicalReads columns from the query.

pdf("plotstat.pdf", onefile = TRUE)
p1<-ggplot(data,aes(strptime(ET,format="%Y-%m-%d %H:%M:%S"),AAS,group=INSTANCE_NUMBER,color=INSTANCE_NUMBER))+geom_point()+geom_line()+ggtitle("Average Active Sessions")+labs(x="Time of Day",y="Average Active Sessions")
p2<-ggplot(data,aes(strptime(ET,format="%Y-%m-%d %H:%M:%S"),UTPS,group=INSTANCE_NUMBER,color=INSTANCE_NUMBER))+geom_point()+geom_line()+ggtitle("Transactions Per Second")+labs(x="Time of Day",y="Transactions Per Second")
p3<-ggplot(data,aes(strptime(ET,format="%Y-%m-%d %H:%M:%S"),HCPU,group=INSTANCE_NUMBER,color=INSTANCE_NUMBER))+geom_point()+geom_line()+ggtitle("CPU Usage")+labs(x="Time of Day",y="Cpu Usage")
p4<-ggplot(data,aes(strptime(ET,format="%Y-%m-%d %H:%M:%S"),PW,group=INSTANCE_NUMBER,color=INSTANCE_NUMBER))+geom_point()+geom_line()+ggtitle("Physical Writes")+labs(x="Time of Day",y="Physical Writes")
p5<-ggplot(data,aes(strptime(ET,format="%Y-%m-%d %H:%M:%S"),PR,group=INSTANCE_NUMBER,color=INSTANCE_NUMBER))+geom_point()+geom_line()+ggtitle("Physical Reads")+labs(x="Time of Day",y="Physical Reads")
print(p1); print(p2); print(p3); print(p4); print(p5)
dev.off()

You can run this script as follows from the Linux Command Line. The first argument is the dbid, the second argument is the begin snap id and the last argument is the end snap id.

./plotstat.R 220594996 5205 5217

You will then see a pdf document named plotstat.pdf in your directory that has 5 separate graphs in it. Click on the link below to see a sample file. This is plotting AWR data from a 4 node Oracle RAC database.


Click Here to download the whole script, plotstat.R

ggplot2 : Elegant Graphics for Data Analysis is a great book to learn about ggplot2.

Oracle database machine x3-2

The Oracle database machine gets a major makeover. As Larry Ellison phrased it in his Openworld 2012 keynote, "Thought that the x2-2 was fast? You Aint Seen Nothin Yet".

If you go to the product page, in the middle of the page, in the section titled "What's New", you can see an in-depth technical discussion of the changes incorporated in the x3-2.

So without further ado, let me explain what the changes are in the x3-2 compared to the x2-2.

Hardware Improvements

Faster CPUs / More Cores.

– The Oracle Database Machine x3-2 uses the Intel Xeon E5-2690 processors (2.9Ghz): 2 sockets, 8 cores each, for a total of 16 cores in each database node (the x2-2 had 12 cores per node). These are the Sandy Bridge processors (the x2-2 had the Intel Xeon Westmere processors), which have a new microarchitecture and are extremely fast (comparable in speed to the IBM Power7 CPUs).

So now in the full Rack of x3-2, the database machine has 128 CPU Cores (The x2-2 had 96 Cores).

– The CPUs on the exadata cells have been upgraded to the Intel Xeon E5-2630L (2.0Ghz) Sandy Bridge processors, with 6 cores each.

More Physical Memory (DRAM)

– The Oracle Database Machine x3-2 has 128Gb of DRAM memory per database server. This is expandable to 256Gb of memory. So in the full rack you can have up to 2048Gb (2Tb) of physical memory.

– The physical memory on the x3-2 exadata cells has been upgraded to 64Gbytes of RAM.

More 10GigE networking ports

– The 4 networking ports on the database server motherboard are now 1/10GbE. They are autosensing, and are copper only. The remaining 2 network ports are 10GbE and can be connected via fiber.

More Flash Cache.

– The x3-2 exadata storage servers now use the Sun F40 flash cards instead of the Sun F20 flash cards used in the x2-2. Each card is 400Gb, and there are 4 PCI-E flash cards in each cell, so you have 1600Gbytes of flash cache in each cell. In a full rack x3-2 you get 22.4Tb of flash cache (the x2-2 had 5Tb of flash cache in a full rack).

So what does this increased amount of flash mean in terms of performance?

On an x3-2 full rack, you can get
– 1.5 million database read iops from the flash cache.
– 1 million database write iops from the flash cache.
– 100Gbytes/sec of flash cache scan throughput.

New 1/8th Rack

A new configuration (in addition to the Full, Half & Quarter configurations) of a 1/8th rack has been announced, so customers can now buy a configuration smaller than the quarter rack. It is really a 1/4th rack with half the CPUs, half the flash cards and half the disks turned off. So the hardware price is lower and the software licensing costs are lower.

The other improvements include lower power consumption and improved cabling and airflow.

One notable change is that the x3-2 now does not have a KVM. This leaves 2U at the top of the rack, where customers can deploy their own switches for network connectivity.

The number of disks, the type of disks, and the disk capacities and speeds in the exadata x3-2 cells remain the same as in the x2-2 cells.

Software Improvements

Exadata Smart Flash Cache Write-Back

With the improved write speeds of the new PCI-E flash cards, the flash cache can now be used as a write-back cache. This means that as soon as the data is written to the flash cache, the oracle database considers the write complete (i.e. it does not have to wait till the data is written to the physical magnetic disk). This helps improve the performance of applications that are currently bottlenecked on database writes.

On the x2-2, random writes were written to the flash cache too; however, they had to be written to disk (or, strictly speaking, to the disk controller cache) before the write was acknowledged by the database as complete. With the write-back cache functionality in the x3-2, as soon as the write is persisted in the flash cache the database considers the write complete. The writes to disk only get done when the ESS software detects that new blocks need to be read from disk into the flash cache and there is no free space in the flash cache. At such times, the least frequently used data in the flash cache gets written to physical disk.

The smart flash cache algorithm makes sure that things like backups do not overwrite the entire cache.

The full rack x3-2 can do 1 million write iops to flash cache using this new functionality.

Reduced database brownout time during cell failure/removal.

In previous versions of the ESS software there could be up to 8 seconds of brownout time when a cell failed; this has now been reduced to sub-second.

Unbreakable Enterprise Kernel

– The database servers and Exadata storage servers on the x3-2 now use the Oracle Unbreakable Enterprise Kernel 1 (UEK1).

UEK1 had been the operating system kernel on the x2-8s for a while now. With the x3-2, we now use the UEK kernel on both the database and storage servers.


– DBFS now supported on Solaris and Sparc Super Cluster.

The above list of hardware and software changes covers just the highlights; it is not a complete list.

How to influence the execution plan without modifying the sql from the application

It is likely that all of us have encountered the following situation: your company runs a packaged application, you get some poorly performing sql, and the root cause happens to be a bad execution plan. You could battle statistics and see if you can get the optimizer to pick a better plan. One of the options is to put this sql through the sql tuning advisor, see if it comes up with a sql profile that improves the execution plan, and if it does, accept the profile.

Some of these performance issues could be critical, and require urgent resolution. In such urgent situations, you might catch yourself thinking, "I wish I could hint this query to get it to pick a better execution plan". However, this being a packaged application, your hands are tied.

This year at Oracle OpenWorld 2011, Maria Colgan and Mohamed Zait presented a session, "Oracle Database Optimizer: Tips for preventing suboptimal execution plans". In there, towards the end, was a gem which illustrated how to get a sql to pick a different execution plan (preferably generated by a modified version of the sql, hinted to pick a better plan), without actually modifying the application sql. This technique uses sql plan management.

This blog post reproduces the exact same method, with the exact same example they used, with a little more illustration of the execution plans, hopefully to benefit folks who have not used this procedure before and did not attend the openworld session.

The original sql

SQL> connect sh/sh

SQL> variable sup_id number;
SQL> exec :sup_id := 1;

select p.prod_name,sum(s.amount_sold) amt
from sales s,products p
where s.prod_id = p.prod_id
and p.supplier_id = :sup_id
group by p.prod_name

Envoy External 6X CD-ROM			    645586.12
Model SM26273 Black Ink Cartridge		    617732.28
Model K8822S Cordless Phone Battery		    582640.54
Bounce						    244595.65
Smash up Boxing 				    260436.75
Comic Book Heroes				     101214.6

The original Plan

Note the full table scan on the products table

SQL> select * from table (dbms_xplan.display_cursor(null,null,'TYPICAL'))
SQL_ID	fmjmws8askq3j, child number 0
select p.prod_name,sum(s.amount_sold) amt from sales s,products p where
s.prod_id = p.prod_id and p.supplier_id = :sup_id group by p.prod_name

Plan hash value: 504757596

| Id  | Operation		| Name	   | Rows  | Bytes | Cost (%CPU)| Time	   | Pstart| Pstop |
|   0 | SELECT STATEMENT	|	   |	   |	   |   587 (100)|	   |	   |	   |
|   1 |  HASH GROUP BY		|	   |	71 |  3550 |   587  (12)| 00:00:08 |	   |	   |
|*  2 |   HASH JOIN		|	   |	72 |  3600 |   586  (12)| 00:00:08 |	   |	   |
|   3 |    VIEW 		| VW_GBC_5 |	72 |  1224 |   583  (12)| 00:00:07 |	   |	   |
|   4 |     HASH GROUP BY	|	   |	72 |   648 |   583  (12)| 00:00:07 |	   |	   |
|   5 |      PARTITION RANGE ALL|	   |   918K|  8075K|   533   (3)| 00:00:07 |	 1 |	28 |
|   6 |       TABLE ACCESS FULL | SALES    |   918K|  8075K|   533   (3)| 00:00:07 |	 1 |	28 |
|*  7 |    TABLE ACCESS FULL	| PRODUCTS |	72 |  2376 |	 3   (0)| 00:00:01 |	   |	   |

Predicate Information (identified by operation id):

   2 - access("ITEM_1"="P"."PROD_ID")
   7 - filter("P"."SUPPLIER_ID"=:SUP_ID)

26 rows selected.

Create the plan baseline for this sql

Note that we are using the sql_id we got from the output of dbms_xplan in the previous step.

SQL> variable cnt number;
SQL> execute :cnt := dbms_spm.load_plans_from_cursor_cache(sql_id=>'fmjmws8askq3j');

PL/SQL procedure successfully completed.

Verify that the plan baseline was created by checking dba_sql_plan_baselines (Notice that the plan is enabled by default when you created it)

SQL> select sql_handle,sql_text,plan_name,enabled
  2  from
  3  dba_sql_plan_baselines
  4  where
  5  sql_text like '%select p.prod_name%'
  6  /

SQL_10ed3803a09c8fe1	       select p.prod_name,sum(s.amount_sold) amt     SQL_PLAN_11v9s0fh9t3z1c47b6be0 YES
			       from sales s,products p
			       where s.prod_i

Disable the original plan baseline from being used. (Since we’d be substituting this with a new hinted plan)

Note that we are using the plan_name and sql_handle that we got from the previous query.

SQL> exec :cnt := dbms_spm.alter_sql_plan_baseline(sql_handle =>'SQL_10ed3803a09c8fe1',-
					      plan_name => 'SQL_PLAN_11v9s0fh9t3z1c47b6be0',-
					      attribute_name => 'ENABLED',-
					      attribute_value => 'NO');

Check dba_sql_plan_baselines to ensure the baseline is now disabled.

  1  select sql_handle,sql_text,plan_name,enabled
  2  from
  3  dba_sql_plan_baselines
  4  where
  5* sql_text like '%select p.prod_name%'
SQL> /
SQL_10ed3803a09c8fe1	       select p.prod_name,sum(s.amount_sold) amt     SQL_PLAN_11v9s0fh9t3z1c47b6be0 NO
			       from sales s,products p
			       where s.prod_i

Rerun the sql with an Index Hint

Note that we are using an index hint (/*+ index(p) */) to illustrate the point that the execution plan has changed: it now picks an index as opposed to the full table scan in the original query. In this specific case, there is no index on supplier_id on the products table, so it picks the primary key on the table and does a full index scan. But you can see how hinting your queries with the proper index names can help your query.

SQL> select /*+ index(p) */ p.prod_name,sum(s.amount_sold) amt
from sales s,products p
where s.prod_id = p.prod_id
and p.supplier_id = :sup_id
group by p.prod_name
/   2    3    4    5    6
Envoy External 6X CD-ROM			    645586.12
Model SM26273 Black Ink Cartridge		    617732.28
Model K8822S Cordless Phone Battery		    582640.54
Bounce						    244595.65
Smash up Boxing 				    260436.75
Comic Book Heroes				     101214.6
Finding Fido					     78881.08

Check the new execution plan

Notice that the new plan uses the index products_pk on the table products.

SQL> select * from table (dbms_xplan.display_cursor(null,null,'TYPICAL'))
/  2
SQL_ID	gtdunv6qmpqqw, child number 0
select /*+ index(p) */ p.prod_name,sum(s.amount_sold) amt from sales
s,products p where s.prod_id = p.prod_id and p.supplier_id = :sup_id
group by p.prod_name

Plan hash value: 4089802669

| Id  | Operation		      | Name	    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT	      | 	    |	    |	    |	587 (100)|	    |	    |	    |
|   1 |  HASH GROUP BY		      | 	    |	 71 |  3550 |	587  (12)| 00:00:08 |	    |	    |
|*  2 |   HASH JOIN		      | 	    |	 72 |  3600 |	586  (12)| 00:00:08 |	    |	    |
|   3 |    VIEW 		      | VW_GBC_5    |	 72 |  1224 |	583  (12)| 00:00:07 |	    |	    |
|   4 |     HASH GROUP BY	      | 	    |	 72 |	648 |	583  (12)| 00:00:07 |	    |	    |
|   5 |      PARTITION RANGE ALL      | 	    |	918K|  8075K|	533   (3)| 00:00:07 |	  1 |	 28 |
|   6 |       TABLE ACCESS FULL       | SALES	    |	918K|  8075K|	533   (3)| 00:00:07 |	  1 |	 28 |
|*  7 |    TABLE ACCESS BY INDEX ROWID| PRODUCTS    |	 72 |  2376 |	  3   (0)| 00:00:01 |	    |	    |
|   8 |     INDEX FULL SCAN	      | PRODUCTS_PK |	 72 |	    |	  1   (0)| 00:00:01 |	    |	    |

Predicate Information (identified by operation id):

   2 - access("ITEM_1"="P"."PROD_ID")
   7 - filter("P"."SUPPLIER_ID"=:SUP_ID)

28 rows selected.

Switch the execution plan for the original, unhinted sql

From the dbms_xplan output above we know the new sql_id (gtdunv6qmpqqw) and the new plan_hash_value (4089802669) (For the plan that is using an index).

We can then use dbms_spm to associate this new execution plan, to the sql_handle we created, for the original un-hinted sql, in the sql plan baseline. We use the new sql_id and plan_hash_value, from our hinted plan and we associate it to the sql_handle from the original query.

Note that the sql_handle we are using here is the sql_handle for the original unhinted sql (we got this value in the step "Verify that the plan baseline was created").

SQL> execute :cnt := dbms_spm.load_plans_from_cursor_cache(sql_id => 'gtdunv6qmpqqw',-
						      plan_hash_value => 4089802669,-
						      sql_handle => 'SQL_10ed3803a09c8fe1');

PL/SQL procedure successfully completed.

Check that a new plan has been added to the baseline

Note that the new plan is enabled by default.

SQL> select sql_handle,sql_text,plan_name,enabled
  2  from
  3  dba_sql_plan_baselines
  4  where
  5  sql_text like '%select p.prod_name%'
  6  /

SQL_10ed3803a09c8fe1	       select p.prod_name,sum(s.amount_sold) amt     SQL_PLAN_11v9s0fh9t3z1c47b6be0 NO
			       from sales s,products p
			       where s.prod_i

SQL_10ed3803a09c8fe1	       select p.prod_name,sum(s.amount_sold) amt     SQL_PLAN_11v9s0fh9t3z1d20e849e YES
			       from sales s,products p
			       where s.prod_i

Re-Run the original sql

select p.prod_name,sum(s.amount_sold) amt
from sales s,products p
where s.prod_id = p.prod_id
and p.supplier_id = :sup_id
group by p.prod_name

Envoy External 6X CD-ROM			    645586.12
Model SM26273 Black Ink Cartridge		    617732.28
Model K8822S Cordless Phone Battery		    582640.54
Bounce						    244595.65
Smash up Boxing 				    260436.75
Comic Book Heroes				     101214.6

Check the new execution plan

You can see that the original unhinted sql statement is now using the plan hash value of the hinted query, and hence is using the primary key index on the products table (as opposed to the full table scan in the original plan).

SQL> select * from table (dbms_xplan.display_cursor(null,null,'TYPICAL'));
SQL_ID	fmjmws8askq3j, child number 1
select p.prod_name,sum(s.amount_sold) amt from sales s,products p where
s.prod_id = p.prod_id and p.supplier_id = :sup_id group by p.prod_name

Plan hash value: 4089802669

| Id  | Operation		      | Name	    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
|   0 | SELECT STATEMENT	      | 	    |	    |	    |	587 (100)|	    |	    |	    |
|   1 |  HASH GROUP BY		      | 	    |	 71 |  3550 |	587  (12)| 00:00:08 |	    |	    |
|*  2 |   HASH JOIN		      | 	    |	 72 |  3600 |	586  (12)| 00:00:08 |	    |	    |
|   3 |    VIEW 		      | VW_GBC_5    |	 72 |  1224 |	583  (12)| 00:00:07 |	    |	    |
|   4 |     HASH GROUP BY	      | 	    |	 72 |	648 |	583  (12)| 00:00:07 |	    |	    |
|   5 |      PARTITION RANGE ALL      | 	    |	918K|  8075K|	533   (3)| 00:00:07 |	  1 |	 28 |
|   6 |       TABLE ACCESS FULL       | SALES	    |	918K|  8075K|	533   (3)| 00:00:07 |	  1 |	 28 |
|*  7 |    TABLE ACCESS BY INDEX ROWID| PRODUCTS    |	 72 |  2376 |	  3   (0)| 00:00:01 |	    |	    |
|   8 |     INDEX FULL SCAN	      | PRODUCTS_PK |	 72 |	    |	  1   (0)| 00:00:01 |	    |	    |

Predicate Information (identified by operation id):

   2 - access("ITEM_1"="P"."PROD_ID")
   7 - filter("P"."SUPPLIER_ID"=:SUP_ID)

   - SQL plan baseline SQL_PLAN_11v9s0fh9t3z1d20e849e used for this statement

Installing cx_Oracle with Python on Ubuntu

cx_Oracle is a Python extension module that allows access to Oracle databases and conforms to the Python database API specification. Below are the steps I used to set up cx_Oracle with Python 2.6 on Ubuntu 10.04 (Lucid Lynx).

1) Install Oracle Database Enterprise Edition on Ubuntu (installing just the Oracle instant client works too)

2) Download cx_Oracle Source code from

3) Install python-dev on Ubuntu, or else you will get compile errors (like Python.h not found) when you try to install cx_Oracle

– sudo apt-get install python-dev

4) Login as root

5) export ORACLE_HOME=/u01/11gr2/db_1;export LD_LIBRARY_PATH=$ORACLE_HOME/lib:$LD_LIBRARY_PATH

6) cd <dir-where-cx-oracle-untarred>/cx_Oracle-5.0.4

7) python setup.py install

Once it is installed, you can run the sample program to make sure it works.
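A minimal smoke test, with hypothetical credentials and connect string (scott/tiger against localhost:1521/orcl); it only attempts a connection when the module actually imports, so it degrades gracefully on a machine without cx_Oracle:

```python
# Smoke test for a cx_Oracle install. The user, password, and connect
# string below are placeholders -- substitute your own.
try:
    import cx_Oracle
    have_cx_oracle = True
except ImportError:
    have_cx_oracle = False

if have_cx_oracle:
    # connect(user, password, dsn) per the DB-API style cx_Oracle exposes
    conn = cx_Oracle.connect("scott", "tiger", "localhost:1521/orcl")
    cur = conn.cursor()
    cur.execute("select 1 from dual")
    print(cur.fetchone())
    cur.close()
    conn.close()
else:
    print("cx_Oracle did not import; check python-dev, ORACLE_HOME and LD_LIBRARY_PATH")
```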

How to add a new virtual disk to a VirtualBox virtual machine

After I built an Oracle Enterprise Linux virtual machine using Oracle VirtualBox (open source desktop virtualization software), I wanted to increase capacity by adding a new virtual disk to the configuration. My original configuration used a single virtual disk, mounted as the / file system. Now I want to add a new file system, /u01, on a new virtual disk. Below are the steps I performed to accomplish this.

First create a new virtual disk using virtual box
Applications -> System Tools -> Oracle VM Virtual Box
File -> Virtual Media Manager
You will see the window below

Click on New to create a New disk. The New Disk creation wizard starts up.

Click on Next

Choose Dynamically expanding storage, Click Next

Choose the location and file name for the VirtualBox .vdi file. I chose the same directory where my first .vdi file was located (/u02/vimage/oel55). Also choose the size of the new disk you need. Click Next.

The summary screen displays information about the new disk you are configuring. Click Next.

Now add the new virtual disk to your operating system image

Applications -> System Tools -> Oracle VM Virtual Box

Click on the image that you want to add the virtual disk to.

On the right hand side click on Storage

Choose the SATA controller and click on the + button to add a new disk

You can edit the new disk it creates by clicking on the disk name and editing the disk's properties in the rightmost pane (use the properties to pick the new disk you created in the previous step, and give it a new name).

Now once you boot up your image, you will see a new disk at the operating system level. In my case the original disk was /dev/sda and the new disk showed up as /dev/sdb. I partitioned the disk, formatted it, mounted it, and added an entry to /etc/fstab so that the file system is mounted every time the image starts up.

fdisk /dev/sdb

mkfs -t ext3 /dev/sdb1

mkdir /u01

Add the following entry into /etc/fstab

/dev/sdb1 /u01 ext3 defaults 0 2

Then run "mount /u01" (or "mount -a") from the prompt and /u01 will be mounted; a bare "mount" with no arguments only lists what is already mounted.
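The /etc/fstab entry has six whitespace-separated fields: device, mount point, filesystem type, mount options, dump flag, and fsck pass number. A quick sketch that sanity-checks the field count of the entry above:

```shell
# The fstab line from above; fields are device, mount point, fs type,
# options, dump flag, fsck pass.
entry="/dev/sdb1 /u01 ext3 defaults 0 2"
set -- $entry          # word-split the line into positional parameters
echo "fields: $#"      # prints: fields: 6
```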

Tracing oracle parallel query sessions and creating a tkprof output

When you run queries with Oracle parallel query, Oracle spins up multiple parallel query processes to process the query. Each parallel query process gets its own database session, so when we turn on tracing for the session, Oracle creates multiple trace files in the udump directory. Here are the steps I went through to produce a single tkprof output from all those files for a query (or for anything you run in the same session).
Login to sqlplus from where you are going to run your parallel query.
Setup a client id for the session
exec dbms_session.set_identifier('px_test');
alter session set events='10046 trace name context forever, level 1';
Run your sql query (That uses parallel query)
Quit from sqlplus
Find all your trace files and move it to a different directory
Identify your user_dump_dest
sqlplus / as sysdba
SQL> show parameter user_dump_dest
Locate all your trace files (There will be one for the main session and then 1 each for all the parallel query processes used)
cd /u01/udump
mkdir tmp2
find . -name '*.trc' -mmin -5
The find command above lists all the trace files that have been modified within the last 5 minutes (change the -mmin value to the time window within which you want to list trace files).
move the files created by your session to the subdirectory named tmp2
Now cd tmp2
Remove all the files in tmp2 that are not your session's trace or one of the parallel query slaves' traces. This simplifies the trcsess command you need to run; otherwise you have to list all of your trace files by name in the trcsess command.
trcsess output=prog9.trc clientid=px_test *.trc
tkprof prog9.trc prog9.out sort=exeela sys=no
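The -mmin filter from the steps above can be tried safely against a scratch directory (GNU find and touch assumed):

```shell
# Demonstrate the -mmin filter on throwaway files: only the freshly
# touched file matches "modified within the last 5 minutes".
dir=$(mktemp -d)
touch "$dir/new.trc"                      # just modified
touch -d '10 minutes ago' "$dir/old.trc"  # too old to match
recent=$(find "$dir" -name '*.trc' -mmin -5)
echo "$recent"                            # prints only .../new.trc
rm -rf "$dir"
```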

Datapump export and import – parallel and compress

It's been a while since I wrote anything on my blog. Not because I am lazy, but because I've been doing a bunch of proofs of concept for various customers: Database Machine, Audit Vault, data masking, RAC, SecureFiles, etc. It's been loads of fun.

I wanted to write about a couple of neat things I came across.

Exporting from an oracle database in parallel

Imagine that you have a fairly large database and you want to export it onto two different devices in parallel (say you have two USB devices attached to the server and you want to leverage the write throughput of both simultaneously). You can do this in two steps:

  • Define 2 different oracle directories
    • Let us say, for example, that the drives you want to use are mounted at /u01/firstusb and /u01/secondusb
    • create directory exp1 as ‘/u01/firstusb’;
    • create directory exp2 as ‘/u01/secondusb’;
  • While exporting use the directories in the dumpfile keyword
    • expdp system/manager directory=exp1 dumpfile=exp1:exp_test_%U.dmp,exp2:exp_test_%U.dmp schemas=AAA,BBB,CCC,DDD parallel=8  logfile=exp.log

So, since you are using exp1:exp_test_%U.dmp,exp2:exp_test_%U.dmp and parallel=8, Data Pump creates 4 dump files each in exp1 and exp2, which point to /u01/firstusb and /u01/secondusb respectively.

Importing and Enabling Compression (OLTP or Exadata Hybrid Columnar Compression)

Let us say you want to export from a database that does not have compression turned on, and import into one with compression turned ON. Since the table was created with the NOCOMPRESS (default) keyword, expdp actually captures this and uses it in the "create table" statement it generates during the import. So the default is for the imported table to also be NOCOMPRESS.

If you only have a handful of tables you want to enable compression on, you can pre-create the table (and its indexes and such) using the COMPRESS FOR OLTP clause and then run the Data Pump import specifying the parameter table_exists_action=append

If you want to do it for all the tables in a tablespace:

  • Create the tablespace with compression enabled at the tablespace level.
  • Then while importing using datapump specify the transform=SEGMENT_ATTRIBUTES:n:table parameter.

This causes import to ignore the segment attributes for the table while creating it, so the table inherits the attributes specified at the tablespace level and is created with OLTP compression enabled.
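A sketch of the tablespace-level setup, with a hypothetical tablespace name and datafile path:

```sql
-- Hypothetical tablespace with OLTP compression as the default for
-- segments created in it; the name and datafile path are examples only.
create tablespace comp_data
  datafile '/u01/oradata/comp_data01.dbf' size 1g
  default compress for oltp;
```

Tables created here without explicit segment attributes, which is exactly what transform=SEGMENT_ATTRIBUTES:n:table produces, pick up the tablespace default and come in compressed.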

expdp system/manager directory=exp1 dumpfile=exp1:exp_test_%U.dmp,exp2:exp_test_%U.dmp \ schemas=AAA,BBB,CCC,DDD parallel=8 \