Notes on Oracle Parallel Query – Part 0

Oracle introduced “Parallel Query” in version 7.1 in 1994. A decade and a half later, it still trips up some unsuspecting customers, when they:

  • Turn parallelism on via a hint in all queries

  • Turn parallelism on at every table and index level

Doing this, coupled with high concurrent usage of the database, can lead to a lot of query server processes being spawned, resulting in very high CPU usage, a high load average, a high number of processes waiting on the CPU run queue and a high volume of I/O requests. Such situations may result in overall degraded performance. Below are some concepts to note when using parallel query. Sometimes too much of a good thing is indeed a bad thing.

Parallel query (and/or parallel DML) is a mechanism to leverage the CPU and I/O resources available on an SMP system by breaking a database operation (query, insert, update, delete, merge) into multiple small operations, executing those small operations concurrently, and thereby completing the database operation significantly faster than if it had executed serially. Database operations that include large table scans, large index scans or large joins usually benefit greatly from parallel query.

Oracle does not turn this feature on by default for all queries. There are specific steps to perform to turn this on as discussed in my previous post.

The total number of parallel query servers that can be running at any given time is determined by the initialization parameter parallel_max_servers. This parameter defaults to cpu_count * parallel_threads_per_cpu * (2 if pga_aggregate_target > 0, else 1) * 5.
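As a hedged illustration of that formula, assume a hypothetical server with cpu_count = 8, parallel_threads_per_cpu = 2 and a non-zero pga_aggregate_target:

```sql
-- Hypothetical values: cpu_count = 8, parallel_threads_per_cpu = 2,
-- pga_aggregate_target > 0 (so the multiplier is 2).
-- Default parallel_max_servers = 8 * 2 * 2 * 5 = 160

-- Check the actual values on your own system (from sqlplus):
SHOW PARAMETER parallel_max_servers

SELECT name, value
  FROM v$parameter
 WHERE name IN ('cpu_count', 'parallel_threads_per_cpu', 'pga_aggregate_target');
```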

Oracle’s recommendation is that you set parallel_max_servers to 2 * Degree of Parallelism (DOP) * Number of concurrent users.

The reason for the 2 * DOP in the recommendation is that, if the operation can use “Inter Operation” parallelism, it could end up using two sets of query server processes at the same time.

“Intra Operation” parallelism means that one operation is executed in parallel by multiple query server processes (e.g. a full table scan). “Inter Operation” parallelism means that the output of one parallel operation is pipelined to another parallel operation (e.g. when two tables are full table scanned and then hash joined, the first set of query servers could be scanning one table while the second set consumes those rows and builds the hash table). So there are circumstances where the operation ends up using two sets of query server processes. Hence the recommendation of 2 * DOP.
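The two-set situation can be sketched with a hinted join (the table and column names here are hypothetical); with a DOP of 4, the scanning set and the hash-join set together can use 2 * 4 = 8 query server processes:

```sql
-- Hypothetical tables sales and customers; with DOP 4 this statement
-- can use one set of 4 servers to scan while a second set of 4
-- consumes the rows and builds/probes the hash table.
SELECT /*+ PARALLEL(s 4) PARALLEL(c 4) USE_HASH(c) */
       c.cust_name, SUM(s.amount)
  FROM sales s, customers c
 WHERE s.cust_id = c.cust_id
 GROUP BY c.cust_name;
```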

You ideally want to keep CPU usage under 70%, with load averages equal to your number of CPUs and average run queues as close to 0 as possible. The idea is to work the server efficiently but not work it to death.

Keep in mind that on systems like Sun’s T2000 and T5240, each hardware thread shows up as a CPU in cpu_count.

You can control the use of parallel query servers by setting the initialization parameter parallel_adaptive_multiuser=true (it defaults to true). When enabled, this parameter causes Oracle to use an adaptive algorithm that reduces the degree of parallelism used by queries based on the current database load.

Another common mistake: for statements like insert /*+ append */ ….select….., users forget to enable parallel DML. Normally the select part of the insert gets parallelized (as long as a DOP is set), but the insert runs serially. Once you enable parallel DML, the insert is parallelized as well.

You have to explicitly issue the command ‘alter session enable parallel dml’ to enable parallel DML. Avoid turning parallel DML on for all sessions and all transactions; turn it on selectively for the sessions that need it. There are some restrictions on using parallel DML.
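A minimal sketch of the sequence (the table names and the DOP of 4 are hypothetical):

```sql
-- Without this statement, only the SELECT part below is parallelized.
ALTER SESSION ENABLE PARALLEL DML;

INSERT /*+ APPEND PARALLEL(t 4) */ INTO sales_history t
SELECT /*+ PARALLEL(s 4) */ *
  FROM sales s
 WHERE s.sale_date < DATE '2009-01-01';

-- After a parallel (direct-path) insert, the table cannot be queried
-- in the same session until you commit or roll back.
COMMIT;
```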

Consider increasing parallel_execution_message_size to 4k or 8k from the default of 2k for improved parallel execution performance.
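For example (a sketch, assuming an spfile is in use; the parameter is static, so the instance must be restarted for it to take effect):

```sql
ALTER SYSTEM SET parallel_execution_message_size = 8192 SCOPE = SPFILE;
```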

Determining the best degree of parallelism for tables is a tough problem. You need to consider the following factors:

  • What query response time does the end user require?
  • How many parallel query servers does it take to achieve that response time?
  • How many such large queries will execute concurrently on the server (consider both application-generated queries and ad hoc queries)?
  • How many CPUs does the system have?
  • How much I/O volume can the system handle?

Once you have the above information, you need to determine a degree of parallelism that gives a reasonable query response time when your average number of large queries execute concurrently, without clobbering your CPU, memory and I/O subsystems. (Alternatively, you could just determine the number of CPUs, amount of memory, I/O channels and physical disks you need to sustain the above workload, and go buy and use that hardware :-)).
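Once you have settled on a number, the degree of parallelism can be set at the table level (a sketch; the table name and the DOP of 4 are hypothetical):

```sql
ALTER TABLE sales PARALLEL 4;

-- Verify the setting:
SELECT table_name, degree FROM user_tables WHERE table_name = 'SALES';
```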

Also see my previous post on enabling and monitoring parallel query for more parallel query info.

ILM Cost savings calculator

This is a simple calculator (written in JavaScript) that can be used to estimate the potential cost savings (as a result of reducing your total storage costs) of using an Information Lifecycle Management (ILM) strategy to store and maintain your Oracle data.
When architecting your ILM strategy, you may decide to use the following different types of storage:

  • A high performance tier where all the important and frequently accessed data is stored (smaller, faster disks).
  • A low cost storage tier where the less frequently accessed data is stored (larger, slower ATA disks).
  • An online archive storage tier where data that is rarely or never accessed is stored (low cost ATA disks).

Once you decide on your classifications, you can partition your tables accordingly (placing the different tablespaces on the different types of storage) and then use the Oracle ILM Assistant to maintain the tables and their data.
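A minimal sketch of such a partitioning scheme (the table, tablespace names and date boundaries are all hypothetical; each tablespace would be created on the corresponding storage tier):

```sql
CREATE TABLE orders (
  order_id   NUMBER,
  order_date DATE,
  amount     NUMBER
)
PARTITION BY RANGE (order_date) (
  PARTITION p_archive VALUES LESS THAN (DATE '2007-01-01')
    TABLESPACE tbs_archive,   -- online archive tier
  PARTITION p_old     VALUES LESS THAN (DATE '2009-01-01')
    TABLESPACE tbs_lowcost,   -- low cost tier
  PARTITION p_current VALUES LESS THAN (MAXVALUE)
    TABLESPACE tbs_hiperf     -- high performance tier
);
```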

This calculator can be used to estimate the cost savings you might get by implementing such a strategy. (Remember, this is only a very rough first estimate. Use the cost per GB figures you have received from your hardware vendor as inputs for the different tiers of storage.)

Click Here to Access the ILM Calculator.

All comments and suggestions for corrections and improvements are welcome.

A look inside an Oracle DBA’s performance troubleshooting toolbox

From time to time, anyone who is an Oracle Database Administrator gets summoned to troubleshoot a “database performance” issue. Once he/she has properly qualified and defined the performance issue, he/she settles down at a laptop/desktop/netbook, fires up ssh, sqlplus and Enterprise Manager, and gets on their merry way trying to identify and rectify the root cause(s) of the performance issue (at the same time twittering and posting Facebook wall messages, and maybe logging a TAR for the issue too).

In this process we often resort to the numerous tools/concepts/techniques at our disposal. Over the years, I have had the privilege of using most of the tools/concepts/techniques outlined below. These are just some of the tools and techniques that get you the best results quickly, and I know the list is by no means exhaustive. Just wanted to give a shout out to all the DBAs out there who carry this enormous amount of information in their heads (and share a lot of it in blogs and in Collaborate, user group and OpenWorld presentations) and apply the appropriate tools to resolve Oracle performance issues.


– AWR (Automatic workload repository)
– ADDM (Automatic database diagnostic monitor)
– ASH (Active session history)
– Real Time Sql Monitoring
– Optimizer statistics (dbms_stats)
– Extended Statistics
– Time Model Statistics
– Histograms
– Explain plan (dbms_xplan,gather_plan_statistics)
– event 10046 trace
– Tkprof
– event 10053 trace
– Oracle diagnostics pack
– Oracle tuning pack
– Sql profiles
– Sql Plan Baselines
– Stored outlines
– Optimizer Hints
– Oracle Real Applications testing (Database Replay, Sql Performance Analyzer)
– Oracle Advanced Compression (Reduces Number of blocks read)
– Oracle parallel query
– Indexes
– Partitioning
– Index Organized tables
– Clusters
– SGA Sizing/configuration
– Initialization parameters
– Cursor sharing
– SQLTXPLAIN (Metalink script)
– TRCANLZR  (Metalink script)
– OLAP cubes
– Materialized views
– Automatic storage management (ASM)
– Server Results Cache
– OCI Consistent Client Cache
– Logical/Standby , Active Dataguard (Offload queries)
– Asynchronous I/O, Direct I/O
– Table/Index Rebuild
– Patches for performance bugs
– Database Non Idle Wait Events
– v$ views
– x$ tables
– SQL query structure/rewrite
– Sql tuning sets
– Star Schema Design
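As one example from the list above, a minimal sketch of enabling a 10046 trace for the current session and then formatting the resulting trace file with tkprof (the trace file name is hypothetical and will differ on your system):

```sql
-- Level 12 captures wait events and bind variable values.
ALTER SESSION SET EVENTS '10046 trace name context forever, level 12';

-- ... run the problem SQL here ...

ALTER SESSION SET EVENTS '10046 trace name context off';

-- Then, from the OS shell:
-- tkprof orcl_ora_12345.trc report.txt sys=no sort=exeela
```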

Simple Rails Program

Thought building applications using Microsoft Access was easy? Try building one in Rails; it is easy…

Install and configure ruby and rails as described in my earlier post.

First create a directory to hold your Rails code. Let us say /u01/ror

cd /u01/ror

Issue the following command to create the necessary rails directory structure

rails test

cd to the config directory

cd test/config

Edit the database connection properties file

vi database.yml

Modify the details for the development environment as follows (you can remove the test and production sections). Make sure you use oracle_enhanced as the adapter, and not just oracle.

development:
  adapter: oracle_enhanced
  database: RK01
  username: scott
  password: tiger

cd /u01/ror

Create the Model/View and Controller

script/generate scaffold Customer name:string contact:string phone:string email:string
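For reference, the scaffold generator creates (among other files) a migration roughly like the following sketch (Rails 2 style, using the column names from the command above; the timestamp prefix in the file name will differ):

```ruby
# db/migrate/20090101000000_create_customers.rb (sketch of generated code)
class CreateCustomers < ActiveRecord::Migration
  def self.up
    create_table :customers do |t|
      t.string :name
      t.string :contact
      t.string :phone
      t.string :email
      t.timestamps
    end
  end

  def self.down
    drop_table :customers
  end
end
```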

Create the table named Customers in the Oracle database

rake db:migrate

Start the web server

ruby script/server

Then you can access the application you just created by going to your browser and putting in the URL http://localhost:3000/customers.

This application can now view/add/modify/delete customer information.

“That was easy”

“Simply Rails 2” by Patrick Lenz is a great book for Rails beginners and includes Rails 2 information.

Installing Grid control with oracle database for the repository

Below is what I think is the quickest (and possibly the only supported) way to install Grid Control directly while using a separately installed Oracle database for the Grid Control management repository on Linux x86, instead of installing Grid Control with its default repository database and then upgrading everything through a long series of steps.

Typically you would install Grid Control, which installs a database as the management repository, then you would upgrade Enterprise Manager, and then eventually upgrade the database. Apart from the numerous steps in this method, you also lose more time, because the Enterprise Manager database is fully configured during the base install and then all of that configuration has to be updated again when you apply the patch set. In the method outlined below, all the configuration is done when you run the configuration scripts at the end, saving you precious time.

  • Install Oracle RDBMS
  • Upgrade Oracle RDBMS to
  • Create a listener
  • Create a Database
    • When you create/configure the database, make sure that you DO NOT choose the option to enable database control for the instance. (If you choose to enable database control, you will have to de-configure it.)
  • Configure the Initialization parameters
  • Install Oracle Grid Control, software only.
    • Follow the documentation subsection titled “Enterprise Manager Grid Control Using an Existing Database“.
    • Be extremely careful when you are configuring the em_using_existing_db.rsp file. When you are setting the value for the parameter s_reposPort=”1521″, remember to use the double quotes around the 1521, or else you will have issues with the installation.
  • Apply the patch set to the OMS.
    • In the section of the documentation you were following above, follow the instructions in “Step 7, Apply the patch set to OMS”.
  • Apply the patch set to the Agent on the management server.
    • Follow the instructions in “Step 9, Apply the patch set to Agent”.
  • Run the configuration scripts to configure Grid Control to use the already created database.
    • Follow the instructions in “Steps 10 and 11, Configure Enterprise Manager Grid Control by running the….

Instructions on how to upgrade this installation to a later Enterprise Manager release can be found in my followup post, Upgrading Enterprise Manager.

Some Useful Metalink Notes for Grid control Installation and Upgrades

Documentation Reference for Install and Upgrades

Installing Enterprise Manager using an 11g Database for the repository

Steps to upgrade or higher Repository to 11g

Install grid control on enterprise linux 4

Install grid control on enterprise linux 5

You can find samples of the rsp files below



Installing Ruby and Rails 2 on Ubuntu Lucid running Oracle 11g

Updated this post from Ubuntu Jaunty to Lucid.

Unless you have been living on one of the planets orbiting Alpha Centauri (or you have nothing to do with computers, in which case you would not be reading this post), you must have heard of Ruby on Rails.

Below are the steps I went through to get Ruby and Rails installed and configured on an Ubuntu Lucid (10.04) system.

Make sure that you have installed Oracle 11g or the Oracle 11g instant client on the Ubuntu server before you start.

Get the Ruby packages for Ubuntu

sudo apt-get install ruby  ruby-dev  libopenssl-ruby1.8 irb ri rdoc

sudo apt-get install sqlite3

sudo apt-get install libsqlite3-ruby libsqlite3-dev

Download, install and set up RubyGems

cd rubygems-1.3.6
sudo ruby setup.rb

sudo ln -s /usr/bin/gem1.8 /usr/local/bin/gem

Install Rails 2

sudo gem install rails

Download and install the ruby interface for oracle

Download ruby-oci8 from

tar -xzvf ruby-oci8-2.0.4.tar.gz

cd ruby-oci8-2.0.4/

make

sudo make install

Get the Enhanced ActiveRecord adapter for oracle


sudo gem install activerecord-oracle_enhanced-adapter-1.2.2.gem
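To quickly sanity-check the ruby-oci8 install (the connect string and credentials here are hypothetical), you can try something like:

```ruby
require 'oci8'

# Hypothetical credentials and connect string; adjust for your database.
conn = OCI8.new('scott', 'tiger', '//localhost:1521/RK01')
conn.exec('SELECT sysdate FROM dual') { |row| puts row }
conn.logoff
```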

You are all set to write and test your first Ruby program using the Rails framework.

Please see my followup post on writing a simple Rails program with 11g.