Shell script to generate AWR reports for a range of snapshots

We have been doing some performance testing in recent days. We were running a workload using an application and capturing AWR snapshots every 15 minutes on an 8-node RAC cluster. I needed to generate the AWR reports between each pair of snapshots for each of the nodes. Running awrrpt.sql for every 15-minute interval on each of the 8 nodes would have been a tedious task, so I wrote a bash shell script that generates those AWR reports.

It takes the database id, instance number, start snapshot id and end snapshot id as arguments (you can query dba_hist_snapshot to find the start and end snapshot IDs you want to use). The directory in which it generates the reports is hardcoded in the script. The script has to be run as the oracle user, and it logs in as sys to generate the reports (you can modify it if you want to run it as a different user). I have tested it only on Linux.
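For example, a query along these lines (run from SQL*Plus by a user with access to the DBA_HIST views) will list the snapshot IDs and their intervals; the dbid and instance_number substitution variables are just there to narrow the output down to the instance you care about:

select snap_id, instance_number, begin_interval_time, end_interval_time
from   dba_hist_snapshot
where  dbid = &dbid
and    instance_number = &instance_number
order  by snap_id;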

You can run it as ./genawrs.sh dbid instancenumber beginsnap endsnap to generate the reports. This means that you have to run it once for each instance you need the reports for. The script can be found below.

#!/bin/bash

if [ $# != 4 ]
then
   echo "Syntax: genawrs.sh dbid instanceId startsnapid endsnapid"
   exit 1
fi

l_dbid=$1
l_instid=$2
l_start_snapid=$3
let l_end_snapid=$4-1

# For each snapshot ID in the range:
#   set the next snapshot id to the current snapshot id + 1,
#   spool a log file,
#   log into sqlplus and call dbms_workload_repository.awr_report_text
#   to generate the AWR report.

for i in `seq $l_start_snapid $l_end_snapid`
do
   let l_next_snapid=$i+1
   l_awr_log_file="/awrs/out/awrrpt_${2}_${i}_${l_next_snapid}.log"

   sqlplus -s / as sysdba << EOC
set head off
set pages 0
set lines 132
set echo off
set feedback off
spool $l_awr_log_file
SELECT output
FROM   TABLE(dbms_workload_repository.awr_report_text
             ($l_dbid,$l_instid,$i,$l_next_snapid));
spool off
EOC
done
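Since the script generates reports for one instance at a time, on an 8-node cluster you can simply wrap it in a loop; a quick sketch, where the dbid and the snapshot range shown are made-up values:

# Generate the reports for instances 1 through 8 over the same snapshot range
for inst in `seq 1 8`
do
   ./genawrs.sh 1234567890 $inst 100 110
done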

11g RAC Installation Checklists

In my numerous engagements with customers, assisting them with Oracle RAC installations, I have found that providing the customer with a checklist of tasks to be performed before the installation, and reviewing it with their Database/Server/Storage/Network administrators, helps ensure a smooth RAC installation. The checklist only lists the tasks; it does not go into step-by-step detail on how to perform each task. For the details, I then point them to the Oracle Installation and Configuration Guides (links to which can be found in the checklist itself) and the relevant Metalink notes.

Below are links to the checklists that I use for 11g installations on Linux and Solaris. All pointers to errors and improvements are welcome.

Oracle Real Application Clusters, 11g Installation Checklist for Redhat Enterprise Linux 5 and Oracle Enterprise Linux 5

Oracle Real Application Clusters, 11g Installation Checklist for Solaris

RAC: how to determine interconnect speed

During a recent Oracle 11g RAC installation on Solaris, I ran into the following issue. After installing and configuring Oracle Clusterware, when we tried to create the ASM instance, it would only stay alive on one node of the cluster. The customer had configured the private interconnect as a 100BASE-T connection (as opposed to GigE). Once the customer reconfigured the interconnect as GigE, the ASM instance came up properly. Oracle recommends a GigE connection for your private interconnect.

Before starting your installation, you can check whether the interface you are using for the private interconnect is configured as a GigE connection.

On Redhat or Oracle Enterprise Linux

Install the ethtool rpm.

ethtool <interfacename> | grep Speed will give you the speed of the interface.

On Solaris

kstat <interfacename> | grep link_speed will give you the speed of the interface.
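If you want to check every node of the cluster in one pass, you can wrap the Linux command in a loop over ssh; a sketch, assuming the private interconnect is eth1 on every node, the hosts are named racnode1 through racnode8, and passwordless ssh is set up for the user running it:

for i in `seq 1 8`
do
   echo "racnode${i}:"
   ssh racnode$i "/sbin/ethtool eth1 | grep Speed"
done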

Controlling parallel query in 11g RAC

In a RAC environment, when you execute a parallel query, it is very likely that the parallel execution servers run on all nodes of the cluster. In releases prior to Oracle 11g, if you wanted to ensure that all the parallel execution servers for a single query ran on the same node (or a group of nodes), you had to set the initialization parameter parallel_instance_groups.

Starting with Oracle 11g, you can simply create services that are active only on certain nodes of the RAC cluster. When you log in to the database using such a service name, your parallel query will spawn parallel execution servers only on the nodes where the service is active. Oracle automatically adjusts the value of parallel_instance_groups (without you having to set it explicitly) based on the service name you used to connect. Simplifies our life.
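For example, with srvctl you can create a service that is preferred on only a couple of the instances and then connect through it; a sketch, assuming a database named orcl with instances orcl1 and orcl2, and made-up service and host names:

# Create a service that is preferred only on instances orcl1 and orcl2
srvctl add service -d orcl -s batch_px -r orcl1,orcl2
srvctl start service -d orcl -s batch_px

# Sessions that connect through this service will spawn their parallel
# execution servers only on the nodes where the service is active
sqlplus scott/tiger@racnode1-vip:1521/batch_px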

Using ASM and OCFS2 for database storage with Oracle RAC

Even though I always prefer to use ASM for all aspects of database storage (database files, flash recovery area, etc.) when using Oracle Real Application Clusters, there might be rare cases where you want to mix and match ASM and OCFS2 for the same RAC database. A specific case I came across was a customer whose backup software did not support direct backups from RMAN to tape (hence they needed to back up to a file system and then back that file system up to tape).

It is supported to have the Oracle database files on ASM and the flash recovery area on OCFS2, if you have a requirement to do so. DBCA supports the creation of such a database (where the database storage is on an ASM disk group and the flash recovery area is on OCFS2). In the DBCA screen where you choose the location of the flash recovery area, you will not be able to click the Browse button and pick an OCFS2 mount point (because by default it only shows ASM disk groups). However, you can type the OCFS2 mount point into the field, and DBCA accepts it and configures it correctly. DBCA will actually keep (as it does with ASM) one copy of the control file and the second member of each redo log group on the OCFS2 mount point.
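If you are setting this up outside of DBCA, pointing the flash recovery area at the OCFS2 mount is just the usual two parameters; a sketch, assuming /u02/fra is the OCFS2 mount point and the size shown is an arbitrary example:

alter system set db_recovery_file_dest_size = 200G scope=both sid='*';
alter system set db_recovery_file_dest = '/u02/fra' scope=both sid='*';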

I am not advocating that customers use this configuration, because of the obvious difference in performance characteristics between ASM and OCFS2. If you do want to use such a configuration, you should consider creating additional ASM disk groups to hold the copies of the control files and online redo logs (as opposed to keeping them on OCFS2).

RAC 10.2.0.4, ASM and DBCA

I was installing Oracle Real Application Clusters on Redhat Linux 4 (x86-64) and ran into an issue I had not come across in my previous installs.

I installed 10.2.0.1 (separate CRS_HOME, ASM_HOME and ORACLE_HOME) and patched it up to 10.2.0.4.

Then I created a listener from the ASM Home.

When I ran DBCA from the Oracle Home to create the RAC database, I got the following error:

DBCA could not startup the ASM instance configured on
this node. To proceed with the database creation using
ASM you need the ASM instance to be up and running.
Do you want to recreate the ASM instance on this node?
[Yes] [No]

According to Metalink note 550509.1, this is because DBCA looks for the listener.ora in the ORACLE_HOME and does not find it. The note is written for 11g, but the workaround seems to apply to 10.2.0.4 as well.

We ended up creating a soft link in ORACLE_HOME/network/admin pointing to the ASM_HOME/network/admin/listener.ora, and then DBCA worked just fine.
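For reference, the soft link itself is a one-liner; a sketch, assuming $ASM_HOME and $ORACLE_HOME are set to the respective homes:

# Make the listener.ora that lives in the ASM home visible from the Oracle home
ln -s $ASM_HOME/network/admin/listener.ora $ORACLE_HOME/network/admin/listener.ora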