Using ASM and OCFS2 for database storage with Oracle RAC

Even though I always prefer to use ASM for all aspects of database storage (database files, flash recovery area etc.) when using Oracle Real Application Clusters, there are rare cases where you may want to mix and match ASM and OCFS2 for the same RAC database. A specific case I came across was a customer whose backup software did not support direct backups from RMAN to tape (hence they needed to back up to a file system and then back up that file system to tape).

It is supported to have the Oracle database files on ASM and the flash recovery area on OCFS2, if you have a requirement to do so. DBCA supports the creation of such a database (where the database storage is on an ASM disk group and the flash recovery area is on OCFS2). In the DBCA screen where you choose the location of the flash recovery area, you will not be able to click on the <BROWSE> button and pick an OCFS2 mount point (because by default it only shows ASM disk groups). However, you can type the OCFS2 mount point into the field, and DBCA accepts it and configures it correctly. DBCA will actually keep (as it does with ASM) one copy of the control file and the second member of each redo log group on the OCFS2 mount point.
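If the database has already been created, you can also point the flash recovery area at the OCFS2 mount point yourself. A minimal sketch, assuming a hypothetical OCFS2 mount point /u02/ocfs2/fra and a made-up 100g quota:

SQL> alter system set db_recovery_file_dest_size = 100g scope=both sid='*';
SQL> alter system set db_recovery_file_dest = '/u02/ocfs2/fra' scope=both sid='*';

Note that db_recovery_file_dest_size has to be set before db_recovery_file_dest.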

I am not advocating that customers use this configuration, because of the obvious difference in performance characteristics between ASM and OCFS2. If one wants to use such a configuration, they should consider creating additional ASM disk groups to keep copies of the control files and online redo logs (as opposed to keeping them on OCFS2).

11g range partition addition script in Perl

Recently I wrote a Perl script that adds partitions to an existing table.
The script makes a few assumptions:
– The table exists and is range partitioned on a date column
– There is one partition per day
– There is a local (prefixed or non-prefixed) index on the table
– There is a partition (named tableshortname_default) that is the upper bound of the date range (this can be split into the partitions we need)
– The partition names are of the format tableshortname_mmddyyyy

The script takes the following arguments:
– The full table name
– The table short name (which becomes the prefix for the partition names)
– The number of partitions to create

The script does the following:
– Queries the data dictionary (user_tab_partitions) to find the highest partition in the table (i.e., the highest date for which a partition has been created)
– Uses the function Add_Delta_Days (which is part of the Perl module Date::Calc) to generate the future dates for which partitions need to be created
– Generates the partition creation statements (an example of the generated DDL is shown below) and executes them
– Relies on the fact that when you split a table partition, Oracle automatically creates the new local index partitions too
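For illustration, here is the shape of the DDL the script generates (the table name SALES and the prefix SLS are made up for this example):

alter table SALES split partition SLS_default
at (TO_DATE('17-11-2008', 'DD-MM-YYYY'))
into (partition SLS_11172008, partition SLS_default);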

The script has been tested against an Oracle 11g database.

#!/usr/bin/perl
#There are three arguments to this script
#The first argument is the tablename
#The second argument is the tableshortname used for prefixing the partition names
#The third argument is the number of days for which you want to create new partitions

#This program
#   Assumes that the table is partitioned on date with one partition per day
#   Assumes that the highest partition is the one defined with MAXVALUE
#   Assumes that the MAXVALUE partition is called tableshortname_default
#   Takes the tablename
#   Queries the data dictionary to find the partition that has the highest date
#   Creates alter table split partition statements to create the additional partitions (determined by the 3rd argument)
#When the partition is split the local index partition automatically gets split
#But you will have to rename the index partition (so that it gets a good prefix) (the code for that has to be added to this script)

use strict;
use warnings;
use DBI;
use Date::Calc qw(Add_Delta_Days);

if ( @ARGV < 3 )
{
print "Syntax : crpartitions.pl tablename tableshortname numberofpartitions \n";
exit;
}

# Connect to the database
# LongReadLen and LongTruncOk are required to specify the buffer size
# for the high_value column from user_tab_partitions, which is a long datatype

my $db = DBI->connect( "dbi:Oracle:OracleSid", "username", "password" )
|| die( $DBI::errstr . "\n" );
$db->{AutoCommit}    = 0;
$db->{RaiseError}    = 1;
$db->{ora_check_sql} = 0;
$db->{RowCacheSize}  = 16;
$db->{LongReadLen}   = 5242880;
$db->{LongTruncOk}   = 0;

#Setup tablenames for target table
#Setup a tablename and tableshortname in upper case, to be used in queries

my $l_tablename=shift;
my $l_tableshortname=shift;
my $l_numpartitions=shift;

my $l_tablename_uc=uc($l_tablename);
my $l_tableshortname_uc=uc($l_tableshortname);

# Query user_tab_partitions to determine the highest partition
# And the date that sets the limit for rows that are in that partition

my $l_highvalue = $db->selectrow_array("
select high_value from user_tab_partitions
where table_name = '$l_tablename_uc' and
partition_position = (select max(partition_position)-1 from user_tab_partitions
where table_name = '$l_tablename_uc')
");

# The output of the query is a string
# eg: TO_DATE(' 2008-11-16 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
# Parse the string to get the full date, year, month and day

my $l_highdate = substr $l_highvalue,10,19;
my $l_year     = substr $l_highdate,0,4;
my $l_month    = substr $l_highdate,5,2;
my $l_day      = substr $l_highdate,8,2;

# Loop through each new partition you need to create
# Use the Add_Delta_Days function from the Date::Calc perl library
# To add days to the start date to get the upper limit of dates that go into each partition
# Prepare the alter table split partition statement and then invoke the function createPartition
# to execute the statement

for (my $i=1 ; $i <= $l_numpartitions; $i++) {
my ($n_year,$n_month,$n_day)=Add_Delta_Days($l_year,$l_month,$l_day,$i);

# Zero-pad the month and day so the partition names follow the mmddyyyy format
my $n_month_padded = sprintf("%02d",$n_month);
my $n_day_padded   = sprintf("%02d",$n_day);

my $l_alterstmt="alter table ".$l_tablename_uc." split partition ".$l_tableshortname_uc."_default at (TO_DATE('".$n_day_padded."-".$n_month_padded."-".$n_year."', 'DD-MM-YYYY')) into
(PARTITION ".$l_tableshortname_uc."_".$n_month_padded.$n_day_padded.$n_year.",partition ".$l_tableshortname_uc."_default)";

&createPartition($db,$l_alterstmt);

}

# Takes the statement passed in as argument and executes it
sub createPartition {

my $ldb = shift;
my $p_alterstmt = shift;
# prepare and execute the alter table split partition statement
my $sthd = $ldb->prepare($p_alterstmt)
|| die("could not prepare alter statement ". $ldb->errstr . "\n" );

$sthd->execute()
|| die("could not execute alter statement ". $sthd->errstr . "\n" );

}

Determining the network bandwidth required for a Dataguard physical standby implementation

References

Network bandwidth Implications of Oracle Dataguard

Dataguard Redo Transport & network configuration

Determine the redo generation rate

– You can query dba_hist_sysstat to find out your redo generation rate in bytes.
– You can choose to use either your average redo generation per hour or your peak redo generation per hour (I would recommend that you size for the peak).
– Let us say that you determined that the redo generation rate, in bytes PER DAY, happened to be RedoBytes.
– You have to add a 30% overhead for TCP. So RedoBytesPlusOverhead = RedoBytes * 1.3

Convert the redo generation rate to Mbps (megabits per second)

– Mbps = (RedoBytesPlusOverhead * 8) / ((24*60*60) * (1024*1024))
– This is the theoretical minimum bandwidth that you are going to require (see the worked example below).
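As a worked example (the 100 GB/day figure is made up for illustration), you can do the arithmetic straight from SQL*Plus:

SQL> select round(((100*1024*1024*1024) * 1.3 * 8) / ((24*60*60) * (1024*1024)), 2) as mbps from dual;

      MBPS
----------
     12.33

So a database generating 100 GB of redo per day needs a theoretical minimum of roughly 12.3 Mbps.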

Other important considerations

– Network latency is a huge factor in the amount of redo you will be able to transport from your primary site to your standby site. This value is unique to your network, so if you have a high-latency network you might not be able to sustain the required rate of redo shipping.
– Usually the wide area network between the primary site and standby site is used by more than just Dataguard (e.g., e-mail etc.), so those bandwidth requirements have to be factored in.
– The above two points are why customers should not rely too much on theoretical calculations; instead, deploy Dataguard and measure the actual redo generation and redo transport performance statistics.
– If you do not deploy a network that can ship redo at a rate of 45 Mbps, all that means is that at times your redo shipping will fall behind (i.e., your standby site will lag the primary site), but Dataguard still works. In a lot of cases this is acceptable (based on the customer's recovery point objective and recovery time objective).
– There are network tuning best practices outlined in "Dataguard Redo Transport & network configuration" that help you optimize the redo transport mechanism and the network. These have to be followed to achieve the best possible network performance.
– There are other techniques, like network compression (hardware compression using WAN compression devices, or actual software compression in Dataguard 11g), which enable you to reduce the network bandwidth requirements (see the sketch below).
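As an aside, in 11g the software compression mentioned above is enabled per destination with the COMPRESSION attribute (it requires the Advanced Compression option; the service and DB_UNIQUE_NAME values below are placeholders for your standby):

SQL> alter system set log_archive_dest_2 = 'SERVICE=RK01DG ASYNC COMPRESSION=ENABLE DB_UNIQUE_NAME=RK01DG VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE)' scope=both;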

Scripts

– You can run the following script to extract redo generation information from dba_hist_sysstat.

set pages 0
set head off
set lines 132
set colsep ~
col curval format 9999999999999999999999
col prevval format 9999999999999999999999
Select
sn.snap_id
,cur_stat.snap_id
,prev_stat.snap_id
,to_char(sn.begin_interval_time,'DD-MON-YY HH24')
,to_char(sn.end_interval_time,'DD-MON-YY HH24')
,cur_stat.value curval
,prev_stat.value prevval
,(cur_stat.value-prev_stat.value) RedoGen
from dba_hist_snapshot sn,
(select snap_id,value from dba_hist_sysstat
where stat_name = 'redo size') cur_stat
,(select snap_id,value from dba_hist_sysstat where
stat_name = 'redo size') prev_stat
Where sn.snap_id = cur_stat.snap_id
and cur_stat.snap_id = prev_stat.snap_id + 1 order by 1;

– Spool the contents into a file RedoInfo.dat

– Create a table in the Oracle database named RedoInfo

create table redoinfo (
inid    number,
bdate    date,
edate    date,
totredo    number
);

– Use SQL*Loader to load the contents of the spool file into redoinfo (at this point some would ask, "why don't I just do a create table as select in the same database?"; my assumption is that you probably don't want to be creating these temp tables in a production environment).

load data
infile 'RedoInfo.dat'
append into table RedoInfo
fields terminated by "~" optionally enclosed by '"'
(
field1 filler,
field2 filler,
inid,
bdate  Date "DD-MON-YY HH24",
edate  Date "DD-MON-YY HH24",
field3 filler,
field4 filler,
totredo
)

– Then you can run all kinds of queries on this table to learn the different characteristics of your redo generation.
– The query below gives you the total redo generation per day and the Mbps.

select to_char(edate,'dd-mon-yy') Day,sum(totredo)/(1024*1024) TotRedoMb
,(sum(totredo)*1.3)/(1024*1024) RedoPlusOvrHd,((sum(totredo)*1.3)*8)/(1024*1024) Mbits
,round(((sum(totredo)*1.3)*8)/(24*60*60*1024*1024)) Mbps FROM RedoInfo
group by to_char(edate,'dd-mon-yy')
order by 1;

Installing Grid Control 10.2.0.4 with Oracle Database 10.2.0.4 for the repository

Below is what I think is the quickest (and possibly the only supported) set of steps to install Grid Control 10.2.0.4 directly (instead of installing 10.2.0.2, finishing the Grid Control install, and upgrading to 10.2.0.4), while using Oracle Database version 10.2.0.4 for the Grid Control management repository on Linux x86.

Typically you would install Grid Control 10.2.0.1, which installs a 10.1.0.4 database as the management repository; then you would upgrade Enterprise Manager to 10.2.0.4 and eventually upgrade the database to 10.2.0.4. Apart from the numerous steps, this method also takes more time because the Enterprise Manager repository is fully configured when you install 10.2.0.1, and then it takes even more time updating all this configuration when you install the 10.2.0.4 patch. In the method outlined below all the configuration is done when you run the configuration script at the end, saving you precious time.

  • Install Oracle RDBMS 10.2.0.1
  • Upgrade Oracle RDBMS to 10.2.0.4
  • Create a listener
  • Create a Database
    • When you create/configure the database, make sure that you DO NOT choose the option to enable Database Control for the instance. (If you choose to enable Database Control, you will have to de-configure it.)
  • Configure the Initialization parameters
  • Install Oracle Grid Control 10.2.0.1, software only.
    • Follow the documentation subsection titled "Enterprise Manager Grid Control Using an Existing Database".
    • Be extremely careful when you are configuring the em_using_existing_db.rsp file. When you are setting the value for the parameter s_reposPort="1521", remember to use the double quotes around the 1521, or else you will have issues with the installation.
  • Install Oracle Grid Control 10.2.0.4, software only.
    • In the section of the documentation you were following above, follow the instructions in "Step 7: Apply the 10.2.0.4 patch set to OMS".
  • Apply the 10.2.0.4 patch set to the Agent on the management server
    • Follow the instructions in "Step 9: Apply the 10.2.0.4 patch set to Agent".
  • Run the configuration scripts to configure grid control to use the already created 10.2.0.4 database.
    • Follow the instructions in "Steps 10 and 11: Configure Enterprise Manager Grid Control by running the ConfigureGc.pl…".

Instructions on how to upgrade this installation to Enterprise Manager 10.2.0.5 can be found in my follow-up post, Upgrading Enterprise Manager to 10.2.0.5.

Some Useful Metalink Notes for Grid Control Installation and Upgrades

Documentation Reference for 10.2.0.5 Install and Upgrades

Installing Enterprise Manager 10.2.0.4 using an 11g Database for the repository

Steps to upgrade 10.2.0.2 or higher Repository to 11g

Install grid control 10.2.0.5 on enterprise linux 4

Install grid control 10.2.0.5 on enterprise linux 5

You can find samples of the rsp files below

em_using_existing_db.rsp

patchsetoms.rsp

Grid control agent secure

I recently had some trouble installing and configuring 10.2.0.4 Grid Control. I was installing Grid Control 10.2.0.4 using a pre-existing 10.2.0.4 database. I somehow messed up something in the automated response files, and for the life of me I could not figure out the password to use for securing the agent (it was definitely not the password I thought I had used in the response file).

The security best practice is to leave the management server secured. The procedure below is just ad hoc, so that you can unsecure it, continue working, and later reset your password.

So I had to do the following to unsecure the management server.

First figure out if the OMS is secured.

cd <OMS_ORACLE_HOME>/bin
./emctl status oms -secure

If the output of the command shows HTTPS, then you know that the OMS is secured.

In order to fix the agent

First stop the agent

On the Oracle Management server

cd <OMS_ORACLE_HOME>/opmn/bin
./opmnctl stopall
cd <OMS_ORACLE_HOME>/bin
./emctl secure unlock
cd <OMS_ORACLE_HOME>/opmn/bin
./opmnctl startall

Once this is done you can unsecure the agent

cd <AGENT_HOME>/bin
./emctl stop agent
./emctl unsecure agent
./emctl start agent

For further information (and to understand how to secure the OMS) please refer to the following Metalink notes.

How To Secure / Unsecure The Grid Control Components (Agent / OMS) In 10g
Problem: Agent Upload Fails: OMS VERSION NOT CHECKED YET

Real Application Testing for 9i and 10g

The Real Application Testing database option that Oracle introduced in Oracle 11g has now been backported for use with Oracle 9i and 10g. Real Application Testing has two main components:

  • Database Replay
  • SQL Performance Analyzer

The Database Replay component helps customers capture actual real-life workloads and gives them the ability to replay them on test or development systems to gauge the effect of changes they are about to make (e.g., changing an initialization parameter, upgrading the database version, moving to new storage, etc.).

The SQL Performance Analyzer component helps customers understand the effect of changes on SQL statements. This component allows customers to capture SQL statements and execution plans from a system, re-execute the SQL statements on a test system (where changes have been made), and then compare the performance of the SQL statements before and after the change. If any SQL statement regressions are found, the customer can then use the SQL Tuning Advisor to tune those statements.

Based on popular demand, Oracle has backported the components of Real Application Testing to earlier releases.

The "Capture" portion of Database Replay has been backported for 9i and 10g, meaning workloads can be captured from 9i or 10g databases and replayed on 11g databases. Please note that workloads cannot be replayed on 9i and 10g.
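On a 10.2.0.4 database with the capture backport patch installed, a capture is started with the dbms_workload_capture package. A minimal sketch, assuming a directory object named CAPTURE_DIR already exists (the capture name and the 10-minute duration are made up):

BEGIN
DBMS_WORKLOAD_CAPTURE.START_CAPTURE(
name => 'pre_upgrade_capture',
dir => 'CAPTURE_DIR',
duration => 600);
END;
/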

The SQL Performance Analyzer component has been backported and made available for use by 9i and 10gR1 customers upgrading to 10gR2 or higher. So SQL statements can be captured from 9i and 10gR1, re-run on 10gR2 and 11g, and then the performance of both executions can be compared to identify regressions.

In any case where your target database is a 10g database, an intermediate 11g database is required so that the Database Replay and SQL Performance Analyzer tasks can be executed against the 10gR2 database.

For details regarding using Real Application Testing with pre-11g databases, check the following documents.

Technical note: Real application testing for earlier releases

Technical white paper: Testing the sql performance impact of an oracle 9i to oracle 10gR2 upgrade with SPA

Real Application testing for earlier releases (Metalink note)

Licensing Details

10g & 11g Sql tuning sets

SQL tuning sets are a basic building block for many Oracle tuning features built into the Oracle 10g and 11g databases. A SQL tuning set captures one or more SQL statements issued by database users, along with their corresponding execution statistics (think: some of the statistics available in v$sqlarea), execution plans and row source statistics (think: some of the information from v$sql_plan etc.), and execution context (think: bind variables etc.).

SQL tuning sets are used as inputs for the SQL Tuning Advisor. SQL tuning sets can be moved between databases. This comes in handy when you want to tune SQL statements captured from a production environment in a development environment (so that you do not do development activities on production). SQL tuning sets are also used by the SQL Performance Analyzer component of Real Application Testing, and by the SQL plan stability feature.

SQL tuning sets can be created from the following four sources:
– The database cursor cache (v$sqlarea)
– The AWR (automatic workload repository)
– An existing SQL tuning set
– A SQL trace file (generated using the 10046 database event)

Typically you would perform the following operations on a SQL tuning set:
– Create a SQL tuning set
– Add SQL statements to the SQL tuning set
– List the contents of a SQL tuning set
– Modify a SQL tuning set
– Transport a SQL tuning set
– Drop a SQL tuning set

Creating a sql tuning set

You can use the dbms_sqltune.create_sqlset procedure to create a new SQL tuning set.
It takes three arguments:

DBMS_SQLTUNE.CREATE_SQLSET (
sqlset_name IN VARCHAR2,
description IN VARCHAR2 := NULL,
sqlset_owner IN VARCHAR2 := NULL);

Typically you can just specify the sqlset_name and a description that lets you know why you created the SQL tuning set in the first place.
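For example (the set name my_first_sqlset is reused in the transport example further down):

BEGIN
DBMS_SQLTUNE.CREATE_SQLSET(
sqlset_name => 'my_first_sqlset',
description => 'high load statements from the swingbench run');
END;
/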

I then went ahead and used Swingbench to run some transactions on the database for 10 minutes. This generates numerous SQL statements in the shared pool.

Adding sql statements to a sql tuning set

Once we have run a load on the database, there will be SQL statements in the shared pool that we want to analyze.
You can use the dbms_sqltune.load_sqlset procedure to populate the SQL tuning set with SQL statements.

DBMS_SQLTUNE.LOAD_SQLSET (
sqlset_name IN VARCHAR2,
populate_cursor IN sqlset_cursor,
load_option IN VARCHAR2 := 'INSERT',
update_option IN VARCHAR2 := 'REPLACE',
update_condition IN VARCHAR2 := NULL,
update_attributes IN VARCHAR2 := NULL,
ignore_null IN BOOLEAN := TRUE,
commit_rows IN POSITIVE := NULL,
sqlset_owner IN VARCHAR2 := NULL);

For example, you might want to populate the SQL tuning set with statements that executed more than 5 times and had buffer_gets greater than 100000, as in the sketch below.
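A minimal sketch of that load (the filter string is the only part specific to this example; the remaining select_cursor_cache parameters are left at their defaults):

DECLARE
l_cursor DBMS_SQLTUNE.SQLSET_CURSOR;
BEGIN
OPEN l_cursor FOR
SELECT VALUE(p)
FROM TABLE(DBMS_SQLTUNE.SELECT_CURSOR_CACHE(
'executions > 5 and buffer_gets > 100000')) p;
DBMS_SQLTUNE.LOAD_SQLSET(
sqlset_name => 'my_first_sqlset',
populate_cursor => l_cursor);
END;
/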

As you can see, we used dbms_sqltune.select_cursor_cache to find the high-resource-usage SQL statements in the shared pool. You have the following four different methods to get SQL statements and populate a SQL tuning set:

dbms_sqltune.select_cursor_cache
dbms_sqltune.select_sql_trace
dbms_sqltune.select_sql_set
dbms_sqltune.select_sql_workload_repository

All the above table functions return a table of the object type sqlset_row.

List the contents of a sql tuning set

You can list the statements that have been included in the SQL tuning set by invoking the dbms_sqltune.select_sqlset table function, as shown below.
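For example:

SELECT sql_id, sql_text
FROM TABLE(DBMS_SQLTUNE.SELECT_SQLSET('my_first_sqlset'));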

Modify sql tuning sets

You can delete statements that have been included in the SQL tuning set by invoking dbms_sqltune.delete_sqlset.

You can update attributes of the SQL tuning set by invoking dbms_sqltune.update_sqlset. A minimal sketch of a delete follows.
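The basic_filter below is a made-up example; it removes the infrequently executed statements from the set:

BEGIN
DBMS_SQLTUNE.DELETE_SQLSET(
sqlset_name => 'my_first_sqlset',
basic_filter => 'executions < 5');
END;
/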

Transporting a sql tuning set

If you need to move a SQL tuning set from one database to another, you can follow the procedure below.

First create the SQL tuning set staging table (this is the table where we temporarily store the information regarding the SQL tuning set, and then use exp/imp to move it to a different database):

dbms_sqltune.create_stgtab_sqlset
BEGIN
DBMS_SQLTUNE.CREATE_STGTAB_SQLSET( table_name => 'my_first_stage_tab' );
END;
/

Copy the sql tuning set into the staging table

dbms_sqltune.pack_stgtab_sqlset
BEGIN
DBMS_SQLTUNE.PACK_STGTAB_SQLSET(
sqlset_name => 'my_first_sqlset',
staging_table_name => 'my_first_stage_tab');
END;
/

Now you can exp/imp this staging table from production to dev or test.
Once the staging table is in the destination database, you have to move the sql tuning set from the staging table into the data dictionary.

dbms_sqltune.unpack_stgtab_sqlset

BEGIN
DBMS_SQLTUNE.UNPACK_STGTAB_SQLSET(
sqlset_name => '%',
replace => TRUE,
staging_table_name => 'my_first_stage_tab');
END;
/

You can also use enterprise manager to perform all the above operations.

Installing Ruby and Rails 2 on Ubuntu Lucid running Oracle 11g

Updated this post from Ubuntu Jaunty to Lucid.

Unless you have been living on one of the planets orbiting Alpha Centauri (or you have nothing to do with computers, in which case you would not be reading this post), you must have heard of Ruby on Rails.

Below are the steps I went through to get Ruby and Rails installed and configured on an Ubuntu Lucid (10.04) system.

Make sure that you have installed Oracle 11g or the Oracle 11g instant client on the Ubuntu server, before you start.

Get the Ruby packages for Ubuntu

sudo apt-get install ruby  ruby-dev  libopenssl-ruby1.8 irb ri rdoc

sudo apt-get install sqlite3

sudo apt-get install libsqlite3-ruby libsqlite3-dev

Download, install and set up RubyGems

wget  http://production.cf.rubygems.org/rubygems/rubygems-1.3.6.zip
unzip rubygems-1.3.6.zip
cd rubygems-1.3.6
sudo ruby setup.rb

sudo ln -s /usr/bin/gem1.8 /usr/local/bin/gem

Install Rails 2

sudo gem install rails

Download and install the Ruby interface for Oracle

Download ruby-oci8 from http://rubyforge.org/frs/?group_id=256

http://rubyforge.org/frs/download.php/69454/ruby-oci8-2.0.4.tar.gz

tar -xzvf ruby-oci8-2.0.4.tar.gz

cd ruby-oci8-2.0.4/

make

sudo make install

Get the Enhanced ActiveRecord adapter for oracle

wget http://rubyforge.org/frs/download.php/64456/activerecord-oracle_enhanced-adapter-1.2.2.gem

sudo gem install activerecord-oracle_enhanced-adapter-1.2.2.gem

You are all set to write and test your first Ruby program using the Rails framework.

Please see my follow-up post on writing a simple Rails program with 11g.

11g Snapshot Standby

Oracle 11g provides the "Snapshot Standby" database feature so that customers can leverage their investments in standby database systems. With the Snapshot Standby feature, customers can temporarily open up the standby database and run their tests on that system. For example, a customer might want to test an application upgrade that alters tables or inserts/updates/deletes data.

When you convert a physical standby database to a snapshot standby in 11g, the database automatically creates a guaranteed restore point (obviously, "Flashback Database" has to be enabled in order to do this). Once you are done with your testing, the database gets automatically flashed back to this guaranteed restore point and the log apply starts again.

A point to note with the snapshot standby database is that, in 11g, the log transport continues to work. So all the logs are still received on the standby database and gap resolution continues to work (in case of a primary failure, the logs just need to be applied on the standby). In 10g you could open up the standby database and test on it, but the log transport would stop.

Let us look at some of the steps involved in setting up and using a "Snapshot Standby" database in 11g. For similar functionality with 10g, refer to the following documents.

Using 10g Physical standby database for Read/Write Testing and Reporting

10g Snapshot Standby – Dell Case Study

First follow all the normal steps that you would in setting up a physical standby database.
Make sure that you enable “Flashback Database” on both the primary and the standby.
Make sure that log shipping and apply are working properly.
In our example RK01 is the primary database and RK01DG is the standby database.

Check the salary for empno 7934

Update the emp table on the primary database
Switch the logfile to ensure that the change gets propagated and applied on the standby database.

SQL> Update scott.emp set sal=1310 where empno=7934;
SQL> Commit;
SQL> connect / as sysdba
SQL> alter system switch logfile;

Login using sqlplus to the standby database RK01DG

Check the role of this standby database

Use the command "alter database convert to snapshot standby" to try to switch the database to a snapshot standby.

Since the database is in managed recovery mode, you get an error.
Cancel the managed recovery.
Then convert the database using "alter database convert to snapshot standby".
This command leaves the standby database in nomount mode.
Shut down the database and start it back up (in my screenshot I am starting up in mount mode; you can start it all the way up).

Check the role of the standby database; it should say "SNAPSHOT STANDBY".
Query v$restore_point to see the details of the guaranteed restore point that was created.
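Pulled together, the conversion commands run on the standby (RK01DG) look roughly like this sketch:

SQL> select database_role from v$database;
SQL> alter database recover managed standby database cancel;
SQL> alter database convert to snapshot standby;
SQL> shutdown immediate
SQL> startup mount
SQL> select database_role from v$database;
SQL> select name, guarantee_flashback_database from v$restore_point;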

List the columns in the employee table on the primary database (RK01). This is just for reference, because as part of our testing we are going to add columns to this table.

Login as the scott user to the standby database and run some transactions

Now that testing is over, we can convert this back to a physical standby database.
Issue the "alter database convert to physical standby" command to convert this from a snapshot standby to a physical standby database.

Since the database is in open mode, it complains.
Restart the database in mount mode and then issue the same convert command again.

Once the command succeeds, the database has to be restarted.
Start the database in mount mode and put it in managed recovery mode.
Check the role of the standby database. It should now say "PHYSICAL STANDBY".
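Again as a sketch, the conversion back looks roughly like this on the standby:

SQL> shutdown immediate
SQL> startup mount
SQL> alter database convert to physical standby;
SQL> shutdown immediate
SQL> startup mount
SQL> alter database recover managed standby database disconnect from session;
SQL> select database_role from v$database;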

Run transactions on primary to make sure that log transport and apply are working

Put the standby database in read only mode and check if the transactions got applied on the standby

Also make sure that the changes made during testing have been rolled back.

Those are the basic steps to set up and test a "Snapshot Standby" database.

11g Active Dataguard

Oracle introduced the Active Dataguard option in 11g to allow customers to have read-only access to their physical standby databases. This option gives customers the flexibility to offload resource-intensive queries to the physical standby database while log shipping and apply continue their work.

In Oracle 10g, in order to run queries against tables in a physical standby database, one had to stop the log apply and open the database in read-only mode. This means that the log files queue up during that period and have to be applied once the database is reverted to managed recovery mode.

The steps to follow in order to set up Active Dataguard are exactly the same as setting up a physical standby database (Creating a Physical Standby). Once the physical standby database is configured and log transport and log apply are working correctly, you can just do an "alter database open" to enable Active Dataguard.

Let us look at an example.

I have a primary database RK01 and a regular physical standby database RK01DG.

If you try to query the emp table while the database is in managed recovery mode, you will get an error.

Make sure that you can update the table emp from the primary database.

In order to enable Active Dataguard, i.e., enable the ability to query the database while log apply is in progress, issue the following commands.
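A minimal sketch of the sequence, run on the standby:

SQL> alter database recover managed standby database cancel;
SQL> alter database open;
SQL> alter database recover managed standby database disconnect from session;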

The main concept to note here is that you are doing an "alter database open" (as opposed to an "alter database open read only").

Update the emp table again to check whether the values you update on the primary are queryable on the standby (i.e., log apply is continuing).

Check the updated value on the standby

Try updating the emp table on the standby database (it should only allow you to query, not update).

The procedures for switchover and failover remain exactly the same as before (irrespective of whether you are in Active Dataguard mode or not).