11g Snapshot Standby

Oracle 11g provides customers the “Snapshot Standby” database feature, so that customers can leverage their investments in standby database systems. With the “Snapshot Standby” feature, customers can temporarily open the standby database and run their tests on this system. For example, a customer might want to test an application upgrade that alters tables or inserts, updates, or deletes data.

When you convert a physical standby database to a snapshot standby in 11g, the database automatically creates a guaranteed restore point (obviously “Flashback Database” has to be enabled for this). Once you are done with your testing, the database is automatically flashed back to this guaranteed restore point and the log apply starts again.

A point to note about the snapshot standby database: in 11g the log transport continues to work. All the logs are still received on the standby database and gap resolution continues to work, so in case of a primary failure the logs just need to be applied on the standby. In 10g you could open up the standby database and test on it, but the log transport would stop.

Let us look at some of the steps involved in setting up and using a “Snapshot standby” database in 11g. For similar functionality with 10g refer to the following documents.

Using 10g Physical standby database for Read/Write Testing and Reporting

10g Snapshot Standby – Dell Case Study

First follow all the normal steps that you would in setting up a physical standby database.
Make sure that you enable “Flashback Database” on both the primary and the standby.
Make sure that log shipping and apply are working properly.
In our example RK01 is the primary database and RK01DG is the standby database.
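Enabling Flashback Database can be sketched as follows; the recovery area size and path below are assumptions, so adjust them for your system.

```sql
-- Sketch only: size and path are examples, not from the original setup
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER SYSTEM SET db_recovery_file_dest_size = 10G;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/app/flash_recovery_area';
ALTER DATABASE FLASHBACK ON;
ALTER DATABASE OPEN;   -- on the standby, leave it mounted instead of opening
```

Run the equivalent steps on both the primary and the standby.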

Check the salary for empno 7934

Update the emp table on the primary database
Switch the logfile to ensure that the change gets propagated and applied on the standby database.

SQL> Update scott.emp set sal=1310 where empno=7934;
SQL> Commit;
SQL> connect / as sysdba
SQL> alter system switch logfile;

Login using sqlplus to the standby database RK01DG

Check the role of this standby database

Use the command “alter database convert to snapshot standby” to try and switch the database to a “Snapshot Standby”

Since the database is in managed recovery mode you get an error.
Cancel the managed recovery.
Then convert the database using “alter database convert to snapshot standby”.
This command leaves the standby database in nomount mode.
Shut the database down and start it back up (in my screenshot I am starting up in mount mode; you can start it all the way up).

Check the role of the standby database; it should say “Snapshot Standby”.
Query v$restore_point to see the details of the guaranteed restore point that it created
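The steps above can be sketched in a SQL*Plus session on the standby (RK01DG):

```sql
-- A sketch of the conversion; output comments reflect the expected roles
SELECT database_role FROM v$database;        -- PHYSICAL STANDBY
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;  -- leaves the database in nomount
SHUTDOWN IMMEDIATE
STARTUP MOUNT                                -- or STARTUP to open it fully
SELECT database_role FROM v$database;        -- SNAPSHOT STANDBY
SELECT name, guarantee_flashback_database FROM v$restore_point;
```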

List the columns in the employee table on the primary database (RK01). This is just for reference, because as part of our testing we are going to add columns to this table.

Login as the scott user to the standby database and run some transactions

Now that testing is over, we can convert this back to a physical standby database.
Issue the “alter database convert to physical standby” command to convert this from a “Snapshot Standby” to a “Physical Standby” database.

Since the database is in open mode, it complains.
Restart the database in mount mode and then issue the same convert command again

Once the command succeeds the database has to be restarted.
Startup the database in mount mode and put the database in managed recovery mode.
Check the role of the standby database. It should now say “physical standby”.
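Sketched as a SQL*Plus session on the standby:

```sql
-- Convert back to a physical standby; a restart is needed before and after
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;  -- errors if the db is still open
SHUTDOWN IMMEDIATE                           -- restart required after the convert
STARTUP MOUNT
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
SELECT database_role FROM v$database;        -- PHYSICAL STANDBY
```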

Run transactions on primary to make sure that log transport and apply are working

Put the standby database in read only mode and check if the transactions got applied on the standby

Also make sure that the changes made during testing have been rolled back.

Those are the basic steps to setup and test a “Snapshot Standby” database.

11g Active Dataguard

Oracle introduced the Active Dataguard option in 11g to allow customers read-only access to their physical standby databases. This option gives customers the flexibility to offload resource-intensive queries to the physical standby database, while the log shipping and apply continue their work.

In Oracle 10g, in order to run queries on database tables in a physical standby database, one had to stop the log apply and open the database in read-only mode. This meant that log files queued up during that period and had to be applied once the database was reverted to managed recovery mode.

The steps to follow in order to setup Active dataguard are exactly the same as setting up a physical standby database (Creating a Physical Standby). Once the physical standby database is configured and log transport and log apply are working correctly, you can just do an “alter database open” to enable active dataguard.

Let us look at an example.

I have a primary database RK01 and a regular physical standby database RK01DG.

If you try to query the table emp while the database is in managed recovery mode you will get an error

Make sure that you can update the table emp from the primary database.

In order to enable Active Dataguard, i.e. enable the ability to query the database while log apply is in progress, issue the following commands.

The main concept to note here is that you are doing an “alter database open” (as opposed to an “alter database open read only”).
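On the standby, the sequence might look like this; the “using current logfile” clause is an assumption that standby redo logs are configured for real-time apply.

```sql
-- Sketch: open the standby read only, then restart log apply underneath it
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
ALTER DATABASE OPEN;   -- opens read only, since this is a standby
ALTER DATABASE RECOVER MANAGED STANDBY DATABASE USING CURRENT LOGFILE DISCONNECT;
```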

Update the table emp again to check that the values you update on the primary are queryable on the standby (i.e. log apply is continuing).

Check the updated value on the standby

Try updating the emp table on the standby database (it should only allow you to query, not update).

The procedures for switchover and failover remain exactly the same as before (irrespective of whether you are in Active Dataguard mode or not).

Using the ILM Assistant

ILMA Installation Guide

ILMA Users Guide

This is a follow-up post to my earlier post on how to install Apex and the ILM assistant with 11g.

I know that the images are kind of shoddy because they have been resized. If you are a Firefox user and want to see an image, just right-click on it and choose “View Image”. IE users, do whatever it is you do to view images.

Configure ILM Assistant

The ILM assistant can be accessed by pointing the browser to
Here the install guide says that you have to use the admin login.
This is incorrect (http://forums.oracle.com/forums/thread.jspa?threadID=623966&tstart=0).
It is best to create a separate database login to be used for the ILM assistant.

cd <directorywhereilmappwasunzipped>
sqlplus system/systempassword
create user ILM_DB_ADMIN identified by Admin
default tablespace users temporary tablespace temp;
grant create session, advisor, alter any table, alter tablespace, drop any table, manage tablespace to ILM_DB_ADMIN;
sqlplus sys/password as sysdba
@grant_privs ILM_DB_ADMIN

Now you can login to the ilm assistant using the username ILM_DB_ADMIN and password Admin.

If you have not installed the ilma_demo, do that now (the following example uses the tablespaces created by this script).
I also use the sales history sample schema in the following example.
Log in as the SH schema and create a table sales2 (create table sales2 as select * from sales). We will use this table in our example.
Delete all the rows in this table which have a time_id >= 01-JUL-2008 (delete from sales2 where time_id >= to_date('01-JUL-2008','DD-MON-YYYY')).
Update all time_ids in this table to increment them by 7 years (update sales2 set time_id = add_months(time_id,84)).
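The table preparation can be run as one SQL*Plus session in the SH schema:

```sql
-- Sketch of the demo table prep described above
CREATE TABLE sales2 AS SELECT * FROM sales;
DELETE FROM sales2 WHERE time_id >= TO_DATE('01-JUL-2008','DD-MON-YYYY');
UPDATE sales2 SET time_id = ADD_MONTHS(time_id, 84);  -- shift dates by 7 years
COMMIT;
```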

Using the ILM Assistant

Here is a small example on how the ILM assistant can be used.

There are 3 main steps in configuring Information Lifecycle management for data stored in Oracle database tables.
1) Define Logical Storage Tiers
2) Define A Lifecycle and its stages
3) Associate a table to a Lifecycle

In the first step we associate tablespaces created on different types of storage with a logical storage tier.
In the second step we provide the date range and other properties, like compression and read-only, for each stage in the lifecycle of the data.
In the third step we associate lifecycles with each of the tables.

Then you can run simulations, after which ILMA can provide you with scripts to partition the tables.

Defining Logical Storage Tiers


Click on “Logical Storage Tier”

Create a storage tier “High Performance” with the properties shown above. Choose ILMDEMO_5 as the read-write tablespace for this.

Create a second storage tier “Low cost” with the properties shown above. Choose ILMDEMO_9 as the read-write tablespace for this.

Create a storage tier “Online Archive” with the properties shown above. Choose ILMDEMO_7 as the read-only tablespace for this.

Define the Lifecycle

Create a new lifecycle “SH lifecycle” , Retaining details for 3 months, and using the “High Performance” storage

Create a lifecycle stage “Sh Last 2 Qrtrs”, Retaining details for 6 months and using “Low Cost” storage

Create a lifecycle End stage “Sh Older Data”, with the “Online Archive” storage tier.

Associate Lifecycles to Tables

Click on “Lifecycle Tables”
Show all tables

Click on “Candidate” in the “Lifecycle status” column for the table “SALES2”

Choose “SH Lifecycle” as the lifecycle, a compression factor of 1.2, number of rows 800000000, and an average row length of 100 (alternatively you can click on load table statistics so that Oracle picks up this info from the table statistics). You can leave the date range blank, and Oracle looks at the table and picks up the appropriate min and max values for the date range.

Once the simulation is complete you can click on “Preview Simulation” and see the results.

You can click on “Create a migration script” and ILMA will generate a sql script which can be used to partition the table “SALES2”.

Click to View Partitioning Script

Test Lifecycle Management

Shutdown the database
Change the system clock to move it ahead by about 4 months.
Then log back into the Ilm assistant.
Click on the “Scan for events” button.

The events are broken down into “Overdue Events”, “Today’s Events” and “Upcoming Events”.
Since we moved the clock ahead there are a few “Overdue events”.

Under the “Overdue Events” -> “Move data” click on “14 Managed” (Your screen might be different).
You have a few Events for the SALES2 table, some to merge partitions and some to move partitions.

You can accept those changes and choose to “Implement accepted recommendations”.
ILMA will prompt you asking if you want a sql script or want to submit a job.

You can choose one of those options to implement your change.

Click to View Data Movement Script

So now you get the general idea of how to use the ILM assistant.

Hosting the blog

Yesterday I moved this blog to godaddy.com’s hosting services. They were already my domain name registrar, and their rates were pretty good, so instead of going through the hassle of transferring the domain name and such (sure, I could have kept the domain name where it is and just redirected it to the new hosting service) I decided to give godaddy.com’s hosting service a try.

Even though I started the blog on a computer at my home, the noise from the computer was annoying, and the prospect of keeping it on all the time, and its power consumption, were key in the decision to get it hosted. It costs me $50 a year, as opposed to just the power bills while it was hosted at home.

Of course it was a no-brainer to choose Linux-based hosting, and I was pleasantly surprised by how easy it was to set the whole thing up.

Once I bought the hosting plan
– I followed the “Getting started with Linux shared hosting” http://help.godaddy.com/article/1361 guide from go daddy to set things up.
– First I setup the hosting account. This gives you the linux userid and the password of your choice to access your hosting account.
– I find it annoying that I cannot get shell access (ssh); I can only get ftp access.
– Next I set up the DNS servers. When you create the hosting account, you get an IP address for the host. Since my domain is hosted by godaddy themselves, I just had to update the DNS entries to reflect the current DNS servers for godaddy.
– Then I had to wait approximately 3 hours (the doc warns you that this could take a while) for all the DNS servers (especially the one used by my ISP) to get updated with the proper name resolution from my domain name to the new hosting server.
– To install and configure the wordpress blog you simply go to the “Manage Account” URL and then choose “Your Applications”; you can choose to install wordpress and it walks you through a wizard to configure the mysql database and other wordpress settings.
– That was all and the blog was up and running.
– Since I had only about 9 posts on my original blog hosted at Home I copied and pasted all of the content to the new blog.
– It is also impressive that you can back up your mysql database using phpMyAdmin (you can store the backup away on your home PC), and you can use ftp to back up your php installation.

All in all I am impressed by the self-service-style, error-free setup.

11g Automatic Memory Management

Automatic shared memory management and Automatic PGA memory management were features in 10g that enabled users to automate the memory allocation for various sub areas in the SGA and PGA.

Oracle introduced Automatic Memory Management in 11g. This enables you to manage the total memory allocated to Oracle (SGA + PGA) in an automated fashion. You specify the total memory allocated to Oracle, and the Oracle database dynamically exchanges memory between the SGA and the instance PGA as needed (based on the workload). Oracle also tunes the sizes of the individual SGA components and of the individual PGAs.

Automatic memory management is supported on Linux, Solaris, Windows, HP-UX and AIX.

You set memory_target to the sum of the memory that you want to allocate for the SGA and the PGA.
You can set memory_max_target to a value higher than memory_target, so that you can increase the memory allocated to Oracle later if you desire.

You should then set the sga_target and pga_aggregate_target to 0, so that Automatic memory management can tune the space allocation,up or down, for PGA and SGA without any restrictions.

If you set values for sga_target and pga_aggregate_target, they are treated as minimum values for the SGA and PGA sizes. If you set sga_target but not pga_aggregate_target, both are still auto-tuned, and pga_aggregate_target is automatically initialized to memory_target – sga_target.

If neither is set, they are auto-tuned, starting with 60% of memory_target allocated to the SGA and 40% to the PGA.
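As a sketch, the parameter changes might look like this; the sizes are examples only, not recommendations.

```sql
-- Example sizes only; memory_max_target is static, so set it in the spfile
ALTER SYSTEM SET memory_max_target = 2G SCOPE = SPFILE;
ALTER SYSTEM SET memory_target = 1536M SCOPE = SPFILE;
ALTER SYSTEM SET sga_target = 0 SCOPE = SPFILE;            -- no SGA minimum
ALTER SYSTEM SET pga_aggregate_target = 0 SCOPE = SPFILE;  -- no PGA minimum
-- restart the instance for memory_max_target to take effect
```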

Rac, ASM and Dbca

I was installing Oracle Real Application Clusters on Redhat Linux 4 (x86-64) and ran into an issue I had not come across in my previous installs.

I installed (separate crs_home, asm_home and oracle_home) and patched it up to

Then I created a listener from the ASM Home.

When I ran DBCA to create the RAC database from the Oracle Home, I got the following error

DBCA could not startup the ASM instance configured on
this node. To proceed with the database creation using
ASM you need the ASM instance to be up and running.
Do you want to recreate the ASM instance on this node?
[Yes] [No]

According to metalink note 550509.1, this is because DBCA looks for the listener.ora in the Oracle Home and does not find it. The note is written for 11g, but seems to work here also.

We ended up creating a softlink from ASM_HOME/network/admin/listener.ora to ORACLE_HOME/network/admin/listener.ora, and then DBCA works just fine.
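The workaround boils down to a single symlink. The home paths below are hypothetical stand-ins for your real ASM and database homes; in a real install the network/admin directories already exist and are created here only to keep the sketch self-contained.

```shell
# Hypothetical homes -- substitute your real ASM and database homes
ASM_HOME=/tmp/demo/asm
ORACLE_HOME=/tmp/demo/db_1
mkdir -p $ASM_HOME/network/admin $ORACLE_HOME/network/admin   # already exist in a real install
touch $ASM_HOME/network/admin/listener.ora                    # the listener created from the ASM home
# Make the ASM home listener.ora visible from the database home so DBCA finds it
ln -sf $ASM_HOME/network/admin/listener.ora $ORACLE_HOME/network/admin/listener.ora
```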

Install Apex and ILM Assistant with 11g

I just installed the oracle Information lifecycle management assistant so that I could test it with 11g.

You first need to install Oracle Application Express (Apex) and then the Oracle ILM assistant.

When you create a database in 11g you can choose to install application express. In the process below, we skip that and install it manually after the database creation.
You have two options to use application express.

  • Use the oracle http server with mod_plsql
  • Use the Embedded Pl/Sql gateway. (From the 11g database)

In this example we use the Embedded Pl/Sql gateway. (This is an http server built into the oracle database)

Create a general purpose database with the 11g Dbca. (We assume that a database listener has been already created and is started up).

Download the apex 3.1.1 software from http://www.oracle.com/technology/products/database/application_express/download.html

Install Oracle application express 3.1.1

cd /apex/
sqlplus / as sysdba
alter user anonymous account unlock;

Setup The port for the Embedded pl/sql gateway

Check if the port is already setup



So xmlhttp is disabled.
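The check shown in the screenshot can be done with dbms_xdb.gethttpport; a value of 0 means the embedded gateway HTTP listener is disabled.

```sql
-- 0 means the embedded PL/SQL gateway HTTP port is not configured
SELECT dbms_xdb.gethttpport FROM dual;
```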

Set the port for the http listener

SQL> exec dbms_xdb.sethttpport(8080);

PL/SQL procedure successfully completed.

SQL> commit;

Commit complete.



Install oracle application express

sqlplus / as sysdba
SQL> @apexins users users TEMP /i/

Set the password for the apex user “Admin”

@apxxepwd.sql admin

You can now log in to Application Express as the admin user by accessing http://yourhostname:8080/apex/apex_admin and start using Oracle Application Express.

Install the Oracle ILM assistant

Download the ilm assistant from http://www.oracle.com/technology/software/deploy/ilm/index.html
Download the ilm demo from http://www.oracle.com/technology/obe/11gr1_db/manage/ilm/files/ilm.zip

sqlplus / as sysdba
SQL> alter user flows_030100 identified by flows_030100 account unlock;
SQL> @ilma_install flows_030100 users RK01
SQL> alter user flows_030100 identified by flows_030100 account lock;

You can now access Oracle ILM assistant using the following URL.


If you want to install the ILM demos, then
first create the SH schema using sh_main from $ORACLE_HOME/demo/schema/sales_history, then
cd to the directory where you unzipped the files from ilma_demo_012.zip

sqlplus / as sysdba

You can then try out the examples at http://www.oracle.com/technology/obe/11gr1_db/manage/ilm/ilm.htm

How to install the wordpress stats plugin for your blog

This plugin gives you access to the same useful stats that the blog users at wordpress.com get. Stuff like the number of times a page was accessed.

  • Download the wp-stats plugin from wordpress from http://wordpress.org/extend/plugins/stats/
  • unzip the file and copy stats.php to /<wordpresshome>/wp-content/plugins/.
  • Edit the plugin and replace the line

$stats_wpcom_api_key = '';

with

$stats_wpcom_api_key = 'yourwordpressapikey'; (You can sign up at wordpress.com to get an API key.)
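The edit can also be scripted. The wordpress home below is a hypothetical path, the stats.php line is a stand-in for the real file, and the key is the same placeholder used above; substitute your own values.

```shell
# Hypothetical wordpress home and placeholder API key -- substitute your own
WP_HOME=/tmp/demo-wp
API_KEY=yourwordpressapikey
PLUGIN=$WP_HOME/wp-content/plugins/stats.php
mkdir -p $WP_HOME/wp-content/plugins              # exists in a real install
echo "\$stats_wpcom_api_key = '';" > $PLUGIN      # stand-in for the real stats.php line
# Fill the API key into the empty assignment
sed -i "s/stats_wpcom_api_key = '';/stats_wpcom_api_key = '$API_KEY';/" $PLUGIN
```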

11g Data Recovery Advisor

Oracle 11g introduces a new feature called the Data Recovery Advisor. The goal of this advisor is to provide a mechanism for guided error detection and recovery for the Oracle database. DRA automatically detects persistent data failures, presents repair options to the user, and executes repairs upon user request.

When a user executes a database operation that results in an error, a database integrity check is triggered, which then checks for failures related to that error in the database. If failures are diagnosed they are recorded in the Automatic Diagnostics Repository (ADR).

The following are the major RMAN commands used to detect and correct failures.

  • list failure
  • advise failure
  • repair failure

Let us look at an example where the data recovery advisor is used to recover from database block corruptions.

Create a 11gR1 Database RK01 using DBCA. Set the database to be in archivelog mode.

Use the script $ORACLE_HOME/demo/schema/human_resources/hr_main.sql to create the human resources schema in the database. Use the tablespace “USERS” for the hr schema tables.

Take a full rman backup of the database

export ORACLE_SID=RK01
rman target /

run {
allocate channel d1 type disk;
setlimit channel d1 kbytes 2097150;
backup incremental level 0 format '/u01/app/oraback/df_%U' database
include current controlfile;
sql "alter system archive log current";
backup archivelog all format '/u01/app/oraback/al_%U';
}

Introduce corruption in a datafile by overwriting 3 oracle database blocks.

dd of=/u01/app/oradata/RK01/users01.dbf bs=8192 conv=notrunc seek=12 << EOF
CORRUPT corrupt CORRUPT corrupt CORRUPT corrupt CORRUPT corrupt CORRUPT corrupt
EOF

dd of=/u01/app/oradata/RK01/users01.dbf bs=8192 conv=notrunc seek=84 << EOF
CORRUPT corrupt CORRUPT corrupt CORRUPT corrupt CORRUPT corrupt CORRUPT corrupt
EOF

dd of=/u01/app/oradata/RK01/users01.dbf bs=8192 conv=notrunc seek=36 << EOF
CORRUPT corrupt CORRUPT corrupt CORRUPT corrupt CORRUPT corrupt CORRUPT corrupt
EOF

Trigger Error Detection by running a validate database

rman target /

validate database;
File Status Marked Corrupt Empty Blocks Blocks Examined High SCN
---- ------ -------------- ------------ --------------- --------
4    FAILED 0              473          640             555120
File Name: /u01/app/oradata/RK01/users01.dbf
Block Type Blocks Failing Blocks Processed
---------- -------------- ----------------
Data       0              43
Index      0              20
Other      3              104

validate found one or more corrupt blocks
See trace file /u01/app/oracle/diag/rdbms/rk01/RK01/trace/RK01_ora_7452.trc for details
channel ORA_DISK_1: starting validation of datafile
channel ORA_DISK_1: specifying datafile(s) for validation
including current control file for validation
channel ORA_DISK_1: validation complete, elapsed time: 00:00:01
List of Control File and SPFILE
File Type    Status Blocks Failing Blocks Examined
------------ ------ -------------- ---------------
Control File OK     0              594
Finished validate at 27-MAY-08

List all the failures

RMAN> list failure;

List of Database Failures

Failure ID Priority Status Time Detected Summary
---------- -------- ------ ------------- -------
146        HIGH     OPEN   27-MAY-08     Datafile 4: '/u01/app/oradata/RK01/users01.dbf' contains one or more corrupt blocks

List the details for the failure

RMAN> list failure detail;

List of Database Failures

Failure ID Priority Status Time Detected Summary
---------- -------- ------ ------------- -------
146        HIGH     OPEN   27-MAY-08     Datafile 4: '/u01/app/oradata/RK01/users01.dbf' contains one or more corrupt blocks
Impact: Some objects in tablespace USERS might be unavailable
List of child failures for parent failure ID 146
Failure ID Priority Status Time Detected Summary
---------- -------- ------ ------------- -------
161        HIGH     OPEN   27-MAY-08     Block 84 in datafile 4: '/u01/app/oradata/RK01/users01.dbf' is media corrupt
Impact: Object LOCATIONS owned by HR might be unavailable
155        HIGH     OPEN   27-MAY-08     Block 36 in datafile 4: '/u01/app/oradata/RK01/users01.dbf' is media corrupt
Impact: Object PK_EMP owned by SCOTT might be unavailable
149        HIGH     OPEN   27-MAY-08     Block 12 in datafile 4: '/u01/app/oradata/RK01/users01.dbf' is media corrupt
Impact: Object DEPT owned by SCOTT might be unavailable

Ask Rman to provide recovery advice

RMAN> advise failure;


Automated Repair Options
Option Repair Description
------ ------------------
1      Recover multiple corrupt blocks in datafile 4
Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/rk01/RK01/hm/reco_2336332336.hm


Ask Rman to do the necessary to repair the failure

RMAN> repair failure;

Strategy: The repair includes complete media recovery with no data loss
Repair script: /u01/app/oracle/diag/rdbms/rk01/RK01/hm/reco_2336332336.hm

contents of repair script:
# block media recovery for multiple blocks
recover datafile 4 block 12, 36, 84;

Do you really want to execute the above repair (enter YES or NO)? YES
executing repair script

Starting recover at 27-MAY-08
using channel ORA_DISK_1

channel ORA_DISK_1: restoring block(s)
channel ORA_DISK_1: specifying block(s) to restore from backup set
restoring blocks of datafile 00004
channel ORA_DISK_1: reading from backup piece /u01/app/oraback/df_01jhdo2k_1_1
channel ORA_DISK_1: piece handle=/u01/app/oraback/df_01jhdo2k_1_1 tag=TAG20080527T093723
channel ORA_DISK_1: restored block(s) from backup piece 1
channel ORA_DISK_1: block restore complete, elapsed time: 00:00:01

starting media recovery
media recovery complete, elapsed time: 00:00:03

Finished recover at 27-MAY-08
repair failure complete

Check that the database is Fine after recovery

RMAN> list failure;

no failures found that match specification

Setting up 11g Streams Schema replication

Setup Two databases

Use DBCA to create two 11g databases on the same server or on two different servers

In this sample let us assume that we are using databases RK01 and RK02

Setup the streams related initialization parameters in both databases

Since it is simpler to set parameters up in a pfile as opposed to a spfile, use the create pfile from spfile command to create pfiles on both databases.

On RK01 add the following lines to the pfile

# Added for streams
*.log_archive_dest_1='LOCATION=/u01/app/orarch/RK01 MANDATORY'

Add the following lines to the pfile for RK02

#Added for streams
*.log_archive_dest_1='LOCATION=/u01/app/orarch/RK02 MANDATORY'

Use the hr_main script ($ORACLE_HOME/demo/schema/human_resources) to create the hr schema in your first database (RK01). Do not create this schema in RK02, since it will get created when you run the streams schema maintenance script.

Login as sys into RK01 and create a directory audit_dir pointing to /tmp/

Create the streams administrator in both databases

GRANT DBA TO strmadmin IDENTIFIED BY manager;


Create database links in both databases (As strmadmin)

On RK01

CREATE DATABASE LINK rk02 CONNECT TO strmadmin
IDENTIFIED BY manager USING 'rk02';

On RK02

CREATE DATABASE LINK rk01 CONNECT TO strmadmin
IDENTIFIED BY manager USING 'rk01';
Create tnsnames.ora entries to provide connectivity between the databases

Start the database with the new pfile (That has the streams related parameters set)

Set the databases to be in Archivelog mode
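Putting a database in archivelog mode can be sketched as follows (do this on both RK01 and RK02):

```sql
-- The database must be mounted, not open, to switch the log mode
SHUTDOWN IMMEDIATE
STARTUP MOUNT
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
ARCHIVE LOG LIST   -- verify: Database log mode should read "Archive Mode"
```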

Setup Streams

Use DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS to create a script to setup schema replication

BEGIN
  DBMS_STREAMS_ADM.MAINTAIN_SCHEMAS(
    schema_names                 => 'hr',
    source_directory_object      => 'audit_dir',
    destination_directory_object => 'audit_dir',
    source_database              => 'RK01',
    destination_database         => 'RK02',
    capture_name                 => 'capture_rk01',
    capture_queue_table          => 'rep_capture_queue_table',
    capture_queue_name           => 'rep_capture_queue',
    capture_queue_user           => null,
    apply_name                   => 'apply_rk01',
    apply_queue_table            => 'rep_dest_queue_table',
    apply_queue_name             => 'rep_dest_queue',
    apply_queue_user             => null,
    propagation_name             => 'prop_rk01',
    log_file                     => 'exp.log',
    bi_directional               => true,
    include_ddl                  => true,
    instantiation                => dbms_streams_adm.instantiation_schema_network,
    perform_actions              => false,
    script_name                  => 'schema_replication_ii.sql',
    script_directory_object      => 'audit_dir');
END;
/

Login as sys on RK01 and run the script

Test replication