11.2.0.2 Grid Infrastructure, new private interconnect bonding feature: HAIP

I have been heads down on an Exadata PoC, and only now got a chance to browse through the new features in 11.2.0.2. The out-of-place upgrade feature looks interesting.

This new feature in the grid infrastructure installation had me really overjoyed (anyone who has had the pleasure of configuring IPMP, Auto Port Aggregation, EtherChannel, etc. (depending on the O/S) and setting it up correctly to work with RAC will understand my joy). Starting with 11.2.0.2, if you have redundant GigE NICs that you plan to use for the private interconnect, you no longer have to bond the interfaces. You can simply provide both interface names to the Oracle grid infrastructure installer, and Oracle Clusterware will create a Highly Available IP address (HAIP).

Oracle Clusterware, RAC, and ASM use these load-balanced, highly available interfaces for communication.
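
A quick way to see this after the install: Clusterware plugs link-local 169.254.x.x addresses onto the private NICs, and ASM and the database pick them up automatically. The sketch below is illustrative, with $GRID_HOME standing in for the grid infrastructure home and eth1/eth2 as assumed interface names.

    # Interfaces registered with Clusterware and their roles
    $GRID_HOME/bin/oifcfg getif

    # A second private interface can also be registered after the install
    $GRID_HOME/bin/oifcfg setif -global eth2/192.168.1.0:cluster_interconnect

    # HAIP shows up as link-local 169.254.x.x addresses on the private NICs
    /sbin/ip addr show eth1 | grep 169.254

    # The HAIP resource lives among the lower-stack (init) resources
    $GRID_HOME/bin/crsctl stat res -t -init | grep -i haip

    # ASM and the database report the interconnect addresses they use
    sqlplus -s / as sysdba <<EOF
    select name, ip_address from v\$cluster_interconnects;
    EOF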

Details for Linux can be read at http://download.oracle.com/docs/cd/E11882_01/install.112/e17212/prelinux.htm#BABJHGBE.

HAIP information can also be found in the 11gR2 Clusterware white paper.

Julian Dyke has a blog post that says multicast has to be enabled on the network interfaces for this to work.

Enabling multicast on the interconnect network is a requirement with 11.2.0.2 RAC. My Oracle Support notes 1228471.1 and 1212703.1 detail how multicast can be enabled and verified.
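
Before relying on it, it is worth confirming that the private NICs actually pass multicast. The checks below are a minimal sketch; eth1 and the node names are assumptions, and the proper end-to-end test is the mcasttest.pl script shipped with MOS note 1212703.1.

    # The private interface should carry the MULTICAST flag
    /sbin/ip link show eth1

    # Multicast group memberships on that interface
    /sbin/ip maddr show eth1

    # End-to-end test across the cluster with the script from MOS 1212703.1
    perl mcasttest.pl -n node1,node2 -i eth1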

11gR2 RAC installation on 64-bit Linux step by step

Yesterday I completed an 11g Release 2 Real Application Clusters installation on 64-bit Oracle Enterprise Linux 4. The installation process is very similar to the 10g and 11gR1 installations, but much simpler. This was a two-node cluster. There are some new concepts introduced in 11gR2 Real Application Clusters. Below are some of my notes on the 11gR2 new features for RAC and the detailed steps that I followed to complete the installation.

Some new concepts in 11gR2 RAC


Oracle Clusterware and ASM are now installed into the same Oracle home, and this combined installation is called the grid infrastructure install.

For new installs, raw devices are no longer supported for anything (read: the Oracle Cluster Registry, voting disks, or ASM disks).

The OCR and voting disks can now be stored in ASM or on a certified cluster file system.
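
After the install, ocrcheck (run as root) is a quick way to confirm where the OCR ended up and that it is healthy; a minimal sketch, assuming the OCR was placed in a diskgroup named +DATA:

    # Shows the OCR location(s) and version, and runs an integrity check;
    # with ASM storage the Device/File Name comes back as +DATA (assumed name)
    ocrcheck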

The redundancy level of the ASM diskgroup on which you choose to place the voting disks determines the number of voting disks you can have (a quick way to verify this is shown after the list).
You can place

  • Only one voting disk on an ASM diskgroup configured as external redundancy
  • Only three voting disks on an ASM diskgroup configured as normal redundancy
  • Only five voting disks on an ASM diskgroup configured as high redundancy
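
For example, when the voting disks live on a normal-redundancy diskgroup, Clusterware creates the three copies itself; crsctl lets you list and relocate them. A minimal sketch, assuming a diskgroup named +DATA:

    # List the voting disks and the ASM disks backing them
    $GRID_HOME/bin/crsctl query css votedisk

    # Relocate the voting disks to another diskgroup (name assumed);
    # Clusterware decides on one, three, or five copies based on the
    # diskgroup's external/normal/high redundancy level
    $GRID_HOME/bin/crsctl replace votedisk +DATA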


The contents of the voting disks are automatically backed up into the OCR.

ACFS (ASM Cluster File System) is only supported on Oracle Enterprise Linux 5 (and RHEL 5), not on OEL 4.

There is a new service called Cluster Time Synchronization Service (CTSS) that can keep the clocks on all the servers in the cluster synchronized (in case you don't have the network time protocol (NTP) configured).
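
CTSS runs in observer mode when NTP is configured and takes over (active mode) when it is not; checking which mode it is in is a one-liner. A minimal sketch:

    # Reports whether CTSS is running in active or observer mode
    $GRID_HOME/bin/crsctl check ctss

    # Cluster-wide clock synchronization check
    $GRID_HOME/bin/cluvfy comp clocksync -n all -verbose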

Single Client Access Name (SCAN) is a hostname in the DNS server that resolves to three (or at least one) IP addresses in your public network. This hostname is to be used by client applications to connect to the database (as opposed to the VIP hostnames you were using in 10g and 11gR1). SCAN provides location independence to the client connections connecting to the database. SCAN makes node additions and removals transparent to the client application (meaning you don't have to edit your tnsnames.ora entries every time you add or remove a node from the cluster).
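
To illustrate, the SCAN resolves round-robin to its addresses, and a client entry references only the SCAN. The hostname cluster-scan.example.com and the ORCL service below are placeholders for illustration:

    # The SCAN should resolve to its IP addresses in round-robin fashion
    nslookup cluster-scan.example.com

    # A client tnsnames.ora entry references only the SCAN, so it never
    # needs editing when nodes join or leave the cluster
    ORCL =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = cluster-scan.example.com)(PORT = 1521))
        (CONNECT_DATA = (SERVICE_NAME = ORCL))
      )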

Oracle Grid Naming Service (GNS) provides a mechanism to make the allocation and removal of VIP addresses a dynamic process (using dynamically assigned IP addresses).

Intelligent Platform Management Interface (IPMI) integration provides a new mechanism to fence servers in the cluster when a server is not responding.
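
If the servers have a baseboard management controller (BMC), Clusterware can be told how to reach it for fencing. A rough sketch, with placeholder credentials and address; the crsctl settings below are as I read them in the Clusterware administration guide, so verify against your version:

    # Confirm the BMC is reachable over the LAN (ipmitool must be installed)
    ipmitool lan print 1

    # Register the IPMI administrator and the BMC address with Clusterware
    # (run on each node; the user name and address are placeholders)
    $GRID_HOME/bin/crsctl set css ipmiadmin ipmiuser
    $GRID_HOME/bin/crsctl set css ipmiaddr 192.168.10.11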

The installer can now check the O/S requirements, report on the requirements that are not met, and generate fixup scripts to correct some of them (like setting kernel parameters).
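
You can run the same checks yourself from the staged installation media before launching the installer; a minimal sketch, with the node names assumed:

    # Pre-install verification for the grid infrastructure (CRS) install;
    # -fixup generates a fixup script that cluvfy asks you to run as root
    # on the nodes where fixable checks failed
    ./runcluvfy.sh stage -pre crsinst -n node1,node2 -fixup -verbose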

The installer can also help you set up SSH between the cluster nodes.
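
The same SSH user equivalence can be set up outside the installer with the sshUserSetup.sh script shipped in the sshsetup directory of the grid media; a sketch, assuming the software owner is named oracle and the node names are placeholders:

    # Sets up passwordless SSH for the given user across the listed hosts
    ./sshUserSetup.sh -user oracle -hosts "node1 node2" -noPromptPassphrase -advanced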

There is a new deinstall utility that cleans up an existing or failed install.
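
The utility ships inside the Oracle home itself; for the grid home, for example:

    # Interactively detects what is configured, then removes and cleans it up
    $GRID_HOME/deinstall/deinstall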

And the list goes on and on.

I have broken up the installation process into three distinct documents, which can be found below.

Installing 11gr2 grid infrastructure

Installing 11gr2 Real Application Clusters

Creating the 11gr2 Clustered database