Purpose
This document provides step-by-step instructions for installing a cluster, installing Oracle Real Application Clusters (RAC), and starting a cluster database on IBM AIX HACMP/ES (CRM) 4.4.x. For additional explanation or information on any of these steps, see the references listed at the end of this document. This note does not cover the IBM SP2 platform.
Disclaimer: If there are any errors or issues prior to step 3.3, please contact IBM Support. The information contained here is as accurate as possible at the time of writing.
1.3 Installing Cluster Interconnect and Public Network Hardware
2.4.1 Create volume groups to be shared concurrently on one node
3.1 Configure the shared disks and UNIX preinstallation tasks
3.2 Using the Oracle Universal Installer for Real Application Clusters
3.3 Create a RAC Database using the Oracle Database Configuration Assistant
Configuring the Clusters Hardware
1.1 Minimal Hardware list / System Requirements
For a two-node cluster, the following is the minimum recommended hardware list. Check the RAC/IBM AIX certification matrix for updates on currently supported hardware and software.
1.1.1 Hardware
IBM servers – two IBM servers capable of running AIX 4.3.3 or 5L, 64-bit
For IBM or third-party storage products, cluster interconnects, public networks, and switch options, consult the operating system vendor or hardware vendor.
Memory, swap & CPU requirements
Each server must have a minimum of 512Mb of memory, and at least 1Gb of swap space or twice the physical memory, whichever is greater.
To determine system memory, use:
$ /usr/sbin/lsattr -E -l sys0 -a realmem
To determine swap space, use:
$ /usr/sbin/lsps -a
64-bit processors are required.
1.1.2 Software
When using IBM AIX 4.3.3:
HACMP/ES CRM 4.4.x
Only RAW Logical Volumes (Raw Devices) are supported for Database Files
Oracle Server Enterprise Edition 9i Release 1 (9.0.1) or 9i Release 2 (9.2.0)
When using IBM AIX 5.1 (5L):
For Database Files residing on RAW Logical Volumes (Raw Devices):
HACMP/ES CRM 4.4.x
For Database files residing on Parallel Filesystem (GPFS):
HACMP/ES 4.4.x (HACMP/CRM is not required)
GPFS 1.5
IBM Patch PTF12 and IBM patch IY34917 or IBM Patch PTF13
Oracle Server Enterprise Edition 9i Release 2 (9.2.0)
Oracle Server Enterprise Edition 9i for AIX 4.3.3 and 5L are in separate CD packs and include Real Application Clusters (RAC)
1.1.3 Patches
The IBM Cluster nodes might require patches in the following areas:
IBM AIX Operating Environment patches
Storage firmware patches or microcode updates
Patching considerations:
Make sure all cluster nodes have the same patch levels
Do not install any firmware-related patches without qualified assistance
Always obtain the most current patch information
Read all patch README notes carefully.
For a list of required operating system patches, check the sources in Note 211537.1 and contact IBM Corporation for additional patch requirements.
To see all currently installed patches use the following command:
% /usr/sbin/instfix -i
To verify installation of a specific patch use:
% /usr/sbin/instfix -ivk <Patchnumber>
e.g.: % /usr/sbin/instfix -ivk IY30927
1.2 Installing Disk Arrays
Follow the procedures for an initial installation of the disk enclosures or arrays, prior to installing the IBM AIX operating system environment and HACMP software. Perform this procedure in conjunction with the procedures in the HACMP for AIX 4.X.1 Installation Guide and your server hardware manual.
1.3 Installing Cluster Interconnect and Public Network Hardware
The cluster interconnect and public network interfaces do not need to be configured prior to the HACMP installation but must be configured and available before the cluster can be configured.
If not already installed, install host adapters in your cluster nodes. For the procedure on installing host adapters, see the documentation that shipped with your host adapters and node hardware. Install the transport cables (and optionally, transport junctions), depending on how many nodes are in your cluster:
A cluster with more than two nodes requires two cluster transport junctions. These transport junctions are Ethernet-based switches (customer-supplied).
You install the cluster software and configure the interconnect after you have installed all other hardware.
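Before moving on, it is worth confirming that AIX sees the installed adapters and interfaces. A quick sketch using standard AIX commands (the adapters and interfaces listed will depend on your hardware):

# lsdev -Cc adapter
# netstat -in

“lsdev” lists the installed adapters, and “netstat -in” shows which interfaces are configured and with which addresses.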
Creating a Cluster
2.1 IBM HACMP/ES Software Installation
The HACMP/ES 4.X.X installation and configuration process is completed in several major steps. The general process is:
install hardware
install the IBM AIX operating system software
install the latest IBM AIX maintenance level and required patches
install HACMP/ES 4.X.X on each node
install HACMP/ES required patches
configure the cluster topology
synchronize the cluster topology
configure cluster resources
synchronize cluster resources
Follow the instructions in the HACMP for AIX 4.X.X Installation Guide for detailed instructions on installing the required HACMP packages. The required/suggested packages include the following:
cluster.adt.es.client.demos
cluster.adt.es.client.include
cluster.adt.es.server.demos
cluster.clvm.rte HACMP for AIX Concurrent
cluster.cspoc.cmds HACMP CSPOC commands
cluster.cspoc.dsh HACMP CSPOC dsh and perl
cluster.cspoc.rte HACMP CSPOC Runtime Commands
cluster.es.client.lib ES Client Libraries
cluster.es.client.rte ES Client Runtime
cluster.es.client.utils ES Client Utilities
cluster.es.clvm.rte ES for AIX Concurrent Access
cluster.es.cspoc.cmds ES CSPOC Commands
cluster.es.cspoc.dsh ES CSPOC dsh and perl
cluster.es.cspoc.rte ES CSPOC Runtime Commands
cluster.es.hc.rte ES HC Daemon
cluster.es.server.diag ES Server Diags
cluster.es.server.events ES Server Events
cluster.es.server.rte ES Base Server Runtime
cluster.es.server.utils ES Server Utilities
cluster.hc.rte HACMP HC Daemon
cluster.msg.En_US.cspoc HACMP CSPOC Messages – U.S.
cluster.msg.en_US.cspoc HACMP CSPOC Messages – U.S.
cluster.msg.en_US.es.client
cluster.msg.en_US.es.server
cluster.msg.en_US.haview HACMP HAView Messages – U.S.
cluster.vsm.es ES VSM Configuration Utility
cluster.man.en_US.client.data
cluster.man.en_US.cspoc.data
cluster.man.en_US.es.data ES Man Pages – U.S. English
cluster.man.en_US.server.data
rsct.basic.hacmp RS/6000 Cluster Technology
rsct.basic.rte RS/6000 Cluster Technology
rsct.basic.sp RS/6000 Cluster Technology
rsct.clients.hacmp RS/6000 Cluster Technology
rsct.clients.rte RS/6000 Cluster Technology
rsct.clients.sp RS/6000 Cluster Technology
You can verify the installed HACMP software with the “clverify” command.
# /usr/sbin/cluster/diag/clverify
At the “clverify>” prompt enter “software” then at the “clverify.software>” prompt enter “lpp”. You should see a message similar to:
Checking AIX files for HACMP for AIX-specific modifications…
*/etc/inittab not configured for HACMP for AIX.
If IP Address Takeover is configured, or the Cluster Manager is to be started on boot, then /etc/inittab must contain the proper HACMP for AIX entries.
Command completed.
——— Hit Return To Continue ———
Contact IBM support if there were any failure messages or problems executing the “clverify” command.
2.2 Configuring the Cluster Topology
Using the “smit hacmp” command:
# smit hacmp
Note: The following is a generic HACMP configuration to be used as an example only. See the HACMP installation and planning documentation for specific examples. All questions concerning the configuration of your cluster should be directed to IBM Support. This configuration does not include an example of an IP takeover network. “smit” fastpaths are used to navigate the “smit hacmp” configuration menus; each of these configuration screens can be reached from “smit hacmp”. All configuration is done from one node and then synchronized to the other participating nodes.
Add the cluster definition:
Smit HACMP -> Cluster Configuration -> Cluster Topology -> Configure Cluster -> Add a Cluster Definition
Fastpath:
# smit cm_config_cluster.add

Add a Cluster Definition
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Cluster ID     [0]
* Cluster Name   [cluster1]

**NOTE: Cluster Manager MUST BE RESTARTED in order for changes to be acknowledged.**
The “Cluster ID” and “Cluster Name” are arbitrary. The “Cluster ID” must be a valid number between 0 and 99999 and the “Cluster Name” can be any alpha string up to 32 characters in length.
Configuring Nodes:
Smit HACMP -> Cluster Configuration -> Cluster Topology -> Configure Nodes -> Add Cluster Nodes
FastPath:
# smit cm_config_nodes.add

Add Cluster Nodes
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Node Names [node1 node2]
“Node Names” should be the hostnames of the nodes. They must be alphanumeric and contain no more than 32 characters. All nodes participating in the cluster must be entered on this screen, separated by a space.
Next to be configured are the network adapters. This example uses two Ethernet adapters on each node, as well as one RS232 serial port on each node for heartbeat:

Node Name   Address        IP Label (/etc/hosts)   Type
node1       192.168.0.1    node1srvc               service
node1       192.168.1.1    node1stby               standby
node1       /dev/tty0                              serial
node2       192.168.0.2    node2srvc               service
node2       192.168.1.2    node2stby               standby
node2       /dev/tty0                              serial
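Each of these IP labels must resolve identically on every node. A minimal “/etc/hosts” sketch matching the example network above (the addresses and labels are the example values; substitute your own):

192.168.0.1   node1srvc
192.168.1.1   node1stby
192.168.0.2   node2srvc
192.168.1.2   node2stby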
The following screens are configuration settings needed to configure the above networks into the cluster configuration:
Smit HACMP -> Cluster Configuration -> Cluster Topology -> Configure Nodes -> Add an Adapter
FastPath:
# smit cm_config_adapters.add

Add an Adapter
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label            [node1srvc]
* Network Type                [ether]    +
* Network Name                [ipa]      +
* Network Attribute           public     +
* Adapter Function            service    +
  Adapter Identifier          []
  Adapter Hardware Address    []
  Node Name                   [node1]    +

It is important to note that the “Adapter IP Label” must match what is in the “/etc/hosts” file; otherwise the adapter will not map to a valid IP address and the cluster will not synchronize. The “Network Name” is an arbitrary name for the network configuration. All the adapters in this ether configuration should have the same “Network Name”. This name is used to determine which adapters will be used in the event of an adapter failure.

Add an Adapter
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label            [node1stby]
* Network Type                [ether]    +
* Network Name                [ipa]      +
* Network Attribute           public     +
* Adapter Function            standby    +
  Adapter Identifier          []
  Adapter Hardware Address    []
  Node Name                   [node1]    +

Add an Adapter
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label            [node2srvc]
* Network Type                [ether]    +
* Network Name                [ipa]      +
* Network Attribute           public     +
* Adapter Function            service    +
  Adapter Identifier          []
  Adapter Hardware Address    []
  Node Name                   [node2]    +

Add an Adapter
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label            [node2stby]
* Network Type                [ether]    +
* Network Name                [ipa]      +
* Network Attribute           public     +
* Adapter Function            standby    +
  Adapter Identifier          []
  Adapter Hardware Address    []
  Node Name                   [node2]    +

The following is the serial configuration:

Add an Adapter
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label            [node1_tty]
* Network Type                [rs232]    +
* Network Name                [serial]   +
* Network Attribute           serial     +
* Adapter Function            service    +
  Adapter Identifier          [/dev/tty0]
  Adapter Hardware Address    []
  Node Name                   [node1]    +

Add an Adapter
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Adapter IP Label            [node2_tty]
* Network Type                [rs232]    +
* Network Name                [serial]   +
* Network Attribute           serial     +
* Adapter Function            service    +
  Adapter Identifier          [/dev/tty0]
  Adapter Hardware Address    []
  Node Name                   [node2]    +

Since the serial link is not on the same network as the Ethernet adapters, its “Network Name” is different; the same “serial” network name is used for both serial adapters.
Use “smit mktty” to configure the RS232 adapters:
# smit mktty

Add a TTY
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[TOP]                                        [Entry Fields]
  TTY type                                   tty
  TTY interface                              rs232
  Description                                Asynchronous Terminal
  Parent adapter                             sa0
* PORT number                                [0]        +
  Enable LOGIN                               disable    +
BAUD rate [9600] +
PARITY [none] +
BITS per character [8] +
Number of STOP BITS [1] +
TIME before advancing to next port setting [0] +#
TERMINAL type [dumb]
FLOW CONTROL to be used [xon] +
[MORE…31]
Be sure that “Enable LOGIN” is set to the default of “disable”. The “PORT number” value becomes the “#” in the device name “/dev/tty#”; if you define it as “0”, the device is “/dev/tty0”.
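Before relying on the serial line for heartbeat, the tty link can be sanity-checked outside of HACMP. A simple sketch, assuming the example /dev/tty0 devices above: read from the port on one node, write to it from the other, and the text should appear on the reading node.

On node1:
# cat < /dev/tty0
On node2:
# cat /etc/hosts > /dev/tty0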
2.3 Synchronizing the Cluster Topology
After the topology is configured it needs to be synchronized. The synchronization performs topology sanity checks and pushes the configuration data to each of the nodes in the cluster configuration. For the synchronization to work, user equivalence must be configured for the root user. There are several ways to do this; one way is to create a “.rhosts” file in the “/” directory on each node.
Example of a “.rhosts” file:
node1 root
node2 root
Be sure the permissions on the “/.rhosts” file are 600.
# chmod 600 /.rhosts
Use a remote command such as “rcp” to test equivalence from each node:
From node1:
# rcp /etc/group node2:/tmp
From node2:
# rcp /etc/group node1:/tmp
View your IBM operating system documentation for more information or contact IBM support if you have any questions or problems setting up user equivalence for the root user.
Smit HACMP -> Cluster Configuration -> Cluster Topology -> Synchronize Cluster Topology
FastPath:
# smit configchk.dialog

Synchronize Cluster Topology
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[TOP]                                        [Entry Fields]
  Ignore Cluster Verification Errors?        [No]       +
* Emulate or Actual?                         [Actual]   +
Note:
Only the local node’s default configuration files
keep the changes you make for topology DARE
emulation. Once you run your emulation, to
restore the original configuration rather than
running an actual DARE, run the SMIT command,
“Restore System Default Configuration from Active
Configuration.”
We recommend that you make a snapshot before
running an emulation, just in case uncontrolled
cluster events happen during emulation.
NOTE:
If the Cluster Manager is active on this node,
synchronizing the Cluster Topology will cause
the Cluster Manager to make any changes take
effect once the synchronization has successfully
completed.
[BOTTOM]
2.4 Configuring Cluster Resources
In a RAC configuration only one resource group is required. This resource group is a concurrent group for the shared volume group. The following are the steps to add a concurrent resource group for a shared volume group:
First there needs to be a volume group that is shared between the nodes.
SHARED LOGICAL VOLUME MANAGER, SHARED CONCURRENT DISKS (NO VSD)
The two instances of the same cluster database have concurrent access to the same external disks. This is true concurrent access, not shared access as in the VSD environment. Because several instances access the same files and data at the same time, locks must be managed. These locks, at the CLVM layer (including the memory cache), are managed by HACMP.
1) Check that the target disks are physically connected to the two machines of the cluster and seen by both.
Type the lspv command on both machines.
Note: the hdisk number can differ depending on each node's disk configuration. Use the second field (PVid) of the lspv output to be sure you are dealing with the same physical disk on both hosts. Although hdisk numbering inconsistency may not be a problem, IBM suggests using ghost disks to ensure hdisk numbers match between the nodes. Contact IBM for further information on this topic.
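For example (illustrative output only; your hdisk numbers and PVids will differ), the same PVid can appear under different hdisk numbers on the two nodes:

node1# lspv
hdisk5          00041486eb90ebb7    None
node2# lspv
hdisk7          00041486eb90ebb7    None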
2.4.1 Create volume groups to be shared concurrently on one node
# smit vg
Select “Add a Volume Group”
Add a Volume Group
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
  VOLUME GROUP name                          [oracle_vg]
  Physical partition SIZE in megabytes       32         +
* PHYSICAL VOLUME names                      [hdisk5]   +
  Activate volume group AUTOMATICALLY        no         +
    at system restart?
  Volume Group MAJOR NUMBER                  [57]       +#
  Create VG Concurrent Capable?              yes        +
  Auto-varyon in Concurrent Mode?            no         +
The “PHYSICAL VOLUME names” must be physical disks that are shared between the nodes. We do not want the volume group automatically activated at system startup because HACMP activates it. Also “Auto-varyon in Concurrent Mode?” should be set to “no” because HACMP varies it on in concurrent mode.
You must choose the major number yourself to be sure the volume group has the same major number on all the nodes; before choosing this number, be sure it is free on every node.
To check all defined major number, type:
% ls -al /dev/*
crw-rw—- 1 root system 57, 0 Aug 02 13:39 /dev/oracle_vg
The major number for the oracle_vg volume group is 57. Ensure that 57 is available on all the other nodes and is not used by another device. If it is free, use the same number on all nodes.
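Alternatively, the “lvlstmajor” command lists the major numbers that are still free on a node. Run it on every node and pick a number free on all of them (a sketch; the output shown is illustrative):

# /usr/sbin/lvlstmajor
43...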
On this volume group, create all the logical volumes and file systems you need for the cluster database.
2.4.2 Create Shared RAW Logical Volumes if not using GPFS. See section 2.4.6 for details about GPFS.
mklv -y'db_name_cntrl1_110m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_cntrl2_110m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_system_400m' -w'n' -s'n' -r'n' usupport_vg 13 hdisk5
mklv -y'db_name_users_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_drsys_90m' -w'n' -s'n' -r'n' usupport_vg 3 hdisk5
mklv -y'db_name_tools_12m' -w'n' -s'n' -r'n' usupport_vg 1 hdisk5
mklv -y'db_name_temp_100m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_undotbs1_312m' -w'n' -s'n' -r'n' usupport_vg 10 hdisk5
mklv -y'db_name_undotbs2_312m' -w'n' -s'n' -r'n' usupport_vg 10 hdisk5
mklv -y'db_name_log11_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_log12_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_log21_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_log22_120m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_indx_70m' -w'n' -s'n' -r'n' usupport_vg 3 hdisk5
mklv -y'db_name_cwmlite_100m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
mklv -y'db_name_example_160m' -w'n' -s'n' -r'n' usupport_vg 5 hdisk5
mklv -y'db_name_oemrepo_20m' -w'n' -s'n' -r'n' usupport_vg 1 hdisk5
mklv -y'db_name_spfile_5m' -w'n' -s'n' -r'n' usupport_vg 1 hdisk5
mklv -y'db_name_srvmconf_100m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5
Substitute your database name in place of the “db_name” value, and substitute your shared volume group name (oracle_vg in the example above) for “usupport_vg”. When the volume group was created, a partition size of 32 megabytes was used. The seventh field is the number of partitions that make up the logical volume, so, for example, if “db_name_cntrl1_110m” needs to be 110 megabytes we need 4 partitions (4 x 32 Mb = 128 Mb, the smallest multiple of the partition size that holds 110 Mb).
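The partition count is simply the logical volume size rounded up to the next multiple of the 32 megabyte partition size. As a quick illustration, a throwaway ksh helper (not part of the installation; the function name is arbitrary):

$ pp() { echo $(( ($1 + $2 - 1) / $2 )); }
$ pp 110 32
4
$ pp 400 32
13

Four 32 Mb partitions hold 110 Mb, and thirteen hold the 400 Mb SYSTEM volume, matching the mklv commands above.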
The raw partitions are created in the “/dev” directory, and it is the character devices that will be used. For example, the command “mklv -y'db_name_cntrl1_110m' -w'n' -s'n' -r'n' usupport_vg 4 hdisk5” creates two device files:
/dev/db_name_cntrl1_110m
/dev/rdb_name_cntrl1_110m
Change the permissions on the character devices so the software owner owns them:
# chown oracle:dba /dev/rdb_name*
2.4.3 Import the Volume Group on to the Other Nodes
Use “importvg” to import the oracle_vg volume group on all of the other nodes.
On the first machine, type:
% varyoffvg oracle_vg
On the other nodes, import the definition of the volume group using “smit vg”:
Select “Import a Volume Group”
Import a Volume Group

Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
  VOLUME GROUP name                          [oracle_vg]
* PHYSICAL VOLUME name                       [hdisk5]   +
  Volume Group MAJOR NUMBER                  [57]       +#
  Make this VG Concurrent Capable?           no         +
  Make default varyon of VG Concurrent?      no         +
It is possible that the physical volume name (hdisk) could be different on each node. Check the PVID of the disk using “lspv”, and be sure to pick the hdisk that has the same PVID as the disk used to create the volume group on the first node. Also make sure the same major number is used; this number has to be free on all the nodes. The “Make default varyon of VG Concurrent?” option should be set to “no”. The volume group was created concurrent capable, so the option “Make this VG Concurrent Capable?” can be left at “no”. The command line equivalent for importing the volume group, after varying it off on the node where it was originally created, would be:
% importvg -V <major #> -y <vgname> hdisk#
% chvg -an <vgname>
% varyoffvg <vgname>
After importing the volume group onto each node be sure to change the ownership of the character devices to the software owner:
# chown oracle:dba /dev/rdb_name*
2.4.4 Add a Concurrent Cluster Resource Group
The shared resource in this example is “oracle_vg”. To create the concurrent resource group that will manage “oracle_vg” do the following:
Smit HACMP -> Cluster Configuration -> Cluster Resources -> Define Resource Groups -> Add a Resource Group
FastPath:
# smit cm_add_grp

Add a Resource Group
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Resource Group Name         [shared_vg]
* Node Relationship           concurrent     +
* Participating Node Names    [node1 node2]  +
The “Resource Group Name” is arbitrary and is used when selecting the resource group for configuration. Because we are configuring a shared resource, the “Node Relationship” is “concurrent”, meaning a group of nodes will share the resource. “Participating Node Names” is a space-separated list of the nodes that will be sharing the resource.
2.4.5 Configure the Concurrent Cluster Resource Group
Once the resource group is added it can then be configured with:
Smit HACMP -> Cluster Configuration -> Cluster Resources -> Change/Show Resources for a Resource Group
FastPath:
# smit cm_cfg_res.select

Configure Resources for a Resource Group
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[TOP]                                        [Entry Fields]
  Resource Group Name                        shared_vg
  Node Relationship                          concurrent
  Participating Node Names                   node1 node2
Service IP label [] +
Filesystems [] +
Filesystems Consistency Check fsck +
Filesystems Recovery Method sequential +
Filesystems to Export [] +
Filesystems to NFS mount [] +
Volume Groups [] +
Concurrent Volume groups [oracle_vg] +
Raw Disk PVIDs [00041486eb90ebb7] +
AIX Connections Service [] +
AIX Fast Connect Services [] +
Application Servers [] +
Highly Available Communication Links [] +
Miscellaneous Data []
Inactive Takeover Activated false +
9333 Disk Fencing Activated false +
SSA Disk Fencing Activated false +
Filesystems mounted before IP configured false +
[BOTTOM]
Note that the settings for “Resource Group Name”, “Node Relationship” and “Participating Node Names” come from the data entered in the previous menu. “Concurrent Volume groups” must be a pre-created volume group on shared storage. The “Raw Disk PVIDs” are the physical volume IDs for each of the disks that make up the “Concurrent Volume groups”. It is important to note that a resource group can manage multiple concurrent resources; in such a case, separate each volume group name with a space. Also, the “Raw Disk PVIDs” will be a space-delimited list of all the physical volume IDs that make up the concurrent volume group list. Alternatively, each volume group can be configured in its own concurrent resource group.
2.4.6 Creating Parallel Filesystems (GPFS)
With AIX 5.1 (5L) you can also place your files on GPFS (RAW Logical Volumes are not a requirement of GPFS). In this case create GPFS filesystems capable of holding all required Database Files, Controlfiles and Logfiles.
2.5 Synchronizing the Cluster Resources
After configuring the resource group a resource synchronization is needed.
Smit HACMP -> Cluster Configuration -> Cluster Resources -> Synchronize Cluster Resources
FastPath:
# smit clsyncnode.dialog

Type or select values in entry fields. Press Enter AFTER making all desired changes.
[TOP]                                        [Entry Fields]
  Ignore Cluster Verification Errors?        [No]       +
  Un/Configure Cluster Resources?            [Yes]      +
* Emulate or Actual?                         [Actual]   +
Note: Only the local node’s default configuration files keep the changes you make for resource DARE emulation. Once you run your emulation, to restore the original configuration rather than running an actual DARE, run the SMIT command, “Restore System Default Configuration from Active Configuration.” We recommend that you make a snapshot before running an emulation, just in case uncontrolled cluster events happen during emulation.

[BOTTOM]
Just keep the defaults.
2.6 Joining Nodes Into the Cluster
After the cluster topology and resources are configured the nodes can join the cluster. It is important to start one node at a time unless using C-SPOC (Cluster-Single Point of Control). For more information on using C-SPOC, consult IBM's HACMP documentation. The use of C-SPOC is not covered in this document.
Start cluster services by doing the following:
Smit HACMP -> Cluster Services -> Start Cluster Services
FastPath:
# smit clstart.dialog

Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Start now, on system restart or both       now       +
  BROADCAST message at startup?              false     +
  Startup Cluster Lock Services?             false     +
  Startup Cluster Information Daemon?        true      +
Setting “Start now, on system restart or both” to “now” will start the HACMP daemons immediately. “restart” will update the “/etc/inittab” with an entry to start the daemons at reboot, and “both” will do exactly that: update the “/etc/inittab” and start the daemons immediately. “BROADCAST message at startup?” can be either “true” or “false”; if set to “true”, a wall-type message is displayed when the node joins the cluster. “Startup Cluster Lock Services?” should be set to “false” for a RAC configuration; setting this parameter to “true” will not prevent the cluster from working, but the added daemon is not used. If “clstat” is going to be used to monitor the cluster, “Startup Cluster Information Daemon?” must be set to “true”.
View the “/tmp/hacmp.out” file for startup messages. When you see something similar to the following it is safe to start the cluster services on the other nodes:
May 23 09:31:43 EVENT COMPLETED: node_up_complete node1
When joining nodes into the cluster the other nodes will report a successful join in their “/tmp/hacmp.out” files:
May 23 09:34:11 EVENT COMPLETED: node_up_complete node1
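One convenient way to follow a node’s progress while the remaining nodes join is to watch the event completions as they are appended to the log (the grep pattern here is just an example):

# tail -f /tmp/hacmp.out | grep 'EVENT COMPLETED'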
2.7 Basic Cluster Administration
The “/tmp/hacmp.out” is the best place to look for cluster information. “clstat” can also be used to verify cluster health. The “clstat” program can take a while to update with the latest cluster information and at times does not work at all. Also you must have the “Startup Cluster Information Daemon?” set to “true” when starting cluster services. Use the following command to start “clstat”:
# /usr/es/sbin/cluster/clstat

clstat – HACMP for AIX Cluster Status Monitor
———————————————
Cluster: cluster1 (0)                 Tue Jul 2 08:38:06 EDT 2002
State: UP                             Nodes: 2
SubState: STABLE

Node: node1    State: UP
   Interface: node1 (0)               Address: 192.168.0.1
                                      State: UP

Node: node2    State: UP
   Interface: node2 (0)               Address: 192.168.0.2
                                      State: UP
One other way to check the cluster status is by querying the “snmpd” daemon with “snmpinfo”:
# /usr/sbin/snmpinfo -m get -o /usr/es/sbin/cluster/hacmp.defs -v ClusterSubstate.0
This should return “32”:
clusterSubState.0 = 32
If other values are returned from any node consult your IBM HACMP documentation or contact IBM support.
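To check every node in one pass, the same query can be wrapped in a loop that uses the root user equivalence configured in section 2.3 (a sketch; the node names are the example values):

# for n in node1 node2; do rsh $n /usr/sbin/snmpinfo -m get -o /usr/es/sbin/cluster/hacmp.defs -v ClusterSubstate.0; done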
You can get a quick view of the HACMP specific daemons with:
Smit HACMP -> Cluster Services -> Show Cluster Services

COMMAND STATUS
Command: OK stdout: yes stderr: no
Before command completion, additional instructions may appear below.
Subsystem         Group       PID     Status
 clstrmgrES       cluster     22000   active
 clinfoES         cluster     21394   active
 clsmuxpdES       cluster     14342   active
 cllockdES        lock                inoperative
 clresmgrdES                  29720   active
Starting & Stopping Cluster Nodes
To join and evict nodes from the cluster use:
Smit HACMP -> Cluster Services -> Start Cluster Services
See section 2.6 for more information on joining a node into the cluster.
Use the following to evict a node from the cluster:
Smit HACMP -> Cluster Services -> Stop Cluster Services
FastPath:
# smit clstop.dialog

Stop Cluster Services
Type or select values in entry fields. Press Enter AFTER making all desired changes.
[Entry Fields]
* Stop now, on system restart or both        now        +
  BROADCAST cluster shutdown?                true       +
* Shutdown mode                              graceful   +
  (graceful or graceful with takeover, forced)
See section 2.6 “Joining Nodes Into the Cluster” for an explanation of “Stop now, on system restart or both” and “BROADCAST cluster shutdown?”. The “Shutdown mode” determines whether resources move between nodes at shutdown. “forced”, new in HACMP 4.4.1, leaves applications that are controlled by HACMP events running when the shutdown occurs. “graceful” brings everything down, but cascading and rotating resources are not switched, whereas with “graceful with takeover” these resources are switched at shutdown.
Log Files for HACMP/ES

All cluster reconfiguration information during cluster startup and shutdown goes into the “/tmp/hacmp.out” file.
3.0 Preparing for the installation of RAC
The Real Application Clusters installation process includes three major tasks:
Configure the shared disks and UNIX preinstallation tasks.
Run the Oracle Universal Installer to install the Oracle9i Enterprise Edition and the Oracle9i Real Application Clusters software.
Create and configure your database.
3.1 Configure the shared disks and UNIX preinstallation tasks
3.1.1 Configure the shared disks
Real Application Clusters requires that each instance be able to access a set of unformatted devices on a shared disk subsystem if GPFS is not being used. These shared disks are also referred to as raw devices. If your platform supports an Oracle-certified cluster file system, however, you can store the files that Real Application Clusters requires directly on the cluster file system.
Note: If you are using Parallel Filesystem (GPFS), you can store the files that Real Application Clusters requires directly on the cluster file system.
The Oracle instances in Real Application Clusters write data onto the raw devices to update the control file, server parameter file, each datafile, and each redo log file. All instances in the cluster share these files.
The Oracle instances in the RAC configuration write information to raw devices defined for:
The control file
The spfile.ora
Each datafile
Each ONLINE redo log file
Server Manager (SRVM) configuration information
It is therefore necessary to define raw devices for each of these categories of file. The Oracle Database Configuration Assistant (DBCA) will create a seed database expecting the following configuration:

Raw Volume                               File Size    Sample File Name
SYSTEM tablespace                        400 Mb       db_name_raw_system_400m
USERS tablespace                         120 Mb       db_name_raw_users_120m
TEMP tablespace                          100 Mb       db_name_raw_temp_100m
UNDOTBS tablespace per instance          312 Mb       db_name_raw_undotbsx_312m
CWMLITE tablespace                       100 Mb       db_name_raw_cwmlite_100m
EXAMPLE                                  160 Mb       db_name_raw_example_160m
OEMREPO                                  20 Mb        db_name_raw_oemrepo_20m
INDX tablespace                          70 Mb        db_name_raw_indx_70m
TOOLS tablespace                         12 Mb        db_name_raw_tools_12m
DRSYS tablespace                         90 Mb        db_name_raw_drsys_90m
First control file                       110 Mb       db_name_raw_controlfile1_110m
Second control file                      110 Mb       db_name_raw_controlfile2_110m
Two ONLINE redo log files per instance   120 Mb x 2   db_name_thread_lognumber_120m
spfile.ora                               5 Mb         db_name_raw_spfile_5m
srvmconfig                               100 Mb       db_name_raw_srvmconf_100m
Note: Automatic Undo Management requires an undo tablespace per instance; for the two-instance cluster described here you therefore require a minimum of two undo tablespaces, as shown above. By following the naming convention described in the table above, raw partitions are identified with the database and the raw volume type (the data contained in the raw volume). Raw volume size is also identified using this method.
Note: In the sample names listed in the table, the string db_name should be replaced with the actual database name, thread is the thread number of the instance, and lognumber is the log number within a thread.
On the node from which you run the Oracle Universal Installer, create an ASCII file identifying the raw volume objects as shown above. The DBCA requires that these objects exist during installation and database creation. When creating the ASCII file content for the objects, name them using the format:
database_object=raw_device_file_path
When you create the ASCII file, separate the database objects from the paths with equals (=) signs as shown in the example below:
system1=/dev/rdb_name_system_400m
spfile1=/dev/rdb_name_spfile_5m
users1=/dev/rdb_name_users_120m
temp1=/dev/rdb_name_temp_100m
undotbs1=/dev/rdb_name_undotbs1_312m
undotbs2=/dev/rdb_name_undotbs2_312m
example1=/dev/rdb_name_example_160m
cwmlite1=/dev/rdb_name_cwmlite_100m
indx1=/dev/rdb_name_indx_70m
tools1=/dev/rdb_name_tools_12m
drsys1=/dev/rdb_name_drsys_90m
control1=/dev/rdb_name_cntrl1_110m
control2=/dev/rdb_name_cntrl2_110m
redo1_1=/dev/rdb_name_log11_120m
redo1_2=/dev/rdb_name_log12_120m
redo2_1=/dev/rdb_name_log21_120m
redo2_2=/dev/rdb_name_log22_120m
You must specify that Oracle should use this file to determine the raw device volume names by setting the following environment variable where filename is the name of the ASCII file that contains the entries shown in the example above:
csh:
setenv DBCA_RAW_CONFIG filename
ksh, bash or sh:
DBCA_RAW_CONFIG=filename; export DBCA_RAW_CONFIG
3.1.2 UNIX Preinstallation Steps
Note: In addition, you can run the installPrep.sh script provided in Note 189256.1, which catches most UNIX environment problems.
After configuring the raw volumes, perform the following steps prior to installation as root user:
Add the Oracle USER
Make sure you have an osdba group defined in the /etc/group file on all nodes of your cluster. The osdba group name and number, and the osoper group name, that you designate during installation must be identical on all nodes of your UNIX cluster that will be part of the Real Application Clusters database. The default UNIX group name for the osdba and osoper groups is dba. There also needs to be an oinstall group, which the software owner should have as its primary group. A typical entry would therefore look like the following:
dba::101:oracle
oinstall::102:root,oracle
The following is an example of the command used to create the “dba” group with a group ID of “101”:
# mkgroup -'A' id='101' users='oracle' dba
Create an oracle account on each node so that the account:
Is a member of the osdba group (dba in this example)
Has oinstall as its primary group
Is used only to install and update Oracle software
Has write permissions on remote directories
The following is an example of the smit command used to create the “oracle” user:
Smit -> Security & Users -> Users -> Add a User
Fastpath:
# smit mkuser

Type or select values in entry fields. Press Enter AFTER making all desired changes.
[TOP]                                        [Entry Fields]
* User NAME                                  [oracle]
  User ID                                    [101]       #
  ADMINISTRATIVE USER?                       false       +
  Primary GROUP                              [oinstall]  +
Group SET [] +
ADMINISTRATIVE GROUPS [] +
ROLES [] +
Another user can SU TO USER? true +
SU GROUPS [ALL] +
HOME directory [/home/oracle]
Initial PROGRAM [/bin/ksh]
User INFORMATION []
EXPIRATION date (MMDDhhmmyy) [0]
Note that the primary group is not “dba”. The use of “oinstall” is optional but recommended. For more information on the use of the “oinstall” group, see the Oracle9i Installation Guide Release 2 (9.X.X.X.0) for UNIX Systems: AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX, Linux Intel and Sun SPARC Solaris documentation.
Create a mount point directory on each node to serve as the top of your Oracle software directory structure so that:
The name of the mount point on each node is identical to that on the initial node
The oracle account has read, write, and execute privileges
On the node from which you will run the Oracle Universal Installer, set up user equivalence by adding entries for all nodes in the cluster, including the local node, to the .rhosts file of the oracle account, or the /etc/hosts.equiv file.
As oracle account user, check for user equivalence for the oracle account by performing a remote login (rlogin) to each node in the cluster.
As oracle account user, if you are prompted for a password, you have not given the oracle account the same attributes on all nodes. You must correct this because the Oracle Universal Installer cannot use the rcp command to copy Oracle products to the remote node’s directories without user equivalence.
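A quick equivalence check from the installation node, run as the oracle user (node names are the example values; neither command should prompt for a password):

$ rlogin node2
$ rcp /etc/group node2:/tmp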
Establish system environment variables
Set a local bin directory in the user’s PATH, such as /usr/local/bin, or /opt/bin. It is necessary to have execute permissions on this directory.
Set the DISPLAY variable to the IP address or name, X server, and screen of the system from which you will run the OUI.
Set a temporary directory path for TMPDIR with at least 20 Mb of free space to which the OUI has write permission.
Establish Oracle environment variables. Set the following Oracle environment variables:

Environment Variable   Suggested value
ORACLE_BASE            e.g. /u01/app/oracle
ORACLE_HOME            e.g. /u01/app/oracle/product/901
ORACLE_TERM            xterm
NLS_LANG               e.g. AMERICAN_AMERICA.UTF8
ORA_NLS33              $ORACLE_HOME/ocommon/nls/admin/data
PATH                   Should contain $ORACLE_HOME/bin
CLASSPATH              $ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib
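These settings can be placed in the oracle user’s ~/.profile on each node so every session picks them up. A sketch using the suggested values above (adjust the paths to your installation):

export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/901
export ORACLE_TERM=xterm
export NLS_LANG=AMERICAN_AMERICA.UTF8
export ORA_NLS33=$ORACLE_HOME/ocommon/nls/admin/data
export PATH=$PATH:$ORACLE_HOME/bin
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib:$ORACLE_HOME/network/jlib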
Create the directory /var/opt/oracle and set ownership to the oracle user.
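For example, as root on each node (using the oinstall group created earlier):

# mkdir -p /var/opt/oracle
# chown oracle:oinstall /var/opt/oracle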
Note: There is a verification script InstallPrep.sh available which may be downloaded and run prior to the installation of Oracle Real Application Clusters. This script verifies that the system is configured correctly according to the Installation Guide. The output of the script will report any further tasks that need to be performed before successfully installing Oracle 9.x DataServer (RDBMS). This script performs the following verifications:
ORACLE_HOME Directory Verification
UNIX User/umask Verification
UNIX Group Verification
Memory/Swap Verification
TMP Space Verification
Real Application Cluster Option Verification
Unix Kernel Verification
$ ./InstallPrep.sh

You are currently logged on as oracle
Is oracle the unix user that will be installing Oracle Software? y or n
y
Enter the unix group that will be used during the installation
Default: dba
dba
Enter Location where you will be installing Oracle
Default: /u01/app/oracle/product/oracle9i
/u01/app/oracle/product/9.2.0.1
Your Operating System is AIX
Gathering information… Please wait
Checking unix user … user test passed
Checking unix umask … umask test passed
Checking unix group … Unix Group test passed
Checking Memory & Swap… Memory test passed
/tmp test passed
Checking for a cluster…
AIX Cluster test
Cluster has been detected
You have 2 cluster members configured and 2 are currently up
No cluster warnings detected
Processing kernel parameters… Please wait
Running Kernel Parameter Report…
Check the report for Kernel parameter verification
Completed.
/tmp/Oracle_InstallPrep_Report has been generated
Please review this report and resolve all issues before attempting to install the Oracle Database Software
3.2 Using the Oracle Universal Installer for Real Application Clusters
Follow these procedures to use the Oracle Universal Installer to install the Oracle Enterprise Edition and the Real Application Clusters software. Oracle9i is supplied on multiple CD-ROM disks; during the installation process it is necessary to switch between the CD-ROMs, and the OUI will manage the switching between CDs. Check the RAC/IBM AIX certification matrix for the latest certified combinations.
To install the Oracle Software, perform the following:
Login as the root user and mount the first CD-ROM if installing from CD-ROM
# mount -rv cdrfs /dev/cd0 /cdrom
Execute the “rootpre.sh” shell script on the CD-ROM mount point or the location of Disk1 if installing from a disk stage. See the Oracle9i Installation Guide Release 2 (9.X.X.X.0) for UNIX Systems: AIX-Based Systems, Compaq Tru64 UNIX, HP 9000 Series HP-UX, Linux Intel and Sun SPARC Solaris documentation for more information on creating disk stages.
# /<Location_Of_Install_Media>/rootpre.sh
Login as the oracle user and execute the “runInstaller”. See Note:153960.1 if you experience problems starting the runInstaller.
$ /<Location_Of_Install_Media>/runInstaller
At the OUI Welcome screen, click Next.
A prompt will appear for the Inventory Location (if this is the first time that OUI has been run on this system). This is the base directory into which OUI will install files. The Oracle Inventory definition can be found in the file /etc/oraInst.loc. Click OK.
Verify the UNIX group name of the user who controls the installation of the Oracle9i software. If an instruction to run /tmp/orainstRoot.sh appears, the pre-installation steps were not completed successfully. Typically, the /var/opt/oracle directory does not exist or is not writeable by oracle. Run /tmp/orainstRoot.sh to correct this, forcing Oracle Inventory files, and others, to be written to the ORACLE_HOME directory. Once again this screen only appears the first time Oracle9i products are installed on the system. Click Next.
The File Location window will appear. Do NOT change the Source field. The Destination field defaults to the ORACLE_HOME environment variable. Click Next.
Select the Products to install. In this example, select the Oracle9i Server then click Next.
Select the installation type. Choose the Enterprise Edition option. The selection on this screen refers to the installation operation, not the database configuration. The next screen allows for a customized database configuration to be chosen. Click Next.
Select the configuration type. In this example you choose the Advanced Configuration as this option provides a database that you can customize, and configures the selected server products. Select Customized and click Next.
Select the other nodes on to which the Oracle RDBMS software will be installed. It is not necessary to select the node on which the OUI is currently running. Click Next.
NOTE: If choosing to create a shared Oracle Home on GPFS, you will only choose one node within the Cluster Node Selection screen of OUI for the installation of the binaries.
Identify the raw partition into which the Oracle9i Real Application Clusters (RAC) configuration information will be written. It is recommended that this raw partition be a minimum of 100MB in size.
An option to Upgrade or Migrate an existing database is presented. Do NOT select the radio button. The Oracle Migration utility is not able to upgrade a RAC database, and will error if selected to do so.
The Summary screen will be presented. Confirm that the RAC database software will be installed and then click Install. The OUI will install the Oracle9i software on to the local node, and then copy this information to the other nodes selected.
Once Install is selected, the OUI will install the Oracle RAC software on to the local node, and then copy software to the other nodes selected earlier. This will take some time. During the installation process, the OUI does not display messages indicating that components are being installed on other nodes – I/O activity may be the only indication that the process is continuing.
3.3 Create a RAC Database using the Oracle Database Configuration Assistant
The Oracle Database Configuration Assistant (DBCA) will create a database for you (for an example of manual database creation see Database Creation in Oracle9i RAC). The DBCA creates your database using the Optimal Flexible Architecture (OFA). This means the DBCA creates your database files, including the default server parameter file, using standard file naming and file placement practices. The primary phases of DBCA processing are:
Verify that you correctly configured the shared disks for each tablespace (for non-cluster file system platforms)
Create the database
Configure the Oracle network services
Start the database instances and listeners
Oracle Corporation recommends that you use the DBCA to create your database. This is because the DBCA preconfigured databases optimize your environment to take advantage of Oracle9i features such as the server parameter file and automatic undo management. The DBCA also enables you to define arbitrary tablespaces as part of the database creation process. So even if you have datafile requirements that differ from those offered in one of the DBCA templates, use the DBCA. You can also execute user-specified scripts as part of the database creation process.
The DBCA and the Oracle Net Configuration Assistant also accurately configure your Real Application Clusters environment for various Oracle high availability features and cluster administration tools.
DBCA will launch as part of the installation process, but can be run manually by executing the command dbca from the $ORACLE_HOME/bin directory on UNIX platforms. The RAC Welcome Page displays. Choose the Oracle Cluster Database option and select Next.
The Operations page is displayed. Choose the option Create a Database and click Next.
The Node Selection page appears. Select the nodes that you want to configure as part of the RAC database and click Next. If nodes are missing from the Node Selection then perform clusterware diagnostics by executing the $ORACLE_HOME/bin/lsnodes -v command and analyzing its output. Refer to your vendor’s clusterware documentation if the output indicates that your clusterware is not properly installed. Resolve the problem and then restart the DBCA.
The Database Templates page is displayed. The templates other than New Database include datafiles. Choose New Database and then click Next.
The Show Details button provides information on the database template selected.
DBCA now displays the Database Identification page. Enter the Global Database Name and Oracle System Identifier (SID). The Global Database Name is typically of the form name.domain, for example mydb.us.oracle.com while the SID is used to uniquely identify an instance (DBCA should insert a suggested SID, equivalent to name1 where name was entered in the Database Name field). In the RAC case the SID specified will be used as a prefix for the instance number. For example, MYDB, would become MYDB1, MYDB2 for instance 1 and 2 respectively.
The Database Options page is displayed. Select the options you wish to configure and then choose Next. Note: If you did not choose New Database from the Database Template page, you will not see this screen.
The Additional database Configurations button displays additional database features. Make sure both are checked and click OK.
Select the connection options desired from the Database Connection Options page. Note: If you did not choose New Database from the Database Template page, you will not see this screen. Click Next.
DBCA now displays the Initialization Parameters page. This page comprises a number of Tab fields. Modify the Memory settings if desired and then select the File Locations tab to update information on the Initialization Parameters filename and location. Then click Next.
The option Create persistent initialization parameter file is selected by default. If you have a cluster file system, enter a file system name; otherwise a raw device name for the location of the server parameter file (spfile) must be entered. Then click Next.
The button File Location Variables… displays variable information. Click OK.
The button All Initialization Parameters… displays the Initialization Parameters dialog box. This box presents values for all initialization parameters and indicates whether they are to be included in the spfile to be created through the check box, included (Y/N). Instance specific parameters have an instance value in the instance column. Complete entries in the All Initialization Parameters page and select Close. Note: There are a few exceptions to what can be altered via this screen. Ensure all entries in the Initialization Parameters page are complete and select Next.
DBCA now displays the Database Storage Window. This page allows you to enter file names for each tablespace in your database.
The file names are displayed in the Datafiles folder, but are entered by selecting the Tablespaces icon, and then selecting the tablespace object from the expanded tree. Any names displayed here can be changed. A configuration file (pointed to by the environment variable DBCA_RAW_CONFIG) can be used; see section 3.1.1. Complete the database storage information and click Next.
The Database Creation Options page is displayed. Ensure that the option Create Database is checked and click Finish.
The DBCA Summary window is displayed. Review this information and then click OK.
Once the Summary screen is closed using the OK option, DBCA begins to create the database according to the values specified.
A new database now exists. It can be accessed via Oracle SQL*PLUS or other applications designed to work with an Oracle RAC database.
4.0 Administering Real Application Clusters Instances
Oracle Corporation recommends that you use SRVCTL to administer your Real Application Clusters database environment. SRVCTL manages configuration information that is used by several Oracle tools. For example, Oracle Enterprise Manager and the Intelligent Agent use the configuration information that SRVCTL generates to discover and monitor nodes in your cluster. Before using SRVCTL, ensure that your Global Services Daemon (GSD) is running after you configure your database. To use SRVCTL, you must have already created the configuration information for the database that you want to administer. You must have done this either by using the Oracle Database Configuration Assistant (DBCA), or by using the srvctl add command as described below.
If this is the first Oracle9i database created on this cluster, then you must initialize the clusterwide SRVM configuration. First, create or edit the /var/opt/oracle/srvConfig.loc file and add the entry srvconfig_loc=path_name, where path_name is a small cluster-shared raw volume, e.g.:
$ vi /var/opt/oracle/srvConfig.loc

srvconfig_loc=/dev/rrac_srvconfig_100m
Then execute the following command to initialize this raw volume (Note: this cannot be run while the GSD is running. Prior to 9i Release 2 you will need to kill the …/jre/1.1.8/bin/… process to stop the GSD; from 9i Release 2, use the gsdctl stop command):
$ srvconfig -init
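On 9.0.1, to locate and stop a running GSD before the initialization (a sketch; the grep pattern and the PID shown are illustrative):

$ ps -ef | grep jre | grep -v grep
$ kill <pid of the GSD java process>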
The first time you use the SRVCTL Utility to create the configuration, start the Global Services Daemon (GSD) on all nodes so that SRVCTL can access your cluster’s configuration information. Then execute the srvctl add command so that Real Application Clusters knows what instances belong to your cluster using the following syntax:
For Oracle RAC v9.0.1:
$ gsd
Successfully started the daemon on the local node.
$ srvctl add db -p db_name -o oracle_home
Then for each instance enter the command from either node:
$ srvctl add instance -p db_name -i sid -n node
To display the configuration details for example databases racdb1/2, on nodes racnode1/2 with instances racinst1/2, run:
$ srvctl config
racdb1
racdb2

$ srvctl config -p racdb1
racnode1 racinst1
racnode2 racinst2

$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1
Examples of starting and stopping RAC follow:
$ srvctl start -p racdb1
Instance successfully started on node: racnode2
Listeners successfully started on node: racnode2
Instance successfully started on node: racnode1
Listeners successfully started on node: racnode1

$ srvctl stop -p racdb2
Instance successfully stopped on node: racnode2
Instance successfully stopped on node: racnode1
Listener successfully stopped on node: racnode2
Listener successfully stopped on node: racnode1

$ srvctl stop -p racdb1 -i racinst2 -s inst
Instance successfully stopped on node: racnode2

$ srvctl stop -p racdb1 -s inst
PRKO-2035 : Instance is already stopped on node: racnode2
Instance successfully stopped on node: racnode1
For Oracle RAC v9.2.0+:
$ gsdctl start
Successfully started the daemon on the local node.
$ srvctl add database -d db_name -o oracle_home [-m domain_name] [-s spfile]
Then for each instance enter the command:
$ srvctl add instance -d db_name -i sid -n node
To display the configuration details for example databases racdb1/2, on nodes racnode1/2 with instances racinst1/2, run:
$ srvctl config
racdb1
racdb2

$ srvctl config -p racdb1 -n racnode1
racnode1 racinst1 /u01/app/oracle/product/9.2.0.1

$ srvctl status database -d racdb1
Instance racinst1 is running on node racnode1
Instance racinst2 is running on node racnode2
Examples of starting and stopping RAC follow:
$ srvctl start database -d racdb2
$ srvctl stop database -d racdb2
$ srvctl stop instance -d racdb1 -i racinst2
$ srvctl start instance -d racdb1 -i racinst2
$ gsdctl stat
GSD is running on local node
$ gsdctl stop
For further information on srvctl and gsdctl see the Oracle9i Real Application Clusters Administration manual.