Hanh Nguyen

Managing Raw Disks in AIX to use with Oracle ASM

Purpose

This document describes an effective way to manage raw disks in AIX for use by Oracle ASM. The steps listed here also help prevent data corruption from accidental overwrites caused by user errors arising from the OS disk naming convention.

Managing Raw Disks in AIX to use with Oracle ASM (lkdev, rendev)

Introduction: Traditionally, AIX storage devices were made available for use by assigning disk devices to Volume Groups (VGs) and then defining Logical Volumes (LVs) in the VGs. When a disk is assigned to a VG, the Logical Volume Manager (LVM) writes information to the disk and to the AIX Object Data Manager (ODM). Information on the disk identifies that the disk was assigned to the LVM; other information in the ODM identifies which VG the disk belongs to. System components other than the LVM can use the same convention. For example, GPFS uses the same area of the disk to identify disks assigned to it. AIX commands can use the identifying information on the disk or in the ODM to help prevent a disk already in use from being reassigned to another use. The information can also be used to display helpful details that identify disk usage. For example, the lspv command will display the VG name of disks that are assigned to a VG.

When using Oracle ASM, which does its own disk management, the disk devices are typically assigned directly to the Oracle application and are not managed by the LVM. The Oracle OCR and VOTING disks are also commonly assigned directly to storage devices and are not managed by the LVM. In these cases the identifying information associated with disks that are managed by the LVM is not present. AIX has special functionality to help manage these disks and to help prevent an AIX administrator from inadvertently reassigning a disk already in use by Oracle and inadvertently corrupting the Oracle data.

Where possible, AIX commands that write to the LVM information block include special checking to determine whether the disk is already in use by Oracle, to prevent these disks from being assigned to the LVM, which would result in the Oracle data becoming corrupted. These commands, whether checking is done, and the AIX levels where checking was added are listed in Table 1.

AIX Command              AIX 5.3        AIX 6.1  AIX 7.1  Description
mkvg                     TL07 or newer  All      All      Prevent reassigning disk used by Oracle
extendvg                 TL07 or newer  All      All      Prevent reassigning disk used by Oracle
chdev … -a pv=yes
chdev … -a pv=clear      –              –        –        No checking

Table 1 – AIX commands which write control information on the disk, and if and when checking for Oracle disk signatures was added

Note in Table 1 that the chdev command with attribute pv=yes or pv=clear, which writes to the VGDA information block on the disk, does not check for the Oracle disk signature. It is therefore extremely important that this command is not used on a disk which is already being used by Oracle, as it could lead to corruption of the Oracle data. To help prevent this, additional functionality was added to AIX.

AIX 6.1 and AIX 7.1 LVM commands contain new functionality that can be used to better manage AIX devices used by Oracle. This includes commands to better identify shared disks across multiple nodes, the ability to assign a meaningful name to a device, and a locking mechanism that the system administrator can use when a disk is assigned to Oracle to help prevent the accidental reuse of the disk at a later time. This new functionality is listed in Table 2 along with the minimum AIX level providing that functionality.

AIX Command  AIX 5.3  AIX 6.1  AIX 7.1  Description
lspv         N/A      TL07     TL01     New lspv command option “-u” provides additional identification and state information
lkdev        N/A      TL07     TL01     New lkdev command that can lock a device so that any attempt to modify the device characteristics will fail
rendev       N/A      TL06     TL00     New command to rename a device

Table 2 – New AIX Commands Useful for Managing AIX Devices Used by Oracle ASM

The use of each of the commands in Table 2 is described in the sections below.

AIX lkdev Command: 

The AIX lkdev command should be used by the system administrator when a disk is assigned to Oracle to lock the disk device, preventing the device from inadvertently being altered by a system administrator at a later time. The lkdev command locks the specified device so that any attempt to modify the device attributes (chdev, chpath) or remove the device or one of its paths (rmdev, rmpath) will be denied. This is intended to get the attention of the administrator and warn that the device is already being used. The “-d” option of the lkdev command can be used to remove the lock if the disk is no longer being used by Oracle. The lspv command with the “-u” option indicates if the disk device is locked. The example section of this note shows how to use lkdev and the related lspv output.

AIX rendev Command: 

The AIX rendev command can be used to assign meaningful names to disks used by the Oracle Database, Cluster Ready Services (CRS) and ASM. This is useful because nothing in the output of the standard AIX disk commands indicates that a disk is being used by Oracle; for example, the lspv command does not show that a disk is used by Oracle. The command can be used to assign a meaningful name to the Oracle CRS OCR and VOTING disks, whether they are accessed as raw devices (prior to 11gR2) or through the ASM (11gR2 and later).

The rendev command will allow a disk to be dynamically renamed if the disk is not currently open. For example, hdisk11 could be renamed to hdiskASMd001. Once the device is renamed it can no longer be accessed by the old name. That means that any administrator who later runs a command against that disk has to use the new meaningful name, and therefore clearly sees that the disk belongs to ASM. Also, when a system administrator issues an “lspv”, the meaningful name will be listed and it will be clear which disks are being used by Oracle.

For non-RAC installations a system administrator should identify the disks that will be managed by ASM and assign meaningful names. Any name that is 15 characters or less and not already used in the system can be used, but it is recommended that you keep the “hdisk” prefix on the device name, as this allows the default ASM discovery string to find the disks and makes it obvious the device is an hdisk. The ASM disk discovery process will find the disks even though the names have changed, as long as the new names match the ASM discovery string.
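The naming constraints above can be checked before running rendev. The following is a minimal sketch using hypothetical candidate names; it tests only the 15-character limit and the “hdisk” prefix recommendation:

```shell
# Check hypothetical candidate names against the recommendations above:
# 15 characters or less, and the "hdisk" prefix kept so the default ASM
# discovery string still matches.
for name in hdiskASMd001 hdiskASMdata0001 diskASM01; do
  if [ "${#name}" -le 15 ] && [ "${name#hdisk}" != "$name" ]; then
    echo "$name: ok"
  else
    echo "$name: rejected"
  fi
done
```

A real check would also verify that the candidate name is not already in use on the system, for example by looking for it in lsdev output.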

For RAC installations the disks are shared across nodes but the names of the shared disk devices are not necessarily the same across all of the nodes in the cluster. The ASM can manage identifying storage devices even if the device names don’t match across the nodes in the cluster, however it is useful to make the names for each shared disk device consistent across the cluster. The new “-u” option of the lspv command is useful in identifying disks across nodes of a cluster.

When rendev is used to rename a disk, both the block and character mode devices are renamed. If the device being renamed is in the Available state, the rendev command must unconfigure the device before renaming it. If the unconfigure operation fails, the renaming will also fail. If the unconfigure succeeds, the rendev command will configure the device after renaming it, to restore it to the Available state. In the process of unconfiguring and reconfiguring the device, the ownership and permissions will be reset to the default values. So after renaming the disk device, the ownership and permissions should be checked and, if necessary, changed to the values required by your Oracle RAC installation. Device settings stored in the AIX ODM, for example reserve_policy, will not be changed by the renaming process.

Some disk multipathing solutions may have problems with device renaming. At least some versions of EMC PowerPath and some IBM SDDPCM tools (IBM storage MPIO tools) have dependencies on disk names which can lead to problems when disks are renamed. For this reason, device renaming should only be used with the AIX native MPIO device driver, unless the storage vendor confirms that your storage solution is compatible with renaming the disks.

AIX lspv Command: 

The “-u” option of the AIX lspv command provides additional device identification information, the UDID and UUID. The lspv command will also indicate if the device is locked. These IDs are unique to a disk and can be used to identify a shared disk across the nodes of a cluster, for example when you want to rename a device to use a meaningful name that is the same for that disk on all nodes. Which identification information is available depends on the storage and device driver being used. Also, newer device driver and system versions may add identification information that was not previously available. When present, either of these IDs can be used to identify a disk across nodes.

The Unique device identifier (UDID) is the unique_id attribute from the ODM CuAt ObjectClass. The UDID is present for PowerPath, AIX MPIO, and some other devices. The Universally Unique Identifier (UUID) is present for AIX MPIO devices.

Using the UDID or UUID to identify the disks instead of the PVID is useful because when Oracle uses a disk it overwrites the area on disk where the PVID is stored. The PVID is also saved in the ODM, so if the PVID was on the disk when AIX first configured the device then lspv will still display the PVID. However, if a new node is added to the cluster after Oracle is already using the disk, the new node will not see a PVID on the disk and therefore cannot set the PVID in the ODM. This has sometimes led to situations where the system administrator incorrectly uses the ‘chdev … -a pv=yes’ command to set the PVID, which corrupts the Oracle data on that disk.

Because the ASM uses the same area of the disk where AIX stores the PVID, a PVID should never be written to a disk after the disk has been assigned to the ASM, as this would corrupt the ASM disk header. If the PVID was written to the disk before the disk was assigned to ASM, and all the nodes in the cluster discovered the disk while the PVID was still on the disk, then the PVID will remain in the AIX ODM and will show up in the output of the lspv command. However, because this method cannot be used when adding nodes after the disks are assigned to the ASM, and because of the risk of overwriting the ASM disk header if someone inadvertently adds a PVID after the disk is assigned to ASM, the UDID or UUID should be used to identify shared disks.

Examples: 

The following examples show how to use the AIX rendev and lkdev commands with Oracle RAC and ASM.

Example 1:  This example uses the rendev and lkdev commands to rename and lock ASM disk devices on an existing Oracle RAC cluster. In this example the disks are already owned by ASM, so we use an ASM view to match up the disk names across the cluster instead of ‘lspv -u’.

Example 2:  This example shows how to add a new node to an existing RAC cluster, using ‘lspv -u’ to identify the disks on the new node.

Example 1:

In this example rendev and lkdev are used to rename and lock the ASM disks on an existing Oracle RAC cluster. The cluster has four nodes, and the Oracle data, OCR and Voting files are all in the ASM. The ASM disks can be locked while the cluster is active, but the instances and clusterware need to be shut down to rename the ASM disks. ASM uses the discovery string and control information on the ASM disks to identify the disks, so the names of the ASM disk devices can change without confusing ASM; no ASM commands need to be issued. In the example, meaningful disk device names that start with “hdisk” are used, so there is no need to change the ASM discovery string. Because the ASM disk names don’t need to match across the cluster, the changes can be made one node at a time, while the Oracle database and clusterware are active on the other nodes. The following commands show the renaming and locking on the first node. The other nodes can be changed in a similar way.

The following commands and output show that the OCR and Voting files are all located in the ASM. The SQL output from the ASM shows the disks available to the ASM. All of the available disks except for hdisk19-hdisk21 are currently in use.

root@rac222 # ocrcheck -config
Oracle Cluster Registry configuration is :
         Device/File Name : +OCR

root@rac222 # crsctl query css votedisk
##  STATE    File Universal Id                File Name        Disk group
--  -----    -----------------                ---------        ----------
 1. ONLINE   995e7ba027194fe0bfa6b09c7187ab17 (/dev/rhdisk14)  [VOTE]
 2. ONLINE   60c6827b050c4ff4bf4d1a25cacfa4ef (/dev/rhdisk15)  [VOTE]
 3. ONLINE   7ce12429d5a14fe6bfb67046eb1e40fd (/dev/rhdisk16)  [VOTE]

SQL> select name, path, mode_status, state from v$asm_disk order by name;

NAME             PATH                 MODE_STATUS      STATE
---------------- -------------------- ---------------- ----------------
DATA_0000        /dev/rhdisk4         ONLINE           NORMAL
DATA_0001        /dev/rhdisk5         ONLINE           NORMAL
DATA_0002        /dev/rhdisk6         ONLINE           NORMAL
DATA_0003        /dev/rhdisk7         ONLINE           NORMAL
DATA_0004        /dev/rhdisk8         ONLINE           NORMAL
DATA_0005        /dev/rhdisk9         ONLINE           NORMAL
DATA_0006        /dev/rhdisk10        ONLINE           NORMAL
DATA_0007        /dev/rhdisk11        ONLINE           NORMAL
DATA_0008        /dev/rhdisk12        ONLINE           NORMAL
DATA_0009        /dev/rhdisk13        ONLINE           NORMAL
OCR_0000         /dev/rhdisk17        ONLINE           NORMAL
OCR_0001         /dev/rhdisk18        ONLINE           NORMAL
VOTE_0000        /dev/rhdisk14        ONLINE           NORMAL
VOTE_0001        /dev/rhdisk15        ONLINE           NORMAL
VOTE_0002        /dev/rhdisk16        ONLINE           NORMAL
                 /dev/rhdisk21        ONLINE           NORMAL
                 /dev/rhdisk20        ONLINE           NORMAL
                 /dev/rhdisk19        ONLINE           NORMAL

The following commands and output show that the Oracle RAC cluster is active on all nodes. So the cluster and CRS are stopped on node rac222 where we will make the changes.

root@rac222 # crsctl check cluster -all
**************************************************************
rac222:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac223:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac232:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac233:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
root@rac222 # crsctl stop cluster
:

root@rac222 # crsctl stop crs
:

With Oracle RAC and the CRS stopped, all of the ASM disk devices can be renamed. For the new meaningful names we keep “hdisk” at the start of the name, so it is clear these are disk devices and so the default ASM discovery string will still search the correct disks. We include “ASM” in the name to indicate the disks are owned by ASM, followed by “d” for datafiles or “c” for clusterware files, followed by a number for uniqueness. The naming is arbitrary, so it can be adapted to what is meaningful in the context of your configuration. Then ownership of the devices is set back to the original values required by Oracle, oracle:dba in this example.

root@rac222 # rendev -n hdiskASMd001 -l hdisk4
root@rac222 # rendev -n hdiskASMd002 -l hdisk5
root@rac222 # rendev -n hdiskASMd003 -l hdisk6
root@rac222 # rendev -n hdiskASMd004 -l hdisk7
root@rac222 # rendev -n hdiskASMd005 -l hdisk8
root@rac222 # rendev -n hdiskASMd006 -l hdisk9
root@rac222 # rendev -n hdiskASMd007 -l hdisk10
root@rac222 # rendev -n hdiskASMd008 -l hdisk11
root@rac222 # rendev -n hdiskASMd009 -l hdisk12
root@rac222 # rendev -n hdiskASMd010 -l hdisk13
root@rac222 # rendev -n hdiskASMc011 -l hdisk14
root@rac222 # rendev -n hdiskASMc012 -l hdisk15
root@rac222 # rendev -n hdiskASMc013 -l hdisk16
root@rac222 # rendev -n hdiskASMc014 -l hdisk17
root@rac222 # rendev -n hdiskASMc015 -l hdisk18
root@rac222 # rendev -n hdiskASMc016 -l hdisk19
root@rac222 # rendev -n hdiskASMc017 -l hdisk20
root@rac222 # rendev -n hdiskASMc018 -l hdisk21

root@rac222 # chown oracle:dba /dev/rhdiskASM*

Now the CRS and Oracle RAC can be restarted.

root@rac222 # crsctl start crs
CRS-4123: Oracle High Availability Services has been started.

We check that the CRS is online and verify the OCR and voting files.

root@rac222 # crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

root@rac222 # ocrcheck
Status of Oracle Cluster Registry is as follows :
         Version                  : 3
         Total space (kbytes)     : 262120
         Used space (kbytes)      : 4220
         Available space (kbytes) : 257900
         ID                       : 133061953
         Device/File Name         : +OCR
                                    Device/File integrity check succeeded

Device/File not configured

Device/File not configured

Device/File not configured

Device/File not configured

Cluster registry integrity check succeeded

Logical corruption check succeeded

root@rac222 # crsctl query css votedisk
##  STATE    File Universal Id                File Name            Disk group
--  -----    -----------------                ---------            ----------
 1. ONLINE   995e7ba027194fe0bfa6b09c7187ab17 (/dev/rhdiskASMc011) [VOTE]
 2. ONLINE   60c6827b050c4ff4bf4d1a25cacfa4ef (/dev/rhdiskASMc012) [VOTE]
 3. ONLINE   7ce12429d5a14fe6bfb67046eb1e40fd (/dev/rhdiskASMc013) [VOTE]

We can now verify the names and status of the disks in ASM.

SQL> select name, path, mode_status, state from v$asm_disk order by name;

NAME             PATH                 MODE_STATUS      STATE
---------------- -------------------- ---------------- ----------------
DATA_0000        /dev/rhdiskASMd001   ONLINE           NORMAL
DATA_0001        /dev/rhdiskASMd002   ONLINE           NORMAL
DATA_0002        /dev/rhdiskASMd003   ONLINE           NORMAL
DATA_0003        /dev/rhdiskASMd004   ONLINE           NORMAL
DATA_0004        /dev/rhdiskASMd005   ONLINE           NORMAL
DATA_0005        /dev/rhdiskASMd006   ONLINE           NORMAL
DATA_0006        /dev/rhdiskASMd007   ONLINE           NORMAL
DATA_0007        /dev/rhdiskASMd008   ONLINE           NORMAL
DATA_0008        /dev/rhdiskASMd009   ONLINE           NORMAL
DATA_0009        /dev/rhdiskASMd010   ONLINE           NORMAL
OCR_0000         /dev/rhdiskASMc014   ONLINE           NORMAL
OCR_0001         /dev/rhdiskASMc015   ONLINE           NORMAL
VOTE_0000        /dev/rhdiskASMc011   ONLINE           NORMAL
VOTE_0001        /dev/rhdiskASMc012   ONLINE           NORMAL
VOTE_0002        /dev/rhdiskASMc013   ONLINE           NORMAL
                 /dev/rhdiskASMc016   ONLINE           NORMAL
                 /dev/rhdiskASMc017   ONLINE           NORMAL
                 /dev/rhdiskASMc018   ONLINE           NORMAL

18 rows selected.

We confirm the cluster is still ONLINE on all nodes.

root@rac222 # crsctl check cluster -all
**************************************************************
rac222:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac223:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac232:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rac233:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************

Now we lock the ASM disk devices. This can be done while Oracle RAC is active on the cluster. For each of the ASM disk devices, lock the device as shown below for the first device. Then use the lspv command to check the status of the disks.

root@rac222 # lkdev -l hdiskASMd001 -a
hdiskASMd001 locked
(repeat for each of the ASM disks)

root@rac222 # lspv
hdisk0       00c1894ca63631c2 None
hdisk1       00c1892cab6286eb None
hdisk2       00c1892cab72bef3 None
hdisk3       00c1892cac105522 None
hdiskASMd001 00c1892c11ce6578 None locked
hdiskASMd002 00c1892c11ce6871 None locked
hdiskASMd003 00c1892c11ce6b66 None locked
hdiskASMd004 00c1892c11ce6e39 None locked
hdiskASMd005 00c1892c11ce7129 None locked
hdiskASMd006 00c1892c11ce73fb None locked
hdiskASMd007 00c1892c11ce76d4 None locked
hdiskASMd008 00c1892c11ce79a3 None locked
hdiskASMd009 00c1892c11ce7d63 None locked
hdiskASMd010 00c1892c11ce8044 None locked
hdiskASMc011 00c1894c211cc685 None locked
hdiskASMc012 00c1894c211cc7f2 None locked
hdiskASMc013 00c1894c211cc94c None locked
hdiskASMc014 00c1894c211ccaa8 None locked
hdiskASMc015 00c1894c211ccbfb None locked
hdiskASMc016 none             None locked
hdiskASMc017 none             None locked
hdiskASMc018 00c1894c211cd008 None locked
hdisk22      00c1894c992baec7 None
hdisk23      00c1894c992bb220 None
hdisk24      00c1894c992bb5a3 None
hdisk25      00c1894c992bb8c7 None
hdisk26      00c1894c992bbc0f None
hdisk27      00c1894c992bbfdc None
hdisk28      00c1894cbfa5952b oravg  active
hdisk29      00c1894c05f04253 rootvg active
hdisk30      00c1894c4a3c24bc oravg  active
hdisk31      00c1894cce049074 oravg  active
hdisk32      00c1894c4a3bf8b0 None
hdisk33      00c1894c5567ae8d None
hdisk34      00c1894c4a3c1347 None

Note the indication of which disks are locked in the last column of the lspv output listed above. Also note that on this system lspv is showing a PVID (second column) for some of the ASM disks but not for two of them (hdiskASMc016/017). The ASM disks do not have a PVID on the physical disk, because that area is reused by the ASM. The values shown by lspv are the values in the AIX ODM. When AIX first identifies a new disk device it saves information about that device, including the PVID, in the ODM. So the state of the disk when AIX first configured it determines whether or not a PVID is in the AIX ODM.
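The locked disks can be picked out of the lspv output by checking the last column. The following is a sketch in the style of the awk pipelines used later in this note; the heredoc lines stand in for real lspv output:

```shell
# Print only the disks whose last column is "locked"; the heredoc below
# substitutes sample lines for real lspv output on an AIX system.
cat <<'EOF' | awk '$NF=="locked"{print $1}'
hdisk0       00c1894ca63631c2 None
hdiskASMd001 00c1892c11ce6578 None locked
hdiskASMc016 none             None locked
hdisk28      00c1894cbfa5952b oravg active
EOF
```

On a real system the heredoc would be replaced by the lspv command itself, i.e. `lspv | awk '$NF=="locked"{print $1}'`.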

Example 2:  This example shows how to add a new node to an existing RAC cluster, using ‘lspv -u’ to identify the disks on the new node, and then rename and lock them. When a new node is added to an existing cluster using ASM, the PVIDs will not be present on the ASM disks. Since the disks are already in use by the ASM on the other nodes, there is no PVID on the disk when AIX on the new node configures the disk devices. In this case ‘lspv -u’ is used to identify the disks owned by ASM. The example only shows how to rename the disks with names matching the other cluster nodes and lock the disks in preparation for installing Oracle RAC.

In this example the shared disks (LUNs) have been mapped to the new node (rac223) and the AIX command cfgmgr was run on rac223 to configure the new disks. Following is the resulting lspv output.

Before running cfgmgr:
root@rac223 # lspv
hdisk0  00c1894c05e93592 rootvg active
hdisk1  00c1894c4a3bdd1d oravg  active
hdisk2  00c1894c4a3bf1dc oravg  active
hdisk3  00c1894ca63631c2 None
hdisk4  00c1892cab6286eb None
hdisk5  00c1892cab72bef3 None
hdisk6  00c1892cac105522 None

root@rac223 # cfgmgr

After running cfgmgr:
root@rac223 # lspv
hdisk0  00c1894c05e93592 rootvg active
hdisk1  00c1894c4a3bdd1d oravg  active
hdisk2  00c1894c4a3bf1dc oravg  active
hdisk3  00c1894ca63631c2 None
hdisk4  00c1892cab6286eb None
hdisk5  00c1892cab72bef3 None
hdisk6  00c1892cac105522 None
hdisk7             none  None
hdisk8             none  None
hdisk9             none  None
hdisk10            none  None
hdisk11            none  None
hdisk12            none  None
hdisk13            none  None
hdisk14            none  None
hdisk15            none  None
hdisk16            none  None
hdisk17            none  None
hdisk18            none  None
hdisk19            none  None
hdisk20            none  None
hdisk21            none  None
hdisk22            none  None
hdisk23            none  None
hdisk24            none  None

In the above lspv output, note that hdisk7 through hdisk24 were configured by running cfgmgr. These are the ASM shared disks that were mapped to the new Oracle RAC node rac223. Also note that the PVID field is “none” for these disks. This is because there was no PVID on the disk when cfgmgr was run. At this point we could just set the correct permissions for Oracle RAC on these disks and install the Oracle RAC clusterware, and the ASM would correctly identify the disks and how they should be used. However, we want to assign meaningful names so it is clear they are ASM disks and so they match the names on the other nodes in the cluster. To do this we will use the UUID field from the ‘lspv -u’ command to match each disk with the disk name used on an existing node in the cluster. Do not try to add a PVID to these disks with ‘chdev … -a pv=yes’, as that would overwrite the ASM data and corrupt the disk.

Show the current disk name and UUID on rac223

root@rac223 # lspv -u|awk '/none/{printf("%-12s %s\n",$1,$5)}'
hdisk7       10a04b5f-befd-05b1-b142-88054ee86886
hdisk8       efed151e-73a6-77dd-c685-dc4bfb6d42bd
hdisk9       6183f585-d9c2-0938-cef9-10daa59b5d21
hdisk10      9812adda-fbe9-e424-8299-b8274d6da44d
hdisk11      c3a8eb24-b547-edad-5179-2a31985ca5d2
hdisk12      23647cf1-cf55-1a8b-e96f-f4339454b8cc
hdisk13      4a6096eb-42fd-40d0-44d8-09faee2822f4
hdisk14      082d65b5-6c95-23fb-8660-f8de93acc204
hdisk15      647307f9-fa7b-57a6-704a-bd1bae9e1aa0
hdisk16      7e08f927-7507-a381-c42b-3dd1df5a5a78
hdisk17      ee197d2e-a3eb-58f0-18c8-617293053ae8
hdisk18      39405380-77b2-3c05-fbf2-e9cbc661accb
hdisk19      a3295eaa-a8a1-b341-6b02-4a9015a99f23
hdisk20      3986170d-077a-b6a6-b0e7-99022ffa098a
hdisk21      e750c8b3-bb48-9d75-5fa7-9f4d736126b3
hdisk22      9315a8b3-6b0e-1a8b-642c-83187dee282c
hdisk23      346319af-0190-a46e-8d51-6f139e19e5d9
hdisk24      67fafb19-0464-6199-e8be-a48e177d71be

Show the disk name and UUID on existing cluster node rac222

root@rac222 # lspv -u|awk '/ASM/{printf("%-12s %s\n",$1,$5)}'
hdiskASMd001 10a04b5f-befd-05b1-b142-88054ee86886
hdiskASMd002 efed151e-73a6-77dd-c685-dc4bfb6d42bd
hdiskASMd003 6183f585-d9c2-0938-cef9-10daa59b5d21
hdiskASMd004 9812adda-fbe9-e424-8299-b8274d6da44d
hdiskASMd005 c3a8eb24-b547-edad-5179-2a31985ca5d2
hdiskASMd006 23647cf1-cf55-1a8b-e96f-f4339454b8cc
hdiskASMd007 4a6096eb-42fd-40d0-44d8-09faee2822f4
hdiskASMd008 082d65b5-6c95-23fb-8660-f8de93acc204
hdiskASMd009 647307f9-fa7b-57a6-704a-bd1bae9e1aa0
hdiskASMd010 7e08f927-7507-a381-c42b-3dd1df5a5a78
hdiskASMc011 ee197d2e-a3eb-58f0-18c8-617293053ae8
hdiskASMc012 39405380-77b2-3c05-fbf2-e9cbc661accb
hdiskASMc013 a3295eaa-a8a1-b341-6b02-4a9015a99f23
hdiskASMc014 3986170d-077a-b6a6-b0e7-99022ffa098a
hdiskASMc015 e750c8b3-bb48-9d75-5fa7-9f4d736126b3
hdiskASMc016 9315a8b3-6b0e-1a8b-642c-83187dee282c
hdiskASMc017 346319af-0190-a46e-8d51-6f139e19e5d9
hdiskASMc018 67fafb19-0464-6199-e8be-a48e177d71be

Now the UUID is used to join the rac223 disk name with the rac222 name for the same disk.

root@rac223 # lspv -u|awk '/none/{printf("%s %s\n",$1,$5)}' > /tmp/rac223_name_uuid
root@rac223 # rsh rac222 lspv -u|awk '/ASM/{printf("%s %s\n",$1,$5)}' > /tmp/rac222_name_uuid
root@rac223 # cat /tmp/rac223_name_uuid /tmp/rac222_name_uuid|sort -k2|awk '{printf("%-12s %s\n",$1,$2)}'
hdisk14      082d65b5-6c95-23fb-8660-f8de93acc204
hdiskASMd008 082d65b5-6c95-23fb-8660-f8de93acc204
hdisk7       10a04b5f-befd-05b1-b142-88054ee86886
hdiskASMd001 10a04b5f-befd-05b1-b142-88054ee86886
hdisk12      23647cf1-cf55-1a8b-e96f-f4339454b8cc
hdiskASMd006 23647cf1-cf55-1a8b-e96f-f4339454b8cc
hdisk23      346319af-0190-a46e-8d51-6f139e19e5d9
hdiskASMc017 346319af-0190-a46e-8d51-6f139e19e5d9
hdisk18      39405380-77b2-3c05-fbf2-e9cbc661accb
hdiskASMc012 39405380-77b2-3c05-fbf2-e9cbc661accb
hdisk20      3986170d-077a-b6a6-b0e7-99022ffa098a
hdiskASMc014 3986170d-077a-b6a6-b0e7-99022ffa098a
hdisk13      4a6096eb-42fd-40d0-44d8-09faee2822f4
hdiskASMd007 4a6096eb-42fd-40d0-44d8-09faee2822f4
hdisk9       6183f585-d9c2-0938-cef9-10daa59b5d21
hdiskASMd003 6183f585-d9c2-0938-cef9-10daa59b5d21
hdisk15      647307f9-fa7b-57a6-704a-bd1bae9e1aa0
hdiskASMd009 647307f9-fa7b-57a6-704a-bd1bae9e1aa0
hdisk24      67fafb19-0464-6199-e8be-a48e177d71be
hdiskASMc018 67fafb19-0464-6199-e8be-a48e177d71be
hdisk16      7e08f927-7507-a381-c42b-3dd1df5a5a78
hdiskASMd010 7e08f927-7507-a381-c42b-3dd1df5a5a78
hdisk22      9315a8b3-6b0e-1a8b-642c-83187dee282c
hdiskASMc016 9315a8b3-6b0e-1a8b-642c-83187dee282c
hdisk10      9812adda-fbe9-e424-8299-b8274d6da44d
hdiskASMd004 9812adda-fbe9-e424-8299-b8274d6da44d
hdisk19      a3295eaa-a8a1-b341-6b02-4a9015a99f23
hdiskASMc013 a3295eaa-a8a1-b341-6b02-4a9015a99f23
hdisk11      c3a8eb24-b547-edad-5179-2a31985ca5d2
hdiskASMd005 c3a8eb24-b547-edad-5179-2a31985ca5d2
hdisk21      e750c8b3-bb48-9d75-5fa7-9f4d736126b3
hdiskASMc015 e750c8b3-bb48-9d75-5fa7-9f4d736126b3
hdisk17      ee197d2e-a3eb-58f0-18c8-617293053ae8
hdiskASMc011 ee197d2e-a3eb-58f0-18c8-617293053ae8
hdisk8       efed151e-73a6-77dd-c685-dc4bfb6d42bd
hdiskASMd002 efed151e-73a6-77dd-c685-dc4bfb6d42bd

Based on the above pairing, the disks on rac223 are renamed to match the rac222 names.

root@rac223 # rendev -l hdisk14 -n hdiskASMd008
hdiskASMd008
root@rac223 # rendev -l hdisk7 -n hdiskASMd001
hdiskASMd001
root@rac223 # rendev -l hdisk12 -n hdiskASMd006
hdiskASMd006
root@rac223 # rendev -l hdisk23 -n hdiskASMc017
hdiskASMc017
root@rac223 # rendev -l hdisk18 -n hdiskASMc012
hdiskASMc012
root@rac223 # rendev -l hdisk20 -n hdiskASMc014
hdiskASMc014
root@rac223 # rendev -l hdisk13 -n hdiskASMd007
hdiskASMd007
root@rac223 # rendev -l hdisk9 -n hdiskASMd003
hdiskASMd003
root@rac223 # rendev -l hdisk15 -n hdiskASMd009
hdiskASMd009
root@rac223 # rendev -l hdisk24 -n hdiskASMc018
hdiskASMc018
root@rac223 # rendev -l hdisk16 -n hdiskASMd010
hdiskASMd010
root@rac223 # rendev -l hdisk22 -n hdiskASMc016
hdiskASMc016
root@rac223 # rendev -l hdisk10 -n hdiskASMd004
hdiskASMd004
root@rac223 # rendev -l hdisk19 -n hdiskASMc013
hdiskASMc013
root@rac223 # rendev -l hdisk11 -n hdiskASMd005
hdiskASMd005
root@rac223 # rendev -l hdisk21 -n hdiskASMc015
hdiskASMc015
root@rac223 # rendev -l hdisk17 -n hdiskASMc011
hdiskASMc011
root@rac223 # rendev -l hdisk8 -n hdiskASMd002
hdiskASMd002
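The manual pairing can also be scripted: join the two name/UUID files on the UUID and print the matching rendev command for each disk. The following sketch uses two-disk sample files in place of the full lists; the file names match those used above, but the contents here are illustrative only:

```shell
# Sample name/UUID files, as produced by the lspv -u pipelines above
# (only two disks shown for illustration).
cat > /tmp/rac223_name_uuid <<'EOF'
hdisk7 10a04b5f-befd-05b1-b142-88054ee86886
hdisk8 efed151e-73a6-77dd-c685-dc4bfb6d42bd
EOF
cat > /tmp/rac222_name_uuid <<'EOF'
hdiskASMd001 10a04b5f-befd-05b1-b142-88054ee86886
hdiskASMd002 efed151e-73a6-77dd-c685-dc4bfb6d42bd
EOF
# First pass stores the rac222 name keyed by UUID; the second pass looks up
# each rac223 disk's UUID and prints the rendev command to run on rac223.
awk 'NR==FNR {new[$2]=$1; next}
     $2 in new {printf("rendev -l %s -n %s\n", $1, new[$2])}' \
    /tmp/rac222_name_uuid /tmp/rac223_name_uuid
```

The printed commands can be reviewed and then executed on the new node, which avoids transcription errors when the disk list is long.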

Now we set the disk permissions, and any disk device attributes that may be required, for example the reserve_policy. Then the ASM devices can be locked.

Set the ownership:

root@rac223 # chown oracle:dba /dev/rhdiskASM*

Set any required disk attributes on all ASM disks, for example the reserve_policy:

chdev -l hdiskASMd001 -a reserve_policy=no_reserve
hdiskASMd001 changed
(repeat for the other ASM disks)

Lock each of the ASM disk devices, for example:

lkdev -l hdiskASMd001 -a
hdiskASMd001 locked
(repeat for the other ASM disks)

Check lspv:

root@rac223 # lspv|awk '{printf("%-12s %-16s %-6s %s\n",$1,$2,$3,$4)}'
hdisk0       00c1894c05e93592 rootvg active
hdisk1       00c1894c4a3bdd1d oravg  active
hdisk2       00c1894c4a3bf1dc oravg  active
hdisk3       00c1894ca63631c2 None
hdisk4       00c1892cab6286eb None
hdisk5       00c1892cab72bef3 None
hdisk6       00c1892cac105522 None
hdiskASMd001 none             None   locked
hdiskASMd002 none             None   locked
hdiskASMd003 none             None   locked
hdiskASMd004 none             None   locked
hdiskASMd005 none             None   locked
hdiskASMd006 none             None   locked
hdiskASMd007 none             None   locked
hdiskASMd008 none             None   locked
hdiskASMd009 none             None   locked
hdiskASMd010 none             None   locked
hdiskASMc011 none             None   locked
hdiskASMc012 none             None   locked
hdiskASMc013 none             None   locked
hdiskASMc014 none             None   locked
hdiskASMc015 none             None   locked
hdiskASMc016 none             None   locked
hdiskASMc017 none             None   locked
hdiskASMc018 none             None   locked

Now the Oracle RAC software can be installed.


Principal Author :

Dennis Massanari, IBM – onsite engineer at Oracle.

Reviewers :  Sundar Matpadi and Nitin Vengurlekar, Oracle Corporation
