This is a quick reference for 12c Grid Infrastructure (GI).
DETAILS
"crsctl stat res -t" output example
This example is from a 12c GI Flex Cluster with Flex ASM: four nodes (racnode1f-racnode4f) are Hub nodes and four nodes (racnode6f-racnode9f) are Leaf nodes.
$ <GI_HOME>/bin/crsctl stat res -t
--------------------------------------------------------------------------------
Name      Target  State   Server     State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr   ====>> new in 12c, Flex ASM listener resource
          ONLINE  ONLINE  racnode1f  STABLE
          ONLINE  ONLINE  racnode2f  STABLE
          ONLINE  ONLINE  racnode3f  STABLE
ora.DGGHS.dg
          ONLINE  ONLINE  racnode1f  STABLE
          ONLINE  ONLINE  racnode2f  STABLE
          ONLINE  ONLINE  racnode3f  STABLE
ora.DGHS.GHCHKPT.advm
          ONLINE  ONLINE  racnode1f  Volume device /dev/asm/ghchkpt-425 is online,STABLE
          ONLINE  ONLINE  racnode2f  Volume device /dev/asm/ghchkpt-425 is online,STABLE
          ONLINE  ONLINE  racnode3f  Volume device /dev/asm/ghchkpt-425 is online,STABLE
          ONLINE  ONLINE  racnode4f  Volume device /dev/asm/ghchkpt-425 is online,STABLE
ora.DGHS.GHVOL253214.advm
          ONLINE  ONLINE  racnode1f  Volume device /dev/asm/ghvol253214-425 is online,STABLE
          ONLINE  ONLINE  racnode2f  Volume device /dev/asm/ghvol253214-425 is online,STABLE
          ONLINE  ONLINE  racnode3f  Volume device /dev/asm/ghvol253214-425 is online,STABLE
          ONLINE  ONLINE  racnode4f  Volume device /dev/asm/ghvol253214-425 is online,STABLE
ora.DGHS.GHVOL324964.advm
          ONLINE  ONLINE  racnode1f  Volume device /dev/asm/ghvol324964-425 is online,STABLE
          ONLINE  ONLINE  racnode2f  Volume device /dev/asm/ghvol324964-425 is online,STABLE
          ONLINE  ONLINE  racnode3f  Volume device /dev/asm/ghvol324964-425 is online,STABLE
          ONLINE  ONLINE  racnode4f  Volume device /dev/asm/ghvol324964-425 is online,STABLE
ora.DGHS.dg
          ONLINE  ONLINE  racnode1f  STABLE
          ONLINE  ONLINE  racnode2f  STABLE
          ONLINE  ONLINE  racnode3f  STABLE
ora.DGSYS.dg
          ONLINE  ONLINE  racnode1f  STABLE
          ONLINE  ONLINE  racnode2f  STABLE
          ONLINE  ONLINE  racnode3f  STABLE
ora.LISTENER.lsnr   ====>> same as in pre-12c, on Hub node only
          ONLINE  ONLINE  racnode1f  STABLE
          ONLINE  ONLINE  racnode2f  STABLE
          ONLINE  ONLINE  racnode3f  STABLE
          ONLINE  ONLINE  racnode4f  STABLE
ora.LISTENER_LEAF.lsnr   ====>> new in 12c, Leaf node listener resource, on Leaf nodes only
          ONLINE  ONLINE  racnode6f  STABLE
          ONLINE  ONLINE  racnode7f  STABLE
          ONLINE  ONLINE  racnode8f  STABLE
          ONLINE  ONLINE  racnode9f  STABLE
ora.dghs.ghchkpt.acfs
          ONLINE  ONLINE  racnode1f  mounted on /ghs/GHcheckpoints,STABLE
          ONLINE  ONLINE  racnode2f  mounted on /ghs/GHcheckpoints,STABLE
          ONLINE  ONLINE  racnode3f  mounted on /ghs/GHcheckpoints,STABLE
          ONLINE  ONLINE  racnode4f  mounted on /ghs/GHcheckpoints,STABLE
ora.dghs.ghvol253214.acfs
          ONLINE  ONLINE  racnode1f  mounted on /ghs/images/ic101dbd,STABLE
          ONLINE  ONLINE  racnode2f  mounted on /ghs/images/ic101dbd,STABLE
          ONLINE  ONLINE  racnode3f  mounted on /ghs/images/ic101dbd,STABLE
          ONLINE  ONLINE  racnode4f  mounted on /ghs/images/ic101dbd,STABLE
ora.dghs.ghvol324964.acfs
          ONLINE  ONLINE  racnode1f  mounted on /ghs/images/ic101db,STABLE
          ONLINE  ONLINE  racnode2f  mounted on /ghs/images/ic101db,STABLE
          ONLINE  ONLINE  racnode3f  mounted on /ghs/images/ic101db,STABLE
          ONLINE  ONLINE  racnode4f  mounted on /ghs/images/ic101db,STABLE
ora.helper   ====>> new in 12c, on both Hub and Leaf node
          ONLINE  ONLINE  racnode1f  STABLE
          ONLINE  ONLINE  racnode2f  STABLE
          ONLINE  ONLINE  racnode3f  STABLE
          ONLINE  ONLINE  racnode4f  STABLE
          ONLINE  ONLINE  racnode6f  STABLE
          ONLINE  ONLINE  racnode7f  STABLE
          ONLINE  ONLINE  racnode8f  STABLE
          ONLINE  ONLINE  racnode9f  STABLE
ora.net1.network
          ONLINE  ONLINE  racnode1f  STABLE
          ONLINE  ONLINE  racnode2f  STABLE
          ONLINE  ONLINE  racnode3f  STABLE
          ONLINE  ONLINE  racnode4f  STABLE
ora.ons
          ONLINE  ONLINE  racnode1f  STABLE
          ONLINE  ONLINE  racnode2f  STABLE
          ONLINE  ONLINE  racnode3f  STABLE
          ONLINE  ONLINE  racnode4f  STABLE
ora.proxy_advm   ====>> new in 12c, ADVM Proxy Instance resource
          ONLINE  ONLINE  racnode1f  STABLE
          ONLINE  ONLINE  racnode2f  STABLE
          ONLINE  ONLINE  racnode3f  STABLE
          ONLINE  ONLINE  racnode4f  STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1   ONLINE  ONLINE  racnode1f  STABLE
ora.LISTENER_SCAN2.lsnr
      1   ONLINE  ONLINE  racnode2f  STABLE
ora.LISTENER_SCAN3.lsnr
      1   ONLINE  ONLINE  racnode3f  STABLE
ora.MGMTLSNR   ====>> new in 12c, Management Database listener resource
      1   ONLINE  ONLINE  racnode2f  169.254.206.41 10.3.0.232,STABLE
ora.asm
      1   ONLINE  ONLINE  racnode1f  STABLE
      2   ONLINE  ONLINE  racnode2f  STABLE
      3   ONLINE  ONLINE  racnode3f  STABLE
ora.c101cdb1.db
      1   ONLINE  ONLINE  racnode1f  Open,STABLE
      2   ONLINE  ONLINE  racnode2f  Open,STABLE
ora.c101cdb1.sc101pdb1.svc
      1   ONLINE  ONLINE  racnode2f  STABLE
ora.c101cdb1.sc101pdb2.svc
      1   ONLINE  ONLINE  racnode2f  STABLE
ora.cvu
      1   ONLINE  ONLINE  racnode2f  STABLE
ora.ghs   ====>> new in 12c, Grid Home Server resource
      1   ONLINE  ONLINE  racnode2f  STABLE
ora.ghsvip.havip   ====>> new in 12c, Grid Home Server HA VIP resource
      1   ONLINE  ONLINE  racnode2f  STABLE
ora.gns
      1   ONLINE  ONLINE  racnode2f  STABLE
ora.gns.vip
      1   ONLINE  ONLINE  racnode2f  STABLE
ora.racnode1f.vip
      1   ONLINE  ONLINE  racnode1f  STABLE
ora.racnode2f.vip
      1   ONLINE  ONLINE  racnode2f  STABLE
ora.racnode3f.vip
      1   ONLINE  ONLINE  racnode3f  STABLE
ora.racnode4f.vip
      1   ONLINE  ONLINE  racnode4f  STABLE
ora.mgmtdb   ====>> new in 12c, Management Database resource
      1   ONLINE  ONLINE  racnode2f  Open,STABLE
ora.oc4j
      1   ONLINE  ONLINE  racnode2f  STABLE
ora.scan1.vip
      1   ONLINE  ONLINE  racnode1f  STABLE
ora.scan2.vip
      1   ONLINE  ONLINE  racnode2f  STABLE
ora.scan3.vip
      1   ONLINE  ONLINE  racnode3f  STABLE
--------------------------------------------------------------------------------
Note: ora.gsd is removed in 12c
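In a Flex Cluster it is often useful to confirm which role each node plays before interpreting the resource placement above. A minimal sketch, assuming the 12c "crsctl get node role" syntax (verify the exact options on your release):

$ <GI_HOME>/bin/crsctl get node role config -all
$ <GI_HOME>/bin/crsctl get node role status -all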
“oifcfg getif” output example
$ oifcfg getif
eth0  10.182.208.0  global  public
eth1  192.168.1.0   global  cluster_interconnect,asm   ====>> asm network is new in 12c
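As a hedged follow-on example (interface name and subnet taken from the getif output above), the asm type can be added to an existing private interface with setif, whose syntax appears in the oifcfg reference later in this note:

$ oifcfg setif -global eth1/192.168.1.0:cluster_interconnect,asm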
Example of GI processes
$ ps -ef| grep ocw
root   2110     1 0 Dec12 ?     00:05:12 /ocw/c101/bin/ohasd.bin reboot
grid   2250     1 0 Dec12 ?     00:01:45 /ocw/c101/bin/oraagent.bin
grid   2263     1 0 Dec12 ?     00:02:47 /ocw/c101/bin/evmd.bin
grid   2265     1 0 Dec12 ?     00:00:01 /ocw/c101/bin/mdnsd.bin
grid   2280     1 0 Dec12 ?     00:00:10 /ocw/c101/bin/gpnpd.bin
grid   2294  2263 0 Dec12 ?     00:00:00 /ocw/c101/bin/evmlogger.bin -o /ocw/c101/log/[HOSTNAME]/evmd/evmlogger.info -l /ocw/c101/log/[HOSTNAME]/evmd/evmlogger.log
grid   2299     1 1 Dec12 ?     00:13:05 /ocw/c101/bin/gipcd.bin
root   2311     1 0 Dec12 ?     00:01:14 /ocw/c101/bin/orarootagent.bin
root   2470     1 0 Dec12 ?     00:00:17 /ocw/c101/bin/cssdmonitor
root   2482     1 0 Dec12 ?     00:00:18 /ocw/c101/bin/cssdagent
grid   2493     1 0 Dec12 ?     00:08:13 /ocw/c101/bin/ocssd.bin
root   2558     1 0 Dec12 ?     00:02:50 /ocw/c101/bin/octssd.bin reboot
root   2661     1 0 Dec12 ?     00:08:38 /ocw/c101/bin/osysmond.bin
root   2668     1 0 Dec12 ?     00:04:08 /ocw/c101/bin/crsd.bin reboot
grid   2761     1 0 Dec12 ?     00:04:08 /ocw/c101/bin/oraagent.bin
root   2765     1 0 Dec12 ?     00:08:04 /ocw/c101/bin/orarootagent.bin
grid   2866     1 0 Dec12 ?     00:00:08 /ocw/c101/bin/scriptagent.bin
grid   2882     1 0 Dec12 ?     00:00:01 /ocw/c101/bin/tnslsnr MGMTLSNR -no_crs_notify -inherit
root   2891     1 0 Dec12 ?     00:00:30 /ocw/c101/bin/gnsd.bin -trace-level 1 -ip-address 10.1.0.13 -startup-endpoint ipc://GNS_lf6233f_3922_cd98265012b2258f
grid   2920     1 0 Dec12 ?     00:01:43 /ocw/c101/jdk/bin/java -server -Xcheck:jni -Xms128M -Xmx384M -Djava.awt.headless=true -Ddisable.checkForUpdate=true -Dstdstream.filesize=100 -Dstdstream.filenumber=10 -DTRACING.ENABLED=false -Doracle.wlm.dbwlmlogger.logging.level=INFO -Dport.rmi=23792 -jar /ocw/c101/oc4j/j2ee/home/oc4j.jar -config /ocw/c101/oc4j/j2ee/home/OC4J_DBWLM_config/server.xml -out /ocw/c101/oc4j/j2ee/home/log/oc4j.out -err /ocw/c101/oc4j/j2ee/home/log/oc4j.err
grid   5408     1 0 Dec12 ?     00:00:01 /ocw/c101/bin/tnslsnr ASMNET1LSNR_ASM -no_crs_notify -inherit
root   6618     1 0 Dec12 ?     00:03:18 /ocw/c101/bin/ologgerd -M -d /ocw/c101/crf/db/lf6232f
grid  11940     1 0 09:34 ?     00:00:44 /ocw/c101/jdk/bin/java -server -Xcheck:jni -Xms128M -Xmx384M -Djava.awt.headless=true -Ddisable.checkForUpdate=true -Dstdstream.filesize=100 -Dstdstream.filenumber=10 -DTRACING.ENABLED=false -Dport.rmi=23795 -jar /ocw/c101/oc4j/j2ee/home/oc4j.jar -config /ocw/c101/oc4j/j2ee/home/OC4J_GH_config/server.xml -out /ocw/c101/oc4j/j2ee/home/log/gh_oc4j.out -err /ocw/c101/oc4j/j2ee/home/log/gh_oc4j.err
oracle 15670    1 0 10:13 ?     00:00:05 /ocw/c101/bin/oraagent.bin
root   18956 8077 0 11:00 pts/0 00:00:00 grep ocw
grid   20028    1 0 Dec12 ?     00:00:00 /ocw/c101/bin/tnslsnr LISTENER_SCAN1 -no_crs_notify -inherit
grid   27881    1 0 Dec12 ?     00:00:00 /ocw/c101/bin/tnslsnr LISTENER -no_crs_notify -inherit
grid   29547    1 0 Dec12 ?     00:00:00 /ocw/c101/opmn/bin/ons -d
grid   29548 29547 0 Dec12 ?    00:00:06 /ocw/c101/opmn/bin/ons -d
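To relate these daemons to the lower-stack resources managed by OHASD, the -init flag of crsctl stat res can be used; a minimal sketch:

$ <GI_HOME>/bin/crsctl stat res -t -init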
New instances in the 12c GI home
grid  2589  1 0 Dec12 ? 00:00:07 asm_pmon_+ASM1     ====>> ASM instance, this isn't new in 12c
grid  2981  1 0 Dec12 ? 00:00:06 mdb_pmon_-MGMTDB   ====>> Management Database, this is new in 12c
grid  7870  1 0 Dec12 ? 00:00:04 apx_pmon_+APX1     ====>> ADVM Proxy instance, this is new in 12c
srvctl command reference
Usage: srvctl [-V] Usage: srvctl add database -db <db_unique_name> -oraclehome <oracle_home> [-dbtype {RACONENODE | RAC | SINGLE} [-server <server_list>] [-instance <inst_name>] [-timeout <timeout >]] [-domain <domain_name>] [-spfile <spfile>] [-pwfile <password_file_path>] [-role {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-startoption <start_opti ons>] [-stopoption <stop_options>] [-startconcurrency <start_concurrency>] [-stopconcurrency <stop_concurrency>] [-dbname <db_name>] [-policy {AUTOMATIC | MANUAL | NORESTART}] [ -serverpool “<serverpool_list>” [-pqpool <pq_server_pools>]] [-node <node_name>] [-diskgroup “<diskgroup_list>”] [-acfspath “<acfs_path_list>”] [-eval] [-verbose] Usage: srvctl config database [-db <db_unique_name> [-all] ] [-verbose] Usage: srvctl start database -db <db_unique_name> [-startoption <start_options>] [-startconcurrency <start_concurrency>] [-node <node>] [-eval] [-verbose] Usage: srvctl stop database -db <db_unique_name> [-stopoption <stop_options>] [-stopconcurrency <stop_concurrency>] [-force] [-eval] [-verbose] Usage: srvctl status database -db <db_unique_name> [-force] [-verbose] Usage: srvctl enable database -db <db_unique_name> [-node <node_name>] Usage: srvctl disable database -db <db_unique_name> [-node <node_name>] Usage: srvctl modify database -db <db_unique_name> [-dbname <db_name>] [-instance <instance_name>] [-oraclehome <oracle_home>] [-user <oracle_user>] [-server <server_list>] [-ti meout <timeout>] [-domain <domain>] [-spfile <spfile>] [-pwfile <password_file_path>] [-role {PRIMARY | PHYSICAL_STANDBY | LOGICAL_STANDBY | SNAPSHOT_STANDBY}] [-startoption <st art_options>] [-stopoption <stop_options>] [-startconcurrency <start_concurrency>] [-stopconcurrency <stop_concurrency>][-policy {AUTOMATIC | MANUAL | NORESTART}] [-serverpool ” <serverpool_list>” [-node <node_name>]] [-pqpool <pq_server_pools>] [-diskgroup “<diskgroup_list>”|-nodiskgroup] [-acfspath “<acfs_path_list>”] [-force] [-eval] [-verbose] Usage: srvctl remove database -db <db_unique_name> [-force] [-noprompt] [-verbose] Usage: srvctl getenv database -db <db_unique_name> [-envs “<name>[,…]”] Usage: srvctl setenv database -db <db_unique_name> {-envs “<name>=<val>[,…]” | -env “<name>=<val>”} Usage: srvctl unsetenv database -db <db_unique_name> -envs “<name>[,…]” Usage: srvctl predict database -db <database_name> [-verbose] Usage: srvctl convert database -db <db_unique_name> -dbtype RAC [-node <node>] Usage: srvctl convert database -db <db_unique_name> -dbtype RACONENODE [-instance <inst_name>] [-timeout <timeout>] Usage: srvctl relocate database -db <db_unique_name> {[-node <target>] [-timeout <timeout>] | -abort [-revert]} [-verbose] Usage: srvctl upgrade database -db <db_unique_name> -oraclehome <oracle_home> Usage: srvctl downgrade database -db <db_unique_name> -oraclehome <oracle_home> -targetversion <to_version> Usage: srvctl add instance -db <db_unique_name> -instance <inst_name> -node <node_name> [-force] Usage: srvctl start instance -db <db_unique_name> {-node <node_name> [-instance <inst_name>] | -instance <inst_name_list>} [-startoption <start_options>] Usage: srvctl stop instance -db <db_unique_name> {-node <node_name> | -instance <inst_name_list>} [-stopoption <stop_options>] [-force] Usage: srvctl status instance -db <db_unique_name> {-node <node_name> | -instance <inst_name_list>} [-force] [-verbose] Usage: srvctl enable instance -db <db_unique_name> -instance “<inst_name_list>” Usage: srvctl disable instance -db <db_unique_name> -instance 
“<inst_name_list>” Usage: srvctl modify instance -db <db_unique_name> -instance <inst_name> -node <node_name> Usage: srvctl remove instance -db <db_unique_name> -instance <inst_name> [-force] [-noprompt] Usage: srvctl add service -db <db_unique_name> -service <service_name> {-preferred “<preferred_list>” [-available “<available_list>”] [-tafpolicy {BASIC | NONE | PRECONNECT}] | -serverpool <pool_name> [-cardinality {UNIFORM | SINGLETON}] } [-netnum <net_num>] [-role “[PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]”] [-policy {AUTOMATI C | MANUAL}] [-notification {TRUE|FALSE}] [-dtp {TRUE|FALSE}] [-clbgoal {SHORT|LONG}] [-rlbgoal {NONE|SERVICE_TIME|THROUGHPUT}] [-failovertype {NONE|SESSION|SELECT|TRANSACTION}] [-failovermethod {NONE|BASIC}] [-failoverretry <failover_retries>] [-failoverdelay <failover_delay>] [-edition <edition>] [-pdb <pluggable_database>] [-global {TRUE|FALSE}] [-m axlag <max_lag_time>] [-sql_translation_profile <sql_translation_profile>] [-commit_outcome {TRUE|FALSE}] [-retention <retention>] [-replay_init_time <replay_initiation_time>] [ -session_state {STATIC|DYNAMIC}] [-pqservice <pq_service>] [-pqpool <pq_pool_list>] [-force] [-eval] [-verbose] Usage: srvctl add service -db <db_unique_name> -service <service_name> -newinst {-preferred “<new_pref_inst>” | -available “<new_avail_inst>”} [-force] [-verbose] Usage: srvctl config service -db <db_unique_name> [-service <service_name>] [-verbose] Usage: srvctl enable service -db <db_unique_name> -service “<service_name_list>” [-instance <inst_name> | -node <node_name>] [-global_override] Usage: srvctl disable service -db <db_unique_name> -service “<service_name_list>” [-instance <inst_name> | -node <node_name>] [-global_override] Usage: srvctl status service -db <db_unique_name> [-service “<service_name_list>”] [-force] [-verbose] Usage: srvctl predict service -db <database_name> -service <service_name> [-verbose] Usage: srvctl modify service -db <db_unique_name> -service <service_name> -oldinst <old_inst_name> -newinst <new_inst_name> [-force] Usage: srvctl modify service -db <db_unique_name> -service <service_name> -available <avail_inst_name> -toprefer [-force] Usage: srvctl modify service -db <db_unique_name> -service <service_name> -modifyconfig -preferred “<preferred_list>” [-available “<available_list>”] [-force] Usage: srvctl modify service -db <db_unique_name> -service <service_name> [-serverpool <pool_name>] [-pqservice <pqsvc_name>] [-pqpool <pq_pool_list>] [-cardinality {UNIFORM | S INGLETON}] [-tafpolicy {BASIC|NONE}] [-role [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-policy {AUTOMATIC | MANUAL}][-notification {TRUE|FALSE}] [-dtp { TRUE|FALSE}] [-clbgoal {SHORT|LONG}] [-rlbgoal {NONE|SERVICE_TIME|THROUGHPUT}] [-failovertype {NONE|SESSION|SELECT|TRANSACTION}] [-failovermethod {NONE|BASIC}] [-failoverretry < integer>] [-failoverdelay <integer>] [-edition <edition>] [-pdb <pluggable_database>] [-sql_translation_profile <sql_translation_profile>] [-commit_outcome {TRUE|FALSE}] [-reten tion <retention>] [-replay_init_time <replay_initiation_time>] [-session_state {STATIC|DYNAMIC}] [-global_override] [-eval] [-verbose] [-force] Usage: srvctl relocate service -db <db_unique_name> -service <service_name> {-oldinst <old_inst_name> -newinst <new_inst_name> | -currentnode <current_node> -targetnode <target_ node>} [-pq] [-force] [-eval] [-verbose] Usage: srvctl remove service -db <db_unique_name> -service <service_name> [-instance <inst_name>] [-global_override] [-force] Usage: 
srvctl start service -db <db_unique_name> [-service “<service_name_list>”] [-node <node_name> | -instance <inst_name>] [-pq] [-global_override] [-startoption <start_opti ons>] [-eval] [-verbose] Usage: srvctl stop service -db <db_unique_name> [-service “<service_name_list>”] [-node <node_name> | -instance <inst_name>] [-pq] [-global_override] [-force] [-eval] [-verbose ] Usage: srvctl add nodeapps { { -node <node_name> -address {<vip_name>|<ip>}/<netmask>[/if1[|if2…]] [-skip]} | { -subnet <subnet>/<netmask>[/if1[|if2…]] } } [-emport <em_port >] [-onslocalport <ons_local_port>] [-onsremoteport <ons_remote_port>] [-remoteservers <host>[:<port>][,<host>[:<port>]…]] [-verbose] Usage: srvctl config nodeapps [-viponly] [-onsonly] Usage: srvctl modify nodeapps {[-node <node_name> -address {<vip_name>|<ip>}/<netmask>[/if1[|if2…]] [-skip]] | [-subnet <subnet>/<netmask>[/if1[|if2|…]]]} [-nettype {STATIC| DHCP|AUTOCONFIG|MIXED}] [-emport <em_port>] [ -onslocalport <ons_local_port> ] [-onsremoteport <ons_remote_port> ] [-remoteservers <host>[:<port>][,<host>[:<port>]…]] [-verbos e] Usage: srvctl start nodeapps [-node <node_name>] [-adminhelper] [-verbose] Usage: srvctl stop nodeapps [-node <node_name>] [-adminhelper | -relocate] [-force] [-verbose] Usage: srvctl status nodeapps [-node <node_name>] Usage: srvctl enable nodeapps [-adminhelper] [-verbose] Usage: srvctl disable nodeapps [-adminhelper] [-verbose] Usage: srvctl remove nodeapps [-force] [-noprompt] [-verbose] Usage: srvctl getenv nodeapps [-viponly] [-onsonly] [-envs “<name>[,…]”] Usage: srvctl setenv nodeapps {-envs “<name>=<val>[,…]” | -env “<name>=<val>”} [-viponly | -onsonly] [-verbose] Usage: srvctl unsetenv nodeapps -envs “<name>[,…]” [-viponly | -onsonly] [-verbose] Usage: srvctl add vip -node <node_name> -netnum <network_number> -address {<name>|<ip>}/<netmask>[/if1[|if2…]] [-skip] [-verbose] Usage: srvctl config vip {-node <node_name> | -vip <vip_name>} Usage: srvctl disable vip -vip <vip_name> [-verbose] Usage: srvctl enable vip -vip <vip_name> [-verbose] Usage: srvctl remove vip -vip <vip_name_list> [-force] [-noprompt] [-verbose] Usage: srvctl getenv vip -vip <vip_name> [-envs “<name>[,…]”] Usage: srvctl start vip {-node <node_name> | -vip <vip_name>} [-verbose] Usage: srvctl stop vip {-node <node_name> | -vip <vip_name>} [-force] [-relocate] [-verbose] Usage: srvctl relocate vip -vip <vip_name> [-node <node_name>] [-force] [-verbose] Usage: srvctl status vip {-node <node_name> | -vip <vip_name>} [-verbose] Usage: srvctl setenv vip -vip <vip_name> {-envs “<name>=<val>[,…]” | -env “<name>=<val>”} [-verbose] Usage: srvctl unsetenv vip -vip <vip_name> -envs “<name>[,…]” [-verbose] Usage: srvctl predict vip -vip <vip_name> [-verbose] Usage: srvctl add network [-netnum <net_num>] -subnet <subnet>/<netmask>[/if1[|if2…]] [-nettype {STATIC|DHCP|AUTOCONFIG|MIXED}] [-leaf] [-verbose] Usage: srvctl config network [-netnum <network_number>] Usage: srvctl modify network [-netnum <network_number>] [-subnet <subnet>/<netmask>[/if1[|if2…]]] [-nettype {STATIC|DHCP|AUTOCONFIG|MIXED}] [-iptype {IPV4|IPV6|BOTH}] [-verbos e] Usage: srvctl remove network {-netnum <network_number> | -all} [-force] [-verbose] Usage: srvctl predict network [-netnum <network_number>] [-verbose] Usage: srvctl add asm [-listener <lsnr_name>] [-pwfile <password_file_path>] [-flex [-count {<number_of_instances>|ALL}]|-proxy] Usage: srvctl start asm [-proxy] [-node <node_name>] [-startoption <start_options>] Usage: srvctl stop asm [-proxy] [-node <node_name>] [-stopoption 
<stop_options>] [-force] Usage: srvctl config asm [-proxy] [-detail] Usage: srvctl status asm [-proxy] [-node <node_name>] [-detail] [-verbose] Usage: srvctl enable asm [-proxy] [-node <node_name>] Usage: srvctl disable asm [-proxy] [-node <node_name>] Usage: srvctl modify asm [-listener <lsnr_name>] [-pwfile <password_file_path>] [-count {<number_of_instances>|ALL}] [-force] Usage: srvctl remove asm [-proxy] [-force] Usage: srvctl relocate asm -currentnode <current_node> [-targetnode <target_node>] [-force] Usage: srvctl getenv asm [-envs “<name>[,…]”] Usage: srvctl setenv asm {-envs “<name>=<val>[,…]” | -env “<name>=<value>”} Usage: srvctl unsetenv asm -envs “<name>[,…]” Usage: srvctl predict asm [-node <node_name>] [-verbose] Usage: srvctl add ioserver [-count <number_of_ioserver_instances>] Usage: srvctl start ioserver [-node <node_name>] Usage: srvctl stop ioserver [-node <node_name>] [-force] Usage: srvctl relocate ioserver -currentnode <current_node> [-targetnode <target_node>] [-force] Usage: srvctl config ioserver [-detail] Usage: srvctl status ioserver [-node <node_name>] [-detail] Usage: srvctl enable ioserver [-node <node_name>] Usage: srvctl disable ioserver [-node <node_name>] Usage: srvctl modify ioserver [-count <number_of_ioserver_instances>] [-force] Usage: srvctl remove ioserver [-force] Usage: srvctl getenv ioserver [-envs “<name>[,…]”] Usage: srvctl setenv ioserver {-envs “<name>=<val>[,…]” | -env “<name>=<value>”} Usage: srvctl unsetenv ioserver -envs “<name>[,…]” Usage: srvctl start diskgroup -diskgroup <dg_name> [-node “<node_list>”] Usage: srvctl stop diskgroup -diskgroup <dg_name> [-node “<node_list>”] [-force] Usage: srvctl status diskgroup -diskgroup <dg_name> [-node “<node_list>”] [-detail] [-verbose] Usage: srvctl enable diskgroup -diskgroup <dg_name> [-node “<node_list>”] Usage: srvctl disable diskgroup -diskgroup <dg_name> [-node “<node_list>”] Usage: srvctl remove diskgroup -diskgroup <dg_name> [-force] Usage: srvctl predict diskgroup -diskgroup <diskgroup_name> [-verbose] Usage: srvctl add listener [-listener <lsnr_name>] {[-netnum <net_num>] [-oraclehome <path>] [-user <oracle_user>] | -asmlistener [-subnet <subnet>] | -leaflistener [-subnet <su bnet>]} [-skip] [-endpoints “[TCP:]<port>[, …][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]”] Usage: srvctl config listener [-listener <lsnr_name> | -asmlistener | -leaflistener] [-all] Usage: srvctl start listener [-listener <lsnr_name>] [-node <node_name>] Usage: srvctl stop listener [-listener <lsnr_name>] [-node <node_name>] [-force] Usage: srvctl status listener [-listener <lsnr_name>] [-node <node_name>] [-verbose] Usage: srvctl enable listener [-listener <lsnr_name>] [-node <node_name>] Usage: srvctl disable listener [-listener <lsnr_name>] [-node <node_name>] Usage: srvctl modify listener [-listener <lsnr_name>] [-oraclehome <path>] [-endpoints “[TCP:]<port>[, …][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]”] [-user <oracle_user>] [-netnum <net_num>] Usage: srvctl remove listener [-listener <lsnr_name> | -all] [-force] Usage: srvctl getenv listener [-listener <lsnr_name>] [-envs “<name>[,…]”] Usage: srvctl setenv listener [-listener <lsnr_name>] {-envs “<name>=<val>[,…]” | -env “<name>=<value>”} Usage: srvctl unsetenv listener [-listener <lsnr_name>] -envs “<name>[,…]” Usage: srvctl predict listener -listener <listener_name> [-verbose] Usage: srvctl add scan -scanname <scan_name> [-netnum <network_number>] Usage: srvctl config scan [[-netnum <network_number>] [-scannumber 
<ordinal_number>] | -all] Usage: srvctl start scan [-netnum <network_number>] [-scannumber <ordinal_number>] [-node <node_name>] Usage: srvctl stop scan [-netnum <network_number>] [-scannumber <ordinal_number>] [-force] Usage: srvctl relocate scan -scannumber <ordinal_number> [-netnum <network_number>] [-node <node_name>] Usage: srvctl status scan [[-netnum <network_number>] [-scannumber <ordinal_number>] | -all] [-verbose] Usage: srvctl enable scan [-netnum <network_number>] [-scannumber <ordinal_number>] Usage: srvctl disable scan [-netnum <network_number>] [-scannumber <ordinal_number>] Usage: srvctl modify scan -scanname <scan_name> [-netnum <network_number>] Usage: srvctl remove scan [-netnum <network_number>] [-force] [-noprompt] Usage: srvctl predict scan -scannumber <scan_ordinal_number> [-netnum <network_number>] [-verbose] Usage: srvctl add scan_listener [-netnum <network_number>] [-listener <lsnr_name_prefix>] [-skip] [-endpoints [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<p ort>]] [-invitednodes <node_list>] [-invitedsubnets <subnet_list>] Usage: srvctl config scan_listener [[-netnum <network_number>] [-scannumber <ordinal_number>] | -all] Usage: srvctl start scan_listener [-netnum <network_number>] [-scannumber <ordinal_number>] [-node <node_name>] Usage: srvctl stop scan_listener [-netnum <network_number>] [-scannumber <ordinal_number>] [-force] Usage: srvctl relocate scan_listener -scannumber <ordinal_number> [-netnum <network_number>] [-node <node_name>] Usage: srvctl status scan_listener [[-netnum <network_number>] [-scannumber <ordinal_number>] | -all] [-verbose] Usage: srvctl enable scan_listener [-netnum <network_number>] [-scannumber <ordinal_number>] Usage: srvctl disable scan_listener [-netnum <network_number>] [-scannumber <ordinal_number>] Usage: srvctl modify scan_listener {-update|-endpoints [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]} [-netnum <network_number>] [-invitednodes <node_ list>] [-invitedsubnets <subnet_list>] Usage: srvctl remove scan_listener [-netnum <network_number>] [-force] [-noprompt] Usage: srvctl predict scan_listener -scannumber <scan_ordinal_number> [-netnum <network_number>] [-verbose] Usage: srvctl add srvpool -serverpool <pool_name> [-min <min>] [-max <max>] [-importance <importance>] [-servers “<server_list>” | -category <server_category>] [-force] [-eval] [-verbose] Usage: srvctl config srvpool [-serverpool <pool_name>] Usage: srvctl status srvpool [-serverpool <pool_name>] [-detail] Usage: srvctl status server -servers “<server_list>” [-detail] Usage: srvctl relocate server -servers “<server_list>” -serverpool <pool_name> [-force] [-eval] [-verbose] Usage: srvctl modify srvpool -serverpool <pool_name> [-min <min>] [-max <max>] [-importance <importance>] [-servers “<server_list>” | -category <server_category>] [-force] [-eva l] [-verbose] Usage: srvctl remove srvpool -serverpool <pool_name> [-eval] [-verbose] Usage: srvctl add oc4j [-verbose] Usage: srvctl config oc4j Usage: srvctl start oc4j [-verbose] Usage: srvctl stop oc4j [-force] [-verbose] Usage: srvctl relocate oc4j [-node <node_name>] [-verbose] Usage: srvctl status oc4j [-node <node_name>] [-verbose] Usage: srvctl enable oc4j [-node <node_name>] [-verbose] Usage: srvctl disable oc4j [-node <node_name>] [-verbose] Usage: srvctl modify oc4j -rmiport <oc4j_rmi_port> [-verbose] [-force] Usage: srvctl remove oc4j [-force] [-verbose] Usage: srvctl predict oc4j [-verbose] Usage: srvctl add gridhomeserver -storage <base_path> 
-diskgroup <diskgroup_list> Usage: srvctl config gridhomeserver Usage: srvctl start gridhomeserver [-node <node_name>] Usage: srvctl stop gridhomeserver Usage: srvctl relocate gridhomeserver [-node <node_name>] Usage: srvctl status gridhomeserver Usage: srvctl enable gridhomeserver [-node <node_name>] Usage: srvctl disable gridhomeserver [-node <node_name>] Usage: srvctl modify gridhomeserver [-storage <base_path>] [-port <rmi_port> [-force]] Usage: srvctl remove gridhomeserver [-force] Usage: srvctl add havip -id <id> -address {<name>|<ip>} [-netnum <network_number>] [-description <text>] [-skip] Usage: srvctl config havip [-id <id>] Usage: srvctl start havip -id <id> [-node <node_name>] Usage: srvctl stop havip -id <id> [-node <node_name>] [-force] Usage: srvctl relocate havip -id <id> [-node <node_name>] [-force] Usage: srvctl status havip [-id <id>] Usage: srvctl enable havip -id <id> [-node <node_name>] Usage: srvctl disable havip -id <id> [-node <node_name>] Usage: srvctl modify havip -id <id> [-address {<name>|<ip>} [-netnum <network_number>] [-skip]] [-description <text>] Usage: srvctl remove havip -id <id> [-force] Usage: srvctl add exportfs -name <expfs_name> -id <id> -path <exportpath> [-clients <export_clients>] [-options <export_options>] Usage: srvctl config exportfs [-name <expfs_name> | -id <havip id>] Usage: srvctl start exportfs {-name <expfs_name> | -id <havip id> } Usage: srvctl stop exportfs {-name <expfs_name> | -id <havip id> } [-force] Usage: srvctl status exportfs [-name <expfs_name> |-id <havip id>] Usage: srvctl enable exportfs -name <expfs_name> Usage: srvctl disable exportfs -name <expfs_name> Usage: srvctl modify exportfs -name <expfs_name> [-path <exportpath>] [-clients <export_clients>] [-options <export_options>] Usage: srvctl remove exportfs -name <expfs_name> [-force] Usage: srvctl add gridhomeclient -clientdata <file> Usage: srvctl config gridhomeclient Usage: srvctl start gridhomeclient [-node <node_name>] Usage: srvctl stop gridhomeclient Usage: srvctl relocate gridhomeclient [-node <node_name>] Usage: srvctl status gridhomeclient Usage: srvctl enable gridhomeclient [-node <node_name>] Usage: srvctl disable gridhomeclient [-node <node_name>] Usage: srvctl modify gridhomeclient [-clientdata <file>] [-port <rmi_port> [-force]] Usage: srvctl remove gridhomeclient [-force] Usage: srvctl start home -oraclehome <oracle_home> -statefile <state_file> -node <node_name> Usage: srvctl stop home -oraclehome <oracle_home> -statefile <state_file> -node <node_name> [-stopoption <stop_options>] [-force] Usage: srvctl status home -oraclehome <oracle_home> -statefile <state_file> -node <node_name> Usage: srvctl add filesystem -device <volume_device> -path <mountpoint_path> [-volume <volume_name>] [-diskgroup <dg_name>] [-node <node_list> | -serverpool <serverpool_list>] [ -user <user>] [-fstype {ACFS|EXT3|EXT4}] [-fsoptions <options>] [-description <description>] [-appid <application_id>] [-autostart {ALWAYS|NEVER|RESTORE}] Usage: srvctl config filesystem [-device <volume_device>] Usage: srvctl start filesystem -device <volume_device> [-node <node_name>] Usage: srvctl stop filesystem -device <volume_device> [-node <node_name>] [-force] Usage: srvctl status filesystem [-device <volume_device>] [-verbose] Usage: srvctl enable filesystem -device <volume_device> Usage: srvctl disable filesystem -device <volume_device> Usage: srvctl modify filesystem -device <volume_device> [-user <user>] [-path <mountpoint_path>] [[-node <node_list>] | [-serverpool <serverpool_list>]] 
[-fsoptions <options>] [ -description <description>] [-autostart {ALWAYS|NEVER|RESTORE}] [-force] Usage: srvctl remove filesystem -device <volume_device> [-force] Usage: srvctl predict filesystem -device <volume_device> [-verbose] Usage: srvctl config volume [-volume <volume_name>] [-diskgroup <group_name>] [-device <volume_device>] Usage: srvctl start volume {-volume <volume_name> -diskgroup <group_name> | -device <volume_device>} [-node <node_list>] Usage: srvctl stop volume {-volume <volume_name> -diskgroup <group_name> | -device <volume_device>} [-node <node_list>] [-force] Usage: srvctl status volume [-device <volume_device>] [-volume <volume_name>] [-diskgroup <group_name>] [-node <node_list> | -all] Usage: srvctl enable volume {-volume <volume_name> -diskgroup <group_name> | -device <volume_device>} Usage: srvctl disable volume {-volume <volume_name> -diskgroup <group_name> | -device <volume_device>} Usage: srvctl start gns [-loglevel <log_level>] [-node <node_name>] [-verbose] Usage: srvctl stop gns [-node <node_name>] [-force] [-verbose] Usage: srvctl config gns [-detail] [-subdomain] [-multicastport] [-node <node_name>] [-port] [-status] [-version] [-query <name>] [-list] [-clusterguid] [-clustername] [-cluster type] [-loglevel] [-network] [-verbose] Usage: srvctl status gns [-node <node_name>] [-verbose] Usage: srvctl enable gns [-node <node_name>] [-verbose] Usage: srvctl disable gns [-node <node_name>] [-verbose] Usage: srvctl relocate gns [-node <node_name>] [-verbose] Usage: srvctl add gns {-vip {<vip_name> | <ip>} [-skip] [-domain <domain>] [-verbose] | -clientdata <filename>} Usage: srvctl modify gns {-loglevel <log_level> | [-resolve <address>] [-verify <name>] [-parameter <name>:<value>[,<name>:<value>…]] [-vip <vip_name> [-skip]] [-clientdata <f ilename>] [-verbose]} Usage: srvctl remove gns [-force] [-verbose] Usage: srvctl import gns -instance <filename> Usage: srvctl export gns {-instance <filename> | -clientdata <filename>} Usage: srvctl update gns {-advertise <name> -address <address> [-timetolive <time_to_live>] | -delete <name> [-address <address>] | -alias <alias> -name <name> [-timetolive <tim e_to_live>] | -deletealias <alias> | -createsrv <service> -target <target> -protocol <protocol> [-weight <weight>] [-priority <priority>] [-port <port_number>] [-timetolive <tim e_to_live>] [-instance <instance_name>] | -deletesrv <service_name> -target <target> -protocol <protocol> [-instance <instance_name>] | -createtxt <name> -target <target> [-time tolive <time_to_live>] [-namettl <name_ttl>] | -deletetxt <name> -target <target> | -createptr <name> -target <target> [-timetolive <time_to_live>] [-namettl <name_ttl>] | -dele teptr <name> -target <target>} [-verbose] Usage: srvctl add cvu [-checkinterval <check_interval_in_minutes>] Usage: srvctl config cvu Usage: srvctl start cvu [-node <node_name>] Usage: srvctl stop cvu [-force] Usage: srvctl relocate cvu [-node <node_name>] Usage: srvctl status cvu [-node <node_name>] Usage: srvctl enable cvu [-node <node_name>] Usage: srvctl disable cvu [-node <node_name>] Usage: srvctl modify cvu -checkinterval <check_interval_in_minutes> Usage: srvctl remove cvu [-force] Usage: srvctl add mgmtdb [-domain <domain_name>] Usage: srvctl config mgmtdb [-verbose] [-all] Usage: srvctl start mgmtdb [-startoption <start_option>] [-node <node_name>] Usage: srvctl stop mgmtdb [-stopoption <stop_option>] [-force] Usage: srvctl status mgmtdb [-verbose] Usage: srvctl enable mgmtdb [-node <node_name>] Usage: srvctl disable mgmtdb [-node 
<node_name>] Usage: srvctl modify mgmtdb [-pwfile <password_file_path>] [-spfile <server_parameter_file>] Usage: srvctl remove mgmtdb [-force] [-noprompt] [-verbose] Usage: srvctl getenv mgmtdb [-envs “<name>=<value>[,…]”] Usage: srvctl setenv mgmtdb {-envs “<name>=<value>[,…]” | -env “<name=value>”} Usage: srvctl unsetenv mgmtdb -envs “<name>[,..]” Usage: srvctl relocate mgmtdb [-node <node_name>] Usage: srvctl add mgmtlsnr [-endpoints “[TCP:]<port>[, …][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]”] [-skip] Usage: srvctl config mgmtlsnr [-all] Usage: srvctl start mgmtlsnr [-node <node_name>] Usage: srvctl stop mgmtlsnr [-node <node_name>] [-force] Usage: srvctl status mgmtlsnr [-verbose] Usage: srvctl enable mgmtlsnr [-node <node_name>] Usage: srvctl disable mgmtlsnr [-node <node_name>] Usage: srvctl modify mgmtlsnr -endpoints “[TCP:]<port>[,…][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]” Usage: srvctl remove mgmtlsnr [-force] Usage: srvctl getenv mgmtlsnr [ -envs “<name>[,…]”] Usage: srvctl setenv mgmtlsnr { -envs “<name>=<val>[,…]” | -env “<name>=<value>”} Usage: srvctl unsetenv mgmtlsnr -envs “<name>[,…]”
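A few hedged 12c examples built from the usage above; the database, PDB service, and node names are taken from the example cluster earlier in this note and should be adapted to your environment:

$ srvctl status database -db c101cdb1 -verbose
$ srvctl status service -db c101cdb1 -service sc101pdb1 -verbose
$ srvctl config asm -detail
$ srvctl relocate mgmtdb -node racnode3f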
CVU command reference
USAGE:
cluvfy comp <component-name> <component-specific options> [-verbose]
SYNTAX (for Components):
cluvfy comp nodereach -n <node_list> [-srcnode <srcnode>] [-verbose]
cluvfy comp nodecon [-n <node_list>] [-networks <network_list>] [-verbose]
cluvfy comp cfs [-n <node_list>] -f <file_system> [-verbose]
cluvfy comp ssa [-n <node_list>|-flex -hub <hub_list> [-leaf <leaf_list>]] [-s <storageID_list>] [-t {software|data|ocr_vdisk}] [-asm [-asmdev <asm_device_list>]] [-r {10.1|10.2|11.1|11.2|12.1}] [-verbose]
cluvfy comp space [-n <node_list>] -l <storage_location> -z <disk_space>{B|K|M|G} [-verbose]
cluvfy comp sys [-n <node_list>|-flex -hub <hub_list> [-leaf <leaf_list>]] -p {crs|ha|database} [-r {10.1|10.2|11.1|11.2|12.1}] [-osdba <osdba_group>] [-orainv <orainventory_group>] [-fixup] [-fixupnoexec] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]] [-verbose]
cluvfy comp clu [-n <node_list>] [-verbose]
cluvfy comp clumgr [-n <node_list>] [-verbose]
cluvfy comp ocr [-n <node_list>] [-method {sudo|root}] [-location <dir_path>] [-user <user_name>] [-verbose]
cluvfy comp olr [-n <node_list>] [-verbose]
cluvfy comp ha [-verbose]
cluvfy comp freespace [-n <node_list>]
cluvfy comp crs [-n <node_list>] [-verbose]
cluvfy comp nodeapp [-n <node_list>] [-verbose]
cluvfy comp admprv [-n <node_list>]
        -o user_equiv [-sshonly]
        -o crs_inst [-asmgrp <asmadmin_group>] [-orainv <orainventory_group>] [-fixup] [-fixupnoexec] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]]
        -o db_inst [-osdba <osdba_group>] [-fixup] [-fixupnoexec] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]]
        -o db_config -d <oracle_home> [-fixup] [-fixupnoexec] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]]
        [-verbose]
cluvfy comp peer -n <node_list> [-refnode <refnode>] [-r {10.1|10.2|11.1|11.2|12.1}] [-osdba <osdba_group>] [-orainv <orainventory_group>] [-verbose]
cluvfy comp software [-n <node_list>] [-d <oracle_home>] [-r {10.1|10.2|11.1|11.2|12.1}] [-verbose]
cluvfy comp acfs [-n <node_list>] [-f <file_system>] [-verbose]
cluvfy comp asm [-n <node_list>] [-verbose]
cluvfy comp gpnp [-n <node_list>] [-verbose]
cluvfy comp gns -precrsinst [-domain <gns_domain>] -vip <gns_vip> [-networks <network_list>] [-n <node_list>] [-verbose]
            gns -postcrsinst [-verbose]
cluvfy comp scan [-verbose]
cluvfy comp ohasd [-n <node_list>] [-verbose]
cluvfy comp clocksync [-n <node_list>] [-noctss] [-verbose]
cluvfy comp vdisk [-n <node_list>] [-verbose]
cluvfy comp healthcheck [-collect {cluster|database}] [-db <db_unique_name>] [-bestpractice|-mandatory] [-deviations] [-html] [-save [-savedir <save_dir>]]
cluvfy comp dhcp -clustername <cluster_name> [-vipresname <application_vip_resource_names>] [-port <dhcp_port>] [-n <node_list>] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]] [-networks <network_list>] [-verbose]
cluvfy comp dns -server -domain <gns_sub_domain> -vipaddress <gns_vip_address> [-port <dns_port>] [-method {sudo|root}] [-location <dir_path>] [-user <user_name>] [-verbose]
            dns -client -domain <gns_sub_domain> -vip <gns_vip> [-port <dns_port>] [-last] [-method {sudo|root}] [-location <dir_path>] [-user <user_name>] [-n <node_list>] [-verbose]
cluvfy comp baseline -compare <baseline1,baseline2,..>
            baseline -collect {all|cluster|database} [-n <node_list>] [-db <db_unique_name>] [-bestpractice|-mandatory] [-binlibfilesonly] [-reportname <report_name>] [-savedir <save_dir>]
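A couple of hedged examples of component checks against the example Flex Cluster above; the options are illustrative and the comma-separated node-list format should be verified against your CVU release:

$ cluvfy comp healthcheck -collect cluster -bestpractice -html
$ cluvfy comp sys -flex -hub racnode1f,racnode2f,racnode3f,racnode4f -leaf racnode6f,racnode7f,racnode8f,racnode9f -p crs -r 12.1 -verbose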
USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options> [-verbose]
SYNTAX (for Stages):
cluvfy stage -pre cfs -n <node_list> -s <storageID_list> [-verbose]
cluvfy stage -pre crsinst -file <config_file> [-fixup] [-fixupnoexec] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]] [-verbose]
             crsinst -upgrade [-n <node_list>|-flex -hub <hub_list> [-leaf <leaf_list>]] [-rolling] [-src_crshome <src_crshome>] -dest_crshome <dest_crshome> -dest_version <dest_version> [-fixup] [-fixupnoexec] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]] [-verbose]
             crsinst -n <node_list>|-flex -hub <hub_list> [-leaf <leaf_list>] [-r {10.1|10.2|11.1|11.2|12.1}] [-c <ocr_location_list>] [-q <voting_disk_list>] [-osdba <osdba_group>] [-orainv <orainventory_group>] [-asm -presence {local|near|far} [-asmgrp <asmadmin_group>] [-asmdev <asm_device_list>] [-asmcredentials <client_data_file>]] [-crshome <crs_home>] [-fixup] [-fixupnoexec] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]] [-networks <network_list>] [-dhcp -clustername <cluster_name> [-dhcpport <dhcp_port>]] [-verbose]
cluvfy stage -pre acfscfg -n <node_list> [-asmdev <asm_device_list>] [-verbose]
cluvfy stage -pre dbinst -n <node_list> [-r {10.1|10.2|11.1|11.2|12.1}] [-osdba <osdba_group>] [-osbackup <osbackup_group>] [-osdg <osdg_group>] [-oskm <oskm_group>] [-d <oracle_home>] [-fixup] [-fixupnoexec] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]] [-verbose]
             dbinst -upgrade -src_dbhome <src_dbhome> [-dbname <dbname-list>] -dest_dbhome <dest_dbhome> -dest_version <dest_version> [-fixup] [-fixupnoexec] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]] [-verbose]
cluvfy stage -pre dbcfg -n <node_list> -d <oracle_home> [-fixup] [-fixupnoexec] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]] [-verbose]
cluvfy stage -pre hacfg [-osdba <osdba_group>] [-orainv <orainventory_group>] [-fixup] [-fixupnoexec] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]] [-verbose]
cluvfy stage -pre nodeadd -n <node_list> [-vip <vip_list>]|-flex [-hub <hub_list> [-vip <vip_list>]] [-leaf <leaf_list>] [-fixup] [-fixupnoexec] [-method {sudo|root} [-location <dir_path>] [-user <user_name>]] [-verbose]
cluvfy stage -post hwos -n <node_list> [-s <storageID_list>] [-verbose]
cluvfy stage -post cfs -n <node_list> -f <file_system> [-verbose]
cluvfy stage -post crsinst -n <node_list> [-method {sudo|root}] [-location <dir_path>] [-user <user_name>] [-verbose]
cluvfy stage -post acfscfg -n <node_list> [-verbose]
cluvfy stage -post hacfg [-verbose]
cluvfy stage -post nodeadd -n <node_list> [-verbose]
cluvfy stage -post nodedel -n <node_list> [-verbose]
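A hedged example of pre/post stage checks for the same cluster; the node lists are illustrative:

$ cluvfy stage -post hwos -n racnode1f,racnode2f,racnode3f,racnode4f -verbose
$ cluvfy stage -pre crsinst -flex -hub racnode1f,racnode2f,racnode3f,racnode4f -leaf racnode6f,racnode7f,racnode8f,racnode9f -fixup -verbose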
oifcfg command reference
Name: oifcfg - Oracle Interface Configuration Tool.

Usage: oifcfg iflist [-p [-n]]
       Shows the available interfaces that you can configure with 'setif' by querying the
       operating system to find which network interfaces are present on this node.
       where
         -p  displays a heuristic assumption of the interface type (PRIVATE, PUBLIC, or UNKNOWN)
         -n  displays the netmask

       oifcfg setif  {-node <nodename> | -global} {<if_name>/<subnet>:<if_type>[,<if_type>...]}[,...]
       oifcfg getif  [-node <nodename> | -global] [ -if <if_name>[/<subnet>] [-type <if_type>] ]
       oifcfg delif  {{-node <nodename> | -global} [<if_name>[/<subnet>]] [-force] | -force}
       oifcfg [-help]

       <nodename>  - name of the host, as known to a communications network
       <if_name>   - name by which the interface is configured in the system
       <subnet>    - subnet address of the interface
       <if_type>   - one or more comma-separated interface types { cluster_interconnect | public | asm }   ====>> asm type is new in 12c
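A short hedged example of the remaining oifcfg verbs; the eth2 interface and subnet in the delif line are hypothetical:

$ oifcfg iflist -p -n
$ oifcfg getif -global
$ oifcfg delif -global eth2/10.10.10.0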
ghctl command reference
Usage: ghctl add diskgroup -diskgroup <diskgroup_name>
Usage: ghctl add client -client <cluster_name> -toclientdata <path> [-maproles <role>=<username>[+<username>...][,<role>=<username>[+<username>...]...] | -cloneclient <cluster_name>]
Usage: ghctl add role -role <role_name> [-hasRoles <roles>]
Usage: ghctl add image -image <imagename> -workingcopy <workingcopyname> [-imagetype <ORACLEDBSOFTWARE | SOFTWARE>] [-series <seriesname>]
Usage: ghctl add series -series <series_name> [-image <image_name>]
Usage: ghctl add workingcopy -workingcopy <workingcopy_name> [-image <image_name>] [-oraclebase <oraclebase_path>] [-path <where_path>] [-user <user_name>] [-dbname <unique_db_name> [-dbtype {RACONENODE | RAC}] -storage <storage_path> -dbtemplate <file_path> {-node <node_list> | -serverpool <pool_name> [-pqpool <pool_name> | -newpqpool <pool_name> -pqcardinality <cardinality>] | -newpool <pool_name> -cardinality <cardinality> [-pqpool <pool_name> | -newpqpool <pool_name> -pqcardinality <cardinality>]}] [-client <cluster_name>]
Usage: ghctl config client [-client <cluster_name>]
Usage: ghctl config role [-role <rolename>]
Usage: ghctl config server
Usage: ghctl config image [-image <image_name> [-privs]]
Usage: ghctl config series [-series <series_name> | -image <image_name>]
Usage: ghctl config workingcopy [-workingcopy <workingcopy_name> [-privs] | -image <image_name>]
Usage: ghctl import client -clientdata <file_path>
Usage: ghctl import image -image <image_name> -path <path> [-mountpath <path>] [-imagetype <ORACLEDBSOFTWARE | SOFTWARE>]
Usage: ghctl export client -client <cluster_name> -clientdata <file_path>
Usage: ghctl delete client -client <cluster_name>
Usage: ghctl delete image -image <imagename>
Usage: ghctl delete role -role <role_name>
Usage: ghctl delete series -series <series_name> [-force]
Usage: ghctl delete workingcopy -workingcopy <workingcopy_name> [-dbname <unique_db_name>]
Usage: ghctl promote image -image <image_name> -state {TESTABLE|RESTRICTED|PUBLISHED}
Usage: ghctl insertimage series -series <series_name> -image <image_name> [-before <image_name>]
Usage: ghctl deleteimage series -series <series_name> -image <image_name>
Usage: ghctl allow image -image <image_name> {-user <username> [-client <cluster_name>] | -role <rolename>}
Usage: ghctl disallow image -image <image_name> {-user <username> [-client <cluster_name>] | -role <rolename>}
Usage: ghctl modify client -client <cluster_name> [-enabled {TRUE|FALSE}] [-maproles <role>=<username>[+<username>...][,<role>=<username>[+<username>...]...]]
Usage: ghctl grant role {-role <role_name> { -user <user_name> [-client <cluster_name>] | -grantee <role_name>}} | {[-client <cluster_name>] -maproles <role>=<username>[+<username>...][,<role>=<username>[+<username>...]...]}
Usage: ghctl revoke role {-role <role_name> { -user <user_name> [-client <cluster_name>] | -grantee <role_name>}} | {[-client <cluster_name>] -maproles <role>=<username>[+<username>...][,<role>=<username>[+<username>...]...]}
Usage: ghctl remove diskgroup -diskgroup <diskgroup_name>
Usage: ghctl move database -sourcewc <workingcopy_name> -patchedwc <workingcopy_name> [-dbname <unique_db_name>] [-nonrolling]
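A hedged illustration of how these commands fit together (the Grid Home Server itself is configured with "srvctl add gridhomeserver", shown in the srvctl reference above); the image, series, working copy, client, and path names below are hypothetical:

$ ghctl import image -image db12101_base -path /u01/stage/db12101 -imagetype ORACLEDBSOFTWARE
$ ghctl add series -series db12c_series -image db12101_base
$ ghctl promote image -image db12101_base -state PUBLISHED
$ ghctl add workingcopy -workingcopy wc_db12101_prod -image db12101_base -oraclebase /u01/app/oracle -path /u01/app/oracle/product/12.1.0.1/dbhome_prov -client clientcluster1
$ ghctl config workingcopy -workingcopy wc_db12101_prod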
agctl command reference
Usage: agctl add apache_tomcat <instance_name> --catalina_home <catalina_home> --java_home <java_home> [--catalina_base <catalina_base>] [--jre_home <jre_home>] [--webserver <webserver_instance>] [{--server_pool <server_pool> | --nodes <node1,node2[,...]>}] [{--vip_name <vip_name> | --network <network_number> --ip <ip_address> --user <user> --group <group>}] [--oracle_home <oracle_home>] [--databases <database1[,...]>] [--db_services <db_service1[,...]>] [--filesystems <filesystem1[,...]>] [--attribute <name1=value1[,...]>]
Usage: agctl check apache_tomcat <instance_name> [--node <node_name>]
Usage: agctl config apache_tomcat [<instance_name>]
Usage: agctl disable apache_tomcat <instance_name> [--node <node_name>]
Usage: agctl enable apache_tomcat <instance_name> [--node <node_name>]
Usage: agctl modify apache_tomcat <instance_name> [--catalina_home <catalina_home>] [--java_home <java_home>] [--catalina_base <catalina_base>] [--jre_home <jre_home>] [--webserver <webserver_instance>] [{--server_pool <server_pool> | --nodes <node1,node2[,...]>}] [{--vip_name <vip_name> | --network <network_number> --ip <ip_address> --user <user> --group <group>}] [--oracle_home <oracle_home>] [--databases <database1[,...]>] [--db_services <db_service1[,...]>] [--filesystems <filesystem1[,...]>] [--attribute <name1=value1[,...]>]
Usage: agctl relocate apache_tomcat <instance_name> [{--server_pool <server_pool> | --node <node_name>}]
Usage: agctl remove apache_tomcat <instance_name> [--force]
Usage: agctl start apache_tomcat <instance_name> [{--server_pool <server_pool> | --node <node_name>}]
Usage: agctl status apache_tomcat [<instance_name>] [--node <node_name>]
Usage: agctl stop apache_tomcat <instance_name> [--force]
Usage: agctl query releaseversion Lists Oracle Grid Infrastructure Bundled Agents release version
Usage: agctl query deployment Lists Oracle Grid Infrastructure Bundled Agents deployment type
Usage: agctl add apache_webserver <instance_name> --apache_home <apache_home> [--configuration_file <configuration_file>] [{--server_pool <server_pool> | --nodes <node1,node2[,...]>}] [{--vip_name <vip_name> | --network <network_number> --ip <ip_address> --user <user> --group <group>}] [--oracle_home <oracle_home>] [--databases <database1[,...]>] [--db_services <db_service1[,...]>] [--filesystems <filesystem1[,...]>] [--attribute <name1=value1[,...]>]
Usage: agctl check apache_webserver <instance_name> [--node <node_name>]
Usage: agctl config apache_webserver [<instance_name>]
Usage: agctl disable apache_webserver <instance_name> [--node <node_name>]
Usage: agctl enable apache_webserver <instance_name> [--node <node_name>]
Usage: agctl modify apache_webserver <instance_name> [--apache_home <apache_home>] [--configuration_file <configuration_file>] [{--server_pool <server_pool> | --nodes <node1,node2[,...]>}] [{--vip_name <vip_name> | --network <network_number> --ip <ip_address> --user <user> --group <group>}] [--oracle_home <oracle_home>] [--databases <database1[,...]>] [--db_services <db_service1[,...]>] [--filesystems <filesystem1[,...]>] [--attribute <name1=value1[,...]>]
Usage: agctl relocate apache_webserver <instance_name> [{--server_pool <server_pool> | --node <node_name>}]
Usage: agctl remove apache_webserver <instance_name> [--force]
Usage: agctl start apache_webserver <instance_name> [{--server_pool <server_pool> | --node <node_name>}]
Usage: agctl status apache_webserver [<instance_name>] [--node <node_name>]
Usage: agctl stop apache_webserver <instance_name> [--force]
Usage: agctl query releaseversion Lists Oracle Grid Infrastructure Bundled Agents release version
Usage: agctl query deployment Lists Oracle Grid Infrastructure Bundled Agents deployment type
Usage: agctl add goldengate <instance_name> --gg_home <GoldenGate_Home> [{--server_pool <server_pool> | --nodes <node1,node2[,...]>}] [--instance_type <instance_type> [{--vip_name <vip_name> | --network <network_number> --ip <ip_address> --user <user> --group <group>}] [--oracle_home <oracle_home>] [--databases <database1[,...]>] [--db_services <db_service1[,...]>] [--filesystems <filesystem1[,...]>] [--attribute <name1=value1[,...]>] [--environment_vars <name1=value1[,...]>] --monitor_extracts <ext1[,...]> --monitor_replicats <rep1[,...]>
Usage: agctl check goldengate <instance_name> [--node <node_name>]
Usage: agctl config goldengate [<instance_name>]
Usage: agctl disable goldengate <instance_name> [--node <node_name>]
Usage: agctl enable goldengate <instance_name> [--node <node_name>]
Usage: agctl modify goldengate <instance_name> --gg_home <GoldenGate_Home> [{--server_pool <server_pool> | --nodes <node1,node2[,...]>}] [--instance_type <instance_type> [{--vip_name <vip_name> | --network <network_number> --ip <ip_address> --user <user> --group <group>}] [--oracle_home <oracle_home>] [--databases <database1[,...]>] [--db_services <db_service1[,...]>] [--filesystems <filesystem1[,...]>] [--attribute <name1=value1[,...]>] [--environment_vars <name1=value1[,...]>] --monitor_extracts <ext1[,...]> --monitor_replicats <rep1[,...]>
Usage: agctl relocate goldengate <instance_name> [{--server_pool <server_pool> | --node <node_name>}]
Usage: agctl remove goldengate <instance_name> [--force]
Usage: agctl start goldengate <instance_name> [{--server_pool <server_pool> | --node <node_name>}]
Usage: agctl status goldengate [<instance_name>] [--node <node_name>]
Usage: agctl stop goldengate <instance_name> [--force]
Usage: agctl query releaseversion Lists Oracle Grid Infrastructure Bundled Agents release version
Usage: agctl query deployment Lists Oracle Grid Infrastructure Bundled Agents deployment type
Usage: agctl add siebel_gateway <instance_name> --siebel_home <siebel_home> [{--server_pool <server_pool> | --nodes <node1,node2[,...]>}] [{--vip_name <vip_name> | --network <network_number> --ip <ip_address> --user <user> --group <group>}] [--oracle_home <oracle_home>] [--databases <database1[,...]>] [--db_services <db_service1[,...]>] [--filesystems <filesystem1[,...]>] [--attribute <name1=value1[,...]>] [--oracle_client_home <oracle_client_home>]
Usage: agctl check siebel_gateway <instance_name> [--node <node_name>]
Usage: agctl config siebel_gateway [<instance_name>]
Usage: agctl disable siebel_gateway <instance_name> [--node <node_name>]
Usage: agctl enable siebel_gateway <instance_name> [--node <node_name>]
Usage: agctl modify siebel_gateway <instance_name> [--siebel_home <siebel_home>] [{--server_pool <server_pool> | --nodes <node1,node2[,...]>}] [{--vip_name <vip_name> | --network <network_number> --ip <ip_address> --user <user> --group <group>}] [--oracle_home <oracle_home>] [--databases <database1[,...]>] [--db_services <db_service1[,...]>] [--filesystems <filesystem1[,...]>] [--attribute <name1=value1[,...]>] [--oracle_client_home <oracle_client_home>]
Usage: agctl relocate siebel_gateway <instance_name> [{--server_pool <server_pool> | --node <node_name>}]
Usage: agctl remove siebel_gateway <instance_name> [--force]
Usage: agctl start siebel_gateway <instance_name> [{--server_pool <server_pool> | --node <node_name>}]
Usage: agctl status siebel_gateway [<instance_name>] [--node <node_name>]
Usage: agctl stop siebel_gateway <instance_name> [--force]
Usage: agctl query releaseversion Lists Oracle Grid Infrastructure Bundled Agents release version
Usage: agctl query deployment Lists Oracle Grid Infrastructure Bundled Agents deployment type
Usage: agctl add siebel_server <instance_name> --siebel_home <siebel_home> --enterprise_name <enterprise_name> --server_name <server_name> [--gateway_resource <gateway_resource>] [{--server_pool <server_pool> | --nodes <node1,node2[,...]>}] [{--vip_name <vip_name> | --network <network_number> --ip <ip_address> --user <user> --group <group>}] [--oracle_home <oracle_home>] [--databases <database1[,...]>] [--db_services <db_service1[,...]>] [--filesystems <filesystem1[,...]>] [--attribute <name1=value1[,...]>] [--oracle_client_home <oracle_client_home>]
Usage: agctl check siebel_server <instance_name> [--node <node_name>]
Usage: agctl config siebel_server [<instance_name>]
Usage: agctl disable siebel_server <instance_name> [--node <node_name>]
Usage: agctl enable siebel_server <instance_name> [--node <node_name>]
Usage: agctl modify siebel_server <instance_name> [--siebel_home <siebel_home>] [--enterprise_name <enterprise_name>] [--server_name <server_name>] [--gateway_resource <gateway_resource>] [{--server_pool <server_pool> | --nodes <node1,node2[,...]>}] [{--vip_name <vip_name> | --network <network_number> --ip <ip_address> --user <user> --group <group>}] [--oracle_home <oracle_home>] [--databases <database1[,...]>] [--db_services <db_service1[,...]>] [--filesystems <filesystem1[,...]>] [--attribute <name1=value1[,...]>] [--oracle_client_home <oracle_client_home>]
Usage: agctl relocate siebel_server <instance_name> [{--server_pool <server_pool> | --node <node_name>}]
Usage: agctl remove siebel_server <instance_name> [--force]
Usage: agctl start siebel_server <instance_name> [{--server_pool <server_pool> | --node <node_name>}]
Usage: agctl status siebel_server [<instance_name>] [--node <node_name>]
Usage: agctl stop siebel_server <instance_name> [--force]
Usage: agctl query releaseversion Lists Oracle Grid Infrastructure Bundled Agents release version
Usage: agctl query deployment Lists Oracle Grid Infrastructure Bundled Agents deployment type
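A hedged example of registering an Oracle GoldenGate instance with the bundled agents using the goldengate syntax above; the instance name, GoldenGate home, VIP name, and extract/replicat names are hypothetical, c101cdb1 and racnode1f come from the example cluster, and the exact value format expected by --databases may differ by agent version:

$ agctl add goldengate gg_source --gg_home /u01/app/oracle/goldengate12 --oracle_home /u01/app/oracle/product/12.1.0.1/dbhome_1 --databases c101cdb1 --vip_name gg_vip --monitor_extracts ext1 --monitor_replicats rep1
$ agctl start goldengate gg_source --node racnode1f
$ agctl status goldengate gg_source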
crsctl command reference
Usage:
  crsctl add       - add a resource, type or other entity
  crsctl check     - check a service, resource or other entity
  crsctl config    - output autostart configuration
  crsctl debug     - obtain or modify debug state
  crsctl delete    - delete a resource, type or other entity
  crsctl disable   - disable autostart
  crsctl discover  - discover DHCP server
  crsctl enable    - enable autostart
  crsctl eval      - evaluate operations on resource or other entity without performing them
  crsctl get       - get an entity value
  crsctl getperm   - get entity permissions
  crsctl lsmodules - list debug modules
  crsctl modify    - modify a resource, type or other entity
  crsctl query     - query service state
  crsctl pin       - Pin the nodes in the nodelist
  crsctl relocate  - relocate a resource, server or other entity
  crsctl replace   - replaces the location of voting files
  crsctl release   - release a DHCP lease
  crsctl request   - request a DHCP lease or an action entrypoint
  crsctl setperm   - set entity permissions
  crsctl set       - set an entity value
  crsctl start     - start a resource, server or other entity
  crsctl status    - get status of a resource or other entity
  crsctl stop      - stop a resource, server or other entity
  crsctl unpin     - unpin the nodes in the nodelist
  crsctl unset     - unset a entity value, restoring its default
crsctl add -h
Usage: crsctl add {resource|type|serverpool|policy} <name> <options>
  where
    name      Name of the CRS entity
    options   Options to be passed to the add command

See individual CRS entity help for more details

crsctl add crs administrator -u <user_name> [-f]
  where
    user_name   User name to be added to the admin list or "*"
    -f          Override user name validity check

crsctl add wallet -type <wallet_type> <options>
  where
    wallet_type   Type of wallet, i.e. APPQOS, APPQOSUSER, APPQOSDB, MGMTDB, OSUSER or CVUDB
    options       Options to be passed to the add command

crsctl add css votedisk <vdisk> [...] <options>
  where
    vdisk [...]   One or more blank-separated voting file paths
    options       Options to be passed to the add command

crsctl add category <categoryName> -attr "<attrName>=<value>[,...]" [-i]
  where
    categoryName   Name of server category to be added
    attrName       Attribute name
    value          Attribute value
    -i             Fail if request cannot be processed immediately

crsctl add credentials -target <target> [[-user <user_name>] [-passwd] | [-keylength <nbits>]]
  where
    target      Target as is listed by 'crsctl query credentials targetlist'
    user_name   User to be added to wallet. If not specified, will be pseudo-randomly generated. Can only be specified with 'userpass' credentials.
    -passwd     Indication that password will be specified. Password will be read from standard input. If not specified, will be pseudo-randomly generated. Can only be specified with 'userpass' credentials.
    nbits       Length of the key. If not specified, defaults to 128. Can only be specified with 'keypair' credentials.
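A hedged example of the add syntax above; the category name, its EXPRESSION attribute, and the voting disk path are hypothetical, and the attribute names available for server categories should be confirmed in the Clusterware documentation:

$ crsctl add category big_nodes -attr "EXPRESSION='(MEMORY_SIZE > 32768)'"
$ crsctl add css votedisk /dev/rdisk/vote_disk03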
crsctl check -h
Usage: crsctl check crs Check status of OHAS and CRS stack
crsctl check cluster [[-all]|[-n <server>[…]]] Check status of CRS stack
crsctl check ctss Check status of Cluster Time Synchronization Services
crsctl check resource {<resName> […]|-w <filter>} [-n <server>] [-k <cid>] [-d <did>] Check status of resources
crsctl check css Check status of Cluster Synchronization Services
crsctl check evm Check status of Event Manager
crsctl config -h
Usage: crsctl config crs Display OHAS autostart config on this server
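For example, a quick health pass typically combines the stack checks above with the autostart configuration query:
$ crsctl check cluster -all
$ crsctl check crs
$ crsctl config crs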
crsctl debug -h
Usage: crsctl debug statedump {crs|css|evm} where crs Cluster Ready Services css Cluster Synchronization Services evm Event Manager
crsctl delete -h
Usage: crsctl delete resource <resName> […] [-f] [-i] where resName […] One or more resource names to be deleted -f Force option -i Fail if request cannot be processed immediately
crsctl delete type <typeName> […] [-i] where typeName […] One or more blank-separated resource type names -i Fail if request cannot be processed immediately
crsctl delete wallet -type <wallet_type> [-name <name>] [-user <user_name>] Delete the designated wallet or user from wallet where wallet_type Type of wallet i.e. APPQOSADMIN, APPQOSUSER, APPQOSDB, MGMTDB, OSUSER or CVUDB. name Name is required for APPQOSDB and CVUDB wallets. user_name User to be deleted from wallet.
crsctl delete category <categoryName> […] [-i] where categoryName […] One or more server categories to be deleted -i Fail if request cannot be processed immediately
crsctl delete serverpool <spName> […] [-i] where spName […] One or more server pool names to be deleted -i Fail if request cannot be processed immediately
crsctl delete crs administrator -u <user_name> [-f] where user_name User name to be deleted from the admin list or "*" -f Override user name validity check
crsctl delete css votedisk {<vdiskGUID>[…]|<vdisk>[…]|+<diskgroup>} where vdiskGUID […] One or more blank-separated voting file GUIDs vdisk […] One or more blank-separated voting file paths diskgroup The name of a diskgroup containing voting files; allowed only when clusterware is in exclusive mode
crsctl delete node -n <nodename> where nodename Node to be deleted
crsctl delete credentials -target <target> [-path <path>] [-id <ID>] [-local] where target Target listed by 'crsctl query credentials targetlist' path Path, relative to the target, of a credentials domain ID Credential member ID to delete -local Delete credential information from OLR rather than OCR
crsctl disable -h
Usage: crsctl disable crs Disable OHAS autostart on this server
crsctl discover -h
Usage: crsctl discover dhcp -clientid <clientid> [-port <port>] [-subnet <subnet>] Discover DHCP server where clientid client ID for which discovery will be attempted port The port on which the discovery packets will be sent subnet The subnet on which DHCP discover packets will be sent
crsctl enable -h
Usage: crsctl enable crs Enable OHAS autostart on this server
Usage: crsctl eval start resource {<resname>|-w <filter>} [-n <server>] [-f] Evaluate start of specified resources where resname Name of a resource -w Resource filter -n Server Name -f Evaluate as with force option
crsctl eval stop resource {<resname>|-w <filter>} [-f] Evaluate stop of specified resources where resname Name of a resource -w Resource filter -f Evaluate as with force option
crsctl eval relocate resource {<resName> | {<resName>|-all} -s <server> | -w <filter>} {-n <server>} [-f] Evaluate relocate of specified resource where resName Name of a resource -all Relocate all resources -s Source server -w Resource filter (e.g., "TYPE = ora.database.type") -n Destination server -f Evaluate as with force option
crsctl eval add serverpool <spName> [-file <filePath> | -attr "<attrName>=<value>[,…]"] [-f] [-admin [-l <level>] [-x] [-a]] Evaluate addition of specified serverpool where spName Name of the serverpool -file File containing attributes -attr Attributes for the serverpool -f Evaluate as with force option -admin Cluster Administrator view level Output display level ('serverpools' for server pools, 'resources' for resources or 'all' for all) -x Show differences only -a Show all resources
crsctl eval modify serverpool <spName> {-file <filePath> | -attr "<attrName>=<value>[,…]"} [-f] [-admin [-l <level>] [-x] [-a]] Evaluate modification of specified serverpool where spName Name of the serverpool -file File containing attributes -attr Attributes for the serverpool -f Evaluate as with force option -admin Cluster Administrator view level Output display level ('serverpools' for server pools, 'resources' for resources or 'all' for all) -x Show differences only -a Show all resources
crsctl eval delete serverpool <spName> [-admin [-l <level>] [-x] [-a]] Evaluate deletion of specified server pool where spName Name of the serverpool -f Evaluate as with force option -admin Cluster Administrator view level Output display level ('serverpools' for server pools, 'resources' for resources or 'all' for all) -x Show differences only -a Show all resources
crsctl eval add server <serverName> [-file <filePath> | -attr "<attrName>=<value>[,…]"] [-f] [-admin [-l <level>] [-x] [-a]] Evaluate addition of server where serverName Name of the server -file File containing attributes -attr Attributes for the server -f Evaluate as with force option -admin Cluster Administrator view level Output display level ('serverpools' for server pools, 'resources' for resources or 'all' for all) -x Show differences only -a Show all resources
crsctl eval relocate server <serverName> -to <toPool> [-f] [-admin [-l <level>] [-x] [-a]] Evaluate relocation of specified server where serverName Name of the server -to ServerPool to add the server to -f Evaluate as with force option -admin Cluster Administrator view level Output display level ('serverpools' for server pools, 'resources' for resources or 'all' for all) -x Show differences only -a Show all resources
crsctl eval delete server <serverName> [-f] [-admin [-l <level>] [-x] [-a]] Evaluate deletion of specified server where serverName Name of the server -f Evaluate as with force option -admin Cluster Administrator view level Output display level ('serverpools' for server pools, 'resources' for resources or 'all' for all) -x Show differences only -a Show all resources
crsctl eval add resource <resName> -type <typeName> [-file <filePath> | -attr "<attrName>=<value>[,…]"] [-f] Evaluate add of specified resource where resName Name of the resource type Resource type attrName Attribute name value Attribute value -f Evaluate as with force option
crsctl eval modify resource <resName> -attr "<attrName>=<value>[,…]" [-f] Evaluate modification of specified resource where resName Name of the resource attrName Attribute name value Attribute value -f Evaluate as with force option
crsctl eval fail resource {<resname> | -w <filter>} [-n <server>] Evaluate failure of specified resource where resName Name of the resource filter Resource filter server Server Name
crsctl eval activate policy <policyName> [-f] [-admin [-l <level>] [-x] [-a]] Evaluate activation of specified policy where policyName Name of policy -f Evaluate as with force option -admin Cluster Administrator view level Output display level ('serverpools' for server pools, 'resources' for resources or 'all' for all) -x Show differences only -a Show all resources
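For example, a what-if run before changing the server layout; app_pool is a hypothetical pool and racnode4f is one of the example Hub nodes:
$ crsctl eval add serverpool app_pool -attr "MIN_SIZE=1,MAX_SIZE=2" -admin -l serverpools   # hypothetical pool
$ crsctl eval relocate server racnode4f -to app_pool -f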
crsctl get -h
Usage: crsctl get {log|trace} {mdns|gpnp|css|crf|crs|ctss|evm|gipc} "<name1>,…" Get the log/trace levels for specific modules
crsctl get log res <resname> Get the log level for an agent
crsctl get hostname Displays the host name
crsctl get nodename Displays the node name
crsctl get cluster mode {config|status} Get the cluster mode
crsctl get clientid dhcp -cluname <cluster_name> -viptype <vip_type> [-vip <VIPResName>] [-n <nodename>] Generate client IDs as used by RAC agents for configured cluster resources where cluster_name Name of the cluster to be configured vip_type Type of VIP resource: HOSTVIP, SCANVIP, or APPVIP VIPResName User defined application VIP name (required for APPVIP vip_type) nodename Node for which the client ID is required (required for HOSTVIP vip_type)
crsctl get node role {config|status} [-node <nodename> | -all] Gets the current role of nodes in the cluster
crsctl get cluster hubsize Gets the current configured value for the maximum number of Hub nodes in the cluster
crsctl get credentials <options> Exports the specified credentials to an XML file
crsctl get css <parameter> Displays the value of a Cluster Synchronization Services parameter
clusterguid diagwait disktimeout misscount reboottime priority logfilesize leafmisscount
crsctl get css ipmiaddr Displays the IP address of the local IPMI device as set in the Oracle registry.
crsctl get {css <parameter>|hostname|nodename}
crsctl get cpu equivalency Gets the current configured value for server attribute CPU_EQUIVALENCY
crsctl get resource use Gets the current configured value for server attribute RESOURCE_USE_ENABLED
crsctl get server label Gets the current configured value for server attribute SERVER_LABEL
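Typical get calls on a Flex Cluster, for example:
$ crsctl get cluster mode status
$ crsctl get node role status -all
$ crsctl get css misscount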
crsctl getperm -h
Usage: crsctl getperm resource <resName> [[-u <user_name>]|[-g <group_name>]] where resName Get permissions for named resource -u Get permissions for user name -g Get permissions for group name
crsctl getperm type <typeName> [[-u <user_name>]|[-g <group_name>]] where typeName Get permissions for named resource type -u Get permissions for user name -g Get permissions for group name
crsctl getperm serverpool <spName> [[-u <user_name>]|[-g <group_name>]] where spName Get permissions for named server pool -u Get permissions for user name -g Get permissions for group name
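For example, to inspect permissions on a resource and a server pool (the names and the user shown are placeholders):
$ crsctl getperm resource app1.vip                # hypothetical resource
$ crsctl getperm serverpool app_pool -u oracle    # hypothetical pool/user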
crsctl lsmodules -h
Usage: crsctl lsmodules {mdns|gpnp|css|crf|crs|ctss|evm|gipc} where mdns multicast Domain Name Server gpnp Grid Plug-n-Play Service css Cluster Synchronization Services crf Cluster Health Monitor crs Cluster Ready Services ctss Cluster Time Synchronization Service evm EventManager gipc Grid Interprocess Communications
crsctl modify -h
Usage: crsctl modify {resource|type|serverpool|policy|policyset} <name> <options> where name Name of the CRS entity options Options to be passed to the modify command
See individual CRS entity help for more details
crsctl modify wallet -type <wallet_type> <options> where wallet_type Type of wallet i.e. APPQOSADMIN, APPQOSUSER, APPQOSDB, MGMTDB, OSUSER or CVUDB. options Options to be passed to the modify command
crsctl modify policyset {-attr "<attrName>=<value>[,…]" | -file <fileName>} [-ksp] where attrName Name of the policy set attribute, e.g. SERVER_POOL_NAMES, etc. value Value of the attribute fileName Name of text file containing policy set definition -ksp Keep server pools in the system
crsctl modify category <categoryName> -attr "<attrName>=<value>[,…]" [-f] [-i] where categoryName Name of server category to be modified attrName Attribute name value Attribute value -f Force option -i Fail if request cannot be processed immediately
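For example, adjusting the hypothetical pool and category used earlier:
$ crsctl modify serverpool app_pool -attr "MAX_SIZE=3"                        # hypothetical pool
$ crsctl modify category bigmem_cat -attr "EXPRESSION=(MEMORY_SIZE > 65536)"  # hypothetical category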
crsctl query -h
Usage: crsctl query crs administrator Display admin list
crsctl query crs autostart Gets the value of automatic start delay and server count
crsctl query crs activeversion Lists the Oracle Clusterware operating version
crsctl query crs releaseversion Lists the Oracle Clusterware release version
crsctl query crs softwareversion [<nodename>] Lists the version of Oracle Clusterware software installed where Default List software version of the local node nodename List software version of named node
crsctl query crs releasepatch Lists the Oracle Clusterware release patch level
crsctl query crs softwarepatch [<host>] Lists the patch level of Oracle Clusterware software installed where Default List software patch level of the local host host List software patch level of the named host
crsctl query css ipmiconfig Checks whether Oracle Clusterware has been configured for IPMI
crsctl query css ipmidevice Checks whether the IPMI device/driver is present
crsctl query css votedisk Lists the voting files used by Cluster Synchronization Services
crsctl query wallet -type <wallet_type> [-name <name>] [-user <user_name> | -all] Check if the designated wallet or user exists where wallet_type Type of wallet i.e. APPQOSADMIN, APPQOSUSER, APPQOSDB, MGMTDB, OSUSER or CVUDB. name Name is required for APPQOSDB and CVUDB wallets. user_name User to be queried from wallet. all List all users in wallet
crsctl query dns -servers Lists the system configured DNS server, search paths, attempt and timeout values
crsctl query dns -name <name> [-dnsserver <DNS_server_address>] [-port <port>] [-attempts <attempts>] [-timeout <timeout>] [-v] Returns a list of addresses returned by DNS lookup of the name with the specified DNS server Where name Fully qualified domain name to lookup DNS_server_address Address of the DNS server on which name needs to be looked up port Port on which DNS server is listening attempts Number of retry attempts timeout Timeout in seconds
crsctl query socket udp [-address <address>] [-port <port>] Verifies that a daemon can listen on specified address and port Where address IP address on which socket needs to be created port port on which socket needs to be created
crsctl query credentials targetlist Lists the valid credentials targets to be used with credentials commands
crsctl query credentials -target <target> Lists the valid credentials under a specific target
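Commonly used query calls, for example:
$ crsctl query crs activeversion
$ crsctl query crs softwarepatch
$ crsctl query css votedisk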
crsctl pin -h
Usage: crsctl pin css -n <node1>[…] Pin the nodes (make leases non-expiring).
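For example, pinning two of the example Hub nodes:
$ crsctl pin css -n racnode1f racnode2f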
crsctl relocate -h
Usage: crsctl relocate resource {<resName> [-k <cid>] | {<resName>|-all} -s <server> | -w <filter>} [-n <server>] [-env "env1=val1,env2=val2,…"] [-f] [-i] Relocate designated resources where resName Relocate named resource -all Relocate all resources -s Source server -w Resource filter (e.g., "TYPE = ora.database.type") -n Destination server -k Cardinality ID -env Attribute overrides for this command -f Force option -i Fail if request cannot be processed immediately
crsctl relocate server <server> […] -c <spName> [-f] [-i] Relocate designated servers where server […] One or more blank-separated server names spName Destination server pool name -f Force option -i Fail if request cannot be processed immediately
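For example (app1.vip and app_pool are hypothetical; the node names are from the example cluster):
$ crsctl relocate resource app1.vip -n racnode2f     # hypothetical resource
$ crsctl relocate server racnode3f -c app_pool -f    # hypothetical pool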
crsctl replace -h
Usage: crsctl replace {discoverystring <ds_string>| votedisk [<+diskgroup>|<vdisk> … <vdisk>]} where ds_string comma-separated voting file paths without spaces and enclosed in quotes diskgroup diskgroup where the voting files will be located in ASM vdisk location of the voting files outside of ASM separated by space
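For example, moving the voting files into an ASM disk group (the disk group name +DATA is a placeholder):
$ crsctl replace votedisk +DATA   # placeholder disk group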
crsctl release -h
Usage: crsctl release dhcp -clientid <clientid> [-port <port>] [-subnet <subnet>] Release DHCP lease for the client ID specified where clientid client ID for which DHCP lease release request will be attempted port The port on which the DHCP lease release packets will be sent subnet The subnet on which DHCP release lease packets will be sent
crsctl request -h
Usage: crsctl request dhcp -clientid <clientid> [-port <port>] [-subnet <subnet>] Request DHCP lease for the client ID specified where clientid client ID for which DHCP lease request will be attempted port The port on which the DHCP lease request packets will be sent subnet The subnet on which DHCP lease request packets will be sent
crsctl request action <actionName> {-r <resName>[…]|-w <filter>} [-env] [-i] Request the specified action where actionName Action to be requested resName […] One or more blank-separated resource names -w Resource filter -env Attribute overrides for this command -i Fail if request cannot be processed immediately
crsctl setperm -h
Usage: crsctl setperm {resource|type|serverpool} <name> {-u <aclstring>|-x <aclstring>|-o <user_name>|-g <group_name>} where -u Update entity ACL -x Delete entity ACL -o Change entity owner -g Change entity primary group
ACL (Access Control List) string:
{ user:<user_name>[:<readPerm><writePerm><execPerm>] | group:<group_name>[:<readPerm><writePerm><execPerm>] | other[::<readPerm><writePerm><execPerm>] } where user User ACL group Group ACL other Other ACL readPerm Read permission ("r" grants, "-" forbids) writePerm Write permission ("w" grants, "-" forbids) execPerm Execute permission ("x" grants, "-" forbids)
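For example, granting a group read/execute access and changing the owner of a hypothetical resource:
$ crsctl setperm resource app1.vip -u group:appadmin:r-x   # hypothetical resource/group
$ crsctl setperm resource app1.vip -o oracle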
crsctl set -h
Usage: crsctl set crs activeversion [<version>] [-force] Sets the Oracle Clusterware operating version
crsctl set {log|trace} {mdns|gpnp|css|crf|crs|ctss|evm|gipc} "<name1>=<lvl1>,…" Set the log/trace levels for specific modules within daemons
crsctl set log res <resname>=<lvl> Set the log levels for agents
crsctl set css <parameter> <value> Sets the value of a Cluster Synchronization Services parameter
crsctl set css {ipmiaddr|ipmiadmin} <value> Sets IPMI configuration data in the Oracle registry
crsctl set css votedisk {asm <diskgroup>|raw <vdisk>[…]} Defines the set of voting disks to be used by CRS
crsctl set crs autostart [delay <delayTime>] [servercount <count>] Sets the Oracle Clusterware automatic resource start criteria
crsctl set cluster mode {standard|flex} Set the cluster mode
crsctl set node role {hub|leaf} [-node <nodename>] Sets the configured role of the node in the cluster
crsctl set cluster hubsize <value> Sets the configuration value for the maximum number of Hub nodes in the cluster
crsctl set credentials <options> Imports the specified credentials from an XML file
crsctl set cpu equivalency Sets the configuration value for server attribute CPU_EQUIVALENCY
crsctl set resource use Sets the configuration value for server attribute RESOURCE_USE_ENABLED
crsctl set server label Sets the configuration value for server attribute SERVER_LABEL
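For example, on the Flex Cluster described here (treat this as a sketch; a node role change typically only takes effect after the stack is restarted on that node):
$ crsctl set cluster hubsize 4
$ crsctl set node role hub -node racnode6f
$ crsctl set crs autostart delay 60 servercount 2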
crsctl start -h
Usage: crsctl start resource {<resName> […]|-w <filter>|-all} [-n <server> | -s <server_pools>] [-k <cid>] [-d <did>] [-env "env1=val1,env2=val2,…"] [-begin|-end] [-f] [-i] [-l] Start designated resources where resName […] One or more blank-separated resource names -w Resource filter (e.g., "TYPE = ora.database.type") -all All resources -n Server name -k Resource cardinality ID -d Resource degree ID -env Attribute overrides for this command -f Force option -i Fail if request cannot be processed immediately -l Leave affected resources as-is if request fails -begin Start transparent HA action -end End transparent HA action -s Server pool names (e.g. -s pool1 pool2)
crsctl start crs [-excl [-nocrs] [-cssonly]] | [-wait | -waithas | -nowait] | [-noautostart] Start OHAS on this server where -excl Start Oracle Clusterware in exclusive mode -nocrs Start Oracle Clusterware in exclusive mode without starting CRS -nowait Do not wait for OHAS to start -wait Wait until startup is complete and display all progress and status messages -waithas Wait until startup is complete and display OHASD progress and status messages -cssonly Start only CSS -noautostart Start only OHAS
crsctl start cluster [[-all]|[-n <server>[…]]] Start CRS stack where Default Start local server -all Start all servers -n Start named servers server […] One or more blank-separated server names
crsctl start rollingpatch Transition Oracle Clusterware and Oracle ASM to rolling patch mode
crsctl start rollingupgrade <version> Transition Oracle Clusterware and Oracle ASM to rolling upgrade mode
crsctl start ip -A {<IP_name>|<IP_address>}/<net_mask>/<interface_name> Start an IP on the given interface with specified net mask Where IP_name Name which resolves to an IP. If it is not a fully qualified domain name then standard name search will be used. IP_address IP address net_mask Subnet mask for the IP to start interface_name Interface on which to start the IP
crsctl start testdns [-address <IP_address>] [-port <port>][-domain <GNS_domain>] [-once][-v] Start a test DNS listener that listens on the given address at the given port and for specified domain Where IP_address IP address to be used by the listener (defaults to hostname) port The port on which the listener will listen. Default value is 53. domain The domain query for which to listen. By default, all domain queries are processed.
-once Flag indicating that DNS listener should exit after one DNS query packet is received -v Verbose output
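For example, starting the local stack with progress messages and then the CRS stack on all servers:
$ crsctl start crs -wait
$ crsctl start cluster -all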
crsctl status -h
Usage: crsctl status {resource|type|serverpool|server} [<name>|-w <filter>] [-g] where name CRS entity name -w CRS entity filter -g Check if CRS entities are registered
crsctl status ip -A {<IP_name>|<IP_address>} Check if the IP is alive Where IP_name Name which resolves to an IP. If name is not fully qualified domain name then standard name search will be used. IP_address IP address
crsctl status testdns [-address <IP_address>] [-port <port>] [-v] Check status of DNS server for specified domain Where IP_address DNS server address (defaults to hostname) port The port on which the DNS server is listening. Default value for the port is 53. -v Verbose output
crsctl status category [<categoryName>[…] | -w <filter> | -server <serverName>] Check status of designated server categories where categoryName […] One or more blank-separated category names -w Server Category filter -server Used to list all the categories that a particular server matches
crsctl status policyset [-file <fileName>] Show the current policies in the policy set where fileName Name with which to create a text file that can be used with the 'crsctl modify policyset -file' command
crsctl status policy [<policy_name>[…] | -w <filter> | -active] Check status of designated policies where policy_name […] One or more blank-separated policy names -w Policy filter -active Used to display the active policy
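For example, filtering status by resource type and listing server pools and the active policy:
$ crsctl status resource -w "TYPE = ora.database.type"
$ crsctl status serverpool
$ crsctl status policy -active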
crsctl stop -h
Usage: crsctl stop resource {<resName> […]|-w <filter>|-all} [-n <server> | -s <server_pools>] [-k <cid>] [-d <did>] [-env "env1=val1,env2=val2,…"] [-begin|-end] [-f] [-i] [-l] Stop designated resources where resName […] One or more blank-separated resource names -w Resource filter (e.g., "TYPE = ora.database.type") -all All resources -n Server name -k Resource cardinality ID -d Resource degree ID -env Attribute overrides for this command -f Force option -i Fail if request cannot be processed immediately -l Leave affected resources as-is if request fails -begin Start transparent HA action -end End transparent HA action -s Server pool names (e.g. -s pool1 pool2)
crsctl stop crs [-f] Stop OHAS on this server where -f Force option
crsctl stop cluster [[-all]|[-n <server>[…]]] [-f] Stop CRS stack where Default Stop local server -all Stop all servers -n Stop named servers server […] One or more blank-separated server names -f Force option
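For example, stopping a single (hypothetical) resource, then the CRS stack cluster-wide, then OHAS on the local node:
$ crsctl stop resource app1.vip   # hypothetical resource
$ crsctl stop cluster -all
$ crsctl stop crs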
crsctl stop rollingpatch Transition Oracle Clusterware and Oracle ASM out of rolling patch mode
crsctl stop ip -A {<IP_name>|<IP_address>}/<interface_name> Stop the designated IP address Where IP_name Name which resolves to an IP. If it is not fully qualified domain name then standard name search will be performed IP_address IP address interface_name Interface on which IP was started
crsctl stop testdns [-address <IP_address>] [-port <port>] [-v] Stop the test DNS listener that listens on the given address and at the given port Where IP_address IP address on which testdns was started (defaults to hostname) -port The port on which the listener is listening. Default value for the port is 53. -v Verbose output
crsctl unpin -h
Usage: crsctl unpin css -n <node1>[…] Unpin the nodes (allow leases to expire).
crsctl unset -h
Usage: crsctl unset css <parameter> Unsets the value of a Cluster Synchronization Services parameter, restoring its default value
diagwait disktimeout misscount reboottime priority logfilesize leafmisscount
crsctl unset css ipmiconfig Unsets the IPMI configuration and deletes the associated Oracle Registry entries.
crsctl unset cluster hubsize Unsets the configuration value for the maximum number of Hub nodes in the cluster
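For example, restoring a CSS parameter to its default:
$ crsctl unset css misscount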