
Oracle Corporation


Oracle® Clusterware

Administration and Deployment Guide

12c Release 1 (12.1)

E48819-05

July 2014


Oracle Clusterware Administration and Deployment Guide, 12c Release 1 (12.1)

E48819-05

Copyright © 2007, 2014, Oracle and/or its affiliates. All rights reserved.

Primary Author:  Richard Strohm

Contributor:  The Oracle Database 12c documentation is dedicated to Mark Townsend, who was an inspiration to all who worked on this release.

Contributors:  Ahmed Abbas, Troy Anthony, Ram Avudaiappan, Mark Bauer, Eric Belden, Suman Bezawada, Gajanan Bhat, Burt Clouse, Jonathan Creighton, Mark Fuller, Apostolos Giannakidis, Angad Gokakkar, John Grout, Andrey Gusev, Winston Huang, Sameer Joshi, Sana Karam, Roland Knapp, Erich Kreisler, Raj K. Kammend, Karen Li, Barb Lundhild, Bill Manry, Saar Maoz, John McHugh, Markus Michalewicz, Anil Nair, Siva Nandan, Philip Newlan, Srinivas Poovala, Sampath Ravindhran, Kevin Reardon, Dipak Saggi, K.P. Singh, Duane Smith, Janet Stern, Su Tang, Douglas Williams, Soo Huey Wong

This software and related documentation are provided under a license agreement containing restrictions on use and disclosure and are protected by intellectual property laws. Except as expressly permitted in your license agreement or allowed by law, you may not use, copy, reproduce, translate, broadcast, modify, license, transmit, distribute, exhibit, perform, publish, or display any part, in any form, or by any means. Reverse engineering, disassembly, or decompilation of this software, unless required by law for interoperability, is prohibited.

The information contained herein is subject to change without notice and is not warranted to be error-free. If you find any errors, please report them to us in writing.

If this is software or related documentation that is delivered to the U.S. Government or anyone licensing it on behalf of the U.S. Government, the following notice is applicable:

U.S. GOVERNMENT END USERS: Oracle programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, delivered to U.S. Government end users are "commercial computer software" pursuant to the applicable Federal Acquisition Regulation and agency-specific supplemental regulations. As such, use, duplication, disclosure, modification, and adaptation of the programs, including any operating system, integrated software, any programs installed on the hardware, and/or documentation, shall be subject to license terms and license restrictions applicable to the programs. No other rights are granted to the U.S. Government.

This software or hardware is developed for general use in a variety of information management applications. It is not developed or intended for use in any inherently dangerous applications, including applications that may create a risk of personal injury. If you use this software or hardware in dangerous applications, then you shall be responsible to take all appropriate fail-safe, backup, redundancy, and other measures to ensure its safe use. Oracle Corporation and its affiliates disclaim any liability for any damages caused by use of this software or hardware in dangerous applications.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.

Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered trademark of The Open Group.

This software or hardware and documentation may provide access to or information on content, products, and services from third parties. Oracle Corporation and its affiliates are not responsible for and expressly disclaim all warranties of any kind with respect to third-party content, products, and services. Oracle Corporation and its affiliates will not be responsible for any loss, costs, or damages incurred due to your access to or use of third-party content, products, or services.


3 Policy-Based Cluster and Capacity Management

This chapter provides an overview of Oracle Clusterware policy-based management of servers and resources used by Oracle databases or applications.

This chapter includes the following topics:

  • Overview of Server Pools and Policy-Based Management

  • Overview of Server Categorization

  • Overview of Cluster Configuration Policies and the Policy Set

  • Server Configuration and Server State Attributes

  • Managing Memory Pressure for Database Servers

Overview of Server Pools and Policy-Based Management

Oracle Clusterware 11g release 2 (11.2) introduced server pools, where resources that Oracle Clusterware manages are contained in logical groups of servers called server pools. Resources are hosted on a shared infrastructure and are contained within server pools. Examples of resources that Oracle Clusterware manages are database instances, database services, application VIPs, and application components.

In an Oracle Flex Cluster, with Hub Nodes and Leaf Nodes, you can use server pools to run particular types of workloads on cluster member nodes, while providing simplified administration options. You can use a cluster configuration policy set to provide dynamic management of cluster policies across the cluster.


See Also:

Chapter 4, "Oracle Flex Clusters" for details about Oracle Flex Cluster configuration

You can continue to manage resources in an Oracle Clusterware standard Cluster by using the Oracle Clusterware 11g release 2 (11.2) server pool model, or you can manually manage resources by using the traditional fixed, non-server pool method.

This section includes the following topics:

  • Server Pools and Server Categorization

  • Server Pools and Policy-Based Management

  • How Server Pools Work

Server Pools and Server Categorization

Administrators can deploy and manage servers dynamically using server pools by identifying servers distinguished by particular attributes, a process called server categorization. In this way, you can create clusters made up of heterogeneous nodes.


See Also:

"Overview of Server Categorization" for details about server categorization

Server Pools and Policy-Based Management

With policy-based management, administrators specify the server pool (excluding the Generic and Free pools) in which the servers run. For example, a database administrator uses SRVCTL to create a server pool for servers hosting a database or database service. A clusterware administrator uses CRSCTL to create server pools for non-database use, such as creating a server pool for servers hosting an application.

Policy-based management:

  • Enables online server reallocation based on a defined policy to satisfy workload capacity requirements

  • Guarantees the allocation of required resources for critical work as defined by the policy

  • Ensures isolation where necessary, so that you can provide dedicated servers in a cluster for applications and databases

  • Enables policies to be configured to change pools in accordance with business needs or application demand, so that pools provide the required capacity at the right time

Server pools provide resource isolation to prevent applications running in one server pool from accessing resources running in another server pool. Oracle Clusterware provides fine-grained role separation between server pools. This capability maintains required management role separation between these groups in organizations that have clustered environments managed by separate groups.


See Also:

Appendix B, "Oracle Clusterware Resource Reference" for more information about resource attributes

Oracle Clusterware efficiently allocates servers in the cluster. Server pool attributes, defined when the server pool is created, dictate placement and prioritization of servers based on the IMPORTANCE server pool attribute.


See Also:

"Overview of Cluster Configuration Policies and the Policy Set" for details about managing server pools to respond to business or application demand

How Server Pools Work

Server pools divide the cluster into logical groups of servers hosting both singleton and uniform applications. The application can be a database service or a non-database application. An application is uniform when the application workload is distributed over all servers in the server pool. An application is singleton when it runs on a single server within the server pool. Oracle Clusterware role-separated management determines access to and use of a server pool.

Database administrators use the Server Control (SRVCTL) utility to create and manage server pools that contain Oracle RAC databases. Cluster administrators use the Oracle Clusterware Control (CRSCTL) utility to create and manage all other server pools, such as server pools for non-database applications. Only cluster administrators have permission to create top-level server pools.

Top-level server pools:

  • Logically divide the cluster

  • Are always exclusive, meaning that a server can reside in only one server pool at any point in time

Default Server Pools

When Oracle Clusterware is installed, two internal server pools are created automatically: Generic and Free. All servers in a new installation are assigned to the Free server pool, initially. Servers move from Free to newly defined server pools automatically.

The Free Server Pool

The Free server pool contains servers that are not assigned to any other server pools. The attributes of the Free server pool are restricted, as follows:

  • SERVER_NAMES, MIN_SIZE, and MAX_SIZE cannot be edited by the user

  • IMPORTANCE and ACL can be edited by the user

The Generic Server Pool

The Generic server pool stores any server that is not in a top-level server pool and is not policy managed. Servers that host non-policy-managed applications, such as administrator-managed databases, are statically assigned to the Generic server pool.

The Generic server pool's attributes are restricted, as follows:

  • No one can modify configuration attributes of the Generic server pool (all attributes are read-only)

  • You can only create administrator-managed databases in the Generic server pool, if the server on which you want to create the database is one of the following:

    • Online and exists in the Generic server pool

    • Online and exists in the Free server pool, in which case Oracle Clusterware moves the server into the Generic server pool

    • Online and exists in any other server pool and the user is either a cluster administrator or is allowed to use the server pool's servers, in which case, the server is moved into the Generic server pool

    • Offline and the user is a cluster administrator

Server Pool Attributes

You can use SRVCTL or CRSCTL to create server pools for databases and other applications, respectively. If you use SRVCTL to create a server pool, then you can only use a subset of the server pool attributes described in this section. If you use CRSCTL to create server pools, then you can use the entire set of server pool attributes.

Server pool attributes are the attributes that you define to create and manage server pools.

The decision about which utility to use is based upon the type of resource being hosted in the server pool. You must use SRVCTL to create server pools that host Oracle databases. You must use CRSCTL to create server pools that host non-database resources such as middle tiers and applications.

When you use SRVCTL to create a server pool, the server pool attributes available to you are:


-category
-importance
-min
-max
-serverpool
-servers

SRVCTL prepends "ora." to the name of the server pool.
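The prefixing rule can be illustrated with a tiny helper (a hypothetical function for illustration, not an Oracle API; SRVCTL applies this prefix itself):

```python
def srvctl_pool_name(name: str) -> str:
    """Return a pool name as SRVCTL registers it: prefixed with 'ora.'.

    Illustrative helper only. SRVCTL performs this prefixing internally,
    which is why SRVCTL-created pools appear to CRSCTL as ora.<name>.
    """
    if name.startswith("ora."):
        return name
    return "ora." + name

print(srvctl_pool_name("myDbPool"))  # ora.myDbPool
```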

When you use CRSCTL to create a server pool, all attributes listed and described in Table 3-1 are available to you.


See Also:

"crsctl add serverpool" for more information

Table 3-1 Server Pool Attributes

Attribute | Values and Format | Description
ACL

String in the following format:

owner:user:rwx,pgrp:group:rwx,other::r--

Defines the owner of the server pool and the privileges granted to various operating system users and groups.

The value of this optional attribute is populated at the time a server pool is created based on the ACL of the user creating the server pool, unless explicitly overridden. The value can subsequently be changed, if such a change is allowed based on the existing privileges of the server pool.

In the string:

  • owner: The operating system user of the server pool owner, followed by the privileges of the owner

  • pgrp: The operating system group that is the primary group of the owner of the server pool, followed by the privileges of members of the primary group

  • other: Followed by privileges of others

  • r: Read only

  • w: Modify attributes of the pool or delete it

  • x: Assign resources to this pool

By default, the identity of the client that creates the server pool is the owner. Also by default, root and the user specified as the owner have full privileges. You can grant required operating system users and operating system groups their privileges by setting the appropriate entries in the ACL attribute.

The operating system user that creates the server pool is the owner of the server pool, and the ACL attribute for the server pool reflects the UNIX-like read, write, and execute ACL definitions for the user, primary group, group, and other.
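The ACL string format can be illustrated with a small parsing sketch (an illustrative helper, not part of any Oracle tool; the user and group names below are assumptions):

```python
def parse_pool_acl(acl: str) -> dict:
    """Parse a server pool ACL string of the documented form
    'owner:user:rwx,pgrp:group:rwx,other::rwx' into a dict:
    {entry_kind: {"name": ..., "perms": ...}}.

    Illustrative only; crsctl performs its own parsing and validation.
    """
    entries = {}
    for part in acl.split(","):
        kind, name, perms = part.split(":")
        entries[kind] = {"name": name, "perms": perms}
    return entries

# Hypothetical owner 'grid' and primary group 'oinstall':
acl = parse_pool_acl("owner:grid:rwx,pgrp:oinstall:rwx,other::r--")
print(acl["owner"]["name"])   # grid
print(acl["other"]["perms"])  # r--
```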

ACTIVE_SERVERS

A string of server names in the following format:

server_name1 server_name2 ...

Oracle Clusterware automatically manages this attribute, which contains the space-delimited list of servers that are currently assigned to a server pool.

EXCLUSIVE_POOLS

String

This optional attribute indicates if servers assigned to this server pool are shared with other server pools. A server pool can explicitly state that it is mutually exclusive of any other server pool that has the same value for this attribute. Two or more server pools are mutually exclusive when the sets of servers assigned to them do not have a single server in common. For example, server pools A and B must be mutually exclusive if they both have the value of this attribute set to the same string, such as pools_A_B.

Top-level server pools are mutually exclusive, by default.

IMPORTANCE

Any integer from 0 to 1000

Relative importance of the server pool, with 0 denoting the lowest level of importance and 1000, the highest. This optional attribute is used to determine how to reconfigure the server pools when a node joins or leaves the cluster. The default value is 0.

MIN_SIZE

Any nonnegative integer

The minimum size of a server pool. If the number of servers contained in a server pool is below the number you specify in this attribute, then Oracle Clusterware automatically moves servers from other pools into this one until that number is met.

Note: The value of this optional attribute does not set a hard limit. It governs the priority for server assignment whenever the cluster is reconfigured. The default value is 0.

MAX_SIZE

Any nonnegative integer or -1 (no limit)

The maximum number of servers a server pool can contain. This attribute is optional and is set to -1 (no limit), by default.

Note: A value of -1 for this attribute means that the server pool can span the entire cluster.

NAME

String

The name of the server pool, which you must specify when you create the server pool. Server pool names must be unique within the domain of names of user-created entities, such as resources, types, and servers. A server pool name has a 254 character limit and can contain any platform-supported characters except the exclamation point (!), the tilde (~), and spaces. A server pool name cannot begin with a period nor with ora.

Note: When you create server pools using SRVCTL, the utility prepends "ora." to the name of the server pool.

PARENT_POOLS

A string of space-delimited server pool names in the following format:

sp1 sp2 ...

Use of this attribute makes it possible to create nested server pools. Server pools listed in this attribute are referred to as parent server pools. A server pool included in a parent server pool is referred to as a child server pool.

Note: If you use SRVCTL to create the server pool, then you cannot specify this attribute.

SERVER_CATEGORY

String

The name of a registered server category, used as part of server categorization. Oracle Clusterware standard Clusters and Oracle Flex Clusters have default categories of hub and leaf. When you create a server pool, if you set a value for SERVER_CATEGORY, then you cannot set a value for SERVER_NAMES. Only one of these parameters may have a non-empty value.

Use the SERVER_CATEGORY attribute to classify servers assigned to a server pool based on server attributes. You can organize servers and server pools in a cluster to match specific workload to servers and server pools, based on server attributes that you define.

See Also: "crsctl status server" for a list of server attributes

SERVER_NAMES

A string of space-delimited server names in the following format:

server1 server2 ...

A list of candidate node names that may be associated with a server pool. If you do not provide a set of server names for this optional attribute, then Oracle Clusterware is configured so that any server may be assigned to any server pool, to the extent allowed by values of other attributes, such as PARENT_POOLS.

The server names identified as candidate node names are not validated to confirm that they are currently active cluster members. Use this attribute to define servers as candidates that have not yet been added to the cluster.

If you set a value for SERVER_NAMES, then you cannot set a value for SERVER_CATEGORY; only one of these attributes may have a non-empty value.

Note: If you set the SERVER_CATEGORY attribute and you need to specify individual servers, then you can list servers by name using the EXPRESSION server category attribute.
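The mutual exclusivity of SERVER_NAMES and SERVER_CATEGORY can be expressed as a small validation sketch (illustrative only; the category name below is an assumption, and crsctl enforces this rule itself):

```python
def validate_pool_attrs(server_names: str = "", server_category: str = "") -> None:
    """Raise if both SERVER_NAMES and SERVER_CATEGORY are non-empty,
    mirroring the documented rule that only one may have a value."""
    if server_names and server_category:
        raise ValueError(
            "SERVER_NAMES and SERVER_CATEGORY are mutually exclusive; "
            "set at most one of them"
        )

validate_pool_attrs(server_names="server1 server2")        # OK
validate_pool_attrs(server_category="my_hub_category")     # OK (hypothetical name)
try:
    validate_pool_attrs("server1", "my_hub_category")
except ValueError as e:
    print("rejected:", e)
```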


How Oracle Clusterware Assigns New Servers Using Server Pools

Oracle Clusterware assigns new servers to server pools in the following order:

  1. Generic server pool

  2. User-created server pool

  3. Free server pool

Oracle Clusterware continues to assign servers to server pools until the following conditions are met:

  1. All server pools are filled in order of importance to their minimum size (MIN_SIZE).

  2. All server pools are filled in order of importance to their maximum size (MAX_SIZE).

  3. By default, any servers not placed in a server pool go into the Free server pool.

    You can modify the IMPORTANCE attribute for the Free server pool. If the value of the IMPORTANCE attribute of the Free server pool is greater than that of one or more of the other server pools, then the Free server pool receives any remaining servers once the MIN_SIZE values of those other server pools are met.

When a server joins a cluster, several things occur.

Consider the server pools configured in Table 3-2:

Table 3-2 Sample Server Pool Attributes Configuration

NAME     IMPORTANCE  MIN_SIZE  MAX_SIZE  PARENT_POOLS  EXCLUSIVE_POOLS
sp1      1           1         10
sp2      3           1         6
sp3      2           1         2
sp2_1    2           1         5         sp2           S123
sp2_2    1           1         5         sp2           S123

For example, assume that there are no servers in a cluster; all server pools are empty.

When a server, named server1, joins the cluster:

  1. Server-to-pool assignment commences.

  2. Oracle Clusterware only processes top-level server pools (those that have no parent server pools), first. In this example, the top-level server pools are sp1, sp2, and sp3.

  3. Oracle Clusterware lists the server pools in order of IMPORTANCE, as follows: sp2, sp3, sp1.

  4. Oracle Clusterware assigns server1 to sp2 because sp2 has the highest IMPORTANCE value and its MIN_SIZE value has not yet been met.

  5. Oracle Clusterware processes the remaining two server pools, sp2_1 and sp2_2. The sizes of both server pools are below the value of the MIN_SIZE attribute (both server pools are empty and have MIN_SIZE values of 1).

  6. Oracle Clusterware lists the two remaining pools in order of IMPORTANCE, as follows: sp2_1, sp2_2.

  7. Oracle Clusterware assigns server1 to sp2_1 but cannot assign server1 to sp2_2 because sp2_1 is configured to be exclusive with sp2_2.
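The steps above can be sketched as a small simulation (illustrative Python using the pool attributes from Table 3-2; Oracle Clusterware's actual placement logic is considerably more involved):

```python
# Pools from Table 3-2: importance, MIN_SIZE, parent pool, exclusivity group.
POOLS = {
    "sp1":   {"importance": 1, "min": 1, "parent": None,  "excl": None},
    "sp2":   {"importance": 3, "min": 1, "parent": None,  "excl": None},
    "sp3":   {"importance": 2, "min": 1, "parent": None,  "excl": None},
    "sp2_1": {"importance": 2, "min": 1, "parent": "sp2", "excl": "S123"},
    "sp2_2": {"importance": 1, "min": 1, "parent": "sp2", "excl": "S123"},
}

def assign(server, assignment):
    """Assign one joining server: process top-level pools first, then the
    chosen pool's children, each group in descending IMPORTANCE order."""
    used_excl = set()
    top = [n for n, p in POOLS.items() if p["parent"] is None]
    for name in sorted(top, key=lambda n: -POOLS[n]["importance"]):
        if len(assignment[name]) < POOLS[name]["min"]:
            assignment[name].append(server)
            kids = [n for n, p in POOLS.items() if p["parent"] == name]
            for kid in sorted(kids, key=lambda n: -POOLS[n]["importance"]):
                excl = POOLS[kid]["excl"]
                if excl in used_excl:
                    continue  # exclusive with an already-filled sibling
                if len(assignment[kid]) < POOLS[kid]["min"]:
                    assignment[kid].append(server)
                    if excl:
                        used_excl.add(excl)
            break
    return assignment

result = assign("server1", {n: [] for n in POOLS})
print(result)  # sp2 and sp2_1 each hold server1; the other pools stay empty
```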

After processing, the cluster configuration appears as follows:

Table 3-3 Post Processing Server Pool Configuration

Server Pool Name  Assigned Servers
sp1
sp2               server1
sp3
sp2_1             server1
sp2_2

Servers Moving from Server Pool to Server Pool

If the number of servers in a server pool falls below the value of the MIN_SIZE attribute for the server pool (such as when a server fails), then, based on the values you set for the MIN_SIZE and IMPORTANCE attributes of all server pools, Oracle Clusterware can move servers from other server pools into the deficient server pool. Oracle Clusterware selects the servers to move according to the following criteria:

  • For server pools that have a lower IMPORTANCE value than the deficient server pool, Oracle Clusterware can take servers from those server pools even if it means that the number of servers falls below the value for the MIN_SIZE attribute.

  • For server pools with equal or greater IMPORTANCE, Oracle Clusterware only takes servers from those server pools if the number of servers in a server pool is greater than the value of its MIN_SIZE attribute.
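The two donor-selection rules above can be sketched as a minimal helper (illustrative only, assuming simple pool records; this is not the actual Oracle Clusterware algorithm):

```python
def pick_donor(pools, needy):
    """Pick a pool that may donate a server to the deficient pool `needy`:
    lower-importance pools can donate even if that drops them below MIN_SIZE;
    equal-or-higher-importance pools donate only their surplus servers.
    `pools` maps name -> {"importance": int, "min": int, "servers": [...]}.
    """
    need = pools[needy]
    candidates = []
    for name, p in pools.items():
        if name == needy or not p["servers"]:
            continue
        if p["importance"] < need["importance"]:
            candidates.append(name)    # may fall below its own MIN_SIZE
        elif len(p["servers"]) > p["min"]:
            candidates.append(name)    # only servers above MIN_SIZE
    # Prefer taking a server from the least important eligible pool.
    return min(candidates, key=lambda n: pools[n]["importance"], default=None)

# Hypothetical pool records for illustration:
pools = {
    "critical": {"importance": 10, "min": 2, "servers": ["s1"]},  # deficient
    "batch":    {"importance": 1,  "min": 2, "servers": ["s2", "s3"]},
    "web":      {"importance": 10, "min": 1, "servers": ["s4"]},  # no surplus
}
print(pick_donor(pools, "critical"))  # batch
```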

Managing Server Pools Using Default Attributes

By default, each server pool is configured with the following attribute options for managing server pools:

  • MIN_SIZE: The minimum number of servers the server pool should contain.

    If the number of servers in a server pool is below the value of this attribute, then Oracle Clusterware automatically moves servers from elsewhere into the server pool until the number of servers reaches the attribute value.

  • MAX_SIZE: The maximum number of servers the server pool should contain.

  • IMPORTANCE: A number from 0 to 1000 (0 being least important) that ranks a server pool among all other server pools in a cluster.

In addition, you can assign additional attributes to provide more granular management of server pools as part of a cluster configuration policy. Attributes such as EXCLUSIVE_POOLS and SERVER_CATEGORY can help you create policies for your server pools that enhance performance and build tuning and management design into your server pools.

Overview of Server Categorization

Oracle Clusterware 11g release 2 (11.2) introduced server pools as a means for specifying resource placement and administering server allocation and access. Originally, server pools were restricted to a set of basic attributes characterizing servers as belonging to a given pool, with no way to distinguish between types of servers; all servers were considered to be equal in relation to their processors, physical memory, and other characteristics.

Server categorization enables you to organize servers into particular categories by using attributes such as processor types, memory, and other distinguishing system features. You can configure server pools to restrict eligible members of the pool to a category of servers, which share a particular set of attributes.


Note:

If you create policies with Oracle Database Quality of Service Management (Oracle Database QoS Management), then you categorize servers by setting server pool directive overrides, and CRSCTL commands using the policy and policyset nouns are disabled. Also, if you switch from using Oracle Clusterware policies to using Oracle Database QoS Management policies, then the Oracle Clusterware policies are replaced, because the two policy types cannot coexist. Oracle recommends that you create a backup using crsctl status policyset -file file_name before you switch policies.


Overview of Cluster Configuration Policies and the Policy Set

A cluster configuration policy is a document that contains exactly one definition for each server pool managed by the cluster configuration policy set. A cluster configuration policy set is a document that defines the names of all server pools configured for the cluster and definitions for all policies.


Note:

Oracle Clusterware 11g release 2 (11.2) supports only a single server pool configuration. You must manually make any changes to the server pool configuration when you want the change to take effect.

In Oracle Clusterware 12c, you use the policies defined in the cluster configuration policy set for server pool specification and management, and Oracle Clusterware manages the server pools according to the policies in the policy set. With a cluster configuration policy set, for example, you can allocate more servers to OLTP workloads during weekday business hours to respond to demand, allocate more servers to batch workloads on evenings and weekends, and perform transitions of server pool configuration or server allocation atomically.

At any point in time, only one policy is in effect for the cluster. But you can create several different policies, so that you can configure pools of servers with parameters to reflect differences in requirements for the cluster based on business needs or demand, or based on calendar dates or times of the day.
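Because only one policy is in effect at a time, calendar-driven policy selection amounts to a simple switch. A sketch, where the policy names and schedule are assumptions for illustration (a policy is then activated through CRSCTL):

```python
def active_policy(weekday: int, hour: int) -> str:
    """Choose which cluster configuration policy should be in effect.
    weekday: 0=Monday .. 6=Sunday. Hypothetical schedule: favor OLTP
    server allocation during weekday business hours, batch otherwise."""
    if weekday < 5 and 8 <= hour < 18:
        return "OLTP_Heavy"   # hypothetical policy name
    return "Batch_Heavy"      # hypothetical policy name

print(active_policy(1, 10))   # OLTP_Heavy  (Tuesday, 10:00)
print(active_policy(5, 10))   # Batch_Heavy (Saturday, 10:00)
```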


See Also:

"An Example Policy Set Configuration" for a more detailed example of policy set configuration

Server Configuration and Server State Attributes

Oracle Clusterware assigns each server a set of attributes as soon as you add a server to a cluster. Some of these attributes describe the physical characteristics of the server, while others describe the state conditions of the server. Also, there are other server attributes which you can modify that help further categorize servers. If you remove the server from the cluster, then Oracle Clusterware deletes the server object.

You use server configuration attributes to categorize servers, as part of a server categorization management policy.

Table 3-4 lists and describes server configuration attributes.

Table 3-4 Server Configuration Attributes

Attribute | Description
ACTIVE_CSS_ROLE

Role being performed by the server. A server can have one of the following roles:

  • hub: Designated role for a server in an Oracle Flex Cluster or the designated role for any node in an Oracle Clusterware standard Cluster.

  • leaf: The server is a Leaf Node in an Oracle Flex Cluster.

Note: You cannot configure this attribute.

CONFIGURED_CSS_ROLE

Configured role for the server. A server can have one of the following roles:

  • hub: Designated role for a server in an Oracle Flex Cluster or the designated role for any node in an Oracle Clusterware standard Cluster.

  • leaf: The server is a Leaf Node in an Oracle Flex Cluster.

Note: You cannot configure this attribute.

CPU_CLOCK_RATE

CPU clock rate in megahertz (MHz)

CPU_COUNT

Number of processors

CPU_EQUIVALENCY

The relative value (expressed as a positive integer greater than or equal to 1) that Oracle Clusterware uses to express how much the CPU power provided by a server deviates (positively or negatively) from its physical characteristics, using a baseline of 1000. For example, a value lower than 1000 indicates that, despite the values of the CPU_COUNT and CPU_CLOCK_RATE parameters, the equivalent power provided by this server is correspondingly lower.

Use the following commands to view or modify, respectively, this attribute on the local server:

crsctl get cpu equivalency
crsctl set cpu equivalency
CPU_HYPERTHREADING

Status of hyperthreading for the CPU. A value of 0 signifies that hyperthreading is not in use. A value of 1 signifies that hyperthreading is in use.

MEMORY_SIZE

Memory size in megabytes (MB)

NAME

The name of the server.

RESOURCE_USE_ENABLED

A server pool resource management parameter. If the value for this attribute is 1, which is the default, then the server can be used for resource placement. If the value is 0, then Oracle Clusterware disallows starting server pool resources on the server. The server remains in the Free pool.

You can review the setting and control this attribute for each cluster member node by using the crsctl get resource use and crsctl set resource use commands.

SERVER_LABEL

An arbitrary value that you can use to label the server. You can use this attribute when setting up server categories. For example, you can specify a location (such as building_A or building_B), which makes it possible to put servers into pools where location is a requirement, by creating an appropriate server category and using it for the server pool.

Use the following commands to view or modify, respectively, this attribute on the local server:

crsctl get server label
crsctl set server label

Table 3-5 lists and describes read-only server state and configuration attributes:

Table 3-5 Server State Attributes

Attribute | Description
ACTIVE_POOLS

A space-delimited list of the names of the server pools to which a server belongs. Oracle Clusterware manages this list automatically.

STATE

A server can be in one of the following states:

  • ONLINE: The server is a member of the cluster and is available for resource placement.

  • OFFLINE: The server is not currently a member of the cluster. Consequently, it is not available for resource placement.

  • JOINING: When a server joins a cluster, Oracle Clusterware processes the server to ensure that it is valid for resource placement. Oracle Clusterware also checks the state of resources configured to run on the server. Once the validity of the server and the state of the resources are determined, the server transitions out of this state.

  • LEAVING: When a planned shutdown for a server begins, the state of the server transitions to LEAVING, making it unavailable for resource placement.

  • VISIBLE: Servers that have Oracle Clusterware running, but not the Cluster Ready Services daemon (CRSD), are put into the VISIBLE state. This usually indicates an intermittent issue or failure, and Oracle Clusterware is trying to recover (restart) the daemon. Oracle Clusterware cannot manage resources on servers while the servers are in this state.

  • RECONFIGURING: When servers move between server pools due to server pool reconfiguration, a server is placed into this state if resources that ran on it in the current server pool must be stopped and relocated. This happens because resources running on the server may not be configured to run in the server pool to which the server is moving. As soon as the resources are successfully relocated, the server is put back into the ONLINE state.

Use the crsctl status server command to obtain server information.

STATE_DETAILS

This is a read-only attribute that Oracle Clusterware manages. The attribute provides additional details about the state of a server. Possible additional details about a server state are:

Server state: ONLINE:

  • AUTOSTARTING RESOURCES

    Indicates that the resource autostart procedure (performed when a server reboots or the Oracle Clusterware stack is restarted) is in progress for the server.

  • AUTOSTART QUEUED

    The server is waiting for the resource autostart to commence. Once that happens, the attribute value changes to AUTOSTARTING RESOURCES.

Server state: RECONFIGURING:

  • STOPPING RESOURCES

    Resources that are restricted from running in a new server pool are stopping.

  • STARTING RESOURCES

    Resources that can run in a new server pool are starting.

  • RECONFIG FAILED

    One or more resources did not stop and thus the server cannot transition into the ONLINE state. At this point, manual intervention is required. You must stop or unregister resources that did not stop. After that, the server automatically transitions into the ONLINE state.

Server state: JOINING:

  • CHECKING RESOURCES

    Whenever a server reboots, the Oracle Clusterware stack restarts, or CRSD on a server restarts, the policy engine must determine the current state of the resources on the server. While that procedure is in progress, this value is returned.
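The STATE_DETAILS attribute appears in the full attribute listing for a server. A sketch, with an illustrative server name and abridged output:

```shell
$ crsctl status server node1 -f
NAME=node1
STATE=ONLINE
STATE_DETAILS=AUTOSTARTING RESOURCES
```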


Managing Memory Pressure for Database Servers

Enterprise database servers can use all available memory due to too many open sessions or runaway workloads. Running out of memory can result in failed transactions or, in extreme cases, a restart of the server and the loss of a valuable resource for your applications. Oracle Database QoS Management detects memory pressure on a server in real time and redirects new sessions to other servers to prevent using all available memory on the stressed server.

When Oracle Database QoS Management is enabled and managing an Oracle Clusterware server pool, Cluster Health Monitor sends a metrics stream that provides real-time information about memory resources for the cluster servers to Oracle Database QoS Management. This information includes the following:

  • Amount of available memory

  • Amount of memory currently in use

If Oracle Database QoS Management determines that a node is under memory stress, then the database services managed by Oracle Clusterware are stopped on that node, preventing new connections from being created. After the memory stress is relieved, the services on that node are restarted automatically, and the listener starts sending new connections to that server. Memory pressure can be relieved in several ways (for example, by closing existing sessions or by user intervention).

Rerouting new sessions to different servers protects the existing workloads on the memory-stressed server and enables the server to remain available. Managing the memory pressure for servers adds a new resource protection capability in managing service levels for applications hosted on Oracle RAC databases.

Server Category Attributes

You can organize servers into named categories by assigning attributes that define membership in each category. Some attributes that you can use to define category membership describe the state conditions for the server, and others describe the physical characteristics of the server. You can also create your own characteristics to define servers as members of a particular category.


Note:

If you change the value of any of the server attributes listed in the EXPRESSION server category attribute, then you must restart the Oracle Clusterware technology stack on the affected servers before the new values take effect.

Table 3-6 lists and describes possible server category attributes.

Table 3-6 Server Category Attributes

Attribute | Values and Format | Description
NAME

String

The name of the server category, which you must specify when you create the server category. Server category names must be unique within the domain of names of user-created entities, such as resources, types, and servers. A server category name has a 254 character limit and can contain any platform-supported characters except the exclamation point (!) and the tilde (~). A server category name cannot begin with a period or with ora.

ACTIVE_CSS_ROLE

hub, leaf

Active role for the server, which can be either of the following:

hub: The server is a Hub Node in either an Oracle Flex Cluster or an Oracle Clusterware standard cluster. This is the default value in either case.

leaf: The server is a Leaf Node in an Oracle Flex Cluster.

EXPRESSION

String in the following format:

(expression)

A set of server attribute names, values, and conditions that can be evaluated for each server to determine whether it belongs to the category. Table 3-4 lists and describes server attributes.

Acceptable comparison operators include:


=: equal
eqi: equal, case insensitive
>: greater than
<: less than
!=: not equal
co: contains
coi: contains, case insensitive
st: starts with
en: ends with
nc: does not contain
nci: does not contain, case insensitive

Acceptable Boolean operators include:


AND
OR

Note: Spaces must surround the operators used in the EXPRESSION string.

For example:

EXPRESSION='((NAME = server1) OR (NAME = server2))'
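A category based on such an expression might be created with the crsctl add category command; the category and server names here are illustrative:

```shell
$ crsctl add category my_category -attr "EXPRESSION='((NAME = server1) OR (NAME = server2))'"
```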

An Example Policy Set Configuration

Assume that you have a four-node cluster that is used by three different applications, app1, app2, and app3, and that you have created three server pools, pool1, pool2, and pool3. You configure the server pools such that each application is assigned to run in its own server pool, and that app1 wants to have two servers, and app2 and app3 each want one server. The server pool configurations are as follows:

$ crsctl status serverpool pool1 -p
NAME=pool1
IMPORTANCE=0
MIN_SIZE=2
MAX_SIZE=2
SERVER_NAMES=
PARENT_POOLS=
EXCLUSIVE_POOLS=
ACL=owner:mjk:rwx,pgrp:g900:rwx,other::r--
SERVER_CATEGORY=

$ crsctl status serverpool pool2 -p
NAME=pool2
IMPORTANCE=0
MIN_SIZE=1
MAX_SIZE=1
SERVER_NAMES=
PARENT_POOLS=
EXCLUSIVE_POOLS=
ACL=owner:mjk:rwx,pgrp:g900:rwx,other::r--
SERVER_CATEGORY=

$ crsctl status serverpool pool3 -p
NAME=pool3
IMPORTANCE=0
MIN_SIZE=1
MAX_SIZE=1
SERVER_NAMES=
PARENT_POOLS=
EXCLUSIVE_POOLS=
ACL=owner:mjk:rwx,pgrp:g900:rwx,other::r--
SERVER_CATEGORY=

Note:

The crsctl status serverpool command shown in the preceding examples only functions if you created the server pools using CRSCTL.
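Server pools such as these could have been created with commands similar to the following sketch (the attribute values match the preceding listings):

```shell
$ crsctl add serverpool pool1 -attr "MIN_SIZE=2,MAX_SIZE=2,IMPORTANCE=0"
$ crsctl add serverpool pool2 -attr "MIN_SIZE=1,MAX_SIZE=1,IMPORTANCE=0"
$ crsctl add serverpool pool3 -attr "MIN_SIZE=1,MAX_SIZE=1,IMPORTANCE=0"
```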

This configuration, however, does not account for the fact that some applications need more server resources at certain times of the day, week, or month. Email applications, for example, typically use more resources during business hours and fewer resources at night and on weekends.

Further assume that app1 requires two servers during business hours, but only requires one server at night and does not require any servers on weekends. At the same time, app2 and app3 each require one server during business hours, while at night, app2 requires two servers and app3 requires one. On the weekend, app2 requires one server and app3 requires three. This scenario suggests three configurations that you must configure for the cluster:

  1. Day Time:

     • app1 uses two servers
     • app2 and app3 use one server each

  2. Night Time:

     • app1 uses one server
     • app2 uses two servers
     • app3 uses one server

  3. Weekend:

     • app1 is not running (0 servers)
     • app2 uses one server
     • app3 uses three servers

Policy Set Creation

Given these assumptions, run the crsctl create policyset command to create a policy set with a single policy named Default, which reflects the configuration displayed by the crsctl status serverpool command. You can use the Default policy to create other policies to meet the needs assumed in this example. The crsctl create policyset command creates a text file similar to Example 3-1.

Example 3-1 Policy Set Text File

SERVER_POOL_NAMES=Free pool1 pool2 pool3
POLICY
  NAME=Default
  SERVERPOOL
    NAME=pool1
    IMPORTANCE=0
    MAX_SIZE=2
    MIN_SIZE=2
    SERVER_CATEGORY=
  SERVERPOOL
    NAME=pool2
    IMPORTANCE=0
    MAX_SIZE=1
    MIN_SIZE=1
    SERVER_CATEGORY=
  SERVERPOOL
    NAME=pool3
    IMPORTANCE=0
    MAX_SIZE=1
    MIN_SIZE=1
    SERVER_CATEGORY=
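For example, a command of the following form writes the policy set to a text file; the file path is illustrative:

```shell
$ crsctl create policyset -file /tmp/policyset.txt
```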

Policy Modification

To modify the preceding policy set to meet the needs assumed in this example, edit the text file to define policies for the three scenarios discussed previously, by changing the name of the policy from Default to DayTime. Then, copy the policy and paste it twice to form two subsequent policies, which you name NightTime and Weekend, as shown in Example 3-2.

Example 3-2 Modified Policy Set Text File

SERVER_POOL_NAMES=Free pool1 pool2 pool3
POLICY
  NAME=DayTime
  SERVERPOOL
    NAME=pool1
    IMPORTANCE=0
    MAX_SIZE=2
    MIN_SIZE=2
    SERVER_CATEGORY=
  SERVERPOOL
    NAME=pool2
    IMPORTANCE=0
    MAX_SIZE=1
    MIN_SIZE=1
    SERVER_CATEGORY=
  SERVERPOOL
    NAME=pool3
    IMPORTANCE=0
    MAX_SIZE=1
    MIN_SIZE=1
    SERVER_CATEGORY=
POLICY
  NAME=NightTime
  SERVERPOOL
    NAME=pool1
    IMPORTANCE=0
    MAX_SIZE=1
    MIN_SIZE=1
    SERVER_CATEGORY=
  SERVERPOOL
    NAME=pool2
    IMPORTANCE=0
    MAX_SIZE=2
    MIN_SIZE=2
    SERVER_CATEGORY=
  SERVERPOOL
    NAME=pool3
    IMPORTANCE=0
    MAX_SIZE=1
    MIN_SIZE=1
    SERVER_CATEGORY=
POLICY
  NAME=Weekend
  SERVERPOOL
    NAME=pool1
    IMPORTANCE=0
    MAX_SIZE=0
    MIN_SIZE=0
    SERVER_CATEGORY=
  SERVERPOOL
    NAME=pool2
    IMPORTANCE=0
    MAX_SIZE=1
    MIN_SIZE=1
    SERVER_CATEGORY=
  SERVERPOOL
    NAME=pool3
    IMPORTANCE=0
    MAX_SIZE=3
    MIN_SIZE=3
    SERVER_CATEGORY=

Notice that, in addition to changing the names of the individual policies, the MAX_SIZE and MIN_SIZE policy attributes for each of the server pools in each of the policies were also modified according to the needs of the applications.

The following command registers the policy set stored in a file with Oracle Clusterware:

$ crsctl modify policyset -file file_name

You can achieve the same results as shown in the previous examples by editing the Default policy set, as a whole, using the crsctl modify policyset command, and by using the crsctl modify serverpool command to change individual server pool attributes for a specific policy.

The following command modifies the Default policy set to manage the three server pools:

$ crsctl modify policyset -attr "SERVER_POOL_NAMES=Free pool1 pool2 pool3"

The following commands add the three policies:

$ crsctl add policy DayTime
$ crsctl add policy NightTime
$ crsctl add policy Weekend

The following commands configure the three server pools according to the requirements of the policies:

$ crsctl modify serverpool pool1 -attr "MIN_SIZE=2,MAX_SIZE=2" -policy DayTime
$ crsctl modify serverpool pool1 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy NightTime
$ crsctl modify serverpool pool1 -attr "MIN_SIZE=0,MAX_SIZE=0" -policy Weekend

$ crsctl modify serverpool pool2 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy DayTime
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=2,MAX_SIZE=2" -policy NightTime
$ crsctl modify serverpool pool2 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy Weekend

$ crsctl modify serverpool pool3 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy DayTime
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=1,MAX_SIZE=1" -policy NightTime
$ crsctl modify serverpool pool3 -attr "MIN_SIZE=3,MAX_SIZE=3" -policy Weekend

There are now three distinct policies to manage the server pools to accommodate the requirements of the three applications.

Policy Activation

The policy set is now configured and controlling the three server pools with three different policies. You can activate policies when necessary, prompting Oracle Clusterware to reconfigure a server pool according to each policy's configuration.

The following command activates the DayTime policy:

$ crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=DayTime"
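You can confirm which policy is in effect by querying the policy set; output abridged:

```shell
$ crsctl status policyset
...
LAST_ACTIVATED_POLICY=DayTime
...
```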

The current status of the resources is as follows:

$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
app1
      1        ONLINE  ONLINE       mjk_has3_2               STABLE
      2        ONLINE  ONLINE       mjk_has3_0               STABLE
app2
      1        ONLINE  ONLINE       mjk_has3_1               STABLE
app3
      1        ONLINE  ONLINE       mjk_has3_3               STABLE

The status of the server pools is as follows:

$ crsctl stat serverpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=

NAME=pool1
ACTIVE_SERVERS=mjk_has3_0 mjk_has3_2

NAME=pool2
ACTIVE_SERVERS=mjk_has3_1

NAME=pool3
ACTIVE_SERVERS=mjk_has3_3

The servers are allocated according to the DayTime policy and the applications run on their respective servers.

The following command activates the Weekend policy (remember, because the server pools have different sizes, as servers move between server pools, some applications will be stopped and others will be started):

$ crsctl modify policyset -attr "LAST_ACTIVATED_POLICY=Weekend"
CRS-2673: Attempting to stop 'app1' on 'mjk_has3_2'
CRS-2673: Attempting to stop 'app1' on 'mjk_has3_0'
CRS-2677: Stop of 'app1' on 'mjk_has3_0' succeeded
CRS-2672: Attempting to start 'app3' on 'mjk_has3_0'
CRS-2677: Stop of 'app1' on 'mjk_has3_2' succeeded
CRS-2672: Attempting to start 'app3' on 'mjk_has3_2'
CRS-2676: Start of 'app3' on 'mjk_has3_2' succeeded
CRS-2676: Start of 'app3' on 'mjk_has3_0' succeeded

The current status of the resources is as follows:

$ crsctl status resource -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details      
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
app1
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  OFFLINE                               STABLE
app2
      1        ONLINE  ONLINE       mjk_has3_1               STABLE
app3
      1        ONLINE  ONLINE       mjk_has3_0               STABLE
      2        ONLINE  ONLINE       mjk_has3_2               STABLE
      3        ONLINE  ONLINE       mjk_has3_3               STABLE
--------------------------------------------------------------------------------

The status of the server pools is as follows:

$ crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=

NAME=pool1
ACTIVE_SERVERS=

NAME=pool2
ACTIVE_SERVERS=mjk_has3_1

NAME=pool3
ACTIVE_SERVERS=mjk_has3_0 mjk_has3_2 mjk_has3_3

Using the crsctl modify policyset command, Oracle Clusterware changed the server pool configuration, moved servers according to the requirements of the policy, and stopped and started the applications.


See Also:

Appendix E, "Oracle Clusterware Control (CRSCTL) Utility Reference" for complete details on using the CRSCTL commands shown in this example


1 Introduction to Oracle Clusterware

Oracle Clusterware enables servers to communicate with each other, so that they appear to function as a collective unit. This combination of servers is commonly known as a cluster. Although the servers are standalone servers, each server has additional processes that communicate with other servers. In this way the separate servers appear as if they are one system to applications and end users.

This chapter includes the following topics:

What is Oracle Clusterware?

Oracle Clusterware provides the infrastructure necessary to run Oracle Real Application Clusters (Oracle RAC). Oracle Clusterware also manages resources, such as virtual IP (VIP) addresses, databases, listeners, services, and so on. These resources are generally named ora.entity_name.resource_type_abbreviation, such as ora.mydb.db, which is the name of a resource that is a database. (Some examples of abbreviation are db for database, lsnr for listener, and vip for VIP.) Oracle does not support editing these resources except under the explicit direction of My Oracle Support.

Figure 1-1 shows a configuration that uses Oracle Clusterware to extend the basic single-instance Oracle Database architecture. In Figure 1-1, the cluster is running Oracle Database and is actively servicing applications and users. Using Oracle Clusterware, you can use the same high availability mechanisms to make your Oracle database and your custom applications highly available.

Figure 1-1 Oracle Clusterware Configuration


The benefits of using a cluster include:

  • Scalability of applications

  • Reduced total cost of ownership for the infrastructure, by providing a scalable system with low-cost commodity hardware

  • Ability to fail over

  • Increased throughput on demand for cluster-aware applications, by adding servers to a cluster to increase cluster resources

  • Increased throughput for cluster-aware applications, by enabling the applications to run on all of the nodes in a cluster

  • Ability to program the startup of applications in a planned order that ensures dependent processes are started in the correct sequence

  • Ability to monitor processes and restart them if they stop

  • Elimination of unplanned downtime due to hardware or software malfunctions

  • Reduction or elimination of planned downtime for software maintenance

You can program Oracle Clusterware to manage the availability of user applications and Oracle databases. In an Oracle RAC environment, Oracle Clusterware manages all of the resources automatically. All of the applications and processes that Oracle Clusterware manages are either cluster resources or local resources.

Oracle Clusterware is required for using Oracle RAC; it is the only clusterware that you need for platforms on which Oracle RAC operates. Although Oracle RAC continues to support many third-party clusterware products on specific platforms, you must also install and use Oracle Clusterware. Note that the servers on which you want to install and run Oracle Clusterware must use the same operating system.

Using Oracle Clusterware eliminates the need for proprietary vendor clusterware and provides the benefit of using only Oracle software. Oracle provides an entire software solution, including everything from disk management with Oracle Automatic Storage Management (Oracle ASM) to data management with Oracle Database and Oracle RAC. In addition, Oracle Database features, such as Oracle Services, provide advanced functionality when used with the underlying Oracle Clusterware high availability framework.

Oracle Clusterware has two stored components, besides the binaries: The voting files, which record node membership information, and the Oracle Cluster Registry (OCR), which records cluster configuration information. Voting files and OCRs must reside on shared storage available to all cluster member nodes.
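You can inspect both stored components from any cluster node; for example (output varies by configuration):

```shell
# List the voting files currently configured for the cluster
$ crsctl query css votedisk

# Check the integrity and locations of OCR
$ ocrcheck
```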

Understanding System Requirements for Oracle Clusterware

To use Oracle Clusterware, you must understand the hardware and software concepts and requirements as described in the following sections:

Oracle Clusterware Hardware Concepts and Requirements


Note:

Many hardware providers have validated cluster configurations that provide a single part number for a cluster. If you are new to clustering, then use the information in this section to simplify your hardware procurement efforts when you purchase hardware to create a cluster.

A cluster consists of one or more servers. Access to an external network is the same for a server in a cluster (also known as a cluster member or node) as for a standalone server. However, a server that is part of a cluster requires a second network, referred to as the interconnect. For this reason, cluster member nodes require at least two network interface cards: one for a public network and one for a private network. The interconnect is a private network using a switch (or multiple switches) that only the nodes in the cluster can access.


Note:

Oracle does not support using crossover cables as Oracle Clusterware interconnects.

Cluster size is determined by the requirements of the workload running on the cluster and the number of nodes that you have configured in the cluster. If you are implementing a cluster for high availability, then configure redundancy for all of the components of the infrastructure as follows:

  • At least two network interfaces for the public network, bonded to provide one address

  • At least two network interfaces for the private interconnect network

The cluster requires cluster-aware storage that is connected to each server in the cluster. This may also be referred to as a multihost device. Oracle Clusterware supports Network File Systems (NFSs), iSCSI, Direct Attached Storage (DAS), Storage Area Network (SAN) storage, and Network Attached Storage (NAS).

To provide redundancy for storage, generally provide at least two connections from each server to the cluster-aware storage. There may be more connections depending on your I/O requirements. It is important to consider the I/O requirements of the entire cluster when choosing your storage subsystem.

Most servers have at least one local disk that is internal to the server. Often, this disk is used for the operating system binaries; you can also use this disk for the Oracle software binaries. The benefit of each server having its own copy of the Oracle binaries is that it increases high availability, so that corruption of one binary does not affect all of the nodes in the cluster simultaneously. It also allows rolling upgrades, which reduce downtime.

Oracle Clusterware Operating System Concepts and Requirements

Each server must have an operating system that is certified with the Oracle Clusterware version you are installing. Refer to the certification matrices available in the Oracle Grid Infrastructure Installation Guide for your platform or on My Oracle Support (formerly OracleMetaLink) for details, which are available from the following URL:

http://www.oracle.com/technetwork/database/clustering/tech-generic-unix-new-166583.html

When the operating system is installed and working, you can then install Oracle Clusterware to create the cluster. Oracle Clusterware is installed independently of Oracle Database. After you install Oracle Clusterware, you can then install Oracle Database or Oracle RAC on any of the nodes in the cluster.

Oracle Clusterware Software Concepts and Requirements

Oracle Clusterware uses voting files to provide fencing and cluster node membership determination. OCR provides cluster configuration information. You can place the Oracle Clusterware files on either Oracle ASM or on shared common disk storage. If you configure Oracle Clusterware on storage that does not provide file redundancy, then Oracle recommends that you configure multiple locations for OCR and voting files. The voting files and OCR are described as follows:

  • Voting Files

    Oracle Clusterware uses voting files to determine which nodes are members of a cluster. You can configure voting files on Oracle ASM, or you can configure voting files on shared storage.

    If you configure voting files on Oracle ASM, then you do not need to manually configure the voting files. Depending on the redundancy of your disk group, an appropriate number of voting files are created.

    If you do not configure voting files on Oracle ASM, then for high availability, Oracle recommends that you have a minimum of three voting files on physically separate storage. This avoids having a single point of failure. If you configure a single voting file, then you must use external mirroring to provide redundancy.

    Oracle recommends that you do not use more than five voting files, even though Oracle supports a maximum number of 15 voting files.

  • Oracle Cluster Registry

    Oracle Clusterware uses the Oracle Cluster Registry (OCR) to store and manage information about the components that Oracle Clusterware controls, such as Oracle RAC databases, listeners, virtual IP addresses (VIPs), and services and any applications. OCR stores configuration information in a series of key-value pairs in a tree structure. To ensure cluster high availability, Oracle recommends that you define multiple OCR locations. In addition:

    • You can have up to five OCR locations

    • Each OCR location must reside on shared storage that is accessible by all of the nodes in the cluster

    • You can replace a failed OCR location online if it is not the only OCR location

    • You must update OCR through supported utilities such as Oracle Enterprise Manager, the Oracle Clusterware Control Utility (CRSCTL), the Server Control Utility (SRVCTL), the OCR configuration utility (OCRCONFIG), or the Database Configuration Assistant (DBCA)
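For example, the OCRCONFIG utility can add or replace an OCR location online. A sketch, run as the root user; the disk group name and failed location path are illustrative:

```shell
# Add another OCR location in an Oracle ASM disk group
ocrconfig -add +OCRDG2

# Replace a failed OCR location with a new one
ocrconfig -replace /old/ocr/location -replacement +OCRDG2
```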


    See Also:

    Chapter 2, "Administering Oracle Clusterware" for more information about voting files and OCR

Oracle Clusterware Network Configuration Concepts

Oracle Clusterware enables a dynamic Oracle Grid Infrastructure through the self-management of the network requirements for the cluster. Oracle Clusterware 12c supports the use of Dynamic Host Configuration Protocol (DHCP) or stateless address autoconfiguration for the VIP addresses and the Single Client Access Name (SCAN) address, but not the public address. DHCP provides dynamic assignment of IPv4 VIP addresses, while Stateless Address Autoconfiguration provides dynamic assignment of IPv6 VIP addresses.

When you are using Oracle RAC, all of the clients must be able to reach the database, which means that the clients must resolve VIP and SCAN names to all of the VIP and SCAN addresses, respectively. This problem is solved by the addition of Grid Naming Service (GNS) to the cluster. GNS is linked to the corporate Domain Name Service (DNS) so that clients can resolve host names to these dynamic addresses and transparently connect to the cluster and the databases. Oracle supports using GNS without DHCP or zone delegation in Oracle Clusterware 12c (as with Oracle Flex ASM server clusters, which you can configure without zone delegation or dynamic networks).


Note:

Oracle does not support using GNS without DHCP or zone delegation on Windows.


See Also:

Oracle Automatic Storage Management Administrator's Guide for more information about Oracle Flex ASM

Beginning with Oracle Clusterware 12c, a GNS instance can now service multiple clusters rather than just one, thus only a single domain must be delegated to GNS in DNS. GNS still provides the same services as in previous versions of Oracle Clusterware.

The cluster in which the GNS server runs is referred to as the server cluster. A client cluster advertises its names with the server cluster. Only one GNS daemon process can run on the server cluster. Oracle Clusterware puts the GNS daemon process on one of the nodes in the cluster to maintain availability.

In previous, single-cluster versions of GNS, the single cluster could easily locate the GNS service provider within itself. In the multicluster environment, however, the client clusters must know the GNS address of the server cluster. Given that address, client clusters can find the GNS server running on the server cluster.

In order for GNS to function on the server cluster, you must have the following:

  • The DNS administrator must delegate a zone for use by GNS

  • A GNS instance must be running somewhere on the network and it must not be blocked by a firewall

  • All of the node names in a set of clusters served by GNS must be unique


See Also:

"Overview of Grid Naming Service" for information about administering GNS

Single Client Access Name (SCAN)

The SCAN is a domain name registered to at least one and up to three IP addresses, either in DNS or GNS. When using GNS and DHCP, Oracle Clusterware configures the VIP addresses for the SCAN name that is provided during cluster configuration.

The node VIP and the three SCAN VIPs are obtained from the DHCP server when using GNS. If a new server joins the cluster, then Oracle Clusterware dynamically obtains the required VIP address from the DHCP server, updates the cluster resource, and makes the server accessible through GNS.
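You can check how the SCAN is configured for the cluster and how it resolves; the cluster and domain names here are illustrative:

```shell
$ srvctl config scan
$ nslookup mycluster-scan.example.com
```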

Configuring Addresses Manually

Alternatively, you can choose manual address configuration, in which you configure the following:

  • One public address and host name for each node.

  • One VIP address for each node.

    You must assign a VIP address to each node in the cluster. Each VIP address must be on the same subnet as the public IP address for the node and should be an address that is assigned a name in the DNS. Each VIP address must also be unused and unpingable from within the network before you install Oracle Clusterware.

  • Up to three SCAN addresses for the entire cluster.


    Note:

    The SCAN must resolve to at least one address on the public network. For high availability and scalability, Oracle recommends that you configure the SCAN to resolve to three addresses on the public network.


See Also:

Your platform-specific Oracle Grid Infrastructure Installation Guide installation documentation for information about system requirements and configuring network addresses

Overview of Oracle Clusterware Platform-Specific Software Components

When Oracle Clusterware is operational, several platform-specific processes or services run on each node in the cluster. This section describes these various processes and services.

The Oracle Clusterware Technology Stack

Oracle Clusterware consists of two separate technology stacks: an upper technology stack anchored by the Cluster Ready Services (CRS) daemon (CRSD) and a lower technology stack anchored by the Oracle High Availability Services daemon (OHASD). These two technology stacks have several processes that facilitate cluster operations. The following sections describe these technology stacks in more detail:

The Cluster Ready Services Technology Stack

The following list describes the processes that comprise CRS:

  • Cluster Ready Services (CRS): The primary program for managing high availability operations in a cluster.

    The CRSD manages cluster resources based on the configuration information that is stored in OCR for each resource. This includes start, stop, monitor, and failover operations. The CRSD process generates events when the status of a resource changes. When you have Oracle RAC installed, the CRSD process monitors the Oracle database instance, listener, and so on, and automatically restarts these components when a failure occurs.

  • Cluster Synchronization Services (CSS): Manages the cluster configuration by controlling which nodes are members of the cluster and by notifying members when a node joins or leaves the cluster. If you are using certified third-party clusterware, then CSS processes interface with your clusterware to manage node membership information.

    The cssdagent process monitors the cluster and provides I/O fencing. This service formerly was provided by Oracle Process Monitor Daemon (oprocd), also known as OraFenceService on Windows. A cssdagent failure may result in Oracle Clusterware restarting the node.

  • Oracle ASM: Provides disk management for Oracle Clusterware and Oracle Database.

  • Cluster Time Synchronization Service (CTSS): Provides time management in a cluster for Oracle Clusterware.

  • Event Management (EVM): A background process that publishes events that Oracle Clusterware creates.

  • Grid Naming Service (GNS): Handles requests sent by external DNS servers, performing name resolution for names defined by the cluster.

  • Oracle Agent (oraagent): Extends clusterware to support Oracle-specific requirements and complex resources. This process runs server callout scripts when FAN events occur. This process was known as RACG in Oracle Clusterware 11g release 1 (11.1).

  • Oracle Notification Service (ONS): A publish and subscribe service for communicating Fast Application Notification (FAN) events.

  • Oracle Root Agent (orarootagent): A specialized oraagent process that helps the CRSD manage resources owned by root, such as the network and the Grid virtual IP address.

The Cluster Synchronization Service (CSS), Event Management (EVM), and Oracle Notification Service (ONS) components communicate with other cluster component layers on other nodes in the same cluster database environment. These components are also the main communication links between Oracle Database, applications, and the Oracle Clusterware high availability components. In addition, these background processes monitor and manage database operations.

The Oracle High Availability Services Technology Stack

The following list describes the processes that comprise the Oracle High Availability Services technology stack:

  • appagent: Protects any resources of the application resource type used in previous versions of Oracle Clusterware.


    See Also:

    "Resources" for more information about appagent

  • Cluster Logger Service (ologgerd): Receives information from all the nodes in the cluster and persists the data in an Oracle Grid Infrastructure Management Repository-based database. This service runs on only two nodes in a cluster.

  • Grid Interprocess Communication (GIPC): A support daemon that enables Redundant Interconnect Usage.

  • Grid Plug and Play (GPNPD): Provides access to the Grid Plug and Play profile, and coordinates updates to the profile among the nodes of the cluster to ensure that all of the nodes have the most recent profile.

  • Multicast Domain Name Service (mDNS): Used by Grid Plug and Play to locate profiles in the cluster, and by GNS to perform name resolution. The mDNS process is a background process on Linux, UNIX, and Windows.

  • Oracle Agent (oraagent): Extends clusterware to support Oracle-specific requirements and complex resources. This process manages daemons that run as the Oracle Clusterware owner, such as the GIPC and GPNPD daemons.


    Note:

    This process is distinctly different from the process of the same name that runs in the Cluster Ready Services technology stack.

  • Oracle Root Agent (orarootagent): A specialized oraagent process that helps the CRSD manage resources owned by root, such as the Cluster Health Monitor (CHM).


    Note:

    This process is distinctly different from the process of the same name that runs in the Cluster Ready Services technology stack.


    See Also:

    "Overview of Managing Oracle Clusterware Environments" for more information about CHM

  • scriptagent: Protects resources of resource types other than application when using shell or batch scripts to protect an application.


    See Also:

    "Resources" for more information about scriptagent

  • System Monitor Service (osysmond): The monitoring and operating system metric collection service that sends the data to the cluster logger service. This service runs on every node in a cluster.

Table 1-1 lists the processes and services associated with Oracle Clusterware components. In Table 1-1, if a UNIX or a Linux system process has an (r) beside it, then the process runs as the root user.

Table 1-1 List of Processes and Services Associated with Oracle Clusterware Components (Footnote 1)

Oracle Clusterware Component        Linux/UNIX Process                           Windows Processes
----------------------------------  -------------------------------------------  -----------------------------------------
Oracle ASM (Footnote 2)             (runs as an instance; see Footnote 2)
CRS                                 crsd.bin (r)                                 crsd.exe
CSS                                 ocssd.bin, cssdmonitor, cssdagent            ocssd.exe, cssdagent.exe, cssdmonitor.exe
CTSS                                octssd.bin (r)                               octssd.exe
EVM                                 evmd.bin, evmlogger.bin                      evmd.exe
GIPC                                gipcd.bin
GNS                                 gnsd (r)                                     gnsd.exe
Grid Plug and Play                  gpnpd.bin                                    gpnpd.exe
LOGGER                              ologgerd.bin (r)                             ologgerd.exe
Master Diskmon                      diskmon.bin
mDNS                                mdnsd.bin                                    mDNSResponder.exe
Oracle agent                        oraagent.bin (Oracle Clusterware 12c         oraagent.exe
                                    release 1 (12.1) and 11g release 2 (11.2));
                                    racgmain and racgimon (Oracle Clusterware
                                    11g release 1 (11.1))
Oracle High Availability Services   ohasd.bin (r)                                ohasd.exe
ONS                                 ons                                          ons.exe
Oracle root agent                   orarootagent (r)                             orarootagent.exe
SYSMON                              osysmond.bin (r)                             osysmond.exe


Footnote 1 The only Windows services associated with the Oracle Grid Infrastructure are OracleOHService (OHASD), Oracle ASM, listener services (including node listeners and SCAN listeners), and management database. Oracle ASM can be considered part of the Oracle Clusterware technology stack when OCR is stored on Oracle ASM. The listeners and management database are Oracle Clusterware resources and are not properly part of the Oracle Clusterware technology stack.

Footnote 2 Oracle ASM is not just one process, but an instance. Given Oracle Flex ASM, Oracle ASM does not necessarily run on every cluster node, but only on some of them.


See Also:

"Oracle Clusterware Diagnostic and Alert Log Data" for information about the location of log files created for processes


Note:

Oracle Clusterware on Linux platforms can have multiple threads that appear as separate processes with unique process identifiers.

Figure 1-2 illustrates cluster startup.

Figure 1-2 Cluster Startup


Oracle Clusterware Processes on Windows Systems

Oracle Clusterware processes on Microsoft Windows systems include the following:

  • mDNSResponder.exe: Manages name resolution and service discovery within attached subnets

  • OracleOHService: Starts all of the Oracle Clusterware daemons

Overview of Installing Oracle Clusterware

The following section introduces the installation processes for Oracle Clusterware.


Note:

Install Oracle Clusterware with the Oracle Universal Installer.

Oracle Clusterware Version Compatibility

You can install different releases of Oracle Clusterware, Oracle ASM, and Oracle Database on your cluster. Follow these guidelines when installing different releases of software on your cluster:

  • You can only have one installation of Oracle Clusterware running in a cluster, and it must be installed into its own home (Grid_home). The release of Oracle Clusterware that you use must be equal to or higher than the Oracle ASM and Oracle RAC versions that run in the cluster. You cannot install a version of Oracle RAC that was released after the version of Oracle Clusterware that you run on the cluster. In other words:

    • Oracle Clusterware 12c supports Oracle ASM 12c only, because Oracle ASM is in the Oracle Grid Infrastructure home, which also includes Oracle Clusterware

    • Oracle Clusterware 12c supports Oracle Database 12c, Oracle Database 11g release 2 (11.2) and 11g release 1 (11.1), and Oracle Database 10g release 2 (10.2) and 10g release 1 (10.1)

    • Oracle ASM 12c requires Oracle Clusterware 12c and supports Oracle Database 12c, Oracle Database 11g release 2 (11.2), Oracle Database 11g release 1 (11.1), Oracle Database 10g release 2 (10.2), and 10g release 1 (10.1)

    • Oracle Database 12c requires Oracle Clusterware 12c

      For example:

      • If you have Oracle Clusterware 12c installed as your clusterware, then you can have an Oracle Database 10g release 1 (10.1) single-instance database running on one node, and separate Oracle Real Application Clusters 10g release 1 (10.1), 10g release 2 (10.2), and Oracle Real Application Clusters 11g release 1 (11.1) databases also running on the cluster. However, if you have Oracle Clusterware 10g release 2 (10.2) installed on your cluster, then you cannot install Oracle Real Application Clusters 11g on it. You can install an Oracle Database 11g single-instance database on a node in an Oracle Clusterware 10g release 2 (10.2) cluster.

      • When using different Oracle ASM and Oracle Database releases, the functionality of each depends on the functionality of the earlier software release. Thus, if you install Oracle Clusterware 11g and you later configure Oracle ASM, and you use Oracle Clusterware to support an existing Oracle Database 10g release 2 (10.2.0.3) installation, then the Oracle ASM functionality is equivalent only to that available in the 10g release 2 (10.2.0.3) release version. Set the compatible attributes of a disk group to the appropriate release of software in use.


        See Also:

        Oracle Automatic Storage Management Administrator's Guide for information about compatible attributes of disk groups

  • There can be multiple Oracle homes for the Oracle database (both single instance and Oracle RAC) in the cluster. The Oracle homes for all nodes of an Oracle RAC database must be the same.

  • You can use different users for the Oracle Clusterware and Oracle database homes if they belong to the same primary group.

  • As of Oracle Clusterware 12c, there can only be one installation of Oracle ASM running in a cluster. Oracle ASM is always the same version as Oracle Clusterware, which must be the same (or higher) release than that of the Oracle database.

  • For Oracle RAC running Oracle9i, you must run an Oracle9i cluster. For UNIX systems, that is HACMP, Serviceguard, Sun Cluster, or Veritas SF. For Windows and Linux systems, that is the Oracle Cluster Manager. To install Oracle RAC 10g, you must also install Oracle Clusterware.

  • Oracle recommends that you do not run different cluster software on the same servers unless they are certified to work together. However, if you are adding Oracle RAC to servers that are part of a cluster, either migrate to Oracle Clusterware or ensure that:

    • The clusterware you run is supported to run with Oracle RAC 12c.

    • You have installed the correct options for Oracle Clusterware and the other vendor clusterware to work together.


See Also:

Oracle Grid Infrastructure Installation Guide for more version compatibility information

Overview of Upgrading and Patching Oracle Clusterware

Oracle supports out-of-place upgrades only, because Oracle Clusterware 12c must have its own, new Grid home. For Oracle Clusterware 12c, Oracle supports in-place or out-of-place patching. Oracle supports patch bundles and one-off patches for in-place patching, but only supports patch sets and major point releases for out-of-place upgrades.

In-place patching replaces the Oracle Clusterware software with the newer version in the same Grid home. Out-of-place upgrade has both versions of the same software present on the nodes at the same time, in different Grid homes, but only one version is active.

Rolling upgrades avoid downtime and ensure continuous availability of Oracle Clusterware while the software is upgraded to the new version. When you upgrade to Oracle Clusterware 12c, Oracle Clusterware and Oracle ASM binaries are installed as a single binary called the Oracle Grid Infrastructure. You can upgrade Oracle Clusterware in a rolling manner from Oracle Clusterware 10g and Oracle Clusterware 11g; however, you can only upgrade Oracle ASM in a rolling manner from Oracle Database 11g release 1 (11.1).

Oracle supports force upgrades in cases where some nodes of the cluster are down.


See Also:

Your platform-specific Oracle Grid Infrastructure Installation Guide for procedures on upgrading Oracle Clusterware

Overview of Managing Oracle Clusterware Environments

The following list describes the tools and utilities for managing your Oracle Clusterware environment:

  • Cluster Health Monitor (CHM): Cluster Health Monitor detects and analyzes operating system and cluster resource-related degradation and failures to provide more details to users for many Oracle Clusterware and Oracle RAC issues, such as node eviction. The tool continuously tracks the operating system resource consumption at the node, process, and device levels. It collects and analyzes the clusterwide data. In real-time mode, when thresholds are met, the tool shows an alert to the user. For root-cause analysis, historical data can be replayed to understand what was happening at the time of failure.


    See Also:

    "Cluster Health Monitor" for more information about CHM

  • Cluster Verification Utility (CVU): CVU is a command-line utility that you use to verify a range of cluster and Oracle RAC specific components. Use CVU to verify shared storage devices, networking configurations, system requirements, and Oracle Clusterware, and operating system groups and users.

    Install and use CVU for both preinstallation and postinstallation checks of your cluster environment. CVU is especially useful during preinstallation and during installation of Oracle Clusterware and Oracle RAC components to ensure that your configuration meets the minimum installation requirements. Also use CVU to verify your configuration after completing administrative tasks, such as node additions and node deletions.


    See Also:

    Your platform-specific Oracle Clusterware and Oracle RAC installation guide for information about how to manually install CVU, and Appendix A, "Cluster Verification Utility Reference" for more information about using CVU

  • Oracle Cluster Registry Configuration Tool (OCRCONFIG): OCRCONFIG is a command-line tool for OCR administration. You can also use the OCRCHECK and OCRDUMP utilities to troubleshoot configuration problems that affect OCR.


    See Also:

    Chapter 2, "Administering Oracle Clusterware" for more information about managing OCR

  • Oracle Clusterware Control (CRSCTL): CRSCTL is a command-line tool that you can use to manage Oracle Clusterware. Use CRSCTL for general clusterware management, management of individual resources, configuration policies, and server pools for non-database applications.

    Oracle Clusterware 12c introduces cluster-aware commands with which you can perform operations from any node in the cluster on another node in the cluster, or on all nodes in the cluster, depending on the operation.

    You can use crsctl commands to monitor cluster resources (crsctl status resource) and to monitor and manage servers and server pools other than server pools that have names prefixed with ora.*, such as crsctl status server, crsctl status serverpool, crsctl modify serverpool, and crsctl relocate server. You can also manage Oracle High Availability Services on the entire cluster (crsctl start | stop | enable | disable | config crs), using the optional node-specific arguments -n or -all. You also can use CRSCTL to manage Oracle Clusterware on individual nodes (crsctl start | stop | enable | disable | config crs).




  • Oracle Enterprise Manager: Oracle Enterprise Manager has both the Cloud Control and Grid Control GUI interfaces for managing both single instance and Oracle RAC database environments. It also has GUI interfaces to manage Oracle Clusterware and all components configured in the Oracle Grid Infrastructure installation. Oracle recommends that you use Oracle Enterprise Manager to perform administrative tasks.


    See Also:

    Oracle Database 2 Day + Real Application Clusters Guide, Oracle Real Application Clusters Administration and Deployment Guide, and Oracle Enterprise Manager online documentation for more information about administering Oracle Clusterware with Oracle Enterprise Manager

  • Oracle Interface Configuration Tool (OIFCFG): OIFCFG is a command-line tool for both single-instance Oracle databases and Oracle RAC environments. Use OIFCFG to allocate and deallocate network interfaces to components. You can also use OIFCFG to direct components to use specific network interfaces and to retrieve component configuration information.

  • Server Control (SRVCTL): SRVCTL is a command-line interface that you can use to manage Oracle resources, such as databases, services, or listeners in the cluster.


    Note:

    You can only use SRVCTL to manage server pools that have names prefixed with ora.*.


    See Also:

    Oracle Real Application Clusters Administration and Deployment Guide for more information about SRVCTL

Overview of Cloning and Extending Oracle Clusterware in Grid Environments

Cloning nodes is the preferred method of creating new clusters. The cloning process copies Oracle Clusterware software images to other nodes that have similar hardware and software. Use cloning to quickly create several clusters of the same configuration. Before using cloning, you must install an Oracle Clusterware home successfully on at least one node using the instructions in your platform-specific Oracle Clusterware installation guide.

For new installations, or if you must install on only one cluster, Oracle recommends that you use the automated and interactive installation methods, such as Oracle Universal Installer or the Provisioning Pack feature of Oracle Enterprise Manager. These methods perform installation checks to ensure a successful installation. To add or delete Oracle Clusterware from nodes in the cluster, use the addnode.sh and rootcrs.pl scripts.




Overview of the Oracle Clusterware High Availability Framework and APIs

Oracle Clusterware provides many high availability application programming interfaces called CLSCRS APIs that you use to enable Oracle Clusterware to manage applications or processes that run in a cluster. The CLSCRS APIs enable you to provide high availability for all of your applications.


See Also:

Appendix H, "Oracle Clusterware C Application Program Interfaces" for more detailed information about the CLSCRS APIs

You can define a VIP address for an application to enable users to access the application independently of the node in the cluster on which the application is running. This is referred to as the application VIP. You can define multiple application VIPs, with generally one application VIP defined for each application running. The application VIP is related to the application by making it dependent on the application resource defined by Oracle Clusterware.

To maintain high availability, Oracle Clusterware components can respond to status changes to restart applications and processes according to defined high availability rules. You can use the Oracle Clusterware high availability framework by registering your applications with Oracle Clusterware and configuring the clusterware to start, stop, or relocate your application processes. That is, you can make custom applications highly available by using Oracle Clusterware to create profiles that monitor, relocate, and restart your applications.

Overview of Cluster Time Management

The Cluster Time Synchronization Service (CTSS) is installed as part of Oracle Clusterware and runs in observer mode if it detects a time synchronization service or a time synchronization service configuration, valid or broken, on the system. For example, if the /etc/ntp.conf file exists on any node in the cluster, then CTSS runs in observer mode even if no time synchronization software is running.

If CTSS detects that there is no time synchronization service or time synchronization service configuration on any node in the cluster, then CTSS goes into active mode and takes over time management for the cluster.

If CTSS is running in active mode while another, non-NTP, time synchronization software is running, then you can change CTSS to run in observer mode by creating a file called /etc/ntp.conf. CTSS puts an entry in the alert log about the change to observer mode.

When nodes join the cluster, if CTSS is in active mode, then it compares the time on those nodes to a reference clock located on one node in the cluster. If there is a discrepancy between the two times and the discrepancy is within a certain stepping limit, then CTSS performs step time synchronization, which is to step the time, forward or backward, of the nodes joining the cluster to synchronize them with the reference.

Clocks on nodes in the cluster become desynchronized with the reference clock (a time CTSS uses as a basis and is on the first node started in the cluster) periodically for various reasons. When this happens, CTSS performs slew time synchronization, which is to speed up or slow down the system time on the nodes until they are synchronized with the reference system time. In this time synchronization method, CTSS does not adjust time backward, which guarantees monotonic increase of the system time.

When Oracle Clusterware starts, if CTSS is running in active mode and the time discrepancy is outside the stepping limit (the limit is 24 hours), then CTSS generates an alert in the alert log, exits, and Oracle Clusterware startup fails. You must manually adjust the time of the nodes joining the cluster to synchronize with the cluster, after which Oracle Clusterware can start and CTSS can manage the time for the nodes.

When performing slew time synchronization, CTSS never runs time backward to synchronize with the reference clock. CTSS periodically writes alerts to the alert log containing information about how often it adjusts time on nodes to keep them synchronized with the reference clock.

CTSS writes entries to the Oracle Clusterware alert log and syslog when it:

  • Detects a time change

  • Detects significant time difference from the reference node

  • Switches from observer mode to active mode or vice versa

Having CTSS running to synchronize time in a cluster facilitates troubleshooting Oracle Clusterware problems, because you will not have to factor in a time offset for a sequence of events on different nodes.

To activate CTSS in your cluster, you must stop and deconfigure the vendor time synchronization service on all nodes in the cluster. CTSS detects when this happens and assumes time management for the cluster.

For example, to deconfigure NTP, you must remove or rename the /etc/ntp.conf file.

Similarly, to deactivate CTSS in your cluster:

  1. Configure the vendor time synchronization service on all nodes in the cluster. CTSS detects this change and reverts back to observer mode.

  2. Use the crsctl check ctss command to ensure that CTSS is operating in observer mode.

  3. Start the vendor time synchronization service on all nodes in the cluster.

  4. Use the cluvfy comp clocksync -n all command to verify that the vendor time synchronization service is operating.
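The verification steps above can be sketched as a small script. Because crsctl and cluvfy require a running cluster, the sketch defaults to a dry run that only prints each command:

```shell
#!/bin/sh
# Sketch of steps 2 and 4 above. DRY_RUN=1 (the default) only prints
# each command; set DRY_RUN=0 on a cluster node with Grid_home/bin on
# the PATH to actually run the checks.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run crsctl check ctss             # step 2: CTSS should report observer mode
run cluvfy comp clocksync -n all  # step 4: vendor time sync should be operating
```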


See Also:

Oracle Grid Infrastructure Installation Guide for your platform for information about configuring NTP for Oracle Clusterware, or disabling it to use CTSS



Footnote Legend

Footnote 1: Oracle Clusterware supports up to 100 nodes in a cluster on configurations running Oracle Database 10g release 2 (10.2) and later releases.
Footnote 2: Cluster-aware storage may also be referred to as a multihost device.

E Oracle Clusterware Control (CRSCTL) Utility Reference

This appendix contains reference information for the Oracle Clusterware Control (CRSCTL) utility.


Note:

Do not use CRSCTL commands on Oracle entities (such as resources, resource types, and server pools) that have names beginning with ora unless you are directed to do so by My Oracle Support. The Server Control utility (SRVCTL) is the correct utility to use on Oracle entities.

This appendix includes the following topics:

CRSCTL Overview

CRSCTL is an interface between you and Oracle Clusterware, parsing and calling Oracle Clusterware APIs for Oracle Clusterware objects.

CRSCTL provides cluster-aware commands with which you can perform check, start, and stop operations on the cluster. You can run these commands from any node in the cluster on another node in the cluster, or on all nodes in the cluster, depending on the operation.

You can use CRSCTL commands to perform several operations on Oracle Clusterware, such as:

  • Starting and stopping Oracle Clusterware resources

  • Enabling and disabling Oracle Clusterware daemons

  • Checking the health of the cluster

  • Managing resources that represent third-party applications

  • Integrating Intelligent Platform Management Interface (IPMI) with Oracle Clusterware to provide failure isolation support and to ensure cluster integrity

  • Debugging Oracle Clusterware components

Clusterized (Cluster Aware) Commands

You can run clusterized commands on one node to perform operations on another node in the cluster. These are referred to as remote operations. This simplifies administration because, for example, you no longer have to log in to each node to check the status of the Oracle Clusterware on all of your nodes.

Clusterized commands are completely operating system independent; they rely on OHASD (the Oracle High Availability Services daemon). If this daemon is running, then you can perform remote operations, such as starting, stopping, and checking the status of remote nodes.

Clusterized commands include the following:

Operational Notes

Usage Information

  • The CRSCTL utility is located in the Grid_home/bin directory. To run CRSCTL commands, type in crsctl at the operating system prompt followed by the command and arguments, as shown in the following example:

    crsctl stop crs
    
  • There are three categories of CRSCTL commands:

    • Those that you use in either the Oracle Real Application Clusters (Oracle RAC) environment or in the Oracle Restart environment

    • Those that you use in the Oracle RAC environment, only

    • Those that you use in the Oracle Restart environment, only

  • Many CRSCTL commands use the -f parameter to force the command to run and ignore any checks.

    For example, if you specify the force parameter for the crsctl stop resource command on a resource that is running and has dependent resources that are also running, then the force parameter omits the error message and instead stops or relocates all the dependent resources before stopping the resource you reference in the command.

  • Do not use versions of CRSCTL earlier than 12c release 1 (12.1) to manage Oracle Clusterware 12c.

Filters

You can use filters to narrow down Oracle Clusterware entities upon which a CRSCTL command operates, as follows:

  • Simple filters are attribute-value pairs with an operator.

  • Operators must be surrounded by spaces, as shown in the examples.

  • You can combine simple filters into expressions called expression filters using Boolean operators.

Supported filter operators are:


=
>
<
!=
co: Contains
st: Starts with
en: Ends with

Supported Boolean operators are AND and OR.

Examples of filters are:

  • TYPE = type1

  • ((TYPE = type1) AND (CHECK_INTERVAL > 50))

  • (TYPE = type1) AND ((CHECK_INTERVAL > 30) OR (AUTO_START co never))

  • NAME en network.res

  • TYPE st ora.db
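Because operators must be surrounded by spaces and expression filters contain parentheses, quote the entire filter when passing it on a command line. The following is a minimal sketch; passing the filter with the -w parameter of crsctl status resource is an assumption here, and the command is only printed because running it requires a cluster:

```shell
# Sketch: quoting a filter expression for the command line. Single
# quotes preserve the spaces around operators and the parentheses.
# The -w usage shown is an assumption; the command is printed rather
# than executed.
FILTER='(TYPE = type1) AND (CHECK_INTERVAL > 50)'
CMD="crsctl status resource -w \"$FILTER\""
echo "$CMD"
```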

Using the eval Command

The eval command enables you to simulate a command without making any changes to the system. CRSCTL returns output that informs you what will happen if you run a particular command.

The eval commands available are:


Note:

CRSCTL can only evaluate third-party resources. Resources with the ora prefix, such as ora.orcl.db, must be evaluated using SRVCTL commands.




Using CRSCTL Help

To print the help information for CRSCTL, use the following command:

crsctl -help

If you want help for a specific command, such as start, then enter the command and append -help to the end, as shown in the following example:

crsctl start -help

You can also use the abbreviations -h or -? (this parameter functions in Linux, UNIX, and Windows environments) instead of -help.

Deprecated Subprograms or Commands

Table E-1 lists deprecated commands and their replacements that you can use to perform the same or similar functionality.

Table E-1 Deprecated CRSCTL Commands and Replacements

Deprecated Command                          Replacement Commands
------------------------------------------  ------------------------------------------------
crs_stat                                    crsctl check cluster; crsctl status resource
crs_register                                crsctl add resource; crsctl add type;
                                            crsctl modify resource; crsctl modify type
crs_unregister                              crsctl stop resource; crsctl delete resource
crs_start                                   crsctl start resource; crsctl start crs;
                                            crsctl start cluster
crs_stop                                    crsctl stop resource; crsctl stop crs;
                                            crsctl stop cluster
crs_getperm                                 crsctl getperm resource; crsctl getperm type
crs_profile                                 crsctl add resource; crsctl add type;
                                            crsctl status resource; crsctl status type;
                                            crsctl modify resource; crsctl modify type
crs_relocate                                crsctl relocate resource
crs_setperm                                 crsctl setperm resource; crsctl setperm type
crsctl add crs administrator                Use the access control list (ACL) to control
                                            who can add server pools.
crsctl check crsd                           crsctl check crs
crsctl check cssd                           crsctl check css
crsctl check evmd                           crsctl check evm
crsctl debug res log resource_name:level    crsctl set log
crsctl set css votedisk                     crsctl add css votedisk;
                                            crsctl delete css votedisk;
                                            crsctl query css votedisk;
                                            crsctl replace votedisk
crsctl start resources                      crsctl start resource -all
crsctl stop resources                       crsctl stop resource -all

CRSCTL Command Reference

This section is separated into three categories of CRSCTL commands:

Dual Environment CRSCTL Commands

You can use the following commands in either the Oracle RAC or the Oracle Restart environments:

crsctl add resource

Use the crsctl add resource command to register a resource to be managed by Oracle Clusterware. A resource can be an application process, a database, a service, a listener, and so on.

Syntax

crsctl add resource resource_name -type resource_type [-file file_path |
   -attr "attribute_name=attribute_value,attribute_name=attribute_value,..."]
  [-i] [-f]

Parameters

Table E-2 crsctl add resource Command Parameters

ParameterDescription
resource_name

A short, descriptive name for the resource.

-type resource_type

The type of resource that you are adding preceded by the -type flag.

-file file_path

Path name (either absolute or relative) for a text file containing line-delimited attribute name-value pairs that define the resource.

-attr "attribute_name=
attribute_value

You can specify attributes for a resource you are adding in two different ways:

  • Following the -attr flag, you can specify one or more comma-delimited attribute name-value pairs enclosed in double quotations marks (""). For example:

    -attr "CHECK_INTERVAL=30,START_TIMEOUT=25"
    

    Some attributes can have multiple values. In those cases, separate the values with a space and enclose the list of values in single quotation marks. For example:

    -attr "SERVER_POOL_NAMES=
    'ora.pool1 ora.pool2',START_TIMEOUT=25"
    
  • Additionally, you can specify attribute values for resource instances with a particular cardinality value, and with a particular degree value. This method can be useful for applications that are tied to a particular server. Following the -attr flag, the syntax is as follows:

    attribute_name{@SERVERNAME(server_name)
    [@DEGREEID(did)] | @CARDINALITYID(cid)
    [@DEGREEID(did)]}=attribute_value
    

    If you specify the @SERVERNAME(server_name) syntax, then the attribute value you specify for the attribute you specify is limited to resource instances residing on the server you specify.

    Alternatively, if you specify the @CARDINALITYID(cid) syntax, then the attribute value you specify for the attribute you specify is limited to resource instances with a specific cardinality ID (cid).

    Optionally, you can combine the @DEGREEID(did) syntax with either the SERVERNAME or CARDINALITYID syntax, or both, to limit the attribute value to resources with the specific DEGREE.

    Examples:

    CHECK_INTERVAL@SERVERNAME(node1)=45
    STOP_TIMEOUT@CARDINALITYID(2)=65
    STOP_TIMEOUT@SERVERNAME(node1)@DEGREEID(2)=65
    STOP_TIMEOUT@CARDINALITYID(3)@DEGREEID(2)=65
    
-i

If you specify -i, then the command returns an error if processing this command requires waiting for Oracle Clusterware to unlock the resource or its dependents. Sometimes, Oracle Clusterware locks resources or other objects to prevent commands from interfering with each other.

-f

Use the force parameter:

  • To add a resource that has dependencies on other resources that do not yet exist. The force parameter overrides checks that would prevent a command from being completed.

  • To add a resource if the resource has hard dependencies on other resources and the owner of the resource does not have execute permissions on one or more of the dependencies. If you do not specify the force parameter in this case, an error displays.

  • To add resources of application type because you may need to move servers into the Generic server pool. If the servers currently host resources that must be stopped, then the force parameter is required.



See Also:

Appendix B, "Oracle Clusterware Resource Reference" for more information about resources and resource attributes

Usage Notes

  • Both the resource_name and -type resource_type parameters are required. You can create an associated resource type using the crsctl add type command.

  • Any user can create a resource, but only clusterware administrators can create resources of type local_resource or resources of type cluster_resource that have SERVER_POOLS=*.

    Once a resource is defined, its ACL controls who can perform particular operations with it. The Oracle Clusterware administrator list is no longer relevant.

    On Windows, a member of the Administrators group has full control over everything.


    See Also:

    "crsctl setperm resource" for more information about setting ACLs

  • If an attribute value for an attribute name-value pair contains commas, then the value must be enclosed in single quotation marks ('').

  • Following is an example of an attribute file:

    PLACEMENT=favored
    HOSTING_MEMBERS=node1 node2 node3
    RESTART_ATTEMPTS@CARDINALITYID(1)=0
    RESTART_ATTEMPTS@CARDINALITYID(2)=0
    FAILURE_THRESHOLD@CARDINALITYID(1)=2
    FAILURE_THRESHOLD@CARDINALITYID(2)=4
    FAILURE_INTERVAL@CARDINALITYID(1)=300
    FAILURE_INTERVAL@CARDINALITYID(2)=500
    CHECK_INTERVAL=2
    CARDINALITY=2
    
  • Do not use this command for any resources with names that begin with ora because these resources are Oracle resources.
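The attribute file format shown in the preceding usage note can also be passed to crsctl add resource with the -file parameter instead of listing name-value pairs after -attr. The following is a hedged sketch; the resource name app1, the file path /tmp/app1.attr, and the attribute values are illustrative, and the crsctl line is commented out because it requires a running cluster:

```shell
# Write the line-delimited attribute file, then register the resource by
# pointing -file at it instead of supplying -attr name-value pairs inline.
cat > /tmp/app1.attr <<'EOF'
PLACEMENT=favored
HOSTING_MEMBERS=node1 node2 node3
CHECK_INTERVAL=2
CARDINALITY=2
EOF
# crsctl add resource app1 -type cluster_resource -file /tmp/app1.attr
```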

Examples

Example 1

To register a VIP as a resource with Oracle Clusterware:

$ crsctl add resource app.appvip -type app.appvip.type -attr "RESTART_ATTEMPTS=2,
START_TIMEOUT=100,STOP_TIMEOUT=100,CHECK_INTERVAL=10,
USR_ORA_VIP=172.16.0.0,
START_DEPENDENCIES=hard(ora.net1.network)pullup(ora.net1.network),
STOP_DEPENDENCIES=hard(ora.net1.network)"

Example 2

To register a resource based on the test_type1 resource type:

$ crsctl add resource r1 -type test_type1 -attr "PATH_NAME=/tmp/r1.txt"
$ crsctl add resource r1 -type test_type1 -attr "PATH_NAME=/tmp/r2.txt"

Example 3

To register a Samba server resource of the generic_application resource type, using the EXECUTABLE_NAMES attribute:

# crsctl add resource my_samba -type generic_application -attr
"EXECUTABLE_NAMES=smbd,START_PROGRAM='/etc/rc.d/init.d/smb start',
STOP_PROGRAM='/etc/rc.d/init.d/smb stop'"

Example 4

To register a DNS server of the generic_application resource type, using the EXECUTABLE_NAMES attribute:

# crsctl add resource my_dns -type generic_application -attr
"EXECUTABLE_NAMES=named,START_PROGRAM='/etc/rc.d/init.d/named start',
STOP_PROGRAM='/etc/rc.d/init.d/named stop'"

Example 5

To register an Apache web server of the generic_application resource type using the PID_FILES attribute:

# crsctl add resource my_apache -type generic_application -attr
"START_PROGRAM='/usr/sbin/httpd -k start',STOP_PROGRAM='/usr/sbin/httpd -k stop',
PID_FILES=/etc/httpd/run/httpd.pid"

Example 6

To register an application of generic_application resource type using environment variables:

# crsctl add resource my_app -type generic_application -attr
"START_PROGRAM='/opt/my_app start', EXECUTABLE_NAMES=my_app,
ENVIRONMENT_VARS='USE_NETAPP=no,USE_BACKUP=yes,CLEAN_ON_KILL=yes'"

crsctl add type

Use the crsctl add type command to create a resource type in Oracle Clusterware.

Syntax

crsctl add type type_name -basetype base_type_name {-attr
"ATTRIBUTE=attribute_name,TYPE={string | int}
 [,DEFAULT_VALUE=default_value][,FLAGS=typeFlags]" | -file file_path} [-i]

Parameters

Table E-3 crsctl add type Command Parameters

ParameterDescription
type_name

A name for the resource type in the form of xxx.yyy.type. Resource type names must be unique and cannot be changed after the resource type is registered.

-basetype base_type_name

The name of an existing base type. Any resource type that you create must either have local_resource or cluster_resource as its base resource type.

-attr

You can specify the resource type attributes using the -attr argument. Each type attribute definition can contain up to four type attribute keywords that must be displayed in the order shown. Enter a comma-delimited description of one or more resource type attributes enclosed in double quotation marks (""). The keywords for an attribute include:

  1. ATTRIBUTE: Specify a name for the attribute. The name is case-sensitive and cannot contain spaces.

  2. TYPE: Specify whether the attribute type is integer or string.

  3. DEFAULT_VALUE: (Optional) If the attribute is required, then a default value is not required. For attributes that are not required, you must specify a default value that Oracle Clusterware uses when you create resources based on this resource type.

  4. FLAGS: (Optional) Specify one or more of the following types, separated by a vertical bar (|):

    CONFIG: After you register a resource with this resource type, you can configure the attribute.

    READONLY: After you register a resource with this resource type, you cannot modify this attribute.

    REQUIRED: You must specify the name and value of this attribute when you create a resource that is based on this resource type. If you specify that this attribute is not required, then Oracle Clusterware uses the default value of this attribute that you specify.

    HOTMOD: If you change the value of an attribute for resources of this type, then the changes are applied immediately, without the need to restart the resource.

You cannot use multiple -attr arguments to define multiple attributes for the resource type. Instead, you can specify multiple attributes within the double quotation marks after the -attr flag. For example:

"ATTRIBUTE=FOO,TYPE=integer,DEFAULT_VALUE=0,FLAGS=REQUIRED|HOTMOD,ATTRIBUTE=BAR,TYPE=string"

The preceding example defines two type attributes, FOO and BAR. When you specify the ATTRIBUTE keyword, it ends the previous type attribute (if any) and begins a new type attribute.

-file file_path

Path name (either absolute or relative) for a text file containing line-delimited resource type keyword-value pairs that define the resource type. An example of the contents of the file is:

ATTRIBUTE=FOO
TYPE=integer
DEFAULT_VALUE=0
FLAGS=REQUIRED
ATTRIBUTE=BAR
TYPE=string

Note: The keywords must be in the following order: ATTRIBUTE, TYPE, DEFAULT_VALUE, and FLAGS. When you specify the ATTRIBUTE keyword, it ends the previous type attribute (if any) and begins a new type attribute.

The preceding example defines two type attributes, FOO and BAR.


-i

If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately.



See Also:

"Resource Types" for more information about resource types

Usage Notes

  • Both the type_name and base_type_name parameters are required

  • You can either specify a file containing the type information or you can specify the type information on the command line

  • Do not use this command for any resource types with names that begin with ora because these resource types are Oracle resource types

  • You must have read permissions on the base type

Example

To create the test_type1 resource type:

# crsctl add type test_type1 -basetype cluster_resource 
 -attr "ATTRIBUTE=FOO,TYPE=integer,DEFAULT_VALUE=0"
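The same resource type can be created from a keyword-value file with the -file parameter instead of the inline -attr string. The following is a hedged sketch; the file path is illustrative and the crsctl line is commented out because it requires a running cluster:

```shell
# Keywords must appear in the order ATTRIBUTE, TYPE, DEFAULT_VALUE, FLAGS;
# each new ATTRIBUTE keyword ends the previous attribute and starts a new one.
cat > /tmp/test_type1.txt <<'EOF'
ATTRIBUTE=FOO
TYPE=integer
DEFAULT_VALUE=0
ATTRIBUTE=BAR
TYPE=string
EOF
# crsctl add type test_type1 -basetype cluster_resource -file /tmp/test_type1.txt
```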

crsctl add wallet

Use the crsctl add wallet command to create and add users to a wallet.

Syntax

crsctl add wallet -type wallet_type [-name name] [-user user_name -passwd]

Table E-4 crsctl add wallet

ParameterDescription
-type wallet_type

Type of wallet you want to create, such as APPQOSADMIN, APPQOSUSER, APPQOSDB, OSUSER, or CVUDB.

  • OSUSER: This wallet type stores a low-privileged Windows user's user name and password that the agent uses when you create a Windows service on a policy-managed database or in general to update the Windows service's password.

  • CVUDB: This wallet type stores a database user name and password that the health check component of CVU uses to connect to the database and perform database checks.

-name name

You must specify a name for the wallet to create APPQOSDB and CVUDB wallets.

-user user_name -passwd

Specify the user name you want to add to the wallet and provide the password through standard input. The user name is required to create an OSUSER wallet.


Usage Notes

  • If you are using a policy-managed database, then you must have a wallet. Otherwise, wallets are optional.

Example

To add a wallet:

$ crsctl add wallet -type OSUSER -user lp_oracle_home_user -passwd

In the preceding example, lp_oracle_home_user is a low-privileged Oracle home user who owns the home where the policy-managed database was created.

crsctl check css

Use the crsctl check css command to check the status of Cluster Synchronization Services. This command is most often used when Oracle Automatic Storage Management (Oracle ASM) is installed on the local server.

Syntax

crsctl check css

Example

The crsctl check css command returns output similar to the following:

CRS-4529: Cluster Synchronization Services is online

crsctl check evm

Use the crsctl check evm command to check the status of the Event Manager.

Syntax

crsctl check evm

Example

The crsctl check evm command returns output similar to the following:

CRS-4533: Event Manager is online

crsctl delete resource

Use the crsctl delete resource command to remove resources from the Oracle Clusterware configuration.

Syntax

crsctl delete resource resource_name [-i] [-f]

Parameters

Table E-5 crsctl delete resource Command Parameters

ParameterDescription
resource_name

Specify the name of the resource you want to remove or specify a space-delimited list of multiple resources you want to remove.

-i

If you specify -i, then the command returns an error if processing this command requires waiting for Oracle Clusterware to unlock the resource or its dependents. Sometimes, Oracle Clusterware locks resources or other objects to prevent commands from interfering with each other.

-f

Use the force parameter to remove running resources, or to remove this resource even though other resources have a hard dependency on it.


Usage Notes

  • The resource_name parameter is required

  • You must have read and write permissions to delete the specified resources

  • Do not use this command for any resources with names that begin with ora because these resources are Oracle resources

Example

To delete a resource from Oracle Clusterware:

# crsctl delete resource myResource

crsctl delete type

Use the crsctl delete type command to remove resource types from the Oracle Clusterware configuration.

Syntax

crsctl delete type type_name [-i]

Usage Notes

  • The type_name parameter is required. You can specify more than one type by separating each type by a space.

  • If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately.

  • Do not use this command for any resource types with names that begin with ora because these resource types are Oracle resource types.

Example

To delete two resource types, run the following command as a user who has write permissions on the resource type:

$ crsctl delete type test_type1 test_type2

crsctl delete wallet

Use the crsctl delete wallet command to remove wallets or users from a wallet.

Syntax

crsctl delete wallet -type wallet_type [-name name] [-user user_name]

Table E-6 crsctl delete wallet

ParameterDescription
-type wallet_type

Type of wallet you want to remove, such as APPQOSADMIN, APPQOSUSER, APPQOSDB, OSUSER, or CVUDB.

  • OSUSER: This wallet type stores a low-privileged Windows user's user name and password that the agent uses when you create a Windows service on a policy-managed database or in general to update the Windows service's password.

  • CVUDB: This wallet type stores a database user name and password that the health check component of CVU uses to connect to the database and perform database checks.

-name name

You must specify the name of the wallet to remove an APPQOSDB wallet.

-user user_name

You must specify a user name to remove a user from an OSUSER wallet.


Example

To delete a user from the OSUSER wallet:

$ crsctl delete wallet -type OSUSER -user lp_oracle_home_user

In the preceding example, lp_oracle_home_user is a low-privileged Oracle home user who owns the home where the policy-managed database was created. Additionally, the command does not delete the wallet if it contains other users.

crsctl eval add resource

Use the crsctl eval add resource command to predict the effects of adding a resource without making changes to the system. This command may be useful to application administrators.

Syntax

crsctl eval add resource resource_name -type resource_type
    [-attr "attribute_name=attribute_value[,attribute_name=attribute_value[,...]]"
    | -file file_path] [-f]

Parameters


See Also:

"crsctl add resource" for descriptions of the -type, -attr, and -file parameters

Table E-7 crsctl eval add resource Command Parameters

ParameterDescription
-f

Specify this parameter to evaluate what happens if you run the command with the force parameter.
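For example, a hypothetical sketch that previews adding a resource without changing the system; the resource name my_app and the attribute values are illustrative, and the crsctl line is commented out because it requires a running cluster:

```shell
# Build the comma-delimited -attr value first so the required double-quoting
# around the whole list is explicit, then preview the add with eval.
ATTRS='CHECK_INTERVAL=30,RESTART_ATTEMPTS=2'
# crsctl eval add resource my_app -type cluster_resource -attr "$ATTRS" -f
echo "$ATTRS"
```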


crsctl eval fail resource

Use the crsctl eval fail resource command to predict the consequences of a resource failing.

Syntax

crsctl eval fail resource {resource_name | -w "filter"} [-n server]

Parameters

Table E-8 crsctl eval fail resource Command Parameters

ParameterDescription
resource_name

The name of a resource for which you want to simulate a failure.

-w "filter"

Specify a resource filter that Oracle Clusterware uses to limit the number of resources evaluated. The filter must be enclosed in double quotation marks (""). Examples of resource filters include:

  • "TYPE == cluster_resource": This filter limits the evaluation to resources of cluster_resource type

  • "CHECK_INTERVAL > 10": This filter limits the evaluation to resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute

  • "(CHECK_INTERVAL > 10) AND (NAME co 2)": This filter limits the evaluation to resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute and whose names contain the number 2

Note: All operators must be surrounded by spaces.

See Also: "Filters" for more information about operators

-n server

Specify the name of the server on which the resource that you want to simulate a failure resides.


Example

This command returns output similar to the following:

$ crsctl eval fail res cs1
 
Stage Group 1:
--------------------------------------------------------------------------------
Stage Number    Required       Action
--------------------------------------------------------------------------------
 
     1              Y   Resource 'cs1' (1/1) will be in state
                        [ONLINE|INTERMEDIATE] on server
                        [mjkeenan_node_0]
                    Y   Resource 'cs1' (2/1) will be in state
                        [ONLINE|INTERMEDIATE] on server
                        [mjkeenan_node_1]
 
--------------------------------------------------------------------------------

crsctl eval modify resource

Use the crsctl eval modify resource command to predict the effects of modifying a resource without making changes to the system.

Syntax

crsctl eval modify resource resource_name -attr "attribute_name=attribute_value"
    [-f]

Parameters

Table E-9 crsctl eval modify resource Command Parameters

ParameterDescription
resource_name

The name of the resource you want to modify.

-attr "attribute_name=
attribute_value"

You can specify attributes for a resource you want to modify in two different ways:

  • Following the -attr flag, you can specify one or more comma-delimited attribute name-value pairs to modify enclosed in double quotations marks (""). For example:

    -attr "CHECK_INTERVAL=30, START_TIMEOUT=25"
    

    Some attributes can have multiple values. In those cases, separate the values with a space and enclose the list of values in single quotation marks. For example:

    -attr "SERVER_POOL_NAMES=
    'ora.pool1 ora.pool2',START_TIMEOUT=25"
    
  • Alternatively, you can specify attribute values for resources on a particular server, with a particular cardinality value, and with a particular degree value. This method can be useful for applications that are tied to a particular server. Following the -attr flag, the syntax is as follows:

    attribute_name{@SERVERNAME(server_name)
    [@DEGREEID(did)] | @CARDINALITYID(cid)
    [@DEGREEID(did)]}=attribute_value
    

    If you specify the @SERVERNAME(server_name) syntax, then the attribute value you specify for the attribute you specify is limited to resources residing on the server you specify.

    Alternatively, if you specify the @CARDINALITYID(cid) syntax, then the attribute value you specify for the attribute you specify is limited to resource instances with a specific cardinality ID (cid).

    Optionally, you can combine the @DEGREEID(did) syntax with either the SERVERNAME or CARDINALITYID syntax, or both, to limit the attribute value to resources with the specific DEGREE.

    Examples:

    CHECK_INTERVAL@SERVERNAME(node1)=45
    STOP_TIMEOUT@CARDINALITYID(2)=65
    STOP_TIMEOUT@SERVERNAME(node1)@DEGREEID(2)=65
    STOP_TIMEOUT@CARDINALITYID(3)@DEGREEID(2)=65
    
-f

Specify this parameter to evaluate what happens if you run the command with the force parameter.

See Also:

Appendix B, "Oracle Clusterware Resource Reference" for more information about resources and resource attributes
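For example, a hedged sketch that previews a per-server attribute change using the @SERVERNAME form; the resource name my_app, the server node1, and the value are illustrative, and the crsctl line is commented out because it requires a running cluster:

```shell
# Preview raising CHECK_INTERVAL only for the resource instances on node1,
# without modifying anything on the system.
ATTR='CHECK_INTERVAL@SERVERNAME(node1)=45'
# crsctl eval modify resource my_app -attr "$ATTR"
echo "$ATTR"
```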

crsctl eval relocate resource

Use the crsctl eval relocate resource command to simulate relocating a resource without making changes to the system.

Syntax

crsctl eval relocate resource {resource_name | -all} {-s source_server |
-w "filter"} [-n destination_server] [-f]

Parameters


See Also:

"crsctl relocate resource" for descriptions of the parameters used with this command

crsctl eval start resource

Use the crsctl eval start resource command to predict the effects of starting a resource without making changes to the system.

Syntax

crsctl eval start resource {resource_name [...] | -w "filter" | -all}
   [-n server_name] [-f]

Parameters


See Also:

"crsctl start resource" for descriptions of the parameters used with this command

crsctl eval stop resource

Use the crsctl eval stop resource command to predict the effects of stopping a resource without making changes to the system.

Syntax

crsctl eval stop resource {resource_name [...] | -w "filter" | -all} [-f]

Parameters


See Also:

"crsctl stop resource" for descriptions of the parameters used with this command
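For example, a hedged sketch that previews stopping every resource matched by a filter; the filter value is illustrative, and the crsctl line is commented out because it requires a running cluster:

```shell
# The whole filter expression is quoted, and the == operator is surrounded
# by spaces, as required by the filter syntax.
FILTER='TYPE == cluster_resource'
# crsctl eval stop resource -w "$FILTER" -f
echo "$FILTER"
```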

crsctl get hostname

Use the crsctl get hostname command to retrieve the host name of the local server.

Syntax

crsctl get hostname

Example

Oracle Clusterware returns the host name of the local server:

$ crsctl get hostname
node2

crsctl getperm resource

Use the crsctl getperm resource command to display the user and group permissions for the specified resource.

Syntax

crsctl getperm resource resource_name [ {-u user_name | -g group_name} ]

See Also:

Appendix B, "Oracle Clusterware Resource Reference" for more information about resources and resource attributes

Parameters

Table E-10 crsctl getperm resource Command Parameters

ParameterDescription
resource_name

Specify the name of the resource for which you want to obtain permissions.

-u user_name

If you specify -u, then Oracle Clusterware obtains permissions for a particular user.

-g group_name

If you specify -g, then Oracle Clusterware obtains permissions for a particular group.


Usage Notes

  • The resource_name parameter is required

  • You must have read permission on the specified resources to obtain their permissions

  • Do not use this command for any resources with names that begin with ora because these resources are Oracle resources

Example

The crsctl getperm resource command returns output similar to the following, depending on the command option you choose:

$ crsctl getperm resource app.appvip

Name: app.appvip
owner:root:rwx,pgrp:oinstall:rwx,other::r--
$ crsctl getperm resource app.appvip -u oracle

Name: app.appvip
rwx
$ crsctl getperm resource app.appvip -g dba

Name: app.appvip
r--

crsctl getperm type

Use the crsctl getperm type command to obtain permissions for a particular resource type.

Syntax

crsctl getperm type resource_type  [-u user_name] | [-g group_name]

See Also:

"Resource Types" for more information about resource types

Parameters

Table E-11 crsctl getperm type Command Parameters

ParameterDescription
resource_type

Specify the resource type for which you want to obtain permissions.

-u user_name

If you specify -u, then Oracle Clusterware obtains permissions for a particular user.

-g group_name

If you specify -g, then Oracle Clusterware obtains permissions for a particular group.


Usage Notes

  • The resource_type parameter is required

  • Do not use this command for any resource types with names that begin with ora because these resource types are Oracle resource types

Example

The crsctl getperm type command returns output similar to the following:

$ crsctl getperm type app.appvip.type

Name: app.appvip.type
owner:root:rwx,pgrp:oinstall:rwx,other::r--

crsctl modify resource

Use the crsctl modify resource command to modify the attributes of a particular resource in Oracle Clusterware.

Syntax

crsctl modify resource resource_name -attr "attribute_name=attribute_value"
[-i] [-f] [-delete]

Parameters

Table E-12 crsctl modify resource Command Parameters

ParameterDescription
resource_name

The name of the resource you want to modify.

-attr "attribute_name=
attribute_value"

You can specify attributes for a resource you want to modify in two different ways:

  • Following the -attr flag, you can specify one or more comma-delimited attribute name-value pairs to modify enclosed in double quotations marks (""). For example:

    -attr "CHECK_INTERVAL=30, START_TIMEOUT=25"
    

    Some attributes can have multiple values. In those cases, separate the values with a space and enclose the list of values in single quotation marks. For example:

    -attr "SERVER_POOL_NAMES=
    'ora.pool1 ora.pool2',START_TIMEOUT=25"
    
  • Alternatively, you can specify attribute values for resources on a particular server, with a particular cardinality value, and with a particular degree value. This method can be useful for applications that are tied to a particular server. Following the -attr flag, the syntax is as follows:

    attribute_name{@SERVERNAME(server_name)
    [@DEGREEID(did)] | @CARDINALITYID(cid)
    [@DEGREEID(did)]}=attribute_value
    

    If you specify the @SERVERNAME(server_name) syntax, then the attribute value you specify for the attribute you specify is limited to resources residing on the server you specify.

    Alternatively, if you specify the @CARDINALITYID(cid) syntax, then the attribute value you specify for the attribute you specify is limited to resource instances with a specific cardinality ID (cid).

    Optionally, you can combine the @DEGREEID(did) syntax with either the SERVERNAME or CARDINALITYID syntax, or both, to limit the attribute value to resources with the specific DEGREE.

    Examples:

    CHECK_INTERVAL@SERVERNAME(node1)=45
    STOP_TIMEOUT@CARDINALITYID(2)=65
    STOP_TIMEOUT@SERVERNAME(node1)@DEGREEID(2)=65
    STOP_TIMEOUT@CARDINALITYID(3)@DEGREEID(2)=65
    
-i

If you specify -i, then the command returns an error if processing this command requires waiting for Oracle Clusterware to unlock the resource or its dependents. Sometimes, Oracle Clusterware locks resources or other objects to prevent commands from interfering with each other.

-f

Use the -f parameter when:

  • The resource has a hard dependency on a non-existing resource

  • The owner of the resource does not have execute permissions on one or more hard dependencies

  • The modification results in servers being moved into the Generic pool and resources being stopped or relocated to accomplish the server move

-delete

If you specify the -delete parameter, then Oracle Clusterware deletes the named attribute.



See Also:

Appendix B, "Oracle Clusterware Resource Reference" for more information about resources and resource attributes

Usage Notes

  • The resource_name parameter is required

  • If an attribute value for an attribute name-value pair contains commas, then the value must be enclosed in single quotation marks (''). For example:

    "START_DEPENDENCIES='hard(res1,res2,res3)'"
    
  • You must have read and write permissions on the specified resources to modify them

  • Do not use this command for any resources with names that begin with ora because these resources are Oracle resources

Example

To modify the attributes of the appsvip resource:

$ crsctl modify resource appsvip -attr USR_ORA_VIP=10.1.220.17 -i

crsctl modify type

Use the crsctl modify type command to modify an existing resource type.

Syntax

crsctl modify type type_name -attr "ATTRIBUTE=attribute_name,TYPE={string | int}
[,DEFAULT_VALUE=default_value][,FLAGS=[READONLY][|REQUIRED]]" [-i] [-f]

Parameters

Table E-13 crsctl modify type Command Parameters

ParameterDescription
type_name

Specify the name of the resource type you want to modify. You cannot modify resource type names.

-attr

You can modify the following resource type keywords:

  • TYPE

  • DEFAULT_VALUE

  • FLAGS

Note: Although you must specify the ATTRIBUTE keyword, you cannot modify it.

See Also: Table E-3, "crsctl add type Command Parameters" for descriptions of these keywords

-i

If you specify the -i parameter, then the command fails if Oracle Clusterware cannot process the request immediately.



See Also:

"Resource Types" for more information about resource types

Usage Notes

  • The type_name parameter is required

  • Do not use this command for any resource types with names that begin with ora because these resource types are Oracle resource types

Example

The following example modifies the two type attributes FOO and BAR:

$ crsctl modify type myType.type -attr "ATTRIBUTE=FOO,DEFAULT_VALUE=0
ATTRIBUTE=BAR,DEFAULT_VALUE=baz"

crsctl modify wallet

Use the crsctl modify wallet command to modify the password for a specific user in a specific wallet.

Syntax

crsctl modify wallet -type wallet_type [-name name] [-user user_name -passwd]

Table E-14 crsctl modify wallet

ParameterDescription
-type wallet_type

Specify the type of wallet you want to modify, such as APPQOSADMIN, APPQOSUSER, APPQOSDB, OSUSER, or CVUDB.

  • OSUSER: This wallet type stores a low-privileged Windows user's user name and password that the agent uses when you create a Windows service on a policy-managed database or in general to update the Windows service's password.

  • CVUDB: This wallet type stores a database user name and password that the health check component of CVU uses to connect to the database and perform database checks.

-name name

You must specify the wallet name to modify an APPQOSDB wallet.

-user user_name -passwd

You must specify the user name of the user whose password you want to modify, and provide the new password through standard input.


Usage Notes

  • You cannot use this command to change a user name.

Example

To modify the password of a low-privileged Oracle home user:

$ crsctl modify wallet -type OSUSER -user lp_oracle_home_user -passwd

crsctl query wallet

Use the crsctl query wallet command to query low-privileged users from a wallet.

Syntax

crsctl query wallet -type wallet_type [-name name] [-user user_name] [-all]

Table E-15 crsctl query wallet

ParameterDescription
-type wallet_type

Type of wallet you want to query, such as APPQOSADMIN, APPQOSUSER, APPQOSDB, OSUSER, or CVUDB.

  • OSUSER: This wallet type stores a low-privileged Windows user's user name and password that the agent uses when you create a Windows service on a policy-managed database or in general to update the Windows service's password.

  • CVUDB: This wallet type stores a database user name and password that the health check component of CVU uses to connect to the database and perform database checks.

-name name

You must specify the name of the wallet to query an APPQOSDB wallet.

-user user_name

You must specify a user name to query a user from an OSUSER wallet.

-all

Specify -all to list all of the users in a specific wallet.


Example

To list all of the users in the OSUSER wallet:

$ crsctl query wallet -type OSUSER -all 

crsctl relocate resource

Use the crsctl relocate resource command to relocate resources to another server in the cluster.

Syntax

crsctl relocate resource {resource_name | {resource_name | -all} -s source_server |
-w "filter"} [-n destination_server] [-k cid] [-env "env1=val1,env2=val2,..."]
[-i] [-f]

Parameters

Table E-16 crsctl relocate resource Command Parameters

ParameterDescription
resource_name

The name of a resource you want to relocate.

resource_name | -all
-s source_server

Specify one particular or all resources located on a particular server from which you want to relocate those resources.

-w "filter"

Specify a resource filter that Oracle Clusterware uses to limit the number of resources relocated. The filter must be enclosed in double quotation marks (""). Examples of resource filters include:

  • "TYPE == cluster_resource": This filter limits Oracle Clusterware to relocate only resources of cluster_resource type

  • "CHECK_INTERVAL > 10": This filter limits Oracle Clusterware to relocate resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute

  • "(CHECK_INTERVAL > 10) AND (NAME co 2)": This filter limits Oracle Clusterware to relocate resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute and the name of the resource contains the number 2

See Also: "Filters" for more information

-n destination_server

Specify the name of the server to which you want to relocate resources. If you do not specify a destination server, then Oracle Clusterware relocates the resources to the best server according to the attribute profile of each resource.

-k cid

Specify the resource cardinality ID. If you specify this parameter, then Oracle Clusterware relocates the resource instances that have the cardinality you specify.

-env "env1=val1,
env2=val2,..."

You can optionally override one or more resource profile attribute values for this command. If you specify multiple environment name-value pairs, then you must separate each pair with a comma and enclose the entire list in double quotation marks ("").

-i

If you specify -i, then the command returns an error if processing this command requires waiting for Oracle Clusterware to unlock the resource or its dependents. Sometimes, Oracle Clusterware locks resources or other objects to prevent commands from interfering with each other.

-f

Specify the -f parameter to force the relocating of the resource when it has other resources running that depend on it. Dependent resources are relocated or stopped when you use this parameter.

Note: When you are relocating resources that have cardinality greater than 1, you must use either -k or -s to narrow down which resource instances are to be relocated.


Usage Notes

  • Any one of the following three options is required to specify which resources you want to relocate:

    • You can specify one particular resource to relocate.

    • You can specify one particular resource, or all resources, to relocate from a particular source server.

    • You can specify a resource filter that Oracle Clusterware uses to match the resources to relocate.

  • If a resource has a degree ID greater than 1, then Oracle Clusterware relocates all instances of the resource.

  • You must have read and execute permissions on the specified resources to relocate them

  • Do not use this command for any resources with names that begin with ora because these resources are Oracle resources.

Example

To relocate one particular resource from one server to another:

# crsctl relocate resource myResource1 -s node1 -n node3
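
Filters can also drive a relocation. The following sketch (the resource type name is a hypothetical example) relocates every matching resource to node3, forcing dependent resources to move as well:

```shell
# Relocate all resources of a hypothetical application VIP type to node3;
# -f also relocates or stops resources that depend on them
crsctl relocate resource -w "TYPE = app.appvip.type" -n node3 -f
```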

crsctl restart resource

Use the crsctl restart resource command to restart resources in the cluster, instead of having to run separate commands to stop and start the resource.

Syntax

crsctl restart resource {resource_name [...] | -w "filter"} [-k cid] [-d did]
   [-env "env1=val1,env2=val2,..."] [-i] [-f]

Parameters

Table E-17 crsctl restart resource Command Parameters

Parameter | Description
resource_name [...]

One or more space-delimited resource names to restart.

-w filter

Specify a resource filter surrounded by double quotation marks ("") that Oracle Clusterware uses to match resources. For example, -w "TYPE = ora.database.type" or -w "NAME = cs1".

See Also: "Filters" for more information

-k cid

Specify the resource cardinality ID. If you specify this parameter, then Oracle Clusterware restarts the resource instances that have the cardinality you specify.

-d did

Specify the resource degree ID. If you specify this parameter and the degree ID is greater than 1, then Oracle Clusterware restarts all resource instances that meet this criterion.

Note: You cannot use the -d parameter without specifying the -k parameter.

-env "env1=val1,
env2=val2,..."

You can optionally override one or more resource profile attribute values with the -env command parameter. If you specify multiple environment name-value pairs, then you must separate each pair with a comma and enclose the entire list in double quotation marks ("").

-i

If you specify -i, then the command returns an error if processing this command requires waiting for Oracle Clusterware to unlock the resource or its dependents. Sometimes, Oracle Clusterware locks resources or other objects to prevent commands from interfering with each other.

-f

Use the -f parameter to relocate a resource running on another server on which the resource you want to restart has a hard start dependency. If you do not specify the force parameter in this case, then the start command fails.


Usage Notes

  • Either of the following two options is required to specify which resources you want to restart:

    • You can specify one or more resources to restart

    • You can specify a resource filter that Oracle Clusterware uses to match resources to restart

  • You must have read and execute permissions on the specified resources to restart them

  • Do not use this command to restart any resources with names that begin with ora because these resources are Oracle resources

Example

To restart a resource:

# crsctl restart resource myResource
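
A filter can select the resources to restart instead of naming each one. This sketch (the type name is a hypothetical example) restarts every resource of one type:

```shell
# Restart all resources of a hypothetical application type in one command
crsctl restart resource -w "TYPE = app.appvip.type"
```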

crsctl setperm resource

Use the crsctl setperm resource command to set permissions for a particular resource.

Syntax

crsctl setperm resource resource_name {-u acl_string | -x acl_string |
-o user_name | -g group_name}

Parameters

Table E-18 crsctl setperm resource Command Parameters

Parameter | Description
resource_name

Specify the name of the resource for which you want to set permissions.

{-u | -x | -o | -g}

You can set only one of the following permissions for a resource:

  • -u acl_string: You can update the access control list (ACL) for a resource

  • -x acl_string: You can delete the ACL for a resource

  • -o user_name: You can change the owner of a resource by entering a user name

  • -g group_name: You can change the primary group of a resource by entering a group name

Specify a user, group, or other ACL string, as follows:

user:user_name[:readPermwritePermexecPerm] |
group:group_name[:readPermwritePermexecPerm] |
other[::readPermwritePermexecPerm]
  • user: User ACL

  • group: Group ACL

  • other: Other ACL

  • readPerm: Read permission for the resource; the letter r grants a user, group, or other read permission, the minus sign (-) denies read permission

  • writePerm: Write permission for the resource; the letter w grants a user, group, or other write permission, the minus sign (-) denies write permission

  • execPerm: Execute permission for the resource; the letter x grants a user, group, or other execute permission, the minus sign (-) denies execute permission



See Also:

Appendix B, "Oracle Clusterware Resource Reference" for more information about resources and resource attributes

Usage Notes

  • Do not use this command for any resources with names that begin with ora because these resources are Oracle resources.

  • You must have read and write permissions on the specified resources to set their permissions

Example

To grant read, write, and execute permissions on a resource for user Scott:

$ crsctl setperm resource myResource -u user:scott:rwx
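
The ACL string accepts group and other entries in the same rwx notation. The following sketches use hypothetical resource, group, and user names:

```shell
# Grant the dba group read and execute, but not write, permission
crsctl setperm resource myResource -u group:dba:r-x

# Deny all access to users matched by the "other" ACL
crsctl setperm resource myResource -u other::---

# Transfer ownership of the resource to user mary
crsctl setperm resource myResource -o mary
```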

crsctl setperm type

Use the crsctl setperm type command to set permissions for a particular resource type.

Syntax

crsctl setperm type resource_type_name {-u acl_string | -x acl_string |
-o user_name | -g group_name}

Parameters

Table E-19 crsctl setperm type Command Parameters

Parameter | Description
resource_type_name

Specify the name of the resource type for which you want to set permissions.

{-u | -x | -o | -g}

You can specify only one of the following parameters for a resource type:

  • -u acl_string: You can update the access control list (ACL) for a resource type

  • -x acl_string: You can delete the ACL for a resource type

  • -o user_name: You can change the owner of a resource type by entering a user name

  • -g group_name: You can change the primary group of a resource type by entering a group name

Specify a user, group, or other ACL string, as follows:

user:user_name[:readPermwritePermexecPerm] |
group:group_name[:readPermwritePermexecPerm] |
other[::readPermwritePermexecPerm]
  • user: User ACL

  • group: Group ACL

  • other: Other ACL

  • readPerm: Read permission for the resource type; the letter r grants a user, group, or other read permission, the minus sign (-) denies read permission

  • writePerm: Write permission for the resource type; the letter w grants a user, group, or other write permission, the minus sign (-) denies write permission

  • execPerm: Execute permission for the resource type; the letter x grants a user, group, or other execute permission, the minus sign (-) denies execute permission


Usage Notes

  • The resource_type_name parameter is required

  • You must have read and write permissions on the specified resources to set their permissions

  • Do not use this command for any resource types with names that begin with ora because these resource types are Oracle resource types

Example

To grant read, write, and execute permissions on a resource type for user Scott:

$ crsctl setperm type resType -u user:scott:rwx

crsctl start resource

Use the crsctl start resource command to start many idle resources on a particular server in the cluster.

Syntax

crsctl start resource {resource_name [...] | -w "filter" | -all}
   [-n server_name | -s server_pool_names] [-k cid] [-d did]
   [-env "env1=val1,env2=val2,..."] [-begin] [-end] [-i] [-f] [-l]

Parameters

Table E-20 crsctl start resource Command Parameters

Parameter | Description
resource_name [...]

One or more space-delimited resource names to start.

-w "filter"

Specify a resource filter surrounded by double quotation marks ("") that Oracle Clusterware uses to match resources. For example, -w "TYPE = ora.database.type" or -w "NAME = cs1".

See Also: "Filters" for more information

-all

Use this parameter to start all resources in the cluster.

-n server_name

Specify the name of the server on which the resources you want to start reside. If you do not specify a server, then Oracle Clusterware starts the resources on the best server according to the attribute profile of each resource.

-s server_pool_names

Specify a single server pool name or a space-delimited list of server pools in which a resource resides that you want to start.

-k cid

Specify the resource cardinality ID. If you specify this parameter, then Oracle Clusterware starts the resource instances that have the cardinality you specify.

-d did

Specify the resource degree ID. If you specify this parameter and the degree ID is greater than 1, then Oracle Clusterware starts all resource instances that meet this criterion.

Note: You cannot use the -d parameter without specifying the -k parameter.

-env "env1=val1,
env2=val2,..."

You can optionally override one or more resource profile attribute values with the -env command parameter. If you specify multiple environment name-value pairs, then you must separate each pair with a comma and enclose the entire list in double quotation marks ("").

-begin

You can specify this parameter to begin a transparent HA action.

-end

You can specify this parameter to end a transparent HA action.

-i

If you specify -i, then the command returns an error if processing this command requires waiting for Oracle Clusterware to unlock the resource or its dependents. Sometimes, Oracle Clusterware locks resources or other objects to prevent commands from interfering with each other.

-f

Use the -f parameter to relocate a resource running on another server on which the resource you want to start has a hard start dependency. If you do not specify the force parameter in this case, then the start command fails.

-l

Use the -l parameter to leave the resources in the state they were in if the start command fails.


Usage Notes

  • Any one of the three following options is required to specify which resources you want to start:

    • You can specify one or more resources to start

    • You can specify a resource filter that Oracle Clusterware uses to match resources to start

    • You can specify the -all parameter to start all resources on the specified server

  • You must have read and execute permissions on the specified resources to start them

  • Do not use this command to start any resources with names that begin with ora because these resources are Oracle resources

  • Oracle does not support starting managed applications outside of the Oracle Grid Infrastructure

Example

To start a resource:

# crsctl start resource myResource -n server1
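
Combining parameters, the following sketch (the server name is a hypothetical example) starts all resources registered on one server and leaves them in their current state if the start fails:

```shell
# Start every resource on server2; -l preserves resource state on failure
crsctl start resource -all -n server2 -l
```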

crsctl status resource

Use the crsctl status resource command to obtain the status and configuration information of specific resources.

Syntax

Use one of the following forms of this command, depending on how you want the status information about the resource returned.

To check the status of specific resources:

crsctl status resource resource_name [...] | -w "filter" [-p | -v] | [-f | -l | -g]
  [[-k cid | -n server_name] [ -e [-p | -v]] [-d did]] | [-s -k cid [-d did]]

To print the status of the resources in tabular form:

crsctl status resource resource_name [...] | -w "filter" -t

To print a list of the resource dependencies:

crsctl status resource [resource_name [...]] -dependency [-stop | -pullup]

Parameters

Table E-21 crsctl status resource Command Parameters

Parameter | Description
resource_name [...] |
-w "filter"

One or more space-delimited resource names of which you want to check the status.

Or you can specify a resource filter that Oracle Clusterware uses to limit the number of resources displayed. The filter must be enclosed in double quotation marks (""). Values that contain parentheses or spaces must be enclosed in single quotation marks (''). Operators must be surrounded by spaces. Examples of resource filters include:

  • "TYPE == cluster_resource": This filter limits the display to only resources of cluster_resource type.

  • "CHECK_INTERVAL > 10": This filter limits the display to resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute

  • "(CHECK_INTERVAL > 10) AND (NAME co 2)": This filter limits the display to resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute and the name of the resource contains the number 2.

  • "START_DEPENDENCIES='hard(appsvip)'": This filter limits the display to resources that have a hard start dependency on the appsvip resource.

See Also: "Filters" for more information

[-p | -v] | [-f | -l | -g]

You can optionally specify the following parameters:

  • Specify either the -p parameter to display the static configuration of the resource or the -v parameter to display the run-time configuration of the resource.

  • Specify the -f parameter to display the full configuration of the resource; or specify the -l parameter to display all cardinal and degree values of the resource; or specify the -g parameter to check whether the specified resources are registered

[[-k cid | 
-n server_name] [ -e [-p | -v]]
[-d did]] | [-s -k cid [-d did]]

You can specify one of the following two options:

  • Specify the -k cid parameter to specify a cardinality ID of the resources you want to query. Or you can specify the -n parameter to specify a particular server on which to check resources. Optionally, you can specify the -d parameter with the -n parameter to specify the degree ID of resources you want to check. If you specify a degree ID greater than 1, then Oracle Clusterware checks all resource instances on the server that meet this criteria.

    Use the -e parameter to evaluate the special values of a resource instance. You must also specify -p or -v with the -e parameter.

  • Specify the -s parameter with the -k parameter to obtain a list of target servers for relocation. You can further limit the output by specifying a degree ID with the -d parameter.

-t

Specify the -t parameter to display the output in tabular form.

-dependency [-stop | -pullup]

Specify the -dependency parameter to display resource dependencies. If you do not specify either the -stop or -pullup option, then CRSCTL displays the start dependencies of the resource.

Use either of the following options with the -dependency parameter:

  • Specify the -stop parameter to display resource stop dependencies.

  • Specify the -pullup parameter to display resource pull up dependencies.


Usage Notes

  • Either a space-delimited list of resources or a resource filter is required.

  • You must have read permissions on the specified resources to obtain their status.

  • Use crsctl status resource to query the status information of any resource deployed in the cluster. Oracle recommends, however, that you use the respective SRVCTL command to query the status information of Oracle (ora.*) resources.

Examples

The crsctl status resource command returns output similar to the following:

$ crsctl status resource ora.staii14.vip

NAME=ora.staii14.vip
TYPE=ora.cluster_vip_net1.type
TARGET=ONLINE
STATE=ONLINE on staii14

The following example shows the start dependencies for a resource named ora.newdb.db:

$ crsctl status resource ora.newdb.db -dependency
ora.newdb.db(ora.database.type)

  ora.ACFS_DG1.dg(ora.diskgroup.type)[hard,pullup]
    ora.asm(ora.asm.type)[hard,pullup]
      ora.LISTENER.lsnr(ora.listener.type)[weak]
        type:ora.cluster_vip_net1.type[hard:type,pullup:type]
          ora.net1.network(ora.network.type)[hard,pullup]
  ora.dbhome_dg.dbhome_dg_v.acfs(ora.acfs.type)[hard,pullup]
    ora.asm(ora.asm.type)[pullup:always]
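
Filters combine with the -t parameter to produce a compact tabular overview. The following sketch limits the table to resources with a long check interval:

```shell
# Tabular status of all resources whose CHECK_INTERVAL exceeds 10
crsctl status resource -w "CHECK_INTERVAL > 10" -t
```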

crsctl status type

Use the crsctl status type command to obtain the configuration information of one or more particular resource types.

Syntax

crsctl status type [resource_type_name [...] | -w "filter"] [-g] [-p] [-f]

Parameters

Table E-22 crsctl status type Command Parameters

Parameter | Description
resource_type_name [...] | -w "filter"

Specify one or more space-delimited resource type names of which you want to check the status.

Alternatively, you can specify a resource type filter surrounded by double quotation marks ("") that Oracle Clusterware uses to match resource types. For example, -w "TYPE = ora.database.type".

See Also: "Filters" for more information

[-g] [-p] [-f]

You can specify the following parameters as options when Oracle Clusterware checks the status of specific resource types:

  • -g: Use this parameter to check if the specified resource types are registered

  • -p: Use this parameter to display static configuration of the specified resource types

  • -f: Use this parameter to display the full configuration of the resource types


Usage Notes

  • The resource_type_name parameter or a filter is required

Example

The crsctl status type command returns output similar to the following:

$ crsctl status type ora.network.type

TYPE_NAME=ora.network.type
BASE_TYPE=ora.local_resource.type

crsctl stop resource

Use the crsctl stop resource command to stop running resources.

Syntax

crsctl stop resource {resource_name [...] | -w "filter" | -all} [-n server_name]
   [-k cid] [-d did] [-env "env1=val1,env2=val2,..."]
   [-begin | -end] [-i] [-f] [-l]

Parameters

Table E-23 crsctl stop resource Command Parameters

Parameter | Description
resource_name [...]

One or more space-delimited resource names to stop.

-w "filter"

Specify a resource filter that Oracle Clusterware uses to limit the number of resources stopped. The filter must be enclosed in double quotation marks (""). Examples of resource filters include:

  • "TYPE == cluster_resource": This filter limits Oracle Clusterware to stop only resources of cluster_resource type

  • "CHECK_INTERVAL > 10": This filter limits Oracle Clusterware to stop resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute

  • "(CHECK_INTERVAL > 10) AND (NAME co 2)": This filter limits Oracle Clusterware to stop resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute and the name of the resource contains the number 2

See Also: "Filters" for more information

-all

Use this parameter to stop all resources in the cluster.

-n server_name

Specify the name of the server on which the resource instances you want to stop reside. If you do not specify a server, then Oracle Clusterware stops all instances of the resource.

-k cid

Specify the resource cardinality ID. If you specify this parameter, then Oracle Clusterware stops the resource instances that have the cardinality you specify.

-d did

Specify the resource degree ID. If you specify this parameter and the degree ID is greater than 1, then Oracle Clusterware stops all resource instances that meet this criterion.

-env "env1=val1,
env2=val2,..."

You can optionally override one or more resource profile attribute values with the -env command parameter. If you specify multiple environment name-value pairs, then you must separate each pair with a comma and enclose the entire list in double quotation marks ("").

-begin

You can specify this parameter to begin a transparent HA action.

-end

You can specify this parameter to end a transparent HA action.

-i

If you specify -i, then the command returns an error if processing this command requires waiting for Oracle Clusterware to unlock the resource or its dependents. Sometimes, Oracle Clusterware locks resources or other objects to prevent commands from interfering with each other.

-f

Specify the -f parameter to force the stopping of the resource when it has other resources running that depend on it. Dependent resources are relocated or stopped when you use this parameter.

-l

Use the -l parameter to leave the resources in the state they were in if the stop command fails.


Usage Notes

  • Any one of the three following options is required to specify which resources you want to stop:

    • You can specify one or more resources to stop

    • You can specify a resource filter that Oracle Clusterware uses to match resources to stop

    • You can specify the -all parameter with the -n server_name parameter to stop all resources on a particular server

  • You must have read and execute permissions on the specified resources to stop them

  • Do not use this command for any resources with names that begin with ora because these resources are Oracle resources

  • Oracle does not support stopping managed applications outside of the Oracle Grid Infrastructure

Example

To stop a resource:

$ crsctl stop resource -n node1 -k 2
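
Filters work here as well. The following sketch (the type name is a hypothetical example) stops every resource of one application type, stopping or relocating dependents as needed:

```shell
# Stop all resources of a hypothetical application VIP type; -f stops
# or relocates any resources that depend on them
crsctl stop resource -w "TYPE = app.appvip.type" -f
```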

Oracle RAC Environment CRSCTL Commands

The commands listed in this section manage the Oracle Clusterware stack in an Oracle RAC environment, which consists of the following:

  • Oracle Clusterware, the member nodes and server pools

  • Oracle ASM (if installed)

  • Cluster Synchronization Services

  • Cluster Time Synchronization Services

You can use the following commands only in an Oracle RAC environment:

crsctl add category

Use the crsctl add category command to add a server category.

Syntax

crsctl add category category_name [-attr "attr_name=attr_value
   [,attr_name=attr_value[,...]]"] [-i]

Parameters

Table E-24 crsctl add category Command Parameters

Parameter | Description
category_name

Specify a name for the server category you want to add.

attr_name

Specify the name of a category attribute you want to add preceded by the -attr flag.

attr_value

A value for the category attribute.

Note: The attr_name and attr_value parameters must be enclosed in double quotation marks ("") and separated by commas. For example:

-attr "EXPRESSION='(CPU_COUNT > 2) AND (MEMORY_SIZE > 2048)'"

-i

If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately.


Usage Notes

  • If an attribute value for an attribute name-value pair contains commas, then the value must be enclosed in single quotation marks (''). For example:

    "START_DEPENDENCIES='hard(res1,res2,res3)'"
    

Example

To add a server category:

$ crsctl add category cat1 -attr "EXPRESSION='(CPU_COUNT > 2) AND (MEMORY_SIZE > 2048)'"

crsctl add crs administrator

Use the crsctl add crs administrator command to add a user to the list of cluster administrators.

Syntax

crsctl add crs administrator -u user_name [-f]

Parameters

Table E-25 crsctl add crs administrator Command Parameters

Parameter | Description
-u user_name

The name of the user to whom you want to give Oracle Clusterware administrative privileges.

-f

Use this parameter to override the user name validity check.


Usage Notes

  • This command is deprecated in Oracle Clusterware 12c.

  • You must run this command as root or a cluster administrator, or an administrator on Windows systems

  • By default, root, the user that installed Oracle Clusterware, and the * wildcard are members of the list of users who have Oracle Clusterware administrative privileges. Run the crsctl delete crs administrator command to remove the wildcard and enable role-separated management of Oracle Clusterware.


    See Also:

    "Role-Separated Management" for more information

Example

To add a user to the list of Oracle Clusterware administrators:

# crsctl add crs administrator -u scott

crsctl add css votedisk

Use the crsctl add css votedisk command to add one or more voting files to the cluster on storage devices other than an Oracle ASM disk group.

Syntax

crsctl add css votedisk path_to_voting_disk [path_to_voting_disk ...] [-purge]

Parameters

Table E-26 crsctl add css votedisk Command Parameters

Parameter | Description
path_to_voting_disk

A fully qualified path to the voting file you want to add. To add multiple voting files, separate each path with a space.

-purge

Removes all existing voting files at once, so that you can replace the entire set of voting files in one operation.


Usage Notes

  • You should have at least three voting files, unless you have a storage device, such as a disk array, that provides external redundancy. Oracle recommends that you do not use more than five voting files. The maximum number of voting files that is supported is 15.


See Also:

"Adding, Deleting, or Migrating Voting Files" for more information

Example

To add a voting file to the cluster:

$ crsctl add css votedisk /stor/grid/ -purge
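
To replace the current set of voting files with three new ones in a single operation (the device paths below are hypothetical), list all of the new paths and append -purge:

```shell
# Add three voting files and remove all previously configured ones
crsctl add css votedisk /dev/sdb1 /dev/sdc1 /dev/sdd1 -purge
```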

crsctl add policy

Use the crsctl add policy command to add a configuration policy to the policy set.

Syntax

crsctl add policy policy_name -attr "attr_name=attr_value[,attr_name=attr_value[, ...]]" [-i]

Parameters

Table E-27 crsctl add policy Command Parameters

Parameter | Description
policy_name

Specify a name for the policy you want to add.

attr_name

Specify a description for the policy using the DESCRIPTION policy attribute preceded by the -attr flag.

attr_value

A value for the DESCRIPTION policy attribute that describes the policy.

Note: The attr_name and attr_value parameters must be enclosed in double quotation marks ("") and separated by commas. For example:

-attr "DESCRIPTION=daytime"

-i

If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately.


Usage Notes

  • Adding a policy does not activate the policy

  • The policy_name parameter is required

  • Privileges necessary to run this command depend on the value of the ACL attribute of the policy set

Example

To add a policy:

$ crsctl add policy nightTime -attr "DESCRIPTION=nighttime"

crsctl add serverpool

Use the crsctl add serverpool command to add a server pool that is for hosting non-database resources (such as application servers) to Oracle Clusterware.

Syntax

crsctl add serverpool server_pool_name {-file file_path | 
    -attr "attr_name=attr_value[,attr_name=attr_value[,...]]"} [-i] [-f]

Parameters

Table E-28 crsctl add serverpool Command Parameters

Parameter | Description
server_pool_name

A short, descriptive name for the server pool. A server pool name has a 254 character limit and can contain any platform-supported characters except the exclamation point (!), the tilde (~), and spaces. A server pool name cannot begin with a period or with ora.

-file file_path

Fully-qualified path to an attribute file to define the server pool.

attr_name

The name of a server pool attribute, preceded by the -attr flag, that Oracle Clusterware uses to manage the server pool. The available attribute names include:

  • IMPORTANCE

  • MIN_SIZE

  • MAX_SIZE

  • SERVER_NAMES

  • PARENT_POOLS

  • EXCLUSIVE_POOLS

  • ACL

  • SERVER_CATEGORY

attr_value

A value for the server pool attribute.

Note: The attr_name and attr_value parameters must be enclosed in double quotation marks ("") and separated by commas. For example:

-attr "MAX_SIZE=30, IMPORTANCE=3"

-i

If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately.

-f

If you specify the -f parameter, then Oracle Clusterware stops resources running on a server in another server pool and relocates that server into the server pool you are adding. If you do not specify the -f parameter, then Oracle Clusterware checks whether the creation of the server pool results in stopping any resources on a server in another server pool that is going to give up a server to the server pool you are adding. If so, then Oracle Clusterware rejects the crsctl add serverpool command.



See Also:

"How Server Pools Work" for more information about server pools and server pool attributes

Usage Notes

  • The server_pool_name parameter is required.

  • If an attribute value for an attribute name-value pair contains commas, then the value must be enclosed in single quotation marks ('').

  • Do not use this command for any server pools with names that begin with ora because these server pools are Oracle server pools.

  • Running this command may result in Oracle Clusterware relocating other servers between server pools to comply with the new configuration.

  • You must run this command as root or a cluster administrator.

  • Use the crsctl add serverpool command to create server pools that host non-database resources. To create server pools that host Oracle databases, use the SRVCTL command utility.


    See Also:

    Oracle Real Application Clusters Administration and Deployment Guide for information about using the SRVCTL command utility to create server pools

Examples

Example 1

To add a server pool named testsp with a maximum size of 5 servers, run the following command as root or the Oracle Clusterware installation owner:

# crsctl add serverpool testsp -attr "MAX_SIZE=5"

Example 2

Create the sp1_attr file with the attribute values for the sp1 serverpool, each on its own line, as shown in the following example:

IMPORTANCE=1
MIN_SIZE=1
MAX_SIZE=2
SERVER_NAMES=node3 node4 node5
PARENT_POOLS=Generic
EXCLUSIVE_POOLS=testsp
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--

Use the following command to create the sp1 server pool using the sp1_attr file as input:

$ crsctl add serverpool sp1 -file /tmp/sp1_attr

crsctl check cluster

Use the crsctl check cluster command on any node in the cluster to check the status of the Oracle Clusterware stack.

Syntax

crsctl check cluster [-all | -n server_name [...]]

Usage Notes

  • You can check the status of the Oracle Clusterware stack on all nodes in the cluster with the -all parameter or you can specify one or more space-delimited nodes. If you do not specify either parameter, then Oracle Clusterware checks the status of the Oracle Clusterware stack on the local server.

  • You can use this cluster-aware command on any node in the cluster.

Example

The crsctl check cluster command returns output similar to the following:

$ crsctl check cluster -all
*****************************************************************
node1:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
*****************************************************************
node2:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
*****************************************************************

crsctl check crs

Use the crsctl check crs command to check the status of Oracle High Availability Services and the Oracle Clusterware stack on the local server.

Syntax

crsctl check crs

Example

To check the health of Oracle Clusterware on the local server:

$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

crsctl check resource

Use the crsctl check resource command to initiate the check action inside the application-specific agent of a particular resource. Oracle Clusterware only provides output if something prevents the system from issuing the check request, such as a bad resource name.

Syntax

crsctl check resource {resource_name [...] | -w "filter"}
      [-n node_name] [-k cardinality_id] [-d degree_id]

Parameters

Table E-29 crsctl check resource Command Parameters

Parameter | Description
resource_name

Specify a particular resource. You can check multiple resources by entering multiple resource names, with each name separated by a space.

-w "filter"

Specify a resource filter that Oracle Clusterware uses to limit the number of resources checked. The filter must be enclosed in double quotation marks (""). Examples of resource filters include:

  • "TYPE == cluster_resource": This filter limits Oracle Clusterware to check only resources of cluster_resource type

  • "CHECK_INTERVAL > 10": This filter limits Oracle Clusterware to check resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute

  • "(CHECK_INTERVAL > 10) AND (NAME co 2)": This filter limits Oracle Clusterware to check resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute and the name of the resource contains the number 2

See Also: "Filters" for more information

-n node_name

Check the resource instance on a specific node. If you do not specify the -n parameter, then Oracle Clusterware checks the resource instances only on the local server.

-k cardinality_id

Specify the resource cardinality ID.

-d degree_id

Specify the resource degree ID.


Usage Notes

  • You must have read and execute permissions on the specified resources to check them

  • Do not use this command for any resources with names that begin with ora because these resources are Oracle resources

  • If this command is successful, it only means that a check was issued; it does not mean the CHECK action has been completed

Example

To initiate the check action inside the application-specific agent of a particular resource:

$ crsctl check resource appsvip

crsctl check ctss

Use the crsctl check ctss command to check the status of the Cluster Time Synchronization services.

Syntax

crsctl check ctss

Example

The crsctl check ctss command returns output similar to the following:

CRS-4700: The Cluster Time Synchronization Service is in Observer mode.

or

CRS-4701: The Cluster Time Synchronization Service is in Active mode.
CRS-4702: Offset from the reference node (in msec): 100

crsctl config crs

Use the crsctl config crs command to display Oracle High Availability Services automatic startup configuration.

Syntax

crsctl config crs

Example

The crsctl config crs command returns output similar to the following:

CRS-4622: Oracle High Availability Services autostart is enabled.

crsctl create policyset

Use the crsctl create policyset command to create a single policy set, in the form of a text file, that reflects the server pool configuration. After you create a policy set, you can copy the contents of the text file to create other policy sets.

Syntax

crsctl create policyset -file path_to_file

Parameters

Table E-30 crsctl create policyset Command Parameters

Parameter | Description
-file file_name

Specify a path to where CRSCTL creates a file that you can edit and then send back using crsctl modify policyset to add, delete, or update policies.


Example

To create a policy set:

$ crsctl create policyset -file /tmp/ps

crsctl delete category

Use the crsctl delete category command to delete a server category.

Syntax

crsctl delete category category_name [category_name [...]] [-i]

Parameters

Table E-31 crsctl delete category Command Parameters

Parameter | Description
category_name

Specify the name of the server category or a space-delimited list of server categories that you want to delete.

-i

If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately.


Example

To delete a server category:

$ crsctl delete category blue_server -i

crsctl delete crs administrator

Use the crsctl delete crs administrator command to remove a user from the Oracle Clusterware administrator list.

Syntax

crsctl delete crs administrator -u user_name [-f]

Parameters

Table E-32 crsctl delete crs administrator Command Parameters

Parameter | Description
-u user_name

The name of the user whose Oracle Clusterware administrative privileges you want to remove.

By default, the list of users with Oracle Clusterware administrative privileges consists of the user who installed Oracle Clusterware, root, and *. The user who installed Oracle Clusterware and root are permanent members of this list. The * value gives Oracle Clusterware administrative privileges to all users and must be removed to enable role-separated management.

See Also: "Role-Separated Management" for more information

-f

Use this parameter to override the user name validity check.


Usage Notes

  • The user_name parameter is required

  • You must run this command as root or a cluster administrator, or an administrator on Windows systems

  • To enable role-separated management, you must remove the * value enclosed in double quotation marks ("")

Example

To remove a user from the list of cluster administrators:

# crsctl delete crs administrator -u scott

crsctl delete css votedisk

Use the crsctl delete css votedisk command to remove a voting file from the Oracle Clusterware configuration.

Syntax

crsctl delete css votedisk {voting_disk_GUID [...] | vdisk [...] | +diskgroup}

Parameters

Table E-33 crsctl delete css votedisk Command Parameters

Parameter | Description
voting_disk_GUID

Enter the file universal identifier (GUID) of the voting file you want to remove. Specify multiple GUIDs in a space-delimited list.

vdisk

Enter the path of the voting file you want to remove. Specify multiple voting file paths in a space-delimited list.

+diskgroup

Enter the name of an Oracle ASM disk group that contains voting files you want to remove. You can only use this parameter when Oracle Clusterware is in exclusive mode.


Usage Notes

  • You can specify one or more GUIDs of voting files you want to remove, one or more paths to voting files you want to remove, or the name of an Oracle ASM disk group that contains voting files you want to remove.

  • You can obtain the GUIDs of each current voting file by running the crsctl query css votedisk command

Example

To remove a voting file:

$ crsctl delete css votedisk 26f7271ca8b34fd0bfcdc2031805581e

crsctl delete node

Use the crsctl delete node command to remove a node from the cluster.

Syntax

crsctl delete node -n node_name

Usage Notes

  • You must be root to run this command

  • The node_name parameter is required

  • You cannot use this command on a Leaf Node in an Oracle Flex Cluster

Example

To delete the node named node06 from the cluster, run the following command as root:

# crsctl delete node -n node06

crsctl delete policy

Use the crsctl delete policy command to delete a configuration policy from the policy set.

Syntax

crsctl delete policy policy_name [policy_name [...]] [-i]

Parameters

Table E-34 crsctl delete policy Command Parameters

Parameter | Description
policy_name

Specify a name for the policy or a space-delimited list of policy names you want to delete.

-i

If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately.


Usage Notes

  • The policy_name parameter is required

  • Privileges necessary to run this command depend on the value of the ACL attribute of the policy set

Example

To delete a policy, run the following command as root or the Oracle Clusterware installation owner:

# crsctl delete policy...

crsctl delete serverpool

Use the crsctl delete serverpool command to remove a server pool from the Oracle Clusterware configuration.

Syntax

crsctl delete serverpool server_pool_name [server_pool_name [...]] [-i]

See Also:

"How Server Pools Work" for more information about server pools and server pool attributes

Usage Notes

  • The server_pool_name parameter is required

  • If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately

  • Do not use this command for any server pools with names that begin with ora because these server pools are Oracle server pools

  • While you can use this command in either environment, it is only useful in the Oracle RAC environment

Example

To delete a server pool, run the following command as root or the Oracle Clusterware installation owner:

# crsctl delete serverpool sp1

crsctl disable crs

Use the crsctl disable crs command to prevent the automatic startup of Oracle High Availability Services when the server boots.

Syntax

crsctl disable crs

Usage Notes

  • This command only affects the local server

  • If you disable Oracle High Availability Services automatic startup, you must use the crsctl start crs command to start Oracle High Availability Services

Example

The crsctl disable crs command returns output similar to the following:

CRS-4621: Oracle High Availability Services autostart is disabled.

crsctl discover dhcp

Use the crsctl discover dhcp command to send DHCP discover packets on the network at the specified port. If DHCP servers are present on the network, then they respond to the discovery message and the command succeeds.

Syntax

crsctl discover dhcp -clientid clientid [-port port]

Parameters

Table E-35 crsctl discover dhcp Command Parameters

Parameter | Description
-clientid clientid

Specify the client ID for which you want to attempt discovery. Obtain the client ID by running the crsctl get clientid dhcp command.

-port port

The port to which CRSCTL sends the discovery packets.


Usage Notes

You must run this command as root

Example

The crsctl discover dhcp command returns output similar to the following:

# crsctl discover dhcp -clientid stmjk0462clr-stmjk01-vip

CRS-10009: DHCP server returned server: 192.168.53.232,
 loan address : 192.168.29.221/255.255.252.0, lease time: 43200

crsctl enable crs

Use the crsctl enable crs command to enable automatic startup of Oracle High Availability Services when the server boots.

Syntax

crsctl enable crs

Usage Notes

  • This command only affects the local server

Example

The crsctl enable crs command returns output similar to the following:

CRS-4622: Oracle High Availability Services autostart is enabled.

crsctl eval activate policy

Use the crsctl eval activate policy command to predict the effects of activating a specific policy without making changes to the system. This command may be useful to cluster administrators.

Syntax

crsctl eval activate policy policy_name [-f] [-admin [-l serverpools | resources
  | all] [-x] [-a]]

Parameters

Table E-36 crsctl eval activate policy Command Parameters

Parameter | Description
-f

Specify this parameter to evaluate what happens if you try to forcibly activate a policy.

-admin [-l serverpools |
resources | all] [-x] [-a]

You must specify -admin if you specify any combination of -l, -x, and -a.

If you specify the -l parameter, then you can choose one of the following three output levels:

  • serverpools: Restricts the output to servers running in a server pool

  • resources: Restricts the output to resources running on servers in a server pool

  • all: Displays all available output

If you specify the -x parameter, then CRSCTL displays the differences.

If you specify the -a parameter, then CRSCTL displays all resources.



See Also:

"How Server Pools Work" for more information about server pools and server pool attributes

crsctl eval add server

Use the crsctl eval add server command to simulate the addition of a server without making changes to the system. This command may be useful to cluster administrators.

Syntax

crsctl eval add server server_name [-file file_path | -attr "attr_name=attr_value[,...]"]
    [-admin [-l level] [-x] [-a]] [-f]

Parameters

Table E-37 crsctl eval add server Command Parameters

Parameter | Description
server_name

The name of the server you want to add.

-file file_path

Fully-qualified path to a file containing server attributes.

attr_name

The name of a server attribute that Oracle Clusterware uses to manage the server preceded by the -attr flag.

See Also: "Server Category Attributes" for information about server attributes

attr_value

A value for the server attribute.

Note: The attr_name and attr_value parameters must be enclosed in double quotation marks ("") and separated by commas. For example:

-attr "MAX_SIZE=30,IMPORTANCE=3"
-admin [-l level] [-x] [-a]

If you specify this parameter, then CRSCTL displays output for the cluster administrator.

If you specify the -l parameter, then you can choose one of the following three output levels:

  • serverpools: Restricts the output to servers running in a server pool

  • resources: Restricts the output to resources running on servers in a server pool

  • all: Displays all available output

If you specify the -x parameter, then CRSCTL displays the differences.

If you specify the -a parameter, then CRSCTL displays all resources.

Note: To specify either the -l, -x, or -a parameters, or any combination of the three, you must specify the -admin parameter.

-f

If you specify this parameter, then CRSCTL predicts the effects of forcibly adding a server.



See Also:

Chapter 7, "Adding and Deleting Cluster Nodes" for more information about adding servers

Example

The following example predicts how the system reacts when you add a server called mjkeenan-node-3:

# crsctl eval add server mjkeenan-node-3 -admin -l resources -a
 
--------------------------------------------------------------------------------
Name           Target  State        Server                   Effect                
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.net1.network
               ONLINE  ONLINE       mjkeenan-node-0
               ONLINE  ONLINE       mjkeenan-node-1
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
cs1
      1        ONLINE  ONLINE       mjkeenan-node-1
      2        ONLINE  ONLINE       mjkeenan-node-0
cs2
      1        ONLINE  ONLINE       mjkeenan-node-3          Started
ora.gns

crsctl eval add serverpool

Use the crsctl eval add serverpool command to predict the effects of adding a server pool without making changes to the system. This command may be useful to cluster administrators.


See Also:

"How Server Pools Work" for more information about server pools and server pool attributes

Syntax

crsctl eval add serverpool server_pool_name [-file file_path | -attr "attr_name=attr_value
    [,attr_name=attr_value[,...]]"] [-admin [-l level] [-x] [-a]] [-f]

Parameters

Table E-38 crsctl eval add serverpool Command Parameters

Parameter | Description
server_pool_name

A short, descriptive name for the server pool.

-file file_path

Fully-qualified path to an attribute file to define the server pool.

attribute_name

The name of a server pool attribute Oracle Clusterware uses to manage the server pool preceded by the -attr flag. The available attribute names include:

  • IMPORTANCE

  • MIN_SIZE

  • MAX_SIZE

  • SERVER_NAMES

  • PARENT_POOLS

  • EXCLUSIVE_POOLS

  • ACL

attribute_value

A value for the server pool attribute.

Note: The attribute_name and attribute_value parameters must be enclosed in double quotation marks ("") and separated by commas. For example:

-attr "MAX_SIZE=30,IMPORTANCE=3"
-admin [-l level] [-x] [-a]

If you specify this parameter, then CRSCTL displays output for the cluster administrator.

If you specify the -l parameter, then you can choose one of the following three output levels:

  • serverpools: Restricts the output to servers running in a server pool

  • resources: Restricts the output to resources running on servers in a server pool

  • all: Displays all available output

If you specify the -x parameter, then CRSCTL displays the differences.

If you specify the -a parameter, then CRSCTL displays all resources.

Note: To specify either the -l, -x, or -a parameters, or any combination of the three, you must specify the -admin parameter.

-f

If you specify this parameter, then CRSCTL predicts the effects of forcibly adding a server pool.
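
Example

The source does not include an example for this command; the following invocation is illustrative, and the pool name sp1 and attribute values are assumptions:

$ crsctl eval add serverpool sp1 -attr "MIN_SIZE=1,MAX_SIZE=2"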


crsctl eval delete server

Use the crsctl eval delete server command to predict the effects of deleting a server without making changes to the system. This command may be useful to cluster administrators.


See Also:

"How Server Pools Work" for more information about server pools and server pool attributes

Syntax

crsctl eval delete server server_name [-admin [-l level] [-x] [-a]] [-f]

Parameters

Table E-39 crsctl eval delete server Command Parameters

Parameter | Description
server_name

Specify the name of the server you want to evaluate before deleting.

-admin [-l level] [-x] [-a]

If you specify this parameter, then CRSCTL displays output for the cluster administrator.

If you specify the -l parameter, then you can choose one of the following three output levels:

  • serverpools: Restricts the output to servers running in a server pool

  • resources: Restricts the output to resources running on servers in a server pool

  • all: Displays all available output

If you specify the -x parameter, then CRSCTL displays the differences.

If you specify the -a parameter, then CRSCTL displays all resources.

Note: To specify either the -l, -x, or -a parameters, or any combination of the three, you must specify the -admin parameter.
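
Example

The source does not include an example for this command; the following invocation is illustrative, and the server name node3 is an assumption:

$ crsctl eval delete server node3 -admin -l resources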


crsctl eval delete serverpool

Use the crsctl eval delete serverpool command to simulate the deletion of a server pool without making changes to the system. This command may be useful to cluster administrators.


See Also:

"How Server Pools Work" for more information about server pools and server pool attributes

Syntax

crsctl eval delete serverpool server_pool_name [-admin [-l level] [-x] [-a]]

Parameters

Table E-40 crsctl eval delete serverpool Command Parameters

Parameter | Description
server_pool_name

The name of the server pool you want to delete.

-admin [-l level] [-x] [-a]

If you specify this parameter, then CRSCTL displays output for the cluster administrator.

If you specify the -l parameter, then you can choose one of the following three output levels:

  • serverpools: Restricts the output to servers running in a server pool

  • resources: Restricts the output to resources running on servers in a server pool

  • all: Displays all available output

If you specify the -x parameter, then CRSCTL displays the differences.

If you specify the -a parameter, then CRSCTL displays all resources.

Note: To specify either the -l, -x, or -a parameters, or any combination of the three, you must specify the -admin parameter.
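
Example

The source does not include an example for this command; the following invocation is illustrative, and the pool name sp1 is an assumption:

$ crsctl eval delete serverpool sp1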


crsctl eval modify serverpool

Use the crsctl eval modify serverpool command to predict the effects of modifying a server pool without making changes to the system. This command may be useful to cluster administrators.

Syntax

crsctl eval modify serverpool server_pool_name {-file file_path
   | -attr "attr_name=attr_value [,attr_name=attr_value[, ...]]"}
   [-f] [-admin [-l level] [-x] [-a]]

Parameters


See Also:

"crsctl modify serverpool" for a description of the -attr parameter

Table E-41 crsctl eval modify serverpool Command Parameters

Parameter | Description
server_pool_name

The name of the server pool you want to modify.

-f

If you specify this parameter, then CRSCTL predicts the effects of forcibly modifying a server pool.

-admin [-l level] [-x] [-a]

If you specify this parameter, then CRSCTL displays output for the cluster administrator.

If you specify the -l parameter, then you can choose one of the following three output levels:

  • serverpools: Restricts the output to servers running in a server pool

  • resources: Restricts the output to resources running on servers in a server pool

  • all: Displays all available output

If you specify the -x parameter, then CRSCTL displays the differences.

If you specify the -a parameter, then CRSCTL displays all resources.

Note: To specify either the -l, -x, or -a parameters, or any combination of the three, you must specify the -admin parameter.



See Also:

"How Server Pools Work" for more information about server pools and server pool attributes

Usage Notes

  • The server_pool_name parameter is required

  • If an attribute value for an attribute name-value pair contains commas, then the value must be enclosed in single quotation marks (''). For example:

    "START_DEPENDENCIES='hard(res1,res2,res3)'"
    
  • Running this command may result in Oracle Clusterware relocating other servers between server pools to comply with the new configuration

  • Do not use this command for any server pools with names that begin with ora because these server pools are Oracle server pools

  • While you can use this command in either environment, it is only useful in the Oracle RAC environment
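
Example

The source does not include an example for this command; the following invocation is illustrative, and the pool name sp1 and attribute value are assumptions:

$ crsctl eval modify serverpool sp1 -attr "MAX_SIZE=4"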

crsctl eval relocate server

Use the crsctl eval relocate server command to predict the effects of relocating a server to a different server pool without making changes to the system. This command might be useful for a cluster administrator.

Syntax

crsctl eval relocate server server_name -to server_pool_name [-f]
[-admin [-l level] [-x] [-a]]

Parameters

Table E-42 crsctl eval relocate server Command Parameters

Parameter | Description
server_name

The name of the server you want to relocate. You can provide a space-delimited list of servers to evaluate relocating multiple servers.

-to

Specify the name of the server pool to which you want to relocate the server.

-f

If you specify this parameter, then CRSCTL predicts the effects of forcibly relocating a server.

-admin [-l level] [-x] [-a]

If you specify this parameter, then CRSCTL displays output for the cluster administrator.

If you specify the -l parameter, then you can choose one of the following three output levels:

  • serverpools: Restricts the output to servers running in a server pool

  • resources: Restricts the output to resources running on servers in a server pool

  • all: Displays all available output

If you specify the -x parameter, then CRSCTL displays the differences.

If you specify the -a parameter, then CRSCTL displays all resources.

Note: To specify either the -l, -x, or -a parameters, or any combination of the three, you must specify the -admin parameter.
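
Example

The source does not include an example for this command; the following invocation is illustrative, and the server and pool names are assumptions:

$ crsctl eval relocate server node3 -to sp2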


crsctl get clientid dhcp

Use the crsctl get clientid dhcp command to display the client ID that the Oracle Clusterware agent uses to obtain the IP addresses from the DHCP server for configured cluster resources. The VIP type is required.

Syntax

crsctl get clientid dhcp -cluname cluster_name -viptype vip_type
[-vip vip_res_name] [-n node_name]

Parameters

Table E-43 crsctl get clientid dhcp Command Parameters

Parameter | Description
-cluname cluster_name

Specify the name of the cluster where the cluster resources are configured.

-viptype vip_type

Specify the type of the VIP resource for which you want to display client IDs: HOSTVIP, SCANVIP, or APPVIP.

-vip vip_resource_name

Specify the name of the VIP resource. This parameter is required if you specify the APPVIP VIP type.

-n node_name

Specify the name of the node for which you want to obtain the client ID. This parameter is required if you specify the HOSTVIP VIP type.


Example

The crsctl get clientid dhcp command returns output similar to the following:

$ crsctl get clientid dhcp -cluname stmjk0462clr -viptype HOSTVIP -n stmjk01

CRS-10007: stmjk0462clr-stmjk01-vip

crsctl get cluster hubsize

Use the crsctl get cluster hubsize command to obtain the current hubsize parameter value, which determines the number of Hub Nodes in an Oracle Flex Cluster.

Syntax

crsctl get cluster hubsize

Example

The crsctl get cluster hubsize command returns output similar to the following:

CRS-4950: Current hubsize parameter value is 32

crsctl get cluster mode

Use the crsctl get cluster mode command to determine whether the cluster is configured to be an Oracle Flex Cluster, or to obtain the current cluster mode status.

Syntax

crsctl get cluster mode [config | status]

Usage Notes

  • Specify the config option to obtain the mode in which the cluster is configured.

  • Specify the status option to obtain the current status of the cluster.
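
Example

The source does not include an example for this command; the following invocation is illustrative, and the output varies with the configuration:

$ crsctl get cluster mode config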

crsctl get cpu equivalency

Use the crsctl get cpu equivalency command to obtain the value of the CPU_EQUIVALENCY server configuration attribute.

Syntax

crsctl get cpu equivalency
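
Example

The source does not include an example for this command; the following invocation is illustrative, and the reported value depends on your configuration:

$ crsctl get cpu equivalency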

crsctl get css

Use the crsctl get css command to obtain the value of a specific Cluster Synchronization Services parameter.

Syntax

crsctl get css parameter

Usage Notes

  • Cluster Synchronization Services parameters and their default values include:

    clusterguid
    diagwait
    disktimeout (200 (seconds))
    misscount (30 (seconds))
    reboottime (3 (seconds))
    priority (4 (UNIX), 3 (Windows))
    logfilesize (50 (MB))
    
  • This command does not display default values

  • This command only affects the local server

Example

The crsctl get css disktimeout command returns output similar to the following:

$ crsctl get css disktimeout
CRS-4678: Successful get disktimeout 200 for Cluster Synchronization Services.

crsctl get css ipmiaddr

Use the crsctl get css ipmiaddr command to get the address stored in the Oracle Local Registry of the local Intelligent Platform Management Interface (IPMI) device.

Syntax

crsctl get css ipmiaddr

Usage Notes

  • Run the command under the user account used to install Oracle Clusterware.

  • This command only obtains the IP address stored in the Oracle Local Registry. It may not be the IP address actually used by IPMI.

    Use either ipmiutil or ipmitool as root on the local server to obtain the IP address used by the IPMI device.

Example

To obtain the IPMI device IP address:

$ crsctl get css ipmiaddr

crsctl get css leafmisscount

Use the crsctl get css leafmisscount command to obtain the amount of time (in seconds) that must pass without any communication between a Leaf Node and the Hub Node to which it is attached, before the connection is declared to be no longer active and the Leaf Node is removed from the cluster.

Syntax

crsctl get css leafmisscount
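
Example

The source does not include an example for this command; the following invocation is illustrative, and the reported value depends on your configuration:

$ crsctl get css leafmisscount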

crsctl get node role

Use the crsctl get node role command to obtain the configured node role of nodes in the cluster.

Syntax

crsctl get node role {config | status} [node node_name | -all]

Usage Notes

  • Specify the config option to obtain the configured node role for a specific node.

  • Specify the status option to obtain the current status of a specific node.

  • You can specify a particular node for which to obtain role information. If you do not specify a particular node, then CRSCTL returns information about the local node.

Example

The crsctl get node role command returns output similar to the following:

Node 'adc6140524' configured role is 'hub'

crsctl get nodename

Use the crsctl get nodename command to obtain the name of the local node.

Syntax

crsctl get nodename

Example

The crsctl get nodename command returns output similar to the following:

node2

crsctl get resource use

Use the crsctl get resource use command to check the current setting value of the RESOURCE_USE_ENABLED parameter.

Syntax

crsctl get resource use

Usage Notes

The possible values are 1 or 0. If the value for this attribute is 1, which is the default, then the server can be used for resource placement. If the value is 0, then Oracle Clusterware disallows starting server pool resources on the server. The server remains in the Free server pool.

Example

This command returns output similar to the following:

CRS-4966: Current resource use parameter value is 1

crsctl get server label

Use the crsctl get server label command to check the current setting value of the SERVER_LABEL server attribute.

Syntax

crsctl get server label

Example

The crsctl get server label command returns output similar to the following:

CRS-4972: Current SERVER_LABEL parameter value is hubserver

crsctl getperm serverpool

Use the crsctl getperm serverpool command to obtain permissions for a particular server pool.

Syntax

crsctl getperm serverpool server_pool_name  [-u user_name | -g group_name]

See Also:

"How Server Pools Work" for more information about server pools and server pool attributes

Parameters

Table E-44 crsctl getperm serverpool Command Parameters

Parameter | Description
server_pool_name

Specify the name of the server pool for which you want to obtain permissions.

-u user_name

If you specify -u, then Oracle Clusterware obtains permissions for a particular user.

-g group_name

If you specify -g, then Oracle Clusterware obtains permissions for a particular group.


Usage Notes

  • The server_pool_name parameter is required

  • Do not use this command for any server pools with names that begin with ora because these server pools are Oracle server pools

  • While you can use this command in either environment, it is only useful in the Oracle RAC environment

Example

The crsctl getperm serverpool command returns output similar to the following:

$ crsctl getperm serverpool sp1
NAME: sp1
owner:root:rwx,pgrp:root:r-x,other::r--

crsctl lsmodules

Use the crsctl lsmodules command to list the components of the modules that you can debug.


See Also:

"Dynamic Debugging" for more information about debugging

Syntax

crsctl lsmodules {mdns | gpnp | css | crf | crs | ctss | evm | gipc}

Usage Notes

You can specify any of the following components:


mdns: Multicast domain name server
gpnp: Grid Plug and Play service
css: Cluster Synchronization Services
crf: Cluster Health Monitor
crs: Cluster Ready Services
ctss: Cluster Time Synchronization Service
evm: Event Manager
gipc: Grid Interprocess Communication

Example

The crsctl lsmodules command returns output similar to the following:

$ crsctl lsmodules evm
List EVMD Debug Module: CLSVER
List EVMD Debug Module: CLUCLS
List EVMD Debug Module: COMMCRS
List EVMD Debug Module: COMMNS
List EVMD Debug Module: CRSOCR
List EVMD Debug Module: CSSCLNT
List EVMD Debug Module: EVMAGENT
List EVMD Debug Module: EVMAPP
...

crsctl modify category

Use the crsctl modify category command to modify an existing server category.

Syntax

crsctl modify category category_name [-attr "attr_name=attr_value
   [,attr_name=attr_value[,...]]"] [-i] [-f]

Parameters

Table E-45 crsctl modify category Command Parameters

Parameter | Description
category_name

Specify the name of the server category you want to modify.

attr_name

Specify the name of a category attribute you want to modify preceded by the -attr flag.

attr_value

A value for the category attribute.

Note: The attr_name and attr_value parameters must be enclosed in double quotation marks ("") and separated by commas. For example:

"ACL='owner:st-cdc\cdctest:rwx,pgrp::---',
ACTIVE_CSS_ROLE=leaf"


-i

If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately.

-f

Force parameter


Usage Notes

  • If an attribute value for an attribute name-value pair contains commas, then the value must be enclosed in single quotation marks (''). For example:

    "START_DEPENDENCIESs=s'hard(res1,res2,res3)'"
    

Example

To modify a server category:

$ crsctl modify category blue_server -attr  "EXPRESSION=(LOCATION=hub)"

crsctl modify policy

Use the crsctl modify policy command to modify an existing configuration policy.

Syntax

crsctl modify policy policy_name -attr "attr_name=attr_value" [-i]

Parameters

Table E-46 crsctl modify policy Command Parameters

Parameter | Description
policy_name

The name of the policy you want to modify.

attr_name

Specify a description for the policy using the DESCRIPTION policy attribute preceded by the -attr flag.

attr_value

A value for the DESCRIPTION policy attribute that describes the policy.

Note: The attr_name and attr_value parameters must be enclosed in double quotation marks ("") and separated by commas. For example:

-attr "DESCRIPTION=daytime"
-i

If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately.


Usage Notes

  • The policy_name parameter is required

  • Privileges necessary to run this command depend on the value of the ACL attribute of the policy set

Example

To modify an existing policy, run the following command as root or the Oracle Clusterware installation owner:

# crsctl modify policy p1 -attr "DESCRIPTION=daytime"

crsctl modify policyset

Use the crsctl modify policyset command to modify an existing policy set.

Syntax

crsctl modify policyset {-attr "attr_name=attr_value[,attr_name=attr_value[, ...]]" | -file file_name} [-ksp]

Parameters

Table E-47 crsctl modify policyset Command Parameters

Parameter | Description
attr_name

The name of a policy attribute you want to modify preceded by the -attr flag. With this command, you can specify any of the following attributes:


ACL
LAST_ACTIVATED_POLICY
SERVER_POOL_NAMES

attr_value

A value for the policy attribute.

Note: The attr_name and attr_value parameters must be enclosed in double quotation marks ("") and separated by commas. For example:

-attr "ACL='owner:mjkeenan:rwx,pgrp:svrtech:rwx,other::r--',
SERVER_POOL_NAMES=sp1 sp2 Free"
-file file_name

If you specify this parameter instead of -attr, then enter a name of a file that contains policy set definitions.

-ksp

If you specify this parameter, then CRSCTL keeps the server pools in the system, which means that they are independent and not managed by the policy set.


Usage Notes

  • Privileges necessary to run this command depend on the value of the ACL attribute of the policy set

  • You can only specify policy definitions using the -file parameter or by running the crsctl modify policy command

Example

To modify an existing policy set, run the following command as root or the Oracle Clusterware installation owner:

# crsctl modify policyset -file my_policy_set.def

crsctl modify serverpool

Use the crsctl modify serverpool command to modify an existing server pool.

Syntax

crsctl modify serverpool server_pool_name -attr "attr_name=attr_value
   [,attr_name=attr_value[, ...]]" [-policy policyName | -all_policies]
[-i] [-f]

Parameters

Table E-48 crsctl modify serverpool Command Parameters

Parameter    Description
server_pool_name

The name of the server pool you want to modify.

attr_name

The name of a server pool attribute you want to modify preceded by the -attr flag.

See Also: Table 3-1, "Server Pool Attributes" for details about server pool attributes

attr_value

A value for the server pool attribute.

Note: The attr_name and attr_value parameters must be enclosed in double quotation marks ("") and separated by commas. For example:

-attr "CHECK_INTERVAL=30,START_TIMEOUT=25"
-policy policyName |
-all_policies

Specify a particular policy or all policies for which you want to modify the server pool definition.

-i

If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately.

-f

If you specify the -f parameter, then Oracle Clusterware stops resources running on a server in another server pool and relocates that server into the server pool you are modifying.

If you do not specify the -f parameter, then Oracle Clusterware checks whether the modification of the server pool results in stopping any resources on a server in another server pool that is going to give up a server to the server pool you are modifying. If so, then Oracle Clusterware rejects the crsctl modify serverpool command.



See Also:

"How Server Pools Work" for more information about server pools and server pool attributes

Usage Notes

  • The server_pool_name parameter is required

  • If an attribute value for an attribute name-value pair contains commas, then the value must be enclosed in single quotation marks (''). For example:

    "START_DEPENDENCIES='hard(res1,res2,res3)'"
    
  • Running this command may result in Oracle Clusterware relocating other servers between server pools to comply with the new configuration

  • Do not use this command for any server pools with names that begin with ora because these server pools are Oracle server pools

  • While you can use this command in either environment, it is only useful in the Oracle RAC environment

Example

To modify an existing server pool, run the following command as root or the Oracle Clusterware installation owner:

# crsctl modify serverpool sp1 -attr "MAX_SIZE=7"

crsctl pin css

Use the crsctl pin css command to pin one or more specific nodes. Pinning a node fixes the association of a node name with a node number. If a node is not pinned, its node number can change if the lease expires while the node is down. The lease of a pinned node never expires.

Syntax

crsctl pin css -n node_name [ node_name [..]]

Usage Notes

  • You can specify a space-delimited list of servers

  • Any pre-12c release 1 (12.1) Oracle software must reside on a pinned server.

  • A node may be unpinned with crsctl unpin css.

  • Deleting a node with the crsctl delete node command implicitly unpins the node.

Example

To pin the node named node2:

# crsctl pin css -n node2

crsctl query crs administrator

Use the crsctl query crs administrator command to display the list of users with Oracle Clusterware administrative privileges.

Syntax

crsctl query crs administrator

Example

The crsctl query crs administrator command returns output similar to the following:

CRS Administrator List: scott

crsctl query crs activeversion

Use the crsctl query crs activeversion command to display the active version and the configured patch level of the Oracle Clusterware software running in the cluster. During a rolling upgrade, the active version is not advanced until the upgrade is finished across the cluster; until then, the cluster operates at the pre-upgrade version.

Similarly, during a rolling patch, the active patch level is not advanced until the patching is finished across the cluster; until then, the cluster operates at the pre-upgrade patch level.

Syntax

crsctl query crs activeversion [-f]

If you specify the -f parameter, then this command also prints the patch level for each configured node in the cluster.

Example

The crsctl query crs activeversion command returns output similar to the following:

$ crsctl query crs activeversion -f

Oracle Clusterware active version on the cluster is [12.1.0.0.2]. The cluster
upgrade state is [NORMAL]. The cluster active patch level is [456789126].
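When scripting against this output, the bracketed fields can be pulled out with standard text tools. The following is a minimal sketch, assuming the message format shown above; the sample text is embedded here directly rather than taken from a live cluster:

```shell
# Extract the bracketed active version and upgrade state from sample
# 'crsctl query crs activeversion' output (illustrative values).
output="Oracle Clusterware active version on the cluster is [12.1.0.0.2]. The cluster upgrade state is [NORMAL]."

# sed captures the contents of the relevant bracketed field.
active_version=$(printf '%s' "$output" | sed -n 's/.*active version on the cluster is \[\([^]]*\)\].*/\1/p')
upgrade_state=$(printf '%s' "$output" | sed -n 's/.*upgrade state is \[\([^]]*\)\].*/\1/p')

echo "version=$active_version state=$upgrade_state"
```

The same pattern applies to the patch-level field printed when you specify -f.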

crsctl query crs autostart

Use the crsctl query crs autostart command to obtain the values of the Oracle Clusterware automatic resource start criteria.

Syntax

crsctl query crs autostart

Example

The crsctl query crs autostart command returns output similar to the following:

'Autostart delay':       60
'Autostart servercount': 2

crsctl query crs releasepatch

Use the crsctl query crs releasepatch command to display the patch level, which is updated in the Grid home patch repository as a node is patched. The patch level reported applies only to the local node on which you run the command. You can run this command while the stack is not running.

Syntax

crsctl query crs releasepatch

Example

The crsctl query crs releasepatch command returns output similar to the following for a node which has no patches applied:

Oracle Clusterware release patch level is [3180840333] and the complete list of
patches is [13559647] on the local node.

crsctl query crs releaseversion

Use the crsctl query crs releaseversion command to display the version of the Oracle Clusterware software stored in the binaries on the local node.

Syntax

crsctl query crs releaseversion

Example

The crsctl query crs releaseversion command returns output similar to the following:

Oracle High Availability Services release version on the local node is [11.2.0.2.0]

crsctl query crs softwarepatch

Use the crsctl query crs softwarepatch command to display the configured patch level of the installed Oracle Clusterware.

Syntax

crsctl query crs softwarepatch [host_name]

If you specify a host name, then CRSCTL displays the patch level of Oracle Clusterware installed on that host. Otherwise, CRSCTL displays the patch level of Oracle Clusterware installed on the local host.

Example

The crsctl query crs softwarepatch command returns output similar to the following:

Oracle Clusterware patch level on node [node1] is [456789126]

crsctl query crs softwareversion

Use the crsctl query crs softwareversion command to display the latest version of the software that has been successfully started on the specified node.

Syntax

crsctl query crs softwareversion [node_name]

Usage Notes

  • If you do not provide a node name, then Oracle Clusterware displays the version of Oracle Clusterware running on the local server.

Example

The crsctl query crs softwareversion command returns output similar to the following:

Oracle Clusterware version on node [node1] is [11.2.0.2.0]
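Because the software version reported on a node can run ahead of the cluster's active version during a rolling upgrade, comparing the two values is one way to detect an upgrade still in progress. A sketch using sample version strings, assuming GNU sort with the -V version-sort option:

```shell
# Hypothetical check: sample values stand in for live crsctl output.
software_version="12.1.0.0.2"
active_version="11.2.0.2.0"

# sort -V orders dotted version strings numerically; the higher of the
# two versions sorts last.
higher=$(printf '%s\n%s\n' "$software_version" "$active_version" | sort -V | tail -n 1)

if [ "$software_version" != "$active_version" ] && [ "$higher" = "$software_version" ]; then
    echo "rolling upgrade may be in progress"
else
    echo "versions match"
fi
```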

crsctl query css ipmiconfig

Use the crsctl query css ipmiconfig command to determine whether Oracle Clusterware on the local server has been configured to use IPMI for failure isolation. Note that this command detects the presence of configuration data, but cannot validate its correctness.

Syntax

crsctl query css ipmiconfig

Usage Notes

  • This command attempts to locate and access the IPMI configuration stored in the Oracle Cluster Registry (OCR) and should be executed under the account used to install Oracle Clusterware, or an authorization failure may be reported.

  • An authorization failure may not occur when the command is executed under another account if the registry contains no IPMI configuration data.

Example

The crsctl query css ipmiconfig command returns output similar to the following:

CRS-4236: Oracle Clusterware configured to use IPMI

Or

CRS-4237: Oracle Clusterware is not fully configured to use IPMI

crsctl query css ipmidevice

Use the crsctl query css ipmidevice command to determine the presence of the Intelligent Platform Management Interface (IPMI) driver on the local system.

Syntax

crsctl query css ipmidevice

Usage Notes

  • This command performs a pre-check during IPMI installation, and is normally issued only by the installer.

  • This command performs a perfunctory check and a success return does not guarantee that the IPMI hardware is fully configured for use.

  • There are no special privileges required to run this command.

Example

The crsctl query css ipmidevice command returns output similar to the following:

CRS-4231: IPMI device and/or driver found

Or

CRS-4218: Unable to access an IPMI device on this system

crsctl query css votedisk

Use the crsctl query css votedisk command to display the voting files used by Cluster Synchronization Services, the status of the voting files, and the location of the disks, whether they are stored on Oracle ASM or elsewhere.

Syntax

crsctl query css votedisk

Example

The crsctl query css votedisk command returns output similar to the following:

$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
1. ONLINE   296641fd201f4f3fbf3452156d3b5881 (/ocfs2/host09_vd3) []
2. ONLINE   8c4a552bdd9a4fd9bf93e444223146f2 (/netapp/ocrvf/newvd) []
3. ONLINE   8afeee6ae3ed4fe6bfbb556996ca4da5 (/ocfs2/host09_vd1) []
Located 3 voting file(s).
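The tabular output lends itself to simple scripted checks, for example counting ONLINE voting files (clusters are normally configured with an odd number of voting files). A sketch against the sample output above:

```shell
# Count ONLINE voting files in sample 'crsctl query css votedisk'
# output; the paths shown are the illustrative ones from the example.
sample='1. ONLINE   296641fd201f4f3fbf3452156d3b5881 (/ocfs2/host09_vd3) []
2. ONLINE   8c4a552bdd9a4fd9bf93e444223146f2 (/netapp/ocrvf/newvd) []
3. ONLINE   8afeee6ae3ed4fe6bfbb556996ca4da5 (/ocfs2/host09_vd1) []'

# grep -c counts the lines whose state column is ONLINE.
online=$(printf '%s\n' "$sample" | grep -c ' ONLINE ')
echo "online voting files: $online"
```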

crsctl query dns

Use the crsctl query dns command to obtain a list of addresses returned by DNS lookup of the name with the specified DNS server.

Syntax

crsctl query dns {-servers | -name name [-dnsserver DNS_server_address]
[-port port] [-attempts number_of_attempts] [-timeout timeout_in_seconds] [-v]}

Parameters

Table E-49 crsctl query dns Command Parameters

Parameter    Description
-servers

Use the -servers parameter to list the current DNS configuration of the node on which you run the command. Typically, on Linux and UNIX, the resolver reads the /etc/resolv.conf file at startup, and that remains the system configuration until the system or the resolver is restarted; CRSCTL obtains its information from the resolver. You can use this parameter alone or, optionally, specify the other parameters in the command syntax string.

-name name

Specify the fully-qualified domain name you want to look up.

-dnsserver DNS_server_address

Specify the address of the DNS server on which you want the domain name to be looked up.

-port port

The port on which the DNS server listens. If you do not specify a port, then it defaults to port 53.

-attempts number_of_attempts

Specify the number of retry attempts.

-timeout timeout_in_seconds

Specify the timeout length in seconds.

-v

Verbose output.


Example

The crsctl query dns command returns output similar to the following for a DNS server named stmjk07-vip.stmjk0462.foo.com:

CRS-10024: DNS server returned 192.168.29.250 for name
stmjk07-vip.stmjk0462.foo.com

If you choose the -servers parameter, then the command returns output similar to the following:

CRS-10018: the following configuration was found on the system:
CRS-10019: There are 3 domains in search order. They are:
us.foo.com
foo.com
foocorp.com
CRS-10022: There are 3 name servers. They are:
192.168.249.41
192.168.249.52
192.168.202.15
CRS-10020: number of retry attempts for name lookup is: 2
CRS-10021: timeout for each name lookup is: 1

crsctl query socket udp

Use the crsctl query socket udp command to verify that a daemon can listen on a specified address and port.

Syntax

crsctl query socket udp [-address address] [-port port]

Parameters

Table E-50 crsctl query socket udp Command Parameters

Parameter    Description
-address address

Specify the IP address on which the socket is to be created. If you do not specify an address, then CRSCTL assumes the local host as the default.

-port port

Specify the port on which the socket is to be created. If you do not specify a port, then CRSCTL assumes 53 as the default.


Usage Notes

  • You must run this command as root to verify port numbers less than 1024.

Examples

The following examples show various outputs:

$ crsctl query socket udp
CRS-10030: could not verify if port 53 on local node is in use

# crsctl query socket udp
CRS-10026: successfully created socket on port 53 on local node

The first of the preceding two commands was not run as root, and in both commands no port was specified, so CRSCTL assumed the default, 53, which is less than 1024. This condition necessitates running the command as root.

$ crsctl query socket udp -port 1023
CRS-10030: could not verify if port 1023 on local node is in use

# crsctl query socket udp -port 1023
CRS-10026: successfully created socket on port 1023 on local node

Similar to the first two examples, the first of the preceding two commands was not run as root, and, although a port number was specified, it is still less than 1024, which requires root privileges to run the command.

In this last example, a port number greater than 1024 is specified, so there is no need to run the command as root:

$ crsctl query socket udp -port 1028
CRS-10026: successfully created socket on port 1028 on local node
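The root requirement for privileged ports can be checked before the command is even run. A minimal sketch, where needs_root is a made-up helper name mirroring the rule described above (ports below 1024 require root; the command defaults to port 53):

```shell
# Hypothetical helper: decide whether 'crsctl query socket udp' for a
# given port must be run as root. Ports below 1024 are privileged.
needs_root() {
    port=${1:-53}              # the command defaults to port 53
    [ "$port" -lt 1024 ]
}

if needs_root 1023; then
    echo "port 1023: run as root"
fi
if ! needs_root 1028; then
    echo "port 1028: no root needed"
fi
```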

crsctl release dhcp

Use the crsctl release dhcp command to send a DHCP lease release request to a specific client ID and send release packets on the network to a specific port.

Syntax

crsctl release dhcp -clientid clientid [-port port]

Parameters

Table E-51 crsctl release dhcp Command Parameters

Parameter    Description
-clientid clientid

Specify the client ID for which you want to release the lease. Obtain the client ID by running the crsctl get clientid command.

-port port

The port to which CRSCTL sends the release packets. If you do not specify a port, then CRSCTL uses the default value 67.


Example

The crsctl release dhcp command returns output similar to the following:

$ crsctl release dhcp -clientid  stmjk0462clr-stmjk01-vip

CRS-10012: released DHCP server lease for client ID stmjk0462clr-stmjk01-vip
on port 67

crsctl relocate resource

Use the crsctl relocate resource command to relocate resources to another server in the cluster.

Syntax

crsctl relocate resource {resource_name [-k cid] | {resource_name | -all}
-s source_server | -w "filter"} [-n destination_server] [-env "env1=val1,env2=val2,..."]
[-i] [-f]

Parameters

Table E-52 crsctl relocate resource Command Parameters

Parameter    Description
resource_name [-k cid]

The name of a resource you want to relocate.

Optionally, you can also specify the resource cardinality ID. If you specify this parameter, then Oracle Clusterware relocates the resource instances that have the cardinality you specify.

resource_name | -all
-s source_server

Specify one particular or all resources located on a particular server from which you want to relocate those resources.

-w "filter"

Specify a resource filter that Oracle Clusterware uses to limit the number of resources relocated. The filter must be enclosed in double quotation marks (""). Examples of resource filters include:

  • "TYPE == cluster_resource": This filter limits Oracle Clusterware to relocate only resources of cluster_resource type

  • "CHECK_INTERVAL > 10": This filter limits Oracle Clusterware to relocate resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute

  • "(CHECK_INTERVAL > 10) AND (NAME co 2)": This filter limits Oracle Clusterware to relocate resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute and the name of the resource contains the number 2

See Also: "Filters" for more information

-n destination_server

Specify the name of the server to which you want to relocate resources. If you do not specify a destination server, then Oracle Clusterware relocates the resources to the best server according to the attribute profile of each resource.

-env "env1=val1,
env2=val2,..."

You can optionally override one or more resource profile attribute values for this command. If you specify multiple environment name-value pairs, then you must separate each pair with a comma and enclose the entire list in double quotation marks ("").

-i

If you specify -i, then the command returns an error if processing this command requires waiting for Oracle Clusterware to unlock the resource or its dependents. Sometimes, Oracle Clusterware locks resources or other objects to prevent commands from interfering with each other.

-f

Specify the -f parameter to force the relocating of the resource when it has other resources running that depend on it. Dependent resources are relocated or stopped when you use this parameter.

Note: When you are relocating resources that have cardinality greater than 1, you must use either -k or -s to narrow down which resource instances are to be relocated.


Usage Notes

  • Use any one of the following three options to specify which resources you want to relocate:

    • Specify one particular resource to relocate.

    • Specify one particular resource, or all resources, to relocate from a particular source server.

    • Specify a resource filter that Oracle Clusterware uses to match the resources to relocate.

  • If a resource has a degree ID greater than 1, then Oracle Clusterware relocates all instances of the resource.

  • You must have read and execute permissions on the specified resources to relocate them

  • Do not use this command for any resources with names that begin with ora because these resources are Oracle resources.

Example

To relocate one particular resource from one server to another:

# crsctl relocate resource myResource1 -s node1 -n node3
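Filter expressions such as the ones shown in Table E-52 can also be composed programmatically before being passed to CRSCTL. A sketch, where make_filter is a hypothetical helper; the resulting command line is only printed here, not executed:

```shell
# Compose a compound resource filter of the form shown in Table E-52,
# joining each parenthesized condition with AND.
make_filter() {
    first=$1; shift
    filter="($first)"
    for cond in "$@"; do
        filter="$filter AND ($cond)"
    done
    printf '%s' "$filter"
}

filter=$(make_filter 'CHECK_INTERVAL > 10' 'NAME co 2')

# Print the crsctl invocation rather than running it.
echo "crsctl relocate resource -w \"$filter\" -n node3"
```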

crsctl relocate server

Use the crsctl relocate server command to relocate a server to a different server pool.

Syntax

crsctl relocate server server_name [...] -c server_pool_name [-i] [-f]

Parameters

Table E-53 crsctl relocate server Command Parameters

Parameter    Description
server_name

The name of the server you want to relocate. You can provide a space-delimited list of servers to relocate multiple servers.

-c server_pool_name

Specify the name of the server pool to which you want to relocate the servers.

-i

If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately.

-f

If you specify the -f parameter, then Oracle Clusterware stops resources running on the servers in another server pool and relocates those servers into the server pool you specified.

If you do not specify the -f parameter, then Oracle Clusterware checks for resources that must be stopped on the servers that are being relocated. If it finds any, then Oracle Clusterware rejects the crsctl relocate server command.

Note: If the number of servers in the server pool is not above the value of the MIN_SIZE server pool attribute, then the force parameter has no effect because CRSCTL will not violate the configuration.


Usage Notes

  • The server_name and -c server_pool_name parameters are required

Example

To move the node6 and node7 servers into the sp1 server pool without disrupting any active resources on those nodes, use the following command:

$ crsctl relocate server node6 node7 -c sp1

crsctl replace discoverystring

Use the crsctl replace discoverystring command to replace the existing discovery string used to locate voting files.

Syntax

crsctl replace discoverystring "absolute_path[,...]"

Parameters

Table E-54 crsctl replace discoverystring Command Parameters

Parameter    Description
absolute_path

Specify one or more comma-delimited absolute paths that match one or more voting file locations. Wildcards may be used.

The list of paths must be enclosed in double quotation marks ("").


Usage Notes

  • You must be root, the Oracle Clusterware installation owner, or a member of the Administrators group to run this command.

  • You can run this command on any node in the cluster.

  • If you store voting files in an Oracle ASM disk group, then you cannot change the discovery string.

Example

Assume the current discovery string is /oracle/css1/*. To also use voting files in the /oracle/css2/ directory, replace the current discovery string using the following command:

# crsctl replace discoverystring "/oracle/css1/*,/oracle/css2/*"

crsctl replace votedisk

Use the crsctl replace votedisk command to move or replace the existing voting files. This command creates voting files in the specified locations, either in Oracle ASM or some other storage option. Oracle Clusterware copies existing voting file information into the new locations and removes the voting files from the former locations.

Syntax

crsctl replace votedisk [+asm_disk_group | path_to_voting_disk [...]]

Parameters

Table E-55 crsctl replace votedisk Command Parameters

Parameter    Description

+asm_disk_group

Specify the disk group in Oracle ASM where you want to locate the voting file.

path_to_voting_disk [...]

A space-delimited list of voting file paths for voting files that reside outside of Oracle ASM.


Usage Notes

  • You must be root, the Oracle Clusterware installation owner, or a member of the Administrators group to run this command.

  • You can place the replacement voting files either in an Oracle ASM disk group or on some other storage device.

  • You can run this command on any node in the cluster.

Example

Example 1

To replace a voting file that is located within Oracle ASM:

$ crsctl replace votedisk +diskgroup1

Example 2

To replace a voting file that is located on a shared file system:

$ crsctl replace votedisk /mnt/nfs/disk1 /mnt/nfs/disk2

crsctl request action

Use the crsctl request action command to perform a specific action on a specific resource.

Syntax

crsctl request action action_name {-r resource_name [...] | -w "filter"} [-env "env1=val1,env2=val2,..."] [-i]

Parameters

Table E-56 crsctl request action Command Parameters

Parameter    Description
action_name

Specify the name of the action you want to perform. Actions supported by a particular resource are listed in the ACTIONS resource attribute of that resource.

-r resource_name [...]

Specify a particular resource. Multiple resource names must be separated by a space.

-w "filter"

As an alternative to specifying resource names, you can specify a resource filter that Oracle Clusterware uses to limit the number of resources on which actions are performed. Examples of resource filters include:

  • TYPE == cluster_resource: This filter limits Oracle Clusterware to perform actions on only resources of cluster_resource type

  • CHECK_INTERVAL > 10: This filter limits Oracle Clusterware to perform actions on only resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute

  • (CHECK_INTERVAL > 10) AND (NAME co 2): This filter limits Oracle Clusterware to perform actions on only resources that have a value greater than 10 for the CHECK_INTERVAL resource attribute and the name of the resource contains the number 2

See Also: "Filters" for more information

-env "env1=val1,
env2=val2,..."

You can optionally override one or more resource profile attribute values with the -env command parameter. If you specify multiple environment name-value pairs, then you must separate each pair with a comma and enclose the entire list in double quotation marks ("").

-i

If you specify -i, then the command fails if Oracle Clusterware cannot process the request immediately.


Example

To initiate an action named action1, listed in the ACTIONS resource attribute of the resource res1:

$ crsctl request action action1 -r res1

crsctl request dhcp

Use the crsctl request dhcp command to send DHCP request packets on the network at the specified port. If the DHCP server has an IP address it can provide, then it responds with the IP address for the client ID.

Syntax

crsctl request dhcp -clientid clientid [-port port]

Parameters

Table E-57 crsctl request dhcp Command Parameters

Parameter    Description
-clientid clientid

Specify the client ID for which you want to request a lease. Obtain the client ID by running the crsctl get clientid command.

-port port

The port to which CRSCTL sends the request packets. If you do not specify a port, then CRSCTL uses the default value 67.


Example

The crsctl request dhcp command returns output similar to the following:

$ crsctl request dhcp -clientid stmjk0462clr-stmjk01-vip

CRS-10009: DHCP server returned server: 192.168.53.232,
 loan address : 192.168.29.228/255.255.252.0, lease time: 43200
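The lease time reported in the CRS-10009 message can be extracted for monitoring. A minimal sketch against the sample message above (the addresses are the illustrative values from the example):

```shell
# Extract the lease time (in seconds) from a sample CRS-10009 message.
msg='CRS-10009: DHCP server returned server: 192.168.53.232, loan address : 192.168.29.228/255.255.252.0, lease time: 43200'

# sed captures the run of digits after 'lease time:'.
lease=$(printf '%s' "$msg" | sed -n 's/.*lease time: \([0-9]*\).*/\1/p')
echo "lease expires in $((lease / 3600)) hours"
```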

crsctl set cluster hubsize

Use the crsctl set cluster hubsize command to set the maximum number of Hub Nodes for an Oracle Flex Cluster.

Syntax

crsctl set cluster hubsize number_of_nodes

Example

The following command example sets the maximum number of Hub Nodes to 32:

$ crsctl set cluster hubsize 32

crsctl set cluster mode

Use the crsctl set cluster mode command to change a cluster to an Oracle Clusterware standard cluster or an Oracle Flex Cluster.

Syntax

crsctl set cluster mode [standard | flex]

Usage Notes

  • Choose either standard or flex depending on how you want to configure the cluster.

crsctl set cpu equivalency

Use the crsctl set cpu equivalency command to set a value for the CPU_EQUIVALENCY server configuration attribute.

Syntax

crsctl set cpu equivalency value

crsctl set crs autostart

Use the crsctl set crs autostart command to set the Oracle Clusterware automatic resource start criteria. The autostart delay and minimum server count criteria delay Oracle Clusterware resource autostart until one of the two conditions is met.

Syntax

crsctl set crs autostart [delay delay_time] [servercount count]

Parameters

Table E-58 crsctl set crs autostart Command Parameters

Parameter    Description
delay delay_time

Specify the number of seconds to delay Oracle Clusterware autostart.

servercount count

Specify the minimum number of servers required for Oracle Clusterware autostart.


Example

To ensure that Oracle Clusterware delays resource autostart for 60 seconds after the first server in the cluster is ONLINE:

crsctl set crs autostart delay 60

To ensure that Oracle Clusterware waits for there to be at least two servers ONLINE before it initiates resource autostart:

crsctl set crs autostart servercount 2

To ensure that Oracle Clusterware delays resource autostart until either of the previous two conditions is met (in no particular order):

crsctl set crs autostart delay 60 servercount 2

crsctl set css

Use the crsctl set css command to set the value of a Cluster Synchronization Services parameter.

Syntax

crsctl set css parameter value

Usage Notes

  • Do not use the crsctl set css command to set the following parameters unless instructed to do so by My Oracle Support.

  • Cluster Synchronization Services parameters include:

    diagwait
    disktimeout
    logfilesize
    misscount
    priority
    reboottime
    

crsctl set css ipmiaddr

Use the crsctl set css ipmiaddr command to store the address of the local Intelligent Platform Management Interface (IPMI) device in the Oracle Local Registry.

Syntax

crsctl set css ipmiaddr ip_address

Usage Notes

  • Run the command under the user account used to install Oracle Clusterware

  • Obtain the IP address used by the IPMI device using either ipmiutil or ipmitool as root on the local server

  • Oracle Clusterware stores the IP address for IPMI in the configuration store, and distributes the address as required

  • This command only stores the IPMI IP address on the server from which you run it

  • This command fails if another server cannot access IPMI at the supplied address

Example

To store the IPMI IP address on a local server and distribute it to other cluster nodes:

$ crsctl set css ipmiaddr 192.0.2.244

crsctl set css ipmiadmin

Use the crsctl set css ipmiadmin command to store the login credentials of an Intelligent Platform Management Interface (IPMI) administrator in the Oracle Local Registry.

Syntax

crsctl set css ipmiadmin ipmi_administrator_name

Usage Notes

  • This command must be run under the user account that installed Oracle Clusterware.

  • When prompted, provide the new password to associate with the new administrator account name. Oracle Clusterware stores the name and password for the local IPMI in the configuration store, and distributes the new credentials as required.

  • This command only modifies the IPMI administrator on the server from which you run it.

  • This command fails if another server cannot access the local IPMI at the supplied address.

Example

To modify the IPMI administrator scott:

$ crsctl set css ipmiadmin scott

crsctl set css leafmisscount

Use the crsctl set css leafmisscount command to specify the amount of time, in seconds, that must pass without any communication between a Leaf Node and the Hub Node to which it is attached before the connection is declared inactive and the Leaf Node is removed from the cluster.

Syntax

crsctl set css leafmisscount number_of_seconds

Usage Notes

  • You must run this command as root or the Oracle Clusterware installation owner

  • You can only run this command on a Hub Node

Example

To configure a 30-second interval between communication failure and removal of the Leaf Node from the cluster:

$ crsctl set css leafmisscount 30

crsctl set node role

Use the crsctl set node role command to set the role of a specific node in the cluster.

Syntax

crsctl set node role [-node node_name] {hub | leaf}

Usage Notes

  • You can specify a particular node for which to set role information. If you do not specify a particular node, then CRSCTL sets the node role on the local node.

  • Specify the hub option to configure the node role as a Hub Node.

  • Specify the leaf option to configure the node role as a Leaf Node.

  • You must restart the Oracle Clusterware technology stack to apply a node role change.

Example

To configure a node as a Hub Node:

$ crsctl set node role -node node151 hub

crsctl set resource use

Use the crsctl set resource use command to set the value of the RESOURCE_USE_ENABLED server configuration parameter for the server on which you run this command.

Syntax

crsctl set resource use [1 | 0]

Usage Notes

  • The possible values are 1 or 0. If you set the value for this attribute to 1, which is the default, then the server can be used for resource placement. If you set the value to 0, then Oracle Clusterware disallows starting server pool resources on the server. The server remains in the Free server pool.

  • You must run this command as root or a cluster administrator, or an administrator on Windows systems.

Example

To set the value of the RESOURCE_USE_ENABLED server configuration parameter:

# crsctl set resource use 1

crsctl set server label

Use the crsctl set server label command to set the configuration value of the SERVER_LABEL server configuration attribute for the server on which you run this command.

Syntax

crsctl set server label value

Usage Notes

  • Specify a value for the server. This value can reflect a physical location, such as building_A, or some other identifying characteristic of the server, such as hubserver.

  • You must restart the Oracle Clusterware technology stack on the node before any changes you make take effect.

Example

The crsctl set server label command returns output similar to the following:

$ crsctl set server label hubserver

crsctl setperm serverpool

Use the crsctl setperm serverpool command to set permissions for a particular server pool.

Syntax

crsctl setperm serverpool server_pool_name {-u acl_string | -x acl_string |
-o user_name | -g group_name}

Parameters

Table E-59 crsctl setperm serverpool Command Parameters

Parameter | Description
server_pool_name

Specify the name of the server pool for which you want to set permissions.

{-u | -x | -o | -g}

You can specify only one of the following parameters for a server pool:

  • -u acl_string: You can update the access control list (ACL) for a server pool

  • -x acl_string: You can delete the ACL for a server pool

  • -o user_name: You can change the owner of a server pool by entering a user name

  • -g group_name: You can change the primary group of a server pool by entering a group name

Specify a user, group, or other ACL string, as follows, where readPerm, writePerm, and execPerm are single permission characters specified together with no separators (for example, rwx or r--):

user:user_name[:readPermwritePermexecPerm] |
group:group_name[:readPermwritePermexecPerm] |
other[::readPermwritePermexecPerm]
  • user: User ACL

  • group: Group ACL

  • other: Other ACL

  • readPerm: Read permission for the server pool; the letter r grants a user, group, or other read permission, the minus sign (-) denies read permission

  • writePerm: Write permission for the server pool; the letter w grants a user, group, or other write permission, the minus sign (-) denies write permission

  • execPerm: Execute permission for the server pool; the letter x grants a user, group, or other execute permission, the minus sign (-) denies execute permission


Usage Notes

  • The server_pool_name parameter is required

  • Do not use this command for any server pools with names that begin with ora because these server pools are Oracle server pools

  • While you can use this command in both Oracle RAC and Oracle Restart environments, it is only useful in the Oracle RAC environment

Example

To grant read, write, and execute permissions on a server pool for user Jane Doe:

crsctl setperm serverpool sp3 -u user:jane.doe:rwx
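When scripting permission changes, it can help to validate the ACL string before handing it to CRSCTL. The following bash sketch builds a user ACL string in the format described above; make_user_acl is a hypothetical helper, not part of CRSCTL:

```shell
#!/bin/bash
# Build and sanity-check a server pool ACL string of the form
# user:user_name:perms before passing it to crsctl setperm serverpool.
# make_user_acl is a hypothetical helper, not part of CRSCTL.
make_user_acl() {
  local user="$1" perms="$2"
  # Each position is r/w/x to grant, or "-" to deny, with no separators.
  if [[ ! "$perms" =~ ^[r-][w-][x-]$ ]]; then
    echo "invalid permission string: $perms" >&2
    return 1
  fi
  echo "user:${user}:${perms}"
}

acl=$(make_user_acl jane.doe rwx)
# Prints: crsctl setperm serverpool sp3 -u user:jane.doe:rwx
echo "crsctl setperm serverpool sp3 -u $acl"
```

A malformed permission string such as rwz is rejected before CRSCTL ever sees it.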

crsctl start cluster

Use the crsctl start cluster command on any node in the cluster to start the Oracle Clusterware stack.

Syntax

crsctl start cluster [-all | -n server_name [...]]

Usage Notes

  • You can start the Oracle Clusterware stack on all servers in the cluster by specifying -all, on one or more named servers in the cluster by specifying -n with space-delimited server names, or on the local server by specifying neither -all nor -n.

  • You can use this cluster-aware command on any node in the cluster.

Example

To start the Oracle Clusterware stack on two named servers, run the following command as root:

# crsctl start cluster -n node1 node2

crsctl start crs

Use the crsctl start crs command to start Oracle High Availability Services on the local server.

Syntax

crsctl start crs [-excl [-nocrs] [-cssonly]] | [-wait | -waithas | -nowait] | [-noautostart]

Parameters

Table E-60 crsctl start crs Command Parameters

Parameter | Description
-excl

Starts Oracle Clusterware in exclusive mode with two options:

  • Specify the -nocrs parameter to start Oracle Clusterware in exclusive mode without starting CRSD.

  • Specify the -cssonly parameter to start CSSD, only.

-wait | -waithas | -nowait

Choose one of the following:

  • Specify -wait to wait until startup is complete and display all progress and status messages.

  • Specify -waithas to wait until startup is complete and display OHASD progress and status messages.

  • Specify -nowait to return without waiting for OHASD to start.

-noautostart

Start only OHASD.


Usage Notes

  • You must run this command as root

  • This command starts Oracle High Availability Services only on the local server

Example

To start Oracle High Availability Services on the local server, run the following command as root:

# crsctl start crs

crsctl start ip

Use the crsctl start ip command to start a given IP name or IP address on a specified interface with a specified subnet mask. Run this command on the server on which you want to start the IP.

Syntax

crsctl start ip -A {IP_name | IP_address}/netmask/interface_name

Parameters

Table E-61 crsctl start ip Command Parameters

Parameter | Description
{IP_name | IP_address}

Specify either a domain name or an IP address.

If you do not specify a fully-qualified domain name, then CRSCTL uses a standard name search.

netmask

Specify a subnet mask for the IP to start.

interface_name

Specify an interface on which to start the IP.


Example

To start an IP on the local server, run the command similar to the following:

$ crsctl start ip -A 192.168.29.220/255.255.252.0/eth0
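Because the address, netmask, and interface are passed as a single slash-separated argument, a malformed spec is easy to catch in a wrapper script before invoking crsctl start ip. The following bash sketch performs a deliberately loose format check; valid_ip_spec is a hypothetical helper, not part of CRSCTL:

```shell
#!/bin/bash
# Loose sanity check of an address/netmask/interface spec such as
# 192.168.29.220/255.255.252.0/eth0 before running crsctl start ip.
# valid_ip_spec is a hypothetical helper, not part of CRSCTL.
valid_ip_spec() {
  local spec="$1"
  local octet='[0-9]{1,3}'
  local ip="$octet\.$octet\.$octet\.$octet"
  [[ "$spec" =~ ^$ip/$ip/[A-Za-z0-9]+$ ]]
}

if valid_ip_spec "192.168.29.220/255.255.252.0/eth0"; then
  echo "spec looks well formed"
fi
```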

crsctl start rollingpatch

The crsctl start rollingpatch command transitions Oracle Clusterware and Oracle ASM into rolling patch mode. In this mode, the software tolerates nodes having different patch levels.

Syntax

crsctl start rollingpatch

Usage Notes

  • This command queries the Oracle Clusterware rolling patch state and the Oracle ASM cluster state. If either is not in rolling patch mode, then the command uses the appropriate method to transition Oracle Clusterware or Oracle ASM to rolling patch mode.

  • If Oracle Clusterware and Oracle ASM are both in rolling patch mode when you run this command, then this command does nothing.

  • The rolling patch mode is not persistent. If all the nodes in a cluster are shut down and restarted, then the cluster transitions out of rolling patch mode when it is restarted. Similarly, if Oracle Clusterware is stopped and then restarted on all nodes in the cluster, then the rolling patch mode is lost.

  • This command does not transition Oracle ASM to rolling patch mode if issued within an Oracle ASM Client Cluster.

crsctl start rollingupgrade

The crsctl start rollingupgrade command transitions Oracle Clusterware and Oracle ASM into rolling upgrade mode.

Syntax

crsctl start rollingupgrade version

Usage Notes

  • This command queries the Oracle Clusterware rolling upgrade state and the Oracle ASM cluster state. If either is not in rolling upgrade mode, then the command uses the appropriate method to transition Oracle Clusterware or Oracle ASM to rolling upgrade mode.

  • If Oracle Clusterware and Oracle ASM are both in rolling upgrade mode when you run this command, then this command does nothing.

  • The rolling upgrade mode is not persistent. If all the nodes in a cluster are shut down and restarted, then the cluster transitions out of rolling upgrade mode when it is restarted. Similarly, if Oracle Clusterware is stopped and then restarted on all nodes in the cluster, then the rolling upgrade mode is lost.

  • This command does not transition Oracle ASM to rolling upgrade mode if run within an Oracle ASM Client Cluster.

crsctl start testdns

Use the crsctl start testdns command to start a test DNS server that listens on a specified IP address and port. The test DNS server does not respond to incoming packets but displays the packets it receives. Typically, you use this command to check whether domain forwarding is set up correctly for the GNS domain.

Syntax

crsctl start testdns [-address address [-port port]] [-once] [-v]

Parameters

Table E-62 crsctl start testdns Command Parameters

Parameter | Description
-address address

Specify a server address in the form IP_address/netmask [/interface_name].

-port port

The port on which the server listens. If you do not specify a port, then it defaults to port 53.

-once

Specify this flag to indicate that the DNS server should exit after it receives one DNS query packet.

-v

Verbose output.


Example

To start a test DNS server on the local server, run the command similar to the following:

$ crsctl start testdns -address 192.168.29.218 -port 63 -v

crsctl status category

Use the crsctl status category command to obtain information about a server category.

Syntax

crsctl status category {category_name [category_name [...]] | [-w "filter" |
    -server server_name]}

Parameters

Table E-63 crsctl status category Command Parameters

Parameter | Description
category_name

Specify the name of the server category or a space-delimited list of server categories for which you want to obtain the status.

-w "filter"

Alternatively, you can specify a category filter preceded by the -w flag.

See Also: "Filters" for more information

-server server_name

Alternatively, you can specify a particular server to list all of the categories that the server matches.


Examples

To obtain the status of a server category using filters:

$ crsctl stat category -w "ACTIVE_CSS_ROLE = hub"

NAME=my_category_i
ACL=owner:mjkeenan:rwx,pgrp:svrtech:rwx,other::r--
ACTIVE_CSS_ROLE = hub
EXPRESSION=(CPU_COUNT > 3)

To obtain the status of a server category by server:

$ crsctl stat category -server node1

NAME=my_category
ACL=owner:mjkeenan:rwx,pgrp:svrtech:rwx,other::r--
ACTIVE_CSS_ROLE = hub
EXPRESSION=
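Because the command prints its results as flat ATTRIBUTE=value lines, saved output is easy to post-process with standard shell tools. The following bash sketch extracts one attribute from a sample that mirrors the output above; get_attr is a hypothetical helper, not part of CRSCTL:

```shell
#!/bin/bash
# Pull a single attribute out of saved "crsctl stat category" output,
# which is a flat list of ATTRIBUTE=value lines.
# get_attr is a hypothetical helper, not part of CRSCTL.
get_attr() {
  sed -n "s/^$1=//p"
}

output='NAME=my_category
ACL=owner:mjkeenan:rwx,pgrp:svrtech:rwx,other::r--
EXPRESSION=(CPU_COUNT > 3)'

printf '%s\n' "$output" | get_attr EXPRESSION   # (CPU_COUNT > 3)
```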

crsctl status ip

Use the crsctl status ip command to check if a given IP address is up on the network.

Syntax

crsctl status ip -A {IP_name | IP_address}

Parameters

Table E-64 crsctl status ip Command Parameters

Parameter | Description
{IP_name | IP_address}

Specify either a domain name or an IP address.

If you do not specify a fully-qualified domain name, then CRSCTL uses a standard name search.


Example

The crsctl status ip command returns output similar to the following:

CRS-10003: IP address 192.168.29.220 could be reached from current node

crsctl status policy

Use the crsctl status policy command to view the status and definition of a configuration policy.

Syntax

crsctl status policy [policy_name [policy_name [...]] | -w "filter" | -active]

Parameters

Table E-65 crsctl status policy Command Parameters

Parameter | Description
policy_name

Specify the name of the policy or a space-delimited list of policy names for which you want to view the status.

-w "filter"

Alternatively, you can specify a policy filter preceded by the -w flag.

See Also: "Filters" for more information

-active

Alternatively, you can specify this parameter to display the status of the active policy.


Usage Notes

  • Privileges necessary to run this command depend on the value of the ACL attribute of the policy set

crsctl status policyset

Use the crsctl status policyset command to view the current policies in the policy set, including the access control list that governs who can modify the set, the last activated policy, and the configuration now in effect, which is known as the Current policy.

Syntax

crsctl status policyset [-file file_name]

Parameters

Table E-66 crsctl status policyset Command Parameters

Parameter | Description
-file file_name

You can specify this parameter to create a file that you can edit and then send back using crsctl modify policyset to add, delete, or update multiple policies.

If you do not specify this optional parameter, then CRSCTL displays the Current configuration.


Usage Notes

  • Privileges necessary to run this command depend on the value of the ACL attribute of the policy set

Example

This command returns output similar to the following:

ACL=owner:'mjkeenan:rwx,pgrp:g900:rwx,other::r--'
LAST_ACTIVATED_POLICY=DayTime
SERVER_POOL_NAMES=Free pool1 pool2 pool3
POLICY
NAME=DayTime
DESCRIPTION=Test policy
SERVERPOOL
  NAME=pool1
  IMPORTANCE=0
  MAX_SIZE=2
  MIN_SIZE=2
  SERVER_CATEGORY=
  SERVER_NAMES=
SERVERPOOL
  NAME=pool2
  IMPORTANCE=0
  MAX_SIZE=1
  MIN_SIZE=1
  SERVER_CATEGORY=
SERVERPOOL
  NAME=pool3
  IMPORTANCE=0
  MAX_SIZE=1
  MIN_SIZE=1
  SERVER_CATEGORY=
POLICY
NAME=NightTime
DESCRIPTION=Test policy
SERVERPOOL
  NAME=pool1
  IMPORTANCE=0
  MAX_SIZE=1
  MIN_SIZE=1
  SERVER_CATEGORY=
SERVERPOOL
  NAME=pool2
  IMPORTANCE=0
  MAX_SIZE=2
  MIN_SIZE=2
  SERVER_CATEGORY=
SERVERPOOL
  NAME=pool3
  IMPORTANCE=0
  MAX_SIZE=1
  MIN_SIZE=1
  SERVER_CATEGORY=
POLICY
NAME=Weekend
DESCRIPTION=Test policy
SERVERPOOL
  NAME=pool1
  IMPORTANCE=0
  MAX_SIZE=0
  MIN_SIZE=0
  SERVER_CATEGORY=
SERVERPOOL
  NAME=pool2
  IMPORTANCE=0
  MAX_SIZE=1
  MIN_SIZE=1
  SERVER_CATEGORY=
SERVERPOOL
  NAME=pool3
  IMPORTANCE=0
  MAX_SIZE=3
  MIN_SIZE=3
  SERVER_CATEGORY=
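In this listing, policy-level NAME= lines are unindented while server pool attributes are indented, which makes the output straightforward to summarize with awk. The following bash sketch reports one pool's MAX_SIZE per policy from a trimmed copy of the output above; pool_size_by_policy is a hypothetical helper, not part of CRSCTL:

```shell
#!/bin/bash
# Summarize a saved "crsctl status policyset" listing: policy-level
# NAME= lines are unindented, server pool attributes are indented,
# so awk can report one pool's MAX_SIZE per policy.
# pool_size_by_policy is a hypothetical helper, not part of CRSCTL.
pool_size_by_policy() {
  awk -F= -v pool="$1" '
    /^NAME=/       { policy = $2 }          # policy name
    /^  NAME=/     { current = $2 }         # server pool name
    /^  MAX_SIZE=/ { if (current == pool) print policy, $2 }'
}

policyset='POLICY
NAME=DayTime
SERVERPOOL
  NAME=pool1
  MAX_SIZE=2
POLICY
NAME=NightTime
SERVERPOOL
  NAME=pool1
  MAX_SIZE=1'

printf '%s\n' "$policyset" | pool_size_by_policy pool1
```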

crsctl status server

Use the crsctl status server command to obtain the status and configuration information of one or more particular servers.

Syntax

crsctl status server {server_name [...] | -w "filter"} [-g | -p | -v | -f] |
    [-category category_name | -w "filter"]

Parameters

Table E-67 crsctl status server Command Parameters

Parameter | Description
server_name [...]

Specify one or more space-delimited server names.

-w "filter"

Specify a filter to determine which servers are displayed. The filter must be enclosed in double quotation marks (""). Values that contain parentheses or spaces must be enclosed in single quotation marks (''). For example, "STATE = ONLINE" limits the display to servers that are online.

See Also: "Filters" for more information

-g | -p | -v | -f

You can specify one of the following parameters when Oracle Clusterware checks the status of specific servers:

  • -g: Use this parameter to check if the specified servers are registered

  • -p: Use this parameter to display static configuration of the specified servers

  • -v: Use this parameter to display the run-time configuration of the specified servers

  • -f: Use this parameter to display the full configuration of the specified servers

-category category_name

You can specify a particular category of servers for which to obtain status.

-w "filter"

Specify a filter to determine which categories are displayed. The filter must be enclosed in double quotation marks (""). Values that contain parentheses or spaces must be enclosed in single quotation marks (''). For example, "STATE = ONLINE" limits the display to servers that are online.

See Also: "Filters" for more information


Example

Example 1

The crsctl status server command returns output similar to the following:

NAME=node1
STATE=ONLINE

NAME=node2
STATE=ONLINE

Example 2

The full configuration of a specific server is similar to the following:

NAME=node2
MEMORY_SIZE=72626
CPU_COUNT=12
CPU_CLOCK_RATE=1711
CPU_HYPERTHREADING=0 
CPU_EQUIVALENCY=1000
DEPLOYMENT=other
CONFIGURED_CSS_ROLE=hub
RESOURCE_USE_ENABLED=1
SERVER_LABEL=
PHYSICAL_HOSTNAME=
STATE=ONLINE
ACTIVE_POOLS=ora.pool1
STATE_DETAILS=
ACTIVE_CSS_ROLE=hub
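Because each server is reported as a NAME=/STATE= pair, saved output from the basic command can be condensed into one line per server. The following bash sketch operates on a sample that mirrors Example 1; summarize_servers is a hypothetical helper, not part of CRSCTL:

```shell
#!/bin/bash
# Condense "crsctl status server" output (NAME=/STATE= pairs separated
# by blank lines) into one "name state" line per server.
# summarize_servers is a hypothetical helper, not part of CRSCTL.
summarize_servers() {
  awk -F= '/^NAME=/  { name = $2 }
           /^STATE=/ { print name, $2 }'
}

status='NAME=node1
STATE=ONLINE

NAME=node2
STATE=ONLINE'

printf '%s\n' "$status" | summarize_servers
```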

crsctl status serverpool

Use the crsctl status serverpool command to obtain the status and configuration information of one or more particular server pools.

Syntax

crsctl status serverpool [server_pool_name [...] | -w "filter"] [-p | -v | -f]

crsctl status serverpool [server_pool_name [...]] -g

Parameters

Table E-68 crsctl status serverpool Command Parameters

Parameter | Description
[server_pool_name [...]] -g

Specify one or more space-delimited server pool names to identify specific server pools.

-g: Use this parameter to check if the specified server pools are registered

Note: You cannot use the -g parameter with any other parameters after you specify the server pool names.

[-w "filter"]

Use this parameter to specify a filter, such as MIN_SIZE > 3, surrounded by double quotation marks (""). Use this parameter to identify server pools by a particular characteristic.

See Also: "Filters" for more information

[-p | -v | -f]

You can optionally specify one of the following parameters:

  • -p: Use this parameter to display static configuration of the specified server pools

  • -v: Use this parameter to display the run-time configuration of the specified server pools

  • -f: Use this parameter to display the full configuration of the specified server pools


Usage Notes

  • The server_pool_name parameter or a filter is required

  • Do not use this command for any server pools with names that begin with ora because these server pools are Oracle server pools

  • While you can use this command in both Oracle RAC and Oracle Restart environments, it is only useful in the Oracle RAC environment

Examples

Example 1

To display the full configuration of the server pool sp1:

$ crsctl status serverpool sp1 -f
NAME=sp1
IMPORTANCE=1
MIN_SIZE=0
MAX_SIZE=-1
SERVER_NAMES=node3 node4 node5
PARENT_POOLS=Generic
EXCLUSIVE_POOLS=
ACL=owner:oracle:rwx,pgrp:oinstall:rwx,other::r--
SERVER_CATEGORY=ora.hub.category
ACTIVE_SERVERS=node3 node4

Example 2

To display all the server pools and the servers associated with them, use the following command:

$ crsctl status serverpool
NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=node1 node2

NAME=ora.orcl
ACTIVE_SERVERS=node1 node2

NAME=sp1
ACTIVE_SERVERS=node3 node4

Example 3

To find a server pool that meets certain criteria, use the following command:

$ crsctl status serverpool -w "MAX_SIZE > 1"
NAME=sp2
ACTIVE_SERVERS=node3 node4
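Output from this command groups each pool's NAME= line with its ACTIVE_SERVERS= line, so a short awk script can, for example, list only the pools that currently have servers assigned. The following bash sketch uses a sample trimmed from Example 2; pools_in_use is a hypothetical helper, not part of CRSCTL:

```shell
#!/bin/bash
# From saved "crsctl status serverpool" output, list only the pools
# that currently have servers assigned (non-empty ACTIVE_SERVERS).
# pools_in_use is a hypothetical helper, not part of CRSCTL.
pools_in_use() {
  awk -F= '/^NAME=/           { pool = $2 }
           /^ACTIVE_SERVERS=/ { if ($2 != "") print pool }'
}

pools='NAME=Free
ACTIVE_SERVERS=

NAME=Generic
ACTIVE_SERVERS=node1 node2

NAME=sp1
ACTIVE_SERVERS=node3 node4'

printf '%s\n' "$pools" | pools_in_use
```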

crsctl status testdns

Use the crsctl status testdns command to query the test DNS server running on a specified address and local host name.

Syntax

crsctl status testdns [-server DNS_server_address] [-port port] [-v]

Parameters

Table E-69 crsctl status testdns Command Parameters

Parameter | Description
-server DNS_server_address

Specify the DNS server address for which you want to check the status.

-port port

The port on which the DNS server listens. If you do not specify a port, then it defaults to port 53.

-v

Verbose output.


Example

The crsctl status testdns command returns output similar to the following:

CRS-10024: DNS server returned 192.168.28.74 for name
stmjk07-vip.stmjk0462.foo.com

crsctl stop cluster

Use the crsctl stop cluster command on any node in the cluster to stop the Oracle Clusterware stack on all servers in the cluster or specific servers.

Syntax

crsctl stop cluster [-all | -n server_name [...]] [-f]

Usage Notes

  • If you do not specify -all or one or more space-delimited server names, then Oracle Clusterware stops the Oracle Clusterware stack on the local server.

  • You can use this cluster-aware command on any node in the cluster.

  • This command attempts to gracefully stop resources managed by Oracle Clusterware while attempting to stop the Oracle Clusterware stack.

    If any resources that Oracle Clusterware manages are still running after you run the crsctl stop cluster command, then the command fails. Use the -f parameter to unconditionally stop all resources and stop the Oracle Clusterware stack.

  • If you intend to stop Oracle Clusterware on all or a list of nodes, then use the crsctl stop cluster command, because it prevents certain resources from being relocated to other servers in the cluster before the Oracle Clusterware stack is stopped on a particular server. If you must stop the Oracle High Availability Services on one or more nodes, then wait until the crsctl stop cluster command completes and then run the crsctl stop crs command on any particular nodes, as necessary.

Example

To stop the Oracle Clusterware stack on a particular server:

# crsctl stop cluster -n node1

crsctl stop crs

Use the crsctl stop crs command to stop Oracle High Availability Services on the local server.

Syntax

crsctl stop crs [-f]

Usage Notes

  • You must run this command as root.

  • This command attempts to gracefully stop resources managed by Oracle Clusterware while attempting to stop Oracle High Availability Services on the local server.

    If any resources that Oracle Clusterware manages are still running after you run the crsctl stop crs command, then the command fails. Use the -f parameter to unconditionally stop all resources and stop Oracle High Availability Services on the local server.

  • If you intend to stop Oracle Clusterware on all or a list of nodes, then use the crsctl stop cluster command, because it prevents certain resources from being relocated to other servers in the cluster before the Oracle Clusterware stack is stopped on a particular server. If you must stop the Oracle High Availability Services on one or more nodes, then wait until the crsctl stop cluster command completes and then run the crsctl stop crs command on any particular nodes, as necessary.

  • Before attempting to shut down the Oracle Clusterware technology stack on all nodes with an Oracle ASM instance running in parallel in an Oracle Clusterware standard Cluster with Oracle Flex ASM enabled, you must first relocate at least one Oracle ASM instance to another node where Oracle ASM is not running.

  • In Oracle Clusterware 11g release 2 (11.2.0.3), when you run this command in Solaris Sparc and Solaris X64 environments, drivers remain loaded on shutdown and subsequent startup. This does not happen in Linux environments.

Example

To stop Oracle High Availability Services on the local server:

# crsctl stop crs

crsctl stop ip

Use the crsctl stop ip command to stop a given IP name or IP address on a specified interface. Run this command on the server on which you want to stop the IP.

Syntax

crsctl stop ip -A {IP_name | IP_address}/interface_name

Parameters

Table E-70 crsctl stop ip Command Parameters

Parameter | Description
{IP_name | IP_address}

Specify either a domain name or an IP address.

If you do not specify a fully-qualified domain name, then CRSCTL uses a standard name search.

interface_name

Specify an interface on which to stop the IP.


Example

To stop an IP on the local server, run the command similar to the following:

$ crsctl stop ip -A MyIP.domain.com/eth0

crsctl stop rollingpatch

The crsctl stop rollingpatch command transitions Oracle Clusterware and Oracle ASM out of rolling patch mode. Once transitioned out of rolling patch mode, the software does not tolerate nodes having different patch levels.

Syntax

crsctl stop rollingpatch

Usage Notes

  • This command queries the Oracle Clusterware rolling patch state and the Oracle ASM cluster state. If either is in rolling patch mode, then the command uses the appropriate method to transition Oracle Clusterware or Oracle ASM out of rolling patch mode.

  • This command verifies that all the nodes in the cluster have a consistent patch level, and returns an error otherwise.

  • If neither Oracle Clusterware nor Oracle ASM is in rolling patch mode when you issue this command, then this command does nothing.

  • This command does not transition Oracle ASM out of rolling patch mode if issued within an Oracle ASM Client Cluster.

crsctl stop testdns

Use the crsctl stop testdns command to stop a test DNS server.

Syntax

crsctl stop testdns [-address address [-port port]] [-domain GNS_domain] [-v]

Parameters

Table E-71 crsctl stop testdns Command Parameters

Parameter | Description
-address address

Specify the server address for which you started the test DNS server in the form IP_address/netmask [/interface_name].

-port port

The port on which the DNS server listens. If you do not specify a port, then it defaults to port 53.

[-domain GNS_domain]

Domain for which the server should stop listening.

-v

Verbose output.


Example

The crsctl stop testdns command returns output similar to the following:

CRS-10032: successfully stopped the DNS listening daemon running on port 53 on
local node

crsctl unpin css

Use the crsctl unpin css command to unpin one or more servers. If a node is not pinned, then its node number may change if its lease expires while it is down.

Syntax

crsctl unpin css -n node_name [node_name [...]]

Usage Notes

  • You can specify a space-delimited list of nodes.

  • Unpinned servers that stop for longer than a week are no longer reported by olsnodes. These servers are dynamic when they leave the cluster, so you do not need to explicitly remove them from the cluster.

  • Deleting a node with the crsctl delete node command implicitly unpins the node.

  • During upgrade of Oracle Clusterware, all servers are pinned, whereas after a fresh installation of Oracle Clusterware 12c, all servers you add to the cluster are unpinned.

  • You cannot unpin a server that has an instance of Oracle RAC that is older than 12c release 1 (12.1) if you installed Oracle Clusterware 12c on that server.

Example

To unpin two nodes:

$ crsctl unpin css -n node1 node4

crsctl unset css

Use the crsctl unset css command to unset the value of a Cluster Synchronization Services parameter and restore it to its default value.

Syntax

crsctl unset css parameter

Usage Notes

  • You can specify the following Cluster Synchronization Services parameters:

    • diagwait

    • disktimeout

    • misscount

    • reboottime

    • priority

    • logfilesize

Example

To restore the reboottime Cluster Synchronization Services parameter to its default value:

$ crsctl unset css reboottime

crsctl unset css ipmiconfig

Use the crsctl unset css ipmiconfig command to clear all previously stored IPMI configuration (login credentials and IP address) from the Oracle Local Registry. This is appropriate when deconfiguring IPMI in your cluster or if IPMI configuration was previously stored by the wrong user.

Syntax

crsctl unset css ipmiconfig

Usage Notes

  • This command must be run under the user account originally used to configure IPMI or by a privileged user.

  • This command only clears the IPMI configuration on the server on which you run it.

  • If Oracle Clusterware was able to access and use the configuration data to be deleted by this command, then it will continue to do so until you restart Oracle Clusterware.

Example

To clear the IPMI configuration data from the Oracle Local Registry and restart Oracle Clusterware to prevent further use of IPMI, log in as root or a member of the Administrator's group on Windows and run the following commands:

crsctl unset css ipmiconfig
crsctl stop crs
crsctl start crs

crsctl unset css leafmisscount

Use the crsctl unset css leafmisscount command to reset to its default the amount of time that passes before the grace time begins after communication fails between a Hub Node and a Leaf Node.

Syntax

crsctl unset css leafmisscount

Oracle Restart Environment CRSCTL Commands

The commands listed in this section control Oracle High Availability Services. These commands manage the Oracle High Availability Services stack in the Oracle Restart environment, which consists of the Oracle High Availability Services daemon (ohasd), Oracle ASM (if installed), and Cluster Synchronization Services (if Oracle ASM is installed). These commands only affect the local server on which you run them.


Note:

Oracle does not support using crs_* commands in an Oracle Restart environment.

Each server in the cluster is in one of two possible states:

  • The whole stack is up, which means that Oracle High Availability Services is active

  • The whole stack is down, which means that Oracle High Availability Services is inactive

You can use the following commands only in the Oracle Restart environment:

crsctl check has

Use the crsctl check has command to check the status of ohasd.

Syntax

crsctl check has

Example

The crsctl check has command returns output similar to the following:

CRS-4638: Oracle High Availability Services is online

crsctl config has

Use the crsctl config has command to display the automatic startup configuration of the Oracle High Availability Services stack on the server.

Syntax

crsctl config has

Example

The crsctl config has command returns output similar to the following:

CRS-4622 Oracle High Availability Services autostart is enabled.

crsctl disable has

Use the crsctl disable has command to disable automatic startup of the Oracle High Availability Services stack when the server boots up.

Syntax

crsctl disable has

Example

The crsctl disable has command returns output similar to the following:

CRS-4621 Oracle High Availability Services autostart is disabled.

crsctl enable has

Use the crsctl enable has command to enable automatic startup of the Oracle High Availability Services stack when the server boots up.

Syntax

crsctl enable has

Example

The crsctl enable has command returns output similar to the following:

CRS-4622 Oracle High Availability Services autostart is enabled.

crsctl query has releaseversion

Use the crsctl query has releaseversion command to display the release version of the Oracle Clusterware software that is stored in the binaries on the local node.

Syntax

crsctl query has releaseversion

Example

The crsctl query has releaseversion command returns output similar to the following:

Oracle High Availability Services release version on the local node is [11.2.0.0.2]

crsctl query has softwareversion

Use the crsctl query has softwareversion command to display the software version on the local node.

Syntax

crsctl query has softwareversion

Usage Notes

  • If you do not provide a server name, then Oracle Clusterware displays the version of Oracle Clusterware running on the local server.

Example

The crsctl query has softwareversion command returns output similar to the following:

Oracle High Availability Services version on the local node is [11.2.0.2.0]

crsctl start has

Use the crsctl start has command to start Oracle High Availability Services on the local server.

Syntax

crsctl start has [-noautostart]

Usage Notes

Use the -noautostart parameter to start only Oracle High Availability Services.

Example

To start Oracle High Availability Services on the local server:

# crsctl start has

crsctl stop has

Use the crsctl stop has command to stop Oracle High Availability Services on the local server.

Syntax

crsctl stop has [-f]

Usage Notes

This command attempts to gracefully stop resources managed by Oracle Clusterware while attempting to stop Oracle High Availability Services.

If any resources that Oracle Clusterware manages are still running after you run the crsctl stop has command, then the command fails. Use the -f parameter to unconditionally stop all resources and stop Oracle High Availability Services.

Example

To stop Oracle High Availability Services on the local server:

# crsctl stop has

Troubleshooting and Diagnostic Output

You can use crsctl set log commands as the root user to enable dynamic debugging for Cluster Ready Services (CRS), Cluster Synchronization Services (CSS), the Event Manager (EVM), and the clusterware subcomponents. You can dynamically change debugging levels using crsctl debug commands. Debugging information remains in the Oracle Cluster Registry (OCR) for use during the next startup. You can also enable debugging for resources.

This section covers the following topics:

Dynamic Debugging

This section includes the following CRSCTL commands that aid in debugging:

crsctl set log

Use the crsctl set log command to set log levels for Oracle Clusterware.

Syntax

crsctl set log {[crs | css | evm "component_name=log_level, [...]"] | 
[all=log_level]}

You can also set log levels for the agents of specific resources, as follows:

crsctl set log res "resource_name=log_level, [...]"

Usage Notes

  • You can set log levels for various components of the three modules, CRS, CSS, and EVM. If you choose the all parameter, then you can set log levels for all components of one module with one command. Use the crsctl lsmodules command to obtain a list of components for each module.

  • Enter a comma-delimited list of component name-log level pairs enclosed in double quotation marks ("").


    Note:

    Separate component name-log level pairs with an equals sign (=) in Oracle Clusterware 11g release 2 (11.2.0.3), and later. Previous Oracle Clusterware versions used a colon (:).

  • The log_level is a number from 1 to 5 that sets the log level for the component or resource, where 1 is the least amount of log output and 5 provides the most detailed log output. The default log level is 2.

  • To set log levels for resources, specify the name of a particular resource, or a comma-delimited list of resource name-log level pairs enclosed in double quotation marks ("").

Examples

To set log levels for the CRSRTI and CRSCOMM components of the CRS module:

$ crsctl set log crs "CRSRTI=1,CRSCOMM=2"

To set log levels for all components of the EVM module:

$ crsctl set log evm all=2

To set a log level for a resource:

$ crsctl set log res "myResource1=3"

Component Level Debugging

You can use crsctl set log and crsctl set trace commands as the root user to enable dynamic debugging for the various Oracle Clusterware modules.

This section includes the following topics:

Enabling Debugging for Oracle Clusterware Modules

You can enable debugging for Oracle Clusterware modules and their components, and for resources, by setting environment variables or by running crsctl set log commands, using the following syntax:

crsctl set {log | trace} module_name "component:debugging_level
[,component:debugging_level][,...]"

Run the crsctl set command as the root user, and supply the following information:

  • module_name: The name of one of the following modules:


    mdns: Multicast domain name server
    gpnp: Grid Plug and Play service
    css: Cluster Synchronization Services
    crf: Cluster Health Monitor
    crs: Cluster Ready Services
    ctss: Cluster Time Synchronization Service
    evm: Event Manager
    gipc: Grid Interprocess Communication
  • component: The name of a component for one of the modules. See Table E-72 for a list of all of the components.

  • debugging_level: A number from 1 to 5 to indicate the level of detail you want the debug command to return, where 1 is the least amount of debugging output and 5 provides the most detailed debugging output. The default debugging level is 2.

The following commands show examples of how to enable debugging for the various modules:

  • To enable debugging for Oracle Clusterware:

    crsctl set log crs "CRSRTI:1,CRSCOMM:2"
    
  • To enable debugging for OCR:

    crsctl set log crs "CRSRTI:1,CRSCOMM:2,OCRSRV:4"
    
  • To enable debugging for EVM:

    crsctl set log evm "EVMCOMM:1"
    
  • To enable debugging for resources:

    crsctl set log res "resname:1"
    

To obtain a list of components that can be used for debugging, run the crsctl lsmodules command, as follows:

crsctl lsmodules {mdns | gpnp | css | crf | crs | ctss | evm | gipc}

Note:

You do not have to be the root user to run the crsctl lsmodules command.

Table E-72 shows the components for the CRS, CSS, and EVM modules, respectively. Note that some component names are common between the CRS, EVM, and CSS daemons and may be enabled on that specific daemon. For example, COMMNS is the NS layer and because each daemon uses the NS layer, you can enable this specific module component on any of the daemons to get specific debugging information.

Table E-72 Components for the CRS, CSS, and EVM Modules

CRS Components (Footnote 1): CRSUI, CRSCOMM, CRSRTI, CRSMAIN, CRSPLACE, CRSAPP, CRSRES, CRSCOMM, CRSOCR, CRSTIMER, CRSEVT, CRSD, CLUCLS, CSSCLNT, COMMCRS, COMMNS

CSS Components (Footnote 2): CSSD, COMMCRS, COMMNS

EVM Components (Footnote 3): EVMD, EVMDMAIN, EVMCOMM, EVMEVT, EVMAPP, EVMAGENT, CRSOCR, CLUCLS, CSSCLNT, COMMCRS, COMMNS

Footnote 1 Obtain the list of CRS components using the crsctl lsmodules crs command.

Footnote 2 Obtain the list of CSS components using the crsctl lsmodules css command.

Footnote 3 Obtain the list of EVM components using the crsctl lsmodules evm command.

Example 1    

To set debugging levels on specific cluster nodes, include the -nodelist keyword and the names of the nodes, as follows:

crsctl set log crs "CRSRTI:1,CRSCOMM:2" -nodelist node1,node2

Table E-73 describes the Cluster Synchronization Services modules.

Table E-73 Cluster Synchronization Services (CSS) Modules and Functions

CSS: CSS client component

CSSD: CSS daemon component


Table E-74 describes the function of each communication (COMM) module.

Table E-74 Communication (COMM) Modules and Functions

COMMCRS: Clusterware communication layer

COMMNS: NS communication layer


Table E-75 describes the functions performed by each CRS module.

Table E-75 Oracle Clusterware (CRS) Modules and Functions

CRSUI: User interface module

CRSCOMM: Communication module

CRSRTI: Resource management module

CRSMAIN: Main module/driver

CRSPLACE: CRS placement module

CRSAPP: CRS application

CRSRES: CRS resources

CRSOCR: Oracle Cluster Registry interface

CRSTIMER: Various timers related to CRS

CRSEVT: CRS EVM/event interface module

CRSD: CRS daemon


Using the crsctl set log crs command, you can debug the OCR components listed in Table E-76. The components listed in Table E-76 can also be used for the Oracle Local Registry (OLR) except for OCRMAS and OCRASM. You can also use them for OCR and OLR clients, except for OCRMAS and OCRSRV. Some OCR and OLR clients are OCRCONFIG, OCRDUMP, and so on.

Table E-76 Oracle Cluster Registry (OCR) Component Names

OCRAPI: OCR abstraction component

OCRCLI: OCR client component

OCRSRV: OCR server component

OCRMAS: OCR master thread component

OCRMSG: OCR message component

OCRCAC: OCR cache component

OCRRAW: OCR raw device component

OCRUTL: OCR util component

OCROSD: OCR operating system dependent (OSD) layer

OCRASM: OCR ASM component


Table E-77 describes the OCR tool modules.

Table E-77 OCRCONFIG Modules and Functions

OCRCONFIG: OCRCONFIG component for configuring OCR

OCRDUMP: OCRDUMP component that lists the Oracle Cluster Registry contents

OCRCHECK: OCRCHECK component that verifies all of the configured OCRs


Enabling Debugging for Oracle Clusterware Resources

You can enable debugging for Oracle Clusterware resources by running the crsctl set log command, using the following syntax:

crsctl set log res "resource_name=debugging_level"

Run the crsctl set log command as the root user, and supply the following information:

  • resource_name: The name of the resource to debug.

  • debugging_level: A number from 1 to 5 to indicate the level of detail you want the debug command to return, where 1 is the least amount of debugging output and 5 provides the most detailed debugging output. The default debugging level is 2.

To obtain a list of resources that can be used for debugging, run the crsctl status resource command.

Example 1    

To generate a debugging log for the VIP resource on node1, issue the following command:

crsctl set log res "ora.node1.vip:1"

Enabling Additional Tracing for Oracle Clusterware Components

My Oracle Support may ask you to enable tracing to capture additional information. Because the procedures described in this section may affect performance, only perform these activities with the assistance of My Oracle Support.

You can enable tracing for Oracle Clusterware resources by running the crsctl set trace command, using the following syntax:

crsctl set trace module_name "component_name=tracing_level,..."

Run the crsctl set trace command as the root user, and supply the following information:

  • module_name: The name of one of the following modules:


    mdns: Multicast domain name server
    gpnp: Grid Plug and Play service
    css: Cluster Synchronization Services
    crf: Cluster Health Monitor
    crs: Cluster Ready Services
    ctss: Cluster Time Synchronization Service
    evm: Event Manager
    gipc: Grid Interprocess Communication
  • component_name: The name of the component for one of the modules. See Table E-72 for a list of components.

  • tracing_level: A number from 1 to 5 to indicate the level of detail you want the trace command to return, where 1 is the least amount of tracing output and 5 provides the most detailed tracing output.

Example 1    

To generate a trace file for Cluster Synchronization Services, use the following command:

crsctl set trace "css=3"

B Oracle Clusterware Resource Reference

This appendix is a reference for Oracle Clusterware resources. It includes descriptions and usage examples of resource attributes, and detailed descriptions and examples of resource attribute action scripts. This appendix includes the following topics:

Resource Attributes

This section lists and describes attributes used when you register applications as resources in Oracle Clusterware. Use these attributes with the crsctl add resource command, as follows:

$ crsctl add resource resource_name -type resource_type
{[-attr "attribute_name='attribute_value', attribute_name='attribute_value'
, ..."] | [-file file_name]}

List attribute-value pairs in a comma-delimited list after the -attr flag and enclose the value of each attribute in single quotation marks (''). Some resource attributes cannot be configured and are read-only.

Alternatively, you can create a text file that contains the attribute-value pairs. For example:

PLACEMENT=favored
HOSTING_MEMBERS=node1 node2 node3
RESTART_ATTEMPTS@CARDINALITYID(1)=0
RESTART_ATTEMPTS@CARDINALITYID(2)=0
FAILURE_THRESHOLD@CARDINALITYID(1)=2
FAILURE_THRESHOLD@CARDINALITYID(2)=4
FAILURE_INTERVAL@CARDINALITYID(1)=300
FAILURE_INTERVAL@CARDINALITYID(2)=500
CHECK_INTERVAL=2
CARDINALITY=2

Note:

The length limit for these attributes is 254 characters.
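As a sketch of the -file alternative described above, the following shell fragment writes a reduced version of the attribute file and displays the corresponding crsctl add resource command. The resource name myApache and the type cluster_resource are illustrative assumptions, and the crsctl command is shown rather than executed because it must run on a cluster node.

```shell
#!/bin/sh
# Sketch: build an attribute file and register a resource from it with
# the -file flag. "myApache" and "cluster_resource" are assumptions.
attrfile=$(mktemp)
cat > "$attrfile" <<'EOF'
PLACEMENT=favored
HOSTING_MEMBERS=node1 node2 node3
CHECK_INTERVAL=2
CARDINALITY=2
EOF

# Run on a cluster node; shown here rather than executed:
echo "crsctl add resource myApache -type cluster_resource -file $attrfile"
```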

This section includes the following topics:

Configurable Resource Attributes

This section describes the following resource attributes that you can configure when registering an application as a resource in Oracle Clusterware:


Note:

Values for all attributes must be in lowercase. Attribute names must be in all uppercase letters.

ACL

Defines the owner of a resource and the access privileges granted to various operating system users and groups. The resource owner defines the operating system user of the owner and its privileges. You configure this optional attribute when you create a resource. If you do not configure this attribute, then the value is based on the identity of the process creating the resource. You can change the value of the attribute if such a change is allowed based on the existing privileges of the resource.


Note:

All operating system user names and user groups, including owner, pgrp, user, and group, must be registered on all servers in the cluster.

In the string:

  • owner: The operating system user that owns a resource and the user under which the action script or application-specific agent runs, followed by the privileges of the owner.

  • pgrp: The operating system group that is the primary group of the owner of a resource, followed by the privileges of members of the primary group.

  • other: Operating system users that are neither the owner nor member of the primary group

  • r: The read option, which gives the ability to only see a resource, its state, and configuration

  • w: The write option, which gives the ability to modify a resource's attributes and to delete the resource

  • x: The execute option, which gives the ability to start, stop, and relocate a resource

By default, the identity of the client that creates a resource is the owner. Also by default, root, and the user specified in owner have full privileges. You can grant required operating system users and operating system groups their privileges by adding the following lines to the ACL attribute:

user:user_name:rwx
group:group_name:rwx

Usage Example

ACL=owner:user_1:rwx,pgrp:osdba:rwx,other::r--

In the preceding example, the owner of the resource is user_1, whose primary group is osdba. The user, user_1, has all privileges, as does the osdba group, while other users can only view the resource.
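The clause structure of the ACL string can be illustrated with plain string handling. The sketch below splits the usage example into its owner, pgrp, and other clauses; nothing in it touches Oracle Clusterware itself.

```shell
#!/bin/sh
# Sketch: split the ACL string from the usage example into clauses and
# pull out the owner, primary group, and "other" permissions.
acl='owner:user_1:rwx,pgrp:osdba:rwx,other::r--'

owner=$(printf '%s' "$acl" | tr ',' '\n' | awk -F: '$1 == "owner" {print $2}')
pgrp=$(printf '%s' "$acl" | tr ',' '\n' | awk -F: '$1 == "pgrp" {print $2}')
other_perms=$(printf '%s' "$acl" | tr ',' '\n' | awk -F: '$1 == "other" {print $3}')

echo "owner=$owner pgrp=$pgrp other=$other_perms"
```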

ACTION_SCRIPT

An absolute file name that includes the path and file name of an action script. The agent specified in the AGENT_FILENAME attribute calls the script specified in the ACTION_SCRIPT attribute.

Usage Example

ACTION_SCRIPT=fully_qualified_path_to_action_script

ACTION_TIMEOUT

A single timeout value, in seconds, for all supported actions that Oracle Clusterware can perform on a resource.

Usage Example

ACTION_TIMEOUT=30

ACTIONS

The ACTIONS attribute declares a table of names that lists the actions that Oracle Clusterware can perform on a resource and the permissions that correspond to the actions. The ACTIONS attribute contains a space-delimited list of action specifications, where each specification has the following format:

actionName [,user:userName | group:groupName][ ...]

In the preceding format:

  • actionName is the name of the action (the maximum length is 32 US7ASCII alphanumeric, case-sensitive characters)

  • userName is an operating system user name that is enabled to perform the action

  • groupName is an operating system group name that is enabled to perform the action

If you do not specify a userName or groupName, then Oracle Clusterware assumes that the actions are universally accessible.

Usage Example

The following example enables multiple actions:

ACTIONS='action1 action2,user:user2 action3,group:group1'
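The example above can be walked mechanically to see who may run each action. The following sketch is pure string handling over the example value; it applies the rule stated above that an unqualified action is universally accessible.

```shell
#!/bin/sh
# Sketch: parse the space-delimited ACTIONS value from the example and
# report the user/group qualifier (if any) for each action.
actions='action1 action2,user:user2 action3,group:group1'

report=""
for spec in $actions; do
  name=${spec%%,*}              # action name before the first comma
  case $spec in
    *,*) who=${spec#*,} ;;      # user:... or group:... qualifier
    *)   who="everyone" ;;      # no qualifier: universally accessible
  esac
  report="$report$name=$who "
done
echo "$report"
```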

ACTIVE_PLACEMENT

When set to 1, Oracle Clusterware uses this attribute to reevaluate the placement of a resource during addition or restart of a cluster server. For resources where PLACEMENT=favored, Oracle Clusterware may relocate running resources if the resources run on a non-favored server when a favored one joins the cluster.

Usage Example

ACTIVE_PLACEMENT=1

AGENT_FILENAME

A fully qualified file name of an agent program that a resource type uses to manage its resources. Every resource type must have an agent program to manage its resources. Resource types use agent programs by either specifying a value for this attribute or inheriting it from their base resource type. There are two script agents included with Oracle Clusterware 12c: application and scriptagent. Oracle Clusterware uses the application script agent for resources of the deprecated application resource type. The default value for this attribute is scriptagent.


Note:

Once the resource is created, you can no longer modify this attribute.

Usage Example

AGENT_FILENAME=%Grid_home%/bin/application

ALERT_TEMPLATE

Use to specify additional resource attributes that are to be included in resource state alert messages. You can specify the attribute as a space-delimited list of resource attributes. These attributes must be accessible from the resource type to display in alert messages.

Usage Example

ALERT_TEMPLATE="DESCRIPTION HOSTING_MEMBERS"

AUTO_START

Indicates whether Oracle Clusterware automatically starts a resource after a cluster server restart. Valid AUTO_START values are:

  • always: Restarts the resource when the server restarts regardless of the state of the resource when the server stopped.

  • restore: Restores the resource to the same state that it was in when the server stopped. Oracle Clusterware attempts to restart the resource if the value of TARGET was ONLINE before the server stopped.

  • never: Oracle Clusterware never restarts the resource regardless of the state of the resource when the server stopped.

CARDINALITY

The number of servers on which a resource can run simultaneously. This is the upper limit for resource cardinality.

Usage Example

CARDINALITY=1

You can also use a value such that cardinality always increases and decreases with the number of servers that are assigned to the server pool in which the resource is configured to run. The value is:

CARDINALITY=%CRS_SERVER_POOL_SIZE%

Only resources with PLACEMENT=restricted and that use the SERVER_POOLS attribute can use this value.

CHECK_INTERVAL

The time interval, in seconds, between repeated executions of the check action. Shorter intervals enable more frequent checks but also increase resource consumption if you use the script agent. Use an application-specific agent to reduce resource consumption.

Usage Example

CHECK_INTERVAL=60

CHECK_TIMEOUT

The maximum time, in seconds, in which a check action can run. Oracle Clusterware returns an error message if the action does not complete within the time specified. If you do not specify this attribute or if you specify 0 seconds, then Oracle Clusterware uses the value of the SCRIPT_TIMEOUT attribute.

Usage Example

CHECK_TIMEOUT=30

CLEAN_TIMEOUT

The maximum time, in seconds, in which a clean action can run. Oracle Clusterware returns an error message if the action does not complete within the time specified. If you do not specify a value for this attribute or you specify 0 seconds, then Oracle Clusterware uses the value of the STOP_TIMEOUT attribute.

Usage Example

CLEAN_TIMEOUT=30

DELETE_TIMEOUT

The maximum time, in seconds, in which a delete action can run. Oracle Clusterware returns an error message if the action does not complete within the time specified. If you do not specify a value for this attribute or you specify 0 seconds, then Oracle Clusterware uses the value of the SCRIPT_TIMEOUT attribute.

Usage Example

DELETE_TIMEOUT=30

DESCRIPTION

Enter a description of the resource you are adding.

Usage Example

DESCRIPTION=Apache Web server

ENABLED

Oracle Clusterware uses this attribute to manage the state of the resource. Oracle Clusterware does not attempt to manage a disabled (ENABLED=0) resource either directly or because of a dependency on another resource. A disabled resource cannot be started but it can be stopped. Oracle Clusterware does not actively monitor disabled resources, meaning that Oracle Clusterware does not check their state.

Usage Example

ENABLED=1

FAILURE_INTERVAL

The interval, in seconds, before which Oracle Clusterware stops a resource if the resource has exceeded the number of failures specified by the FAILURE_THRESHOLD attribute. If the value is zero (0), then tracking of failures is disabled.

Usage Example

FAILURE_INTERVAL=30

FAILURE_THRESHOLD

The number of failures of a resource detected within a specified FAILURE_INTERVAL for the resource before Oracle Clusterware marks the resource as unavailable and no longer monitors it. If a resource fails the specified number of times, then Oracle Clusterware stops the resource. If the value is zero (0), then tracking of failures is disabled. The maximum value is 20.

Usage Example

FAILURE_THRESHOLD=3
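The interplay of FAILURE_INTERVAL and FAILURE_THRESHOLD can be sketched as a counting rule. The semantics below are an assumption for illustration (count failures inside the interval window and compare against the threshold), and all timestamps are hypothetical.

```shell
#!/bin/sh
# Sketch (assumed semantics): count failures inside the
# FAILURE_INTERVAL window and compare against FAILURE_THRESHOLD.
# All values here are hypothetical.
FAILURE_THRESHOLD=3
FAILURE_INTERVAL=30            # seconds
now=100
failure_times="60 82 90 99"    # hypothetical failure timestamps, seconds

recent=0
for t in $failure_times; do
  if [ $((now - t)) -le "$FAILURE_INTERVAL" ]; then
    recent=$((recent + 1))
  fi
done

if [ "$recent" -ge "$FAILURE_THRESHOLD" ]; then
  verdict="stop resource"
else
  verdict="keep monitoring"
fi
echo "$recent failures within ${FAILURE_INTERVAL}s -> $verdict"
```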

HOSTING_MEMBERS

A space-delimited, ordered list of cluster server names that can host a resource. This attribute is required only when using administrator management, and when the value of the PLACEMENT attribute is set to favored or restricted. When registering applications as Oracle Clusterware resources, use the SERVER_POOLS attribute, instead.


Note:

For resources of application type, Oracle Clusterware places servers listed in the HOSTING_MEMBERS attribute in the Generic server pool.


To obtain a list of candidate node names, run the olsnodes command to display a list of your server names.

Usage Example

HOSTING_MEMBERS=server1 server2 server3

INSTANCE_FAILOVER

Use the INSTANCE_FAILOVER attribute for resources of type CLUSTER_RESOURCE. Using this attribute enables you to disallow the failover of resource instances from the servers on which they fail. This enables you to bind the resource to a particular server.

Set to 0 to disable instance failover.

Usage Example

INSTANCE_FAILOVER=1

INTERMEDIATE_TIMEOUT

Denotes the maximum amount of time in seconds that a resource can remain in the INTERMEDIATE state before the resource is declared as failed. The value of INTERMEDIATE_TIMEOUT must be greater than 0 to take effect.

Usage Example

INTERMEDIATE_TIMEOUT=60

LOAD

A nonnegative, numeric value that quantitatively represents how much server capacity an instance of a resource consumes relative to other resources. Oracle Clusterware interprets the value of this attribute along with that of the PLACEMENT attribute: when the value of PLACEMENT is balanced, the value of LOAD determines where best to place a resource. Oracle Clusterware attempts to place resources on servers with the least total load of running resources.

Usage Example

LOAD=1

MODIFY_TIMEOUT

The maximum time, in seconds, in which a modify action can run. Oracle Clusterware returns an error message if the action does not complete within the time specified. If you do not specify a value for this attribute or you specify 0 seconds, then Oracle Clusterware uses the value of the SCRIPT_TIMEOUT attribute.

Usage Example

MODIFY_TIMEOUT=30

NAME

A case-sensitive alphanumeric string that names the resource. Oracle recommends a naming convention that starts with an alphanumeric prefix, such as myApache, and completes the name with an identifier to describe it. A resource name can contain any platform-supported characters except the exclamation point (!) and the tilde (~). A resource name cannot begin with a period (.) or with the string ora.

Usage Example

NAME=myApache

OFFLINE_CHECK_INTERVAL

Controls offline monitoring of a resource. The value represents the interval (in seconds) that Oracle Clusterware monitors a resource when its state is OFFLINE. Monitoring is disabled if the value is 0.

Usage Example

OFFLINE_CHECK_INTERVAL=30

PLACEMENT

Specifies how Oracle Clusterware selects a cluster server on which to start a resource. Valid values are balanced, favored, or restricted.

If you set the PLACEMENT attribute to favored or restricted, then you must also assign values to the SERVER_POOLS and HOSTING_MEMBERS attributes. If you set the value of the PLACEMENT attribute to balanced, then the HOSTING_MEMBERS attribute is not required.


Usage Example

PLACEMENT=favored

RELOCATE_BY_DEPENDENCY

Use to declare whether a resource can be relocated when the relocation is requested because of a dependency on another resource. If set to 0, then the resource is not allowed to relocate because of a dependency on the resource for which the relocation request was issued. The valid values are 1 and 0.

Usage Example

RELOCATE_BY_DEPENDENCY=1

RESTART_ATTEMPTS

The number of times that Oracle Clusterware attempts to restart a resource on the resource's current server before attempting to relocate it. A value of 1 indicates that Oracle Clusterware only attempts to restart the resource once on a server. A second failure causes Oracle Clusterware to attempt to relocate the resource. A value of 0 indicates that there is no attempt to restart but Oracle Clusterware always tries to fail the resource over to another server.

Usage Example

RESTART_ATTEMPTS=2

SCRIPT_TIMEOUT

The maximum time (in seconds) for an action to run. Oracle Clusterware returns an error message if the action script does not complete within the time specified. The timeout applies to all actions (start, stop, check, and clean).

Usage Example

SCRIPT_TIMEOUT=45

SERVER_CATEGORY

For local resources, the definition of a local_resource type is extended to be category-aware. In other words, you can restrict local resources to belong to a particular server category. For cluster resources, the value for the SERVER_CATEGORY attribute always functions with the value for the PLACEMENT attribute. Set SERVER_POOLS to * when PLACEMENT is restricted and SERVER_CATEGORY is used. If you set PLACEMENT to restricted, then Oracle Clusterware expects one of the following attributes to also be set: HOSTING_MEMBERS, SERVER_POOLS, or SERVER_CATEGORY.

For example, a resource, known as resource1, can have a policy that sets the value of PLACEMENT to be restricted, and SERVER_CATEGORY is set to HubCategory. In such a case, Oracle Clusterware would only enable resource1 to run on the servers that belong to the HubCategory.

If PLACEMENT is set to favored and if only one of HOSTING_MEMBERS, SERVER_POOLS, or SERVER_CATEGORY is set, then that value expresses a preference. If HOSTING_MEMBERS is populated and one of SERVER_POOLS or SERVER_CATEGORY is set, then the HOSTING_MEMBERS indicates placement preference and SERVER_POOLS or SERVER_CATEGORY indicates a restriction. For example, the ora.cluster.vip resource can have a policy that sets the value of PLACEMENT to favored, and SERVER_CATEGORY is set to HubCategory and HOSTING_MEMBERS is set to server_name1. In such a case, Oracle Clusterware restricts the placement of ora.cluster.vip to the servers in the HubCategory and then it prefers the server known as server_name1.

Usage Example

SERVER_CATEGORY=my_category

SERVER_POOLS

A space-delimited list of the server pools to which a particular resource can belong. If a resource can run on any server in a cluster, then use the default value, *, unless the resource is a cluster_resource type, in which case, the default value for the SERVER_POOLS attribute is empty. Only cluster administrators can specify * as the value for this attribute.

  • Use the PLACEMENT attribute with the SERVER_POOLS attribute, as follows: If you set the value of the PLACEMENT attribute to either restricted or favored, then you must also provide a value for the SERVER_POOLS attribute when using policy management for the resource.

  • If the value for PLACEMENT is set to balanced, then the resource only runs in the Generic and Free pools, unless SERVER_POOLS=*.

This attribute creates an affinity between a resource and one or more server pools regarding placement, and depends on the value of the PLACEMENT attribute.


Usage Example

SERVER_POOLS=pool1 pool2 pool3

START_CONCURRENCY

Describes the maximum number of start actions that can run concurrently. A value of 0 means "no limit."

Usage Example

START_CONCURRENCY=10

START_DEPENDENCIES

Specifies a set of relationships that Oracle Clusterware considers when starting a resource. You can specify a space-delimited list of dependencies on several resources and resource types on which a particular resource can depend.

Syntax

START_DEPENDENCIES=dependency(resource_set) [dependency(resource_set)] [...]

In the preceding syntax example the variables are defined, as follows:

  • dependency: Possible values are attraction, dispersion, exclusion, hard, pullup, and weak. You can specify each dependency only once, except for pullup, which you can specify multiple times.

  • resource_set: A comma-delimited list of resource entities—either individual resources or resource types—enclosed in parentheses (), in the form of res1[, res2[, ...]], upon which the resource you are configuring depends.

    Each resource entity is defined, as follows:

    [modifier1:[modifier2:]] {resource_name | type:resource_type}
    

    In the preceding syntax example, resource_name is the name of a specific resource and type:resource_type is the name of a specific resource type. The resource type must be preceded by type and the type modifier must be the last resource entity in the list.

    Optionally, you can specify modifiers to further configure resource entity dependencies. You can modify each dependency by prefixing the following modifiers to the resource entity:

    • attraction([intermediate:]{resource_name | type:resource_type})—Use the attraction start dependency when you want this resource to run on the same server with a particular named resource or any resource of a particular type.

      Use intermediate to specify that this resource is attracted to resource entities on which it depends that are in the INTERMEDIATE state. If not specified, then resources must be in the ONLINE state to attract the dependent resource.

      If you specify the attraction dependency on a resource type for a resource, then any resource of that particular type attracts the dependent resource.

    • exclusion([preempt_pre: | preempt_post:]{target_resource_name | type:target_resource_type})—Use the exclusion start dependency to keep resources with this dependency from running on the same node.

      Use the preempt_pre modifier to configure the exclusion dependency to stop the specified target resource or resources defined by a specific resource type before starting the source resource.

      Use the preempt_post modifier to configure the exclusion dependency to stop and relocate, if possible, the specified target resource or resources defined by a specific resource type after starting the source resource.

    • dispersion[:active]([intermediate:][pool:]{resource_name | type:resource_type})—Specify the dispersion start dependency for a resource that you want to run on a server that is different from the named resources or resources of a particular type. Resources may still end up running on the same server, depending on availability of servers.

      Use the active modifier to configure the dispersion dependency so that Oracle Clusterware attempts to relocate the dependent resource to another server if it is collocated with another resource and another server comes online. Oracle Clusterware does not relocate resources to newly available servers unless you specify the active modifier.

      Use the intermediate modifier to specify that Oracle Clusterware can relocate the dependent resource if a resource is in either the ONLINE or INTERMEDIATE state. If not specified, then resources must be in the ONLINE state for dispersion of the dependent resource to occur.

      Use the pool modifier if you want a resource to be located in a different server pool than the target, rather than just a different server.

    • hard([intermediate:][global:][uniform:]{resource_name | type:resource_type})—Specify a hard start dependency for a resource when you want the resource to start only when a particular resource or resource of a particular type starts.

      Use the intermediate modifier to specify that Oracle Clusterware can start this resource if a resource on which it depends is in either the ONLINE or INTERMEDIATE state. If not specified, then resources must be in the ONLINE state for Oracle Clusterware to start this resource.

      Use the global modifier to specify that resources are not required to reside on the same server as a condition to Oracle Clusterware starting this resource. If not specified, then resources must reside on the same server for Oracle Clusterware to start this resource.

      Use the uniform modifier to attempt to start all instances of the resource on which this resource depends; at least one instance must start to satisfy the dependency.

      If you specify the hard dependency on a resource type for a resource, then the resource can start if any resource of that particular type is running.


      Note:

      Oracle recommends that resources with hard start dependencies also have pullup start dependencies.

    • pullup[:always]([intermediate:][global:]{resource_name | type:resource_type})—When you specify the pullup start dependency for a resource, then this resource starts because of named resources starting.

      Use the always modifier for pullup so that Oracle Clusterware starts this resource despite the value of its TARGET attribute, whether that value is ONLINE or OFFLINE. Otherwise, if you do not specify the always modifier, then Oracle Clusterware starts this resource only if the value of the TARGET attribute is ONLINE for this resource.

      Use the intermediate modifier to specify that Oracle Clusterware can start this resource if a resource on which it depends is in either the ONLINE or INTERMEDIATE state. If not specified, then resources must be in the ONLINE state for Oracle Clusterware to start this resource.

      Use the global modifier to specify that resources on which this resource depends are not required to reside on the same server as a condition to Oracle Clusterware starting this resource. If not specified, then resources on which this resource depends must reside on the same server for Oracle Clusterware to start this resource.

      If you specify the pullup dependency on a resource type for a resource, then, when any resource of that particular type starts, Oracle Clusterware can start this resource.


      Note:

      Oracle recommends that resources with hard start dependencies also have pullup start dependencies.

    • weak([concurrent:][global:][uniform:]{resource_name | type:resource_type})—Specify a weak start dependency for a resource when you want that resource to start despite whether named resources are running, or not. An attempt to start this resource also attempts to start any resources on which this resource depends if they are not running.

      Use the concurrent modifier to specify that Oracle Clusterware can start a dependent resource while a resource on which it depends is in the process of starting. If concurrent is not specified, then resources must complete startup before Oracle Clusterware can start the dependent resource.

      Use the global modifier to specify that resources are not required to reside on the same server as a condition to Oracle Clusterware starting the dependent resource.

      Use the uniform modifier to start all instances of the resource everywhere the resource can run. If you do not specify a modifier (the default), then the resource starts on the same server as the resource on which it depends.

      If you specify the weak start dependency on a resource type for a resource, then the resource can start if any resource of that particular type is running.
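A weak dependency can be sketched in the same way. This is a hypothetical fragment: myapp is a placeholder resource name, and ora.LISTENER.lsnr stands in for any resource you want started on a best-effort basis.

```shell
# Hypothetical: myapp starts whether or not the listener is running, but
# starting myapp also attempts to start the listener, on any server (global).
crsctl modify resource myapp \
  -attr "START_DEPENDENCIES='weak(global:ora.LISTENER.lsnr)'"
```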


See Also:

"Start Dependencies" for more details about start dependencies

START_TIMEOUT

The maximum time (in seconds) in which a start action can run. Oracle Clusterware returns an error message if the action does not complete within the time specified. If you do not specify a value for this attribute or you specify 0 seconds, then Oracle Clusterware uses the value of the SCRIPT_TIMEOUT attribute.

Usage Example

START_TIMEOUT=30

See Also:

"SCRIPT_TIMEOUT" for more information about this attribute

STOP_CONCURRENCY

Describes the maximum number of stop actions that can run concurrently. A value of 0 means "no limit."

Usage Example

STOP_CONCURRENCY=10

STOP_DEPENDENCIES

Specifies a set of relationships that Oracle Clusterware considers when stopping a resource.

Syntax

STOP_DEPENDENCIES=dependency(resource_set) [dependency(resource_set)] ...

In the preceding syntax example, the variables are defined as follows:

  • dependency: The only possible value is hard.

  • resource_set: A comma-delimited list, in the form of res1[, res2 [,...]], of resource entities—either individual resources or resource types—upon which the resource you are configuring depends.

    Each resource entity is defined as follows:

    [modifier1:[modifier2:][modifier3:]] resource_name | type:resource_type
    

    In the preceding syntax example, resource_name is the name of a specific resource and type:resource_type is the name of a specific resource type. The resource type must be preceded by type:.

    Optionally, you can specify modifiers to further configure resource entity dependencies. You can modify each dependency by prefixing the following modifiers to the resource entity:

    hard([intermediate:][global:][shutdown:]{resource_name | type:resource_type})—Specify a hard stop dependency for a resource that you want to stop when named resources or resources of a particular resource type stop.

    Use intermediate to specify that the dependent resource can remain in an ONLINE state if a resource is in either the ONLINE or INTERMEDIATE state. If not specified, then Oracle Clusterware stops the dependent resource unless resources are in the ONLINE state.

    Use global to specify that the dependent resource remains in an ONLINE state if a resource is in an ONLINE state on any node in the cluster. If not specified, then when resources residing on the same server go offline, Oracle Clusterware stops the dependent resource.

    Use shutdown to apply this dependency when the Oracle Clusterware stack is shut down. This is a convenient way to affect the order of stopping resources when stopping the stack, without having any effect on planned or unplanned events on the individual resources. This dependency, when used with the shutdown modifier, does not go into effect if somebody stops the resource directly, but only when the stack is shut down.
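The shutdown modifier can be sketched as follows. This is a hypothetical fragment: myapp is a placeholder resource name, and ora.asm stands in for the resource whose shutdown should force this resource to stop.

```shell
# Hypothetical: stop myapp when ora.asm stops, but apply the dependency only
# during a full Oracle Clusterware stack shutdown, not on a direct stop.
crsctl modify resource myapp \
  -attr "STOP_DEPENDENCIES='hard(shutdown:ora.asm)'"
```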


See Also:

"Stop Dependencies" for more details about stop dependencies

STOP_TIMEOUT

The maximum time (in seconds) in which a stop or clean action can run. Oracle Clusterware returns an error message if the action does not complete within the time specified. If you do not specify this attribute or if you specify 0 seconds, then Oracle Clusterware uses the value of the SCRIPT_TIMEOUT attribute.

Usage Example

STOP_TIMEOUT=30

See Also:

"SCRIPT_TIMEOUT" for more information about this attribute

UPTIME_THRESHOLD

The value for UPTIME_THRESHOLD represents the length of time that a resource must be up before Oracle Clusterware considers the resource to be stable. By setting a value for the UPTIME_THRESHOLD attribute, you can indicate the stability of a resource.

Enter values for this attribute as a number followed by a letter that represents seconds (s), minutes (m), hours (h), days (d), or weeks (w). For example, a value of 7h represents an uptime threshold of seven hours.

After the time period you specify for UPTIME_THRESHOLD elapses, Oracle Clusterware resets the value for RESTART_COUNT to 0 at the next resource state change event, such as stop, start, relocate, or failure. Oracle Clusterware can alert you when the value for RESTART_COUNT reaches the value that you set for RESTART_ATTEMPTS. In effect, the threshold defines the window of time during which restarts are counted; once the resource has stayed up past the threshold, the counter is discarded at the next state change, so a resource that fails after the threshold still restarts.


Note:

Oracle Clusterware writes an alert to the clusterware alert log file when the value for RESTART_COUNT reaches the value that you set for RESTART_ATTEMPTS.



USER_WORKLOAD

Use to indicate whether a resource is a workload-generating resource for what-if analysis. Possible values are yes or no.

Usage Example

USER_WORKLOAD=yes

USE_STICKINESS

Use to indicate that a resource should run where it last ran, if possible, and that load balancing that would otherwise apply should not be permitted. If set to 1, Oracle Clusterware attempts to start the resource where it last ran, and load balancing is disabled. The default value is 0. Possible values are 0 and 1.

Usage Example

USE_STICKINESS=1

Read-Only Resource Attributes

You can view these attributes when you run the crsctl status resource command on a particular resource. Oracle Clusterware sets these attributes when you register resources.
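The read-only attributes described below can be displayed with the status command mentioned above. In this sketch, myapp is a placeholder resource name.

```shell
# Display the full attribute list, including read-only attributes such as
# STATE, TARGET, LAST_SERVER, and RESTART_COUNT, for the named resource.
crsctl status resource myapp -f
```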

ACTION_FAILURE_EVENT_TEMPLATE

This is an internally-managed attribute for an ora.* resource. You cannot edit this attribute.

INSTANCE_COUNT

The INSTANCE_COUNT attribute is an internally managed attribute that contains the number of instances that the resource currently has.

INTERNAL_STATE

An internally managed, read-only attribute that describes what, if any, action the policy engine is currently executing on the resource. Possible values and their meanings are as follows:

  • STARTING: The policy engine is currently starting the resource

  • STOPPING: The policy engine is currently stopping the resource

  • CLEANING: The policy engine is currently cleaning the resource

  • STABLE: The policy engine is not currently executing any action on the resource

    Note, however, that the resource can still be locked as part of some other command.

LAST_SERVER

For cluster_resource-type resources, this is an internally managed, read-only attribute that contains the name of the server on which the last start action for the resource succeeded.

For local_resource-type resources, this is the name of the server to which the resource instance is pinned.

LAST_STATE_CHANGE

An internally managed, read-only attribute that describes when the policy engine registers the current state of the resource. Note that this may be either the timestamp of when the state of the resource changed or of when the policy engine discovered the state, as occurs when CRSD restarts.

PROFILE_CHANGE_EVENT_TEMPLATE

This is an internally-managed attribute for an ora.* resource. You cannot edit this attribute.

RESTART_COUNT

An internally-managed attribute used by the Oracle Clusterware daemon to count the number of attempts to restart a resource, starting from zero up to the value specified in the RESTART_ATTEMPTS attribute. You cannot edit this attribute.

STATE

An internally-managed attribute that reflects the current state of the resource as reported by Oracle Clusterware. The state of a resource can be one of the following:

  • ONLINE: The resource is online and resource monitoring is enabled (see CHECK_INTERVAL).

  • OFFLINE: The resource is offline and only offline resource monitoring is enabled, if configured (see OFFLINE_CHECK_INTERVAL).

  • INTERMEDIATE: The resource is either partially online or was known to be online before and subsequent attempts to determine its state have failed; resource monitoring is enabled (see CHECK_INTERVAL).

  • UNKNOWN: The resource is unmanageable and its current state is unknown; manual intervention is required to resume its operation. A resource in this state is not monitored.

STATE_CHANGE_EVENT_TEMPLATE

This is an internally-managed attribute for an ora.* resource. You cannot edit this attribute.

STATE_DETAILS

An internally managed, read-only attribute that contains details about the state of a resource.

The four resource states—ONLINE, OFFLINE, UNKNOWN, and INTERMEDIATE—may map to different resource-specific values, such as mounted, unmounted, and open. Resource agent developers can use the STATE_DETAILS attribute to provide a more detailed, resource-specific description of this mapping to the resource state.

Providing details is optional. If details are not provided, then Oracle Clusterware uses only the four possible resource states. Additionally, if the agent cannot provide these details (as may also happen to the value of the resource state), then Oracle Clusterware sets the value of this attribute to provide minimal details about why the resource is in its current state.

TARGET

An internal, read-only attribute that describes the desired state of a resource. Using the crsctl start resource_name or crsctl stop resource_name commands, however, can affect the value of this attribute.

TARGET_SERVER

This is an internally-managed attribute that contains the name of the server where the resource is starting. This value is relevant when the resource is starting.

TYPE

The type of resource indicated when you create a resource. This attribute is required when creating a resource and cannot be changed after the resource is created.


See Also:

"Resource Types" for details of resource types

Deprecated Resource Attributes

The following resource attributes are deprecated in Oracle Clusterware 12c:

DEGREE

The number of instances of a cluster resource that can run on a single server.

Examples of Action Scripts for Third-party Applications

This section includes examples of third-party applications using script agents.

Example B-1 shows an action script that fails over the Apache Web server.

Example B-1 Apache Action Script

#!/bin/sh

HTTPDCONFLOCATION=/etc/httpd/conf/httpd.conf
WEBPAGECHECK=http://<MyVIP>:80/icons/apache_pb.gif

case $1 in
'start')
    /usr/sbin/apachectl -k start -f $HTTPDCONFLOCATION
    RET=$?
    sleep 10
    ;;
'stop')
    /usr/sbin/apachectl -k stop
    RET=$?
    ;;
'clean')
    /usr/sbin/apachectl -k stop
    RET=$?
    ;;
'check')
    /usr/bin/wget -q --delete-after $WEBPAGECHECK
    RET=$?
    ;;
*)
    RET=0
    ;;
esac
# 0: success; 1: error
if [ $RET -eq 0 ]; then
    exit 0
else
    exit 1
fi
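An action script such as the one above is only useful once it is registered as a resource. The following is a hedged sketch of such a registration: the resource name myApache, the script path, the appsvip dependency, and the interval values are all placeholders to adapt to your environment.

```shell
# Hypothetical: register the Apache action script as a cluster resource that
# depends on an application VIP (appsvip), is checked every 30 seconds, and
# is restarted up to twice before failing over.
crsctl add resource myApache \
  -type cluster_resource \
  -attr "ACTION_SCRIPT=/opt/cluster/scripts/myapache.scr,
         PLACEMENT=restricted,
         CHECK_INTERVAL=30,
         RESTART_ATTEMPTS=2,
         START_DEPENDENCIES='hard(appsvip)',
         STOP_DEPENDENCIES='hard(appsvip)'"
```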

Example B-2 shows the xclock script, which is a simple action script using xclock available as a default binary on all Linux and UNIX platforms.

Example B-2 xclock Action Script

#!/bin/bash
# start/stop/check script for xclock example
# To test this change BIN_DIR to the directory where xclock is based
# and set the DISPLAY variable to a server within your network.

BIN_DIR=/usr/X11R6/bin
LOG_DIR=/tmp
BIN_NAME=xclock
DISPLAY=yourhost.domain.com:0.0
export DISPLAY
exit_code=0

if [ ! -d $BIN_DIR ]
then
        echo "start failed"
        exit 2
fi

PID1=`ps -ef | grep $BIN_NAME | grep -v grep | grep -v xclock_app | awk '{ print $2 }'`
case $1 in
'start')
        if [ "$PID1" != "" ]
        then
           status_p1="running"
        else
           if [ -x $BIN_DIR/$BIN_NAME ]
           then
             umask 002
             ${BIN_DIR}/${BIN_NAME} 2>${LOG_DIR}/${BIN_NAME}.log &
             status_p1="started"
           else
             echo `basename $0`": $BIN_NAME: Executable not found"
             exit_code=1
           fi
        fi

         echo "$BIN_NAME: $status_p1"
         exit $exit_code
        ;;

'stop')
        if [ "${PID1}" != "" ]
        then
           kill -9 ${PID1} && echo "$BIN_NAME daemon killed"
        else
           echo "$BIN_NAME: no running Process!"
        fi
        exit $exit_code
        ;;
'check')
        if [ "$PID1" != "" ]
        then
           echo "running"
           exit 0
        else
           echo "not running"
           exit 1
        fi
        ;;
*)
        echo "Usage: "`basename $0`" {start|stop|check}"
        ;;
esac

Example B-3 shows an example of a shell script for an agent to monitor a file. When the agent is started, it creates the file (which is specified through an attribute) and when it is stopped, it deletes the file. The CHECK action consists of only checking whether the file exists. The variables with the _CRS_ prefix are attribute values that are provided to the script in its environment.

Example B-3 Action Script Example

#!/bin/sh
TOUCH=/bin/touch
RM=/bin/rm
PATH_NAME=/tmp/$_CRS_NAME

#
# These messages go into the CRSD agent log file.
echo " *******   `date` ********** "
echo "Action script '$_CRS_ACTION_SCRIPT' for resource[$_CRS_NAME] called for action $1"
#

case "$1" in
  'start')
     echo "START entry point has been called.."
     echo "Creating the file: $PATH_NAME"
     $TOUCH $PATH_NAME
     exit 0
     ;;

  'stop')
     echo "STOP entry point has been called.." 
     echo "Deleting the file: $PATH_NAME"
     $RM $PATH_NAME
     exit 0
     ;;

  'check')
    echo "CHECK entry point has been called.."
    if [ -e $PATH_NAME ]; then
        echo "Check -- SUCCESS"
        exit 0
    else
        echo "Check -- FAILED"
        exit 1
    fi
    ;;

  'clean')
     echo "CLEAN entry point has been called.."
     echo "Deleting the file: $PATH_NAME"
     $RM -f $PATH_NAME
     exit 0
     ;;

esac

Glossary

action script

A script that defines the start, stop and check actions for a resource. The start action is invoked while starting the resource, the stop action for stopping the resource, and the check action while checking the running status of a resource.

administrator managed

Database administrators define on which servers a database resource should run, and place resources manually as needed. This is the management strategy used in previous releases.

agent

A program that contains the agent framework and user code to manage resources.

agent framework

A C library that enables users to plug in application-specific code to manage customized applications.

availability directive

Instructions to Oracle Clusterware to reconfigure the system when a server leaves or joins a cluster.

cardinality

The number of servers on which a resource can run, simultaneously.

client cluster

In a multicluster environment, a client cluster advertises its names with the server cluster.

cluster

Multiple interconnected computers or servers that appear as if they are one server to end users and applications.

cluster-aware

Any application designed to be deployed using clusterware.

cluster administrator

An administrator who can manage a certain part of a cluster based on set policies and privileges.

cluster configuration policy

A document for policy-managed clusters, which contains exactly one definition for each server pool defined in the system.

cluster configuration policy set

A document that defines the names of all server pools configured in the cluster, which contains one or more configuration policies. Only one policy can be in effect at any one time, but administrators can set different policies to be in effect at different dates or times of the day in accordance with business needs and system demands.

Cluster Health Monitor

Detects and analyzes operating system and cluster resource-related degradation and failures.

cluster resource

A resource that is aware of the cluster environment and subject to cross-node switchover and failover.

Cluster Time Synchronization Service

A time synchronization mechanism that ensures that all internal clocks of all nodes in a cluster are synchronized.

Cluster Verification Utility (CVU)

A tool that verifies a wide range of Oracle RAC components such as shared storage devices, networking configurations, system requirements, Oracle Clusterware, groups, and users.

database cloud

A set of databases integrated by the global service and load management framework into a single virtual server that offers one or more global services, while ensuring high performance, availability, and optimal utilization of resources.

dependency

The relationship between two or more resources and the interaction expressed between them.

disk group

An Oracle ASM disk group is a collection of disks that Oracle ASM manages as a unit. Within a disk group, Oracle ASM exposes a file system interface for Oracle Database files. The content of files that is stored in a disk group is evenly distributed, or striped, to eliminate hot spots and to provide uniform performance across the disks. Oracle ASM files may also be optionally mirrored within a disk group. The performance of disks in a disk group is comparable to the performance of raw devices.

Dynamic Host Configuration Protocol (DHCP)

A network application protocol used by devices (DHCP clients) to obtain configuration information for operation in an Internet Protocol network. This protocol reduces system administration workload, allowing devices to be added to the network with little or no manual intervention.

fixup script

Oracle Universal Installer detects when minimum requirements for installation are not completed, and creates shell script programs, called fixup scripts, to resolve many incomplete system configuration requirements. If Oracle Universal Installer detects an incomplete task, it prompts you to create a fixup script and then to run the fixup script in a separate terminal session. You can also generate fixup scripts with certain CVU commands by using the -fixup flag.

gold image

A copy of a software-only, installed Oracle home, where the home-specific details are unavailable. This enables you to easily copy an image of an Oracle home to a new host on a new file system to serve as an active usable Oracle home.

A gold image can correspond, generally, to any application but in the context of this document, gold images correspond to Oracle Databases. Also, a gold image is not a ship home, which means that you do not have to run Oracle Universal Installer when deploying a gold image.

Grid Home client

The Grid Home client is a remote Oracle Grid Infrastructure cluster that subscribes to the Rapid Home Provisioning Server for provisioning of gold images.

Oracle Grid Infrastructure

The software that provides the infrastructure for an enterprise grid architecture. In a cluster this software includes Oracle Clusterware and Oracle ASM. For a standalone server, this software includes Oracle Restart and Oracle ASM. Oracle Database 12c combines these infrastructure products into one software installation called the Oracle Grid Infrastructure home (Grid_home).

Grid Naming Service (GNS)

A generic service which resolves the names of hosts in a delegated normal DNS zone by mapping them to IP addresses within the zone. GNS enables the use of Dynamic Host Configuration Protocol (DHCP) address for Oracle RAC database nodes, simplifying deployment. GNS also resolves host names passed back from a SCAN listener.

Hub anchor

A Hub Node in an Oracle Flex Cluster that acts as the connection point for purposes of cluster membership for one or more Leaf Nodes. Leaf Nodes exchange heartbeats with a single Hub anchor.

Hub Node

A node in an Oracle Flex Cluster that is tightly connected with other servers and has direct access to a shared disk.

image

Master copy of Oracle database home software which resides on the Rapid Home Provisioning Server.

IPv4

Internet Protocol Version 4. IPv4 is the current standard for the IP protocol. IPv4 uses 32-bit (four-byte) addresses, which are typically represented in dotted-decimal notation. The decimal value of each octet is separated by a period, as in 192.168.2.22.

IPv6

Internet Protocol Version 6. The protocol designed to replace IPv4. In IPv6, an IP address is typically represented in eight fields of hexadecimal values separated by colons, as in 2001:0DB8:0000:0000:0000:0000:1428:57AB. In some cases, fields with 0 values can be compressed, as in 2001:DB8::1428:57AB.

Leaf Node

Servers that are loosely coupled with Hub Nodes, which may not have direct access to the shared storage.

local resource

A resource that runs on the nodes of a cluster but is unaware of anything outside of the scope of the node.

OCR writer

Each CRSD process also acts as an OCR server. One of the CRSD processes in the cluster is the OCR server that performs I/O to the disk group or file, or block or raw device.

Oracle Automatic Storage Management (Oracle ASM)

Oracle ASM manages the storage disks used by Oracle Clusterware and Oracle RAC in disk groups. By default, Oracle Clusterware uses Oracle ASM to store OCR and voting files.

Oracle Cluster Registry (OCR)

The Oracle RAC configuration information repository that manages information about the cluster node list and instance-to-node mapping information. OCR also manages information about Oracle Clusterware resource profiles for customized applications.

Oracle Clusterware

Software that allows groups (clusters) of connected servers to operate or be controlled as a unit.

Oracle Clusterware stack

The Oracle Clusterware stack includes Oracle Cluster Ready Services, Event Manager, Cluster Synchronization Services, and Oracle ASM (if used).

Oracle Flex Cluster

Large clusters that are made of up of two types of nodes: Hub Nodes and Leaf Nodes, where the Hub Nodes form a cluster using current membership algorithms and Leaf Nodes connect for membership to a single Hub Node called a Hub anchor.

policy managed

Database administrators specify the server pool (excluding Generic or Free) in which the database resource runs. Oracle Clusterware places the database resource on a server.

Rapid Home Provisioning Server

The Rapid Home Provisioning Server is one server on the server cluster that can hold a repository of gold images.

resource

A database, application, or process managed by Oracle Clusterware.

resource state

The state of a particular resource at any given time that determines its availability to the cluster.

resource type

Defines whether a resource is either a cluster resource or a local resource.

server categorization

A method of grouping servers in the cluster into different categories by using a set of server attributes. Administrators can configure server pools to restrict which categories they accept. Server categories are created by providing attributes for the SERVER_CATEGORY parameter.

server cluster

In a multicluster environment, a server cluster is a cluster in which the Grid Naming Service (GNS) process runs.

server pool

A logical division of servers in a cluster into a group that hosts applications, databases, or both.

Single Client Access Name (SCAN)

A single name that resolves to three IP addresses in the public network.

start effort evaluation

The process that Oracle Clusterware goes through when it starts a resource. During this process, Oracle Clusterware considers resource dependencies contained in the profile of the resource.

state

A set of values that describes the condition of a particular resource.

working copy

An Oracle ACFS snapshot of a gold image that can either be a software-only installed Oracle home or an instantiated, configured Oracle home.


2 Administering Oracle Clusterware

This chapter describes how to administer Oracle Clusterware. It includes the following topics:

Role-Separated Management

This section includes the following topics:

About Role-Separated Management

Role-separated management is a feature you can implement that enables multiple applications and databases to share the same cluster and hardware resources, in a coordinated manner, by setting permissions on server pools or resources, to provide or restrict access to resources, as required. By default, this feature is not implemented during installation.

You can implement role-separated management in one of two ways:

  • Vertical implementation (between layers) describes a role separation approach based on different operating system users and groups used for various layers in the technology stack. Permissions on server pools and resources are granted to different users (and groups) for each layer in the stack using access control lists. Oracle Automatic Storage Management (ASM) offers setting up role separation as part of the Oracle Grid Infrastructure installation based on a granular assignment of operating system groups for specific roles.

  • Horizontal implementation (within one layer) describes a role separation approach that restricts resource access within one layer using access permissions for resources that are granted using access control lists assigned to server pools and policy-managed databases or applications.

For example, consider an operating system user called grid, with primary operating system group oinstall, that installs Oracle Grid Infrastructure and creates two database server pools. The operating system users ouser1 and ouser2 must be able to operate within a server pool, but should not be able to modify those server pools so that hardware resources can be withdrawn from other server pools either accidentally or intentionally.

You can configure server pools before you deploy database software and databases by configuring a respective policy set.

Role-separated management in Oracle Clusterware no longer depends on a cluster administrator (but backward compatibility is maintained). By default, the user that installed Oracle Clusterware in the Oracle Grid Infrastructure home (Grid home) and root are permanent cluster administrators. Primary group privileges (oinstall by default) enable database administrators to create databases in newly created server pools using the Database Configuration Assistant (DBCA), but do not enable role separation.


Note:

Oracle recommends that you enable role separation before you create the first server pool in the cluster. Create and manage server pools using configuration policies and a respective policy set. Access permissions are stored for each server pool in the ACL attribute, described in Table 3-1, "Server Pool Attributes".

Managing Cluster Administrators in the Cluster

The ability to create server pools in a cluster is limited to the cluster administrators. In prior releases, by default, every registered operating system user was considered a cluster administrator and, if necessary, the default could be changed using crsctl add | delete crs administrator commands. The use of these commands, however, is deprecated in this release and, instead, you should use the access control list (ACL) of the policy set to control the ability to create server pools.

As a rule, to have permission to create a server pool, the operating system user or an operating system group of which the user is a member must have the read, write, and execute permissions set in the ACL attribute. Use the crsctl modify policyset –attr "ACL=value" command to add or remove permissions for operating system users and groups.
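The permission rule above can be sketched as a policy set modification. This is a hypothetical fragment, assuming the ACL string format shown later in this section for server pools (owner, primary group, other, plus named users): grid, oinstall, and ouser1 are the example users and group from this chapter.

```shell
# Hypothetical: keep full control with the grid owner and oinstall group,
# grant read-only access to everyone else, and give ouser1 read and execute
# (but not write) permission, so ouser1 cannot create or modify server pools.
crsctl modify policyset \
  -attr "ACL='owner:grid:rwx,pgrp:oinstall:rwx,other::r--,user:ouser1:r-x'"
```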

Configuring Horizontal Role Separation

Use the crsctl setperm command to configure horizontal role separation using ACLs that are assigned to server pools, resources, or both. The CRSCTL utility is located in the path Grid_home/bin, where Grid_home is the Oracle Grid Infrastructure for a cluster home.

The command uses the following syntax, where the access control (ACL) string is indicated by italics:

crsctl setperm {resource | type | serverpool} name {-u acl_string | 
-x acl_string | -o user_name | -g group_name}

The flag options are:

  • -u: Update the entity ACL

  • -x: Delete the entity ACL

  • -o: Change the entity owner

  • -g: Change the entity primary group

The ACL strings are:

{ user:user_name[:readPermwritePermexecPerm]   |
     group:group_name[:readPermwritePermexecPerm] |
     other[::readPermwritePermexecPerm] }

where:

  • user: Designates the user ACL (access permissions granted to the designated user)

  • group: Designates the group ACL (permissions granted to the designated group members)

  • other: Designates the other ACL (access granted to users or groups not granted particular access permissions)

  • readperm: Location of the read permission (r grants permission and "-" forbids permission)

  • writeperm: Location of the write permission (w grants permission and "-" forbids permission)

  • execperm: Location of the execute permission (x grants permission, and "-" forbids permission)

For example, to set permissions on a server pool called psft for the group personnel, where the administrative user has read/write/execute privileges, the members of the personnel group have read/write privileges, and users outside of the group are granted no access, enter the following command as the root user:

# crsctl setperm serverpool psft -u user:personadmin:rwx,group:personnel:rw-,
  other::---

Overview of Grid Naming Service

Review the following sections to use Grid Naming Service (GNS) for address resolution:

Network Administration Tasks for GNS and GNS Virtual IP Address

To implement GNS, your network administrator must configure the DNS to set up a domain for the cluster, and delegate resolution of that domain to the GNS VIP. You can use a separate domain, or you can create a subdomain of an existing domain for the cluster.

GNS distinguishes between nodes by using cluster names and individual node identifiers as part of the host name for that cluster node, so that cluster node 123 in cluster A is distinguishable from cluster node 123 in cluster B.

However, if you configure host names manually, then the subdomain you delegate to GNS should have no subdomains. For example, if you delegate the subdomain mydomain.example.com to GNS for resolution, then there should be no other domains under mydomain.example.com. Oracle recommends that you delegate a subdomain to GNS that is used by GNS exclusively.


Note:

You can use GNS without DNS delegation in configurations where static addressing is being done, such as in Oracle Flex ASM or Oracle Flex Clusters. However, GNS requires a domain be delegated to it if addresses are assigned using DHCP.

Example 2-1 shows DNS entries required to delegate a domain called myclustergns.example.com to a GNS VIP address 10.9.8.7:

Example 2-1 DNS Entries

# Delegate to gns on mycluster
mycluster.example.com NS myclustergns.example.com
#Let the world know to go to the GNS vip
myclustergns.example.com. A 10.9.8.7

See Also:

Oracle Grid Infrastructure Installation Guide for more information about network domains and delegation for GNS
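After the network administrator creates the delegation, you can sanity-check it from any client with standard DNS tools. This is a hedged sketch using the example names above; your-dns-server is a placeholder for your corporate DNS server.

```shell
# Confirm the corporate DNS delegates the cluster subdomain to the GNS host.
dig @your-dns-server mycluster.example.com NS

# Confirm the GNS VIP name resolves to the expected address (10.9.8.7 above).
nslookup myclustergns.example.com
```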

The GNS daemon and the GNS VIP run on one node in the server cluster. The GNS daemon listens on the GNS VIP using port 53 for DNS requests. Oracle Clusterware manages the GNS daemon and the GNS VIP to ensure that they are always available. If the server on which the GNS daemon is running fails, then Oracle Clusterware fails over the GNS daemon and the GNS VIP to a surviving cluster member node. If the cluster is an Oracle Flex Cluster configuration, then Oracle Clusterware fails over the GNS daemon and the GNS VIP to a Hub Node.


Note:

Oracle Clusterware does not fail over GNS addresses to different clusters. Failovers occur only to members of the same cluster.


See Also:

Chapter 4, "Oracle Flex Clusters" for more information about Oracle Flex Clusters and GNS

Understanding Grid Naming Service Configuration Options

GNS can run in either automatic or standard cluster address configuration mode. Automatic configuration uses either the Dynamic Host Configuration Protocol (DHCP) for IPv4 addresses or the Stateless Address Autoconfiguration Protocol (autoconfig) (RFC 2462 and RFC 4862) for IPv6 addresses.

This section includes the following topics:

Automatic Configuration Option for Addresses

With automatic configurations, a DNS administrator delegates a domain on the DNS to be resolved through the GNS subdomain. During installation, Oracle Universal Installer assigns names for each cluster member node interface designated for Oracle Grid Infrastructure use during installation or configuration. SCANs and all other cluster names and addresses are resolved within the cluster, rather than on the DNS.

Automatic configuration occurs in one of the following ways:

  • For IPv4 addresses, Oracle Clusterware assigns unique identifiers for each cluster member node interface allocated for Oracle Grid Infrastructure, and generates names using these identifiers within the subdomain delegated to GNS. A DHCP server assigns addresses to these interfaces, and GNS maintains address and name associations with the IPv4 addresses leased from the IPv4 DHCP pool.

  • For IPv6 addresses, Oracle Clusterware automatically generates addresses with autoconfig.

Static Configuration Option for Addresses

With static configurations, no subdomain is delegated. A DNS administrator configures the GNS VIP to resolve to a name and address configured on the DNS, and configures a SCAN name to resolve to three static addresses for the cluster. The DNS administrator also configures a static public IP name and address, and a virtual IP name and address, for each cluster member node, and must configure new public and virtual IP names and addresses for each node added to the cluster. All names and addresses are resolved by DNS.

GNS without subdomain delegation using static VIP addresses and SCANs enables Oracle Flex Cluster and CloudFS features that require name resolution information within the cluster. However, any node additions or changes must be carried out as manual administration tasks.

Shared GNS Option for Addresses

With dynamic configurations, you can configure GNS to provide name resolution for one cluster, or to advertise resolution for multiple clusters, so that a single GNS instance can perform name resolution for multiple registered clusters. This option is called shared GNS.


Note:

All of the node names in a set of clusters served by GNS must be unique.

Shared GNS provides the same services as standard GNS, and appears the same to clients receiving name resolution. The difference is that the GNS daemon running on one cluster is configured to provide name resolution for all clusters in domains that are delegated to GNS for resolution, and GNS can be centrally managed using SRVCTL commands. You can use shared GNS configuration to minimize network administration tasks across the enterprise for Oracle Grid Infrastructure clusters.

You cannot use the static address configuration option for a cluster providing shared GNS to resolve addresses in a multi-cluster environment. Shared GNS requires automatic address configuration, either through addresses assigned by DHCP, or by IPv6 stateless address autoconfiguration.

Oracle Universal Installer enables you to configure static addresses with GNS for shared GNS clients or servers, with GNS used for discovery.

Configuring Oracle Grid Infrastructure Using Configuration Wizard

After performing a software-only installation of Oracle Grid Infrastructure, you can configure the software using the Configuration Wizard. This wizard assists you with editing the crsconfig_params configuration file. Like the Oracle Grid Infrastructure installer, the Configuration Wizard validates the Grid home and your inputs both before and after you run through the wizard.

Using the Configuration Wizard, you can configure a new Oracle Grid Infrastructure on one or more nodes, or configure an upgraded Oracle Grid Infrastructure. You can also run the Configuration Wizard in silent mode.


Notes:

  • Before running the Configuration Wizard, ensure that the Oracle Grid Infrastructure home is current, with all necessary patches applied.

  • To launch the Configuration Wizard in the following procedures:

    On Linux and UNIX, run the following command:

    Oracle_home/crs/config/config.sh
    

    On Windows, run the following command:

    Oracle_home\crs\config\config.bat
    

This section includes the following topics:

Configuring a Single Node

To use the Configuration Wizard to configure a single node:

  1. Start the Configuration Wizard, as follows:

    $ Oracle_home/crs/config/config.sh
    
  2. On the Select Installation Option page, select Configure Oracle Grid Infrastructure for a Cluster.

  3. On the Cluster Node Information page, select only the local node and corresponding VIP name.

  4. Continue adding your information on the remaining wizard pages.

  5. Review your inputs on the Summary page and click Finish.

  6. Run the root.sh script as instructed by the Configuration Wizard.

Configuring Multiple Nodes

To use the Configuration Wizard to configure multiple nodes:

  1. Start the Configuration Wizard, as follows:

    $ Oracle_home/crs/config/config.sh
    
  2. On the Select Installation Option page, select Configure Oracle Grid Infrastructure for a Cluster.

  3. On the Cluster Node Information page, select the nodes you want to configure and their corresponding VIP names. The Configuration Wizard validates the nodes you select to ensure that they are ready.

  4. Continue adding your information on the remaining wizard pages.

  5. Review your inputs on the Summary page and click Finish.

  6. Run the root.sh script as instructed by the Configuration Wizard.

Upgrading Oracle Grid Infrastructure

To use the Configuration Wizard to upgrade Oracle Grid Infrastructure for a cluster:

  1. Start the Configuration Wizard:

    $ Oracle_home/crs/config/config.sh
    
  2. On the Select Installation Option page, select Upgrade Oracle Grid Infrastructure.

  3. On the Oracle Grid Infrastructure Node Selection page, select the nodes you want to upgrade.

  4. Continue adding your information on the remaining wizard pages.

  5. Review your inputs on the Summary page and click Finish.

  6. Run the rootupgrade.sh script as instructed by the Configuration Wizard.


Note:

Oracle Restart cannot be upgraded using the Configuration Wizard.


See Also:

Oracle Database Installation Guide for your platform for Oracle Restart procedures

Running the Configuration Wizard in Silent Mode

To use the Configuration Wizard in silent mode to configure or upgrade nodes, start the Configuration Wizard from the command line with -silent -responseFile file_name. The wizard validates the response file and proceeds with the configuration. If any of the inputs in the response file are found to be invalid, then the Configuration Wizard displays an error and exits. Run the root and configToolAllCommands scripts as prompted.
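As a sketch, the silent-mode invocation can be assembled as follows. The Grid home and response file paths are hypothetical placeholders; substitute your own, and run the resulting command on a cluster node as the Grid user:

```shell
# Hypothetical locations -- adjust for your environment.
GRID_HOME=/u01/app/12.1.0/grid
RESPONSE_FILE=/u01/stage/grid_config.rsp

# Build the silent-mode command line and echo it for review before
# running it for real.
CMD="$GRID_HOME/crs/config/config.sh -silent -responseFile $RESPONSE_FILE"
echo "$CMD"
```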

Configuring IPMI for Failure Isolation

This section contains the following topics:

About Using IPMI for Failure Isolation

Failure isolation is a process by which a failed node is isolated from the rest of the cluster to prevent the failed node from corrupting data. The ideal fencing involves an external mechanism capable of restarting a problem node without cooperation either from Oracle Clusterware or from the operating system running on that node. To provide this capability, Oracle Clusterware 12c supports the Intelligent Platform Management Interface (IPMI) specification, an industry-standard management protocol implemented by a server's Baseboard Management Controller (BMC).

Typically, you configure failure isolation using IPMI during Oracle Grid Infrastructure installation, when you are provided with the option of configuring IPMI from the Failure Isolation Support screen. If you do not configure IPMI during installation, then you can configure it after installation using the Oracle Clusterware Control utility (CRSCTL), as described in "Postinstallation Configuration of IPMI-based Failure Isolation Using CRSCTL".

To use IPMI for failure isolation, each cluster member node must be equipped with an IPMI device running firmware compatible with IPMI version 1.5, which supports IPMI over a local area network (LAN). During database operation, failure isolation is accomplished by communication from the evicting Cluster Synchronization Services daemon to the failed node's IPMI device over the LAN. The IPMI-over-LAN protocol is carried over an authenticated session protected by a user name and password, which are obtained from the administrator during installation.

To support dynamic IP address assignment for IPMI using DHCP, the Cluster Synchronization Services daemon requires direct communication with the local IPMI device during Cluster Synchronization Services startup to obtain the IP address of the IPMI device. (This is not true for HP-UX and Solaris platforms, however, which require that the IPMI device be assigned a static IP address.) This is accomplished using an IPMI probe command (OSD), which communicates with the IPMI device through an IPMI driver, which you must install on each cluster system.

If you assign a static IP address to the IPMI device, then the IPMI driver is not strictly required by the Cluster Synchronization Services daemon. The driver is still required, however, if you use ipmitool or ipmiutil to configure the IPMI device, although you can also do this with management consoles on some platforms.

Configuring Server Hardware for IPMI

Install and enable the IPMI driver, and configure the IPMI device, as described in the Oracle Grid Infrastructure Installation Guide for your platform.

Postinstallation Configuration of IPMI-based Failure Isolation Using CRSCTL

This section contains the following topics:

IPMI Postinstallation Configuration with Oracle Clusterware

When you install IPMI during Oracle Clusterware installation, you configure failure isolation in two phases. Before you start the installation, you install and enable the IPMI driver in the server operating system, and configure the IPMI hardware on each node (IP address mode, admin credentials, and so on), as described in Oracle Grid Infrastructure Installation Guide. When you install Oracle Clusterware, the installer collects the IPMI administrator user ID and password, and stores them in an Oracle Wallet in node-local storage, in OLR.

After you complete the server configuration, complete the following procedure on each cluster node to register IPMI administrators and passwords on the nodes.


Note:

If IPMI is configured to obtain its IP address using DHCP, it may be necessary to reset IPMI or restart the node to cause it to obtain an address.

  1. Start Oracle Clusterware, which allows it to obtain the current IP address from IPMI. This confirms the ability of the clusterware to communicate with IPMI, which is necessary at startup.

    If Oracle Clusterware was running before IPMI was configured, you can shut Oracle Clusterware down and restart it. Alternatively, you can use the IPMI management utility to obtain the IPMI IP address and then use CRSCTL to store the IP address in OLR by running a command similar to the following:

    crsctl set css ipmiaddr 192.168.10.45
    
  2. Use CRSCTL to store the previously established user ID and password for the resident IPMI in OLR by running the crsctl set css ipmiadmin command, and supplying the password at the prompt. For example:

    crsctl set css ipmiadmin administrator_name
    IPMI BMC password: password
    

    This command validates the supplied credentials and fails if another cluster node cannot access the local IPMI using them.

    After you complete hardware and operating system configuration, and register the IPMI administrator on Oracle Clusterware, IPMI-based failure isolation should be fully functional.
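The two registration steps above can be collected into a small dry-run script that only prints what would be run. The address and administrator name are placeholders; run the real commands as the Grid user on each cluster node:

```shell
# Placeholder values -- substitute your node's IPMI address and admin ID.
IPMI_ADDR=192.168.10.45
IPMI_ADMIN=administrator_name

# Dry run: echo each CRSCTL command instead of executing it, so the
# sequence can be reviewed before running on a live cluster node.
run() { echo "$*"; }

run crsctl set css ipmiaddr "$IPMI_ADDR"    # step 1: store the IPMI IP address in OLR
run crsctl set css ipmiadmin "$IPMI_ADMIN"  # step 2: store admin credentials (prompts for a password)
```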

Modifying IPMI Configuration Using CRSCTL

To modify an existing IPMI-based failure isolation configuration (for example, to change IPMI passwords, or to configure IPMI for failure isolation in an existing installation), use CRSCTL with the IPMI configuration tool appropriate to your platform. For example, to change the administrator password for IPMI, you must first modify the IPMI configuration as described in Oracle Grid Infrastructure Installation Guide, and then use CRSCTL to change the password in OLR.

The configuration data needed by Oracle Clusterware for IPMI is kept in an Oracle Wallet in OLR. Because the configuration information is kept in a secure store, it must be written by the Oracle Clusterware installation owner account (the Grid user), so you must log in as that installation user.

Use the following procedure to modify an existing IPMI configuration:

  1. Enter the crsctl set css ipmiadmin administrator_name command. For example, with the user IPMIadm:

    crsctl set css ipmiadmin IPMIadm
    

    Provide the administrator password. Oracle Clusterware stores the administrator name and password for the local IPMI in OLR.

    After storing the new credentials, Oracle Clusterware can retrieve the new credentials and distribute them as required.

  2. Enter the crsctl set css ipmiaddr bmc_ip_address command. For example:

    crsctl set css ipmiaddr 192.0.2.244
    

    This command stores the new IPMI IP address of the local IPMI in OLR. After storing the IP address, Oracle Clusterware can retrieve the new configuration and distribute it as required.

  3. Enter the crsctl get css ipmiaddr command. For example:

    crsctl get css ipmiaddr
    

    This command retrieves the IP address for the local IPMI from OLR and displays it on the console.

  4. Remove the IPMI configuration information for the local IPMI from OLR and delete the registry entry, as follows:

    crsctl unset css ipmiconfig
    

See Also:

"Oracle RAC Environment CRSCTL Commands" for descriptions of these CRSCTL commands

Removing IPMI Configuration Using CRSCTL

You can remove an IPMI configuration from a cluster using CRSCTL if you want to stop using IPMI completely or if IPMI was initially configured by someone other than the user that installed Oracle Clusterware. If the latter is true, then Oracle Clusterware cannot access the IPMI configuration data and IPMI is not usable by the Oracle Clusterware software, and you must reconfigure IPMI as the user that installed Oracle Clusterware.

To completely remove IPMI, perform the following steps. To reconfigure IPMI as the user that installed Oracle Clusterware, perform steps 3 and 4, then repeat steps 2 and 3 in "Modifying IPMI Configuration Using CRSCTL".

  1. Disable the IPMI driver and eliminate the boot-time installation, as follows:

    /sbin/modprobe -r
    

    See Also:

    Oracle Grid Infrastructure Installation Guide for your platform for more information about the IPMI driver

  2. Disable IPMI-over-LAN for the local IPMI using either ipmitool or ipmiutil, to prevent access over the LAN or change the IPMI administrator user ID and password.

  3. Ensure that Oracle Clusterware is running and then use CRSCTL to remove the IPMI configuration data from OLR by running the following command:

    crsctl unset css ipmiconfig
    
  4. Restart Oracle Clusterware so that it runs without the IPMI configuration by running the following commands as root:

    # crsctl stop crs
    # crsctl start crs
    

Understanding Network Addresses on Manually Configured Networks

This section contains the following topics:

Understanding Network Address Configuration Requirements

An Oracle Clusterware configuration requires at least two interfaces:

  • A public network interface, on which users and application servers connect to access data on the database server

  • A private network interface for internode communication

You can configure a network interface for either IPv4, IPv6, or both types of addresses on a given network. If you use redundant network interfaces (bonded or teamed interfaces), then be aware that Oracle does not support configuring one interface to support IPv4 addresses and the other to support IPv6 addresses. You must configure network interfaces of a redundant interface pair with the same IP protocol.

All the nodes in the cluster must use the same IP protocol configuration. Either all the nodes use only IPv4, or all the nodes use only IPv6, or all the nodes use both IPv4 and IPv6. You cannot have some nodes in the cluster configured to support only IPv6 addresses, and other nodes in the cluster configured to support only IPv4 addresses.

The VIP agent supports the generation of IPv6 addresses using the Stateless Address Autoconfiguration Protocol (RFC 2462), and advertises these addresses with GNS. Run the srvctl config network command to determine if DHCP or stateless address autoconfiguration is being used.

This section includes the following topics:

About IPv6 Address Formats

Each node in an Oracle Grid Infrastructure cluster can support both IPv4 and IPv6 addresses on the same network. The preferred IPv6 address format is as follows, where each x represents a hexadecimal character:

xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx

The IPv6 address format is defined by RFC 2460, and Oracle Grid Infrastructure supports IPv6 addresses as follows:

  • Global and unique local IPv6 addresses as defined by RFC 4193.


    Note:

    Link-local and site-local IPv6 addresses as defined in RFC 1884 are not supported.

  • The leading zeros compressed in each field of the IP address.

  • Empty fields collapsed and represented by a '::' separator. For example, you could write the IPv6 address 2001:0db8:0000:0000:0000:8a2e:0370:7334 as 2001:db8::8a2e:370:7334.

  • The four lower order fields containing 8-bit pieces (standard IPv4 address format). For example 2001:db8:122:344::192.0.2.33.
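The compression rules above can be checked mechanically. The sketch below assumes a system where Python 3 (with its standard ipaddress module) is available from the shell; it confirms that the full and compressed spellings from the text denote the same address:

```shell
# Full and compressed spellings of the same IPv6 address from the text.
FULL=2001:0db8:0000:0000:0000:8a2e:0370:7334
SHORT=2001:db8::8a2e:370:7334

# ipaddress parses both forms and normalizes to the compressed spelling.
python3 -c "
import ipaddress
a = ipaddress.ip_address('$FULL')
b = ipaddress.ip_address('$SHORT')
assert a == b            # same 128-bit address
print(a.compressed)      # canonical compressed form
"
```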

Name Resolution and the Network Resource Address Type

You can review the network configuration using the srvctl config network command (for the configuration) or the srvctl status network command (for the current addresses allocated on dynamic networks), and control the network address type using the srvctl modify network -iptype command.

You can configure how addresses are acquired using the srvctl modify network -nettype command. Set the value of the -nettype parameter to dhcp or static to control how IPv4 network addresses are acquired. Alternatively, set the value of the -nettype parameter to autoconfig or static to control how IPv6 addresses are generated.

The -nettype and -iptype parameters are not directly related but you can use -nettype dhcp with -iptype ipv4 and -nettype autoconfig with -iptype ipv6.


Note:

If a network is configured with both IPv4 and IPv6 subnets, then Oracle does not support both subnets having -nettype set to mixed.

Oracle does not support making transitions from IPv4 to IPv6 while -nettype is set to mixed. You must first finish the transition from static to dhcp before you add IPv6 into the subnet.

Similarly, Oracle does not support starting a transition to IPv4 from IPv6 while -nettype is set to mixed. You must first finish the transition from autoconfig to static before you add IPv4 into the subnet.



Understanding SCAN Addresses and Client Service Connections

Public network addresses are used to provide services to clients. If your clients are connecting to the Single Client Access Name (SCAN) addresses, then you may need to change public and virtual IP addresses as you add or remove nodes from the cluster, but you do not need to update clients with new cluster addresses.


Note:

You can edit the listener.ora file to make modifications to the Oracle Net listener parameters for SCAN and the node listener. For example, you can set TRACE_LEVEL_listener_name. However, you cannot set protocol address parameters to define listening endpoints, because the listener agent dynamically manages them.


See Also:

Oracle Database Net Services Reference for more information about editing the listener.ora file

SCANs function like a cluster alias. However, SCANs are resolved on any node in the cluster, so unlike a VIP address for a node, clients connecting to the SCAN no longer require updated VIP addresses as nodes are added to or removed from the cluster. Because the SCAN addresses resolve to the cluster, rather than to a node address in the cluster, nodes can be added to or removed from the cluster without affecting the SCAN address configuration.

The SCAN is a fully qualified name (host name and domain) that is configured to resolve to all the addresses allocated for the SCAN. The SCAN resolves to one of the three addresses configured for the SCAN name on the DNS server, or resolves within the cluster in a GNS configuration. SCAN listeners can run on any node in the cluster. SCANs provide location independence for the databases, so that client configuration does not have to depend on which nodes run a particular database.

Oracle Database 11g release 2 (11.2) and later instances register with SCAN listeners only as remote listeners. Upgraded databases register with SCAN listeners as remote listeners, and also continue to register with all node listeners.


Note:

Oracle Clusterware installation requires that you provide a SCAN name during installation. If you do not have the infrastructure required for SCAN and you bypassed this requirement by resolving at least one IP address using the server /etc/hosts file, then after the installation you can ignore the SCAN and connect to the databases in the cluster using VIPs.

Oracle does not support removing the SCAN address.


SCAN Listeners and Service Registration Restriction With Valid Node Checking

You can use valid node checking to specify the nodes and subnets from which the SCAN listener accepts registrations. You can specify the nodes and subnet information using SRVCTL. SRVCTL stores the node and subnet information in the SCAN listener resource profile. The SCAN listener agent reads that information from the resource profile and writes it to the listener.ora file.

For non-cluster (single-instance) databases, the local listener accepts service registrations only from database instances on the local node. Oracle RAC releases before Oracle RAC 11g release 2 (11.2) do not use SCAN listeners, and attempt to register their services with the local listener and the listeners defined by the REMOTE_LISTENERS initialization parameter. To support service registration for these database instances, the default value of valid_node_check_for_registration_alias for the local listener in Oracle RAC 12c is set to the value SUBNET, rather than to the local node. To change the valid node checking settings for the node listeners, edit the listener.ora file.

SCAN listeners must accept service registration from instances on remote nodes. For SCAN listeners, the value of valid_node_check_for_registration_alias is set to SUBNET in the listener.ora file so that the corresponding listener can accept service registrations that originate from the same subnet.
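For illustration only, the entry that the SCAN listener agent writes resembles the following listener.ora fragment, assuming a hypothetical SCAN listener alias of LISTENER_SCAN1. Do not edit SCAN listener entries by hand; the agent manages them:

```text
# listener.ora fragment written by the SCAN listener agent (illustrative)
VALID_NODE_CHECK_FOR_REGISTRATION_LISTENER_SCAN1 = SUBNET
```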

You can configure the listeners to accept service registrations from a different subnet. For example, you might want to configure this environment when SCAN listeners share with instances on different clusters, and nodes in those clusters are on a different subnet. Run the srvctl modify scan_listener -invitednodes -invitedsubnets command to include the nodes in this environment.

You must also run the srvctl modify nodeapps -remoteservers host:port,... command to connect the Oracle Notification Service networks of this cluster and the cluster with the invited instances.
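Together, the two commands above might be issued as follows for a hypothetical remote cluster. The node names, subnet, host name, and port are placeholders; run the real commands as the Grid user on the cluster that hosts the SCAN listeners:

```shell
# Placeholders for a hypothetical remote cluster on another subnet.
CMD1="srvctl modify scan_listener -invitednodes node3,node4 -invitedsubnets 192.0.2.0/24"
CMD2="srvctl modify nodeapps -remoteservers node3.example.com:6200"

# Print the commands for review rather than executing them here.
printf '%s\n' "$CMD1" "$CMD2"
```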


Administering Grid Naming Service

Use SRVCTL to administer Grid Naming Service (GNS) in both single-cluster and multi-cluster environments.

This section includes the following topics:


Note:

The GNS server and client must run on computers using the same operating system and processor architecture. Oracle does not support running GNS on computers with different operating systems, processor architectures, or both.


See Also:

Oracle Real Application Clusters Administration and Deployment Guide for usage information for the SRVCTL commands used in the procedures described in this section

Starting and Stopping GNS with SRVCTL

Start and stop GNS on the server cluster by running the following commands as root, respectively:

# srvctl start gns
# srvctl stop gns

Note:

You cannot start or stop GNS on a client cluster.

Converting Clusters to GNS Server or GNS Client Clusters

You can convert clusters that are not running GNS into GNS server or client clusters, and you can change GNS cluster type configurations for server and client clusters.

This section includes the following cluster conversion scenarios:

Converting a Non-GNS Cluster to a GNS Server Cluster

To convert a cluster that is not running GNS to a GNS server cluster, run the following command as root, providing a valid IP address and a domain:

# srvctl add gns -vip IP_address -domain domain

Notes:

  • Specifying a domain is not required when adding a GNS VIP.

  • The IP address you specify cannot currently be used by another GNS instance.

  • The configured cluster must have DNS delegation for it to be a GNS server cluster.


Converting a Non-GNS Cluster to a Client Cluster

To convert a cluster that is not running GNS to a GNS client cluster:

  1. Log in as root and run the following command in the server cluster to export the GNS instance client data configuration to a file:

    # srvctl export gns -clientdata path_to_file
    

    You must specify the fully-qualified path to the file.


    Note:

    You can use the GNS configuration Client Data file you generate with Oracle Universal Installer as an input file for creating shared GNS clients.

  2. Import the file you created in the preceding step on a node in the cluster to make that cluster a client cluster by running the following command as root:

    # srvctl add gns -clientdata path_to_file
    

    Note:

    You must copy the file containing the GNS data from the server cluster to a node in the cluster where you run this command.

  3. Change the SCAN name, as follows:

    $ srvctl modify scan -scanname scan.client_clustername.server_GNS_subdomain
    
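The three steps above can be collected into a dry-run sketch. The file path and SCAN name are illustrative; the export runs on the server cluster, and the add and modify commands run on the new client cluster:

```shell
# Illustrative values -- substitute your own path and cluster names.
CLIENT_DATA=/tmp/gns_client_data.xml
SCAN_NAME=scan.cluster01.gns.example.com

# Dry run: print each command; execute the export and add as root and
# the scan modification as the Grid user, on the appropriate clusters.
step() { echo "$*"; }

step srvctl export gns -clientdata "$CLIENT_DATA"   # on the GNS server cluster
step srvctl add gns -clientdata "$CLIENT_DATA"      # on the new client cluster
step srvctl modify scan -scanname "$SCAN_NAME"      # on the new client cluster
```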

Converting a Single Cluster Running GNS to a Server Cluster

You do not need to do anything to convert a single cluster running GNS to be a GNS server cluster. It is automatically considered to be a server cluster when a client cluster is added.

Converting a Single Cluster Running GNS to be a GNS Client Cluster

Because it is necessary to stay connected to the current GNS during this conversion process, the procedure is more involved than that of converting a single cluster to a server cluster.

  1. Run the following command as root in the server cluster to export the GNS client information to a file:

    # srvctl export gns -clientdata path_to_client_data_file
    

    You must specify the fully-qualified path to the file.

  2. Stop GNS on the cluster you want to convert to a client cluster.

    # srvctl stop gns
    

    Note:

    While the conversion is in progress, name resolution using GNS will be unavailable.

  3. Run the following command as root in the server cluster to export the GNS instance:

    # srvctl export gns -instance path_to_file
    

    You must specify the fully-qualified path to the file.

  4. Run the following command as root in the server cluster to import the GNS instance file:

    # srvctl import gns -instance path_to_file
    

    You must specify the fully-qualified path to the file.

  5. Run the following command as root on the node where you imported the GNS instance file to start the GNS instance:

    # srvctl start gns
    

    If you do not specify the name of the node on which to start the GNS instance, then the instance starts on a random node.

  6. Remove GNS from the GNS client cluster using the following command:

    # srvctl remove gns
    
  7. Make the former cluster a client cluster, as follows:

    # srvctl add gns -clientdata path_to_client_data_file
    

    Note:

    You must copy the file containing the GNS data from the server cluster to a node in the cluster where you run this command.

  8. Modify the SCAN in the GNS client cluster to use the GNS subdomain qualified with the client cluster name, as follows:

    srvctl modify scan -scanname scan_name.gns_domain
    

    In the preceding command, gns_domain is in the form client_clustername.server_GNS_subdomain.

Moving GNS to Another Cluster


Note:

This procedure requires server cluster and client cluster downtime. Additionally, you must import GNS client data from the new server cluster to any Oracle Flex ASM and Grid Home servers and clients.

If it becomes necessary to make another cluster the GNS server cluster, either because of a cluster failure or because of an administration plan, then you can move GNS to another cluster using the following procedure:

  1. Stop the GNS instance on the current server cluster.

    # srvctl stop gns
    
  2. Export the GNS instance configuration to a file.

    # srvctl export gns -instance path_to_file
    

    Specify the fully-qualified path to the file.

  3. Remove the GNS configuration from the former server cluster.

    # srvctl remove gns
    
  4. Add GNS to the new cluster.

    # srvctl add gns -domain domain_name -vip vip_name
    

    Alternatively, you can specify an IP address for the VIP.

  5. Configure the GNS instance in the new server cluster using the instance information stored in the file you created in step 2, by importing the file, as follows:

    # srvctl import gns -instance path_to_file
    

    Note:

    The file containing the GNS data from the former server cluster must reside on the node in the cluster where you run the srvctl import gns command.

  6. Start the GNS instance in the new server cluster.

    # srvctl start gns
    

Rolling Conversion from DNS to GNS Cluster Name Resolution

You can convert Oracle Grid Infrastructure cluster networks that use DNS for name resolution into cluster networks that obtain name resolution through Grid Naming Service (GNS).

Use the following procedure to convert from a standard DNS name resolution network to a GNS name resolution network, with no downtime:


See Also:

Oracle Grid Infrastructure Installation Guide for your platform to complete preinstallation steps for configuring GNS

  1. Log in as the Grid user (grid), and use the following Cluster Verification Utility to check the status for moving the cluster to GNS, where nodelist is a comma-delimited list of cluster member nodes:

    $ cluvfy stage -pre crsinst -n nodelist
    
  2. As the Grid user, check the integrity of the GNS configuration using the following commands, where domain is the domain delegated to GNS for resolution, and gns_vip is the GNS VIP:

    $ cluvfy comp gns -precrsinst -domain domain -vip gns_vip
    
  3. Log in as root, and use the following SRVCTL command to configure the GNS resource, where domain_name is the domain that your network administrator has configured your DNS to delegate for resolution to GNS, and ip_address is the IP address on which GNS listens for DNS requests:

    # srvctl add gns -domain domain_name -vip ip_address
    
  4. Use the following command to start GNS:

    # srvctl start gns
    

    GNS starts and registers VIP and SCAN names.

  5. As root, use the following command to change the network CRS resource to support a mixed mode of static and DHCP network addresses:

    # srvctl modify network -nettype MIXED
    

    The necessary VIP addresses are obtained from the DHCP server, and brought up.

  6. As the Grid user, enter the following command to ensure that Oracle Clusterware is using the new GNS, dynamic addresses, and listener end points:

    cluvfy stage -post crsinst -n all
    
  7. After the verification succeeds, change the remote endpoints that previously used the SCAN or VIPs resolved through the DNS to use the SCAN and VIPs resolved through GNS.

    For each client using a SCAN, change the SCAN that the client uses so that the client uses the SCAN in the domain delegated to GNS.

    For each client using VIP names, change the VIP name on each client so that they use the same server VIP name, but with the domain name in the domain delegated to GNS.

  8. Enter the following command as root to update the system with the SCAN name in the GNS subdomain:

    # srvctl modify scan -scanname scan_name.gns_domain
    

    In the preceding command syntax, gns_domain is the domain name you entered in step 3 of this procedure.
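
The conversion steps above can be sketched as a condensed dry run. The `run` helper only prints each command here; remove the `echo` to execute for real. The delegated domain, GNS VIP, node list, and SCAN name are placeholders, not values from any real cluster.

```shell
# Dry-run sketch of the rolling DNS-to-GNS conversion. All names and
# addresses below are illustrative assumptions.
run() { echo "$@"; }   # print instead of execute

plan=$(
  run cluvfy stage -pre crsinst -n node1,node2
  run srvctl add gns -domain cluster.example.com -vip 192.0.2.10
  run srvctl start gns
  run srvctl modify network -nettype MIXED
  run srvctl modify scan -scanname myscan.cluster.example.com
)
printf '%s\n' "$plan"
```

Reviewing the printed plan before execution is useful because several of these commands must run as root and cannot easily be undone mid-procedure.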

Changing Network Addresses on Manually Configured Systems

This section includes the following topics:

Changing the Virtual IP Addresses Using SRVCTL

Clients configured to use public VIP addresses for Oracle Database releases before Oracle Database 11g release 2 (11.2) can continue to use their existing connection addresses. Oracle recommends that you configure clients to use SCANs, but you are not required to use SCANs. When an earlier version of Oracle Database is upgraded, it is registered with the SCAN, and clients can start using the SCAN to connect to that database, or continue to use VIP addresses for connections.

If you continue to use VIP addresses for client connections, you can modify the VIP address while Oracle Database and Oracle ASM continue to run. However, you must stop services while you modify the address. When you restart the VIP address, services are also restarted on the node.

You cannot use this procedure to change a static public subnet to use DHCP. Only the srvctl add network -subnet command creates a DHCP network.


See Also:

Oracle Real Application Clusters Administration and Deployment Guide for more information about the srvctl add network command


Note:

The following instructions describe how to change only a VIP address, and assume that the host name associated with the VIP address does not change. Note that you do not need to update VIP addresses manually if you are using GNS, and VIPs are assigned using DHCP.

If you are changing only the VIP address, then update the DNS and the client hosts files. Also, update the server hosts files, if those are used for VIP addresses.


Perform the following steps to change a VIP address:

  1. Stop all services running on the node whose VIP address you want to change using the following command syntax, where database_name is the name of the database, service_name_list is a list of the services you want to stop, and node_name is the name of the node whose VIP address you want to change:

    srvctl stop service -db database_name -service "service_name_list" -node node_name
    

    The following example specifies the database name (grid) using the -db option and specifies the services (sales,oltp) on the appropriate node (mynode).

    $ srvctl stop service -db grid -service "sales,oltp" -node mynode
    
  2. Confirm the current IP address for the VIP address by running the srvctl config vip command. This command displays the current VIP address bound to one of the network interfaces. The following example displays the configured VIP address for a VIP named node03-vip:

    $ srvctl config vip -vipname node03-vip
    VIP exists: /node03-vip/192.168.2.20/255.255.255.0/eth0
    
  3. Stop the VIP resource using the srvctl stop vip command:

    $ srvctl stop vip -node node_name
    
  4. Verify that the VIP resource is no longer running by running the ifconfig -a command on Linux and UNIX systems (or issue the ipconfig /all command on Windows systems), and confirm that the interface (in the example it was eth0:1) is no longer listed in the output.

  5. Make any changes necessary to the /etc/hosts files on all nodes on Linux and UNIX systems, or the %windir%\system32\drivers\etc\hosts file on Windows systems, and make any necessary DNS changes to associate the new IP address with the old host name.

  6. To use a different subnet or network interface card for the default network before you change any VIP resource, you must use the srvctl modify network -subnet subnet/netmask/interface command as root to change the network resource, where subnet is the new subnet address, netmask is the new netmask, and interface is the new interface. After you change the subnet, then you must change each node's VIP to an IP address on the new subnet, as described in the next step.

  7. Modify the node applications and provide the new VIP address using the following srvctl modify nodeapps syntax:

    $ srvctl modify nodeapps -node node_name -address new_vip_address
    

    The command includes the following flags and values:

    • -node node_name is the node name

    • -address new_vip_address is the node-level VIP address: name|ip/netmask/[if1[|if2|...]]

      For example, issue the following command as the root user:

      srvctl modify nodeapps -node mynode -address 192.168.2.125/255.255.255.0/eth0
      

      Attempting to issue this command as the installation owner account may result in an error. For example, if the installation owner is oracle, then you may see the error PRCN-2018: Current user oracle is not a privileged user. To avoid the error, run the command as the root or system administrator account.

  8. Start the node VIP by running the srvctl start vip command:

    $ srvctl start vip -node node_name
    

    The following command example starts the VIP on the node named mynode:

    $ srvctl start vip -node mynode
    
  9. Repeat the steps for each node in the cluster.

    Because the SRVCTL utility is a clusterwide management tool, you can accomplish these tasks for any specific node from any node in the cluster, without logging in to each of the cluster nodes.

  10. Run the following command to verify node connectivity between all of the nodes for which your cluster is configured. This command discovers all of the network interfaces available on the cluster nodes and verifies the connectivity between all of the nodes by way of the discovered interfaces. This command also lists all of the interfaces available on the nodes which are suitable for use as VIP addresses.

    $ cluvfy comp nodecon -n all -verbose
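
When scripting this procedure across nodes, the `srvctl config vip` output from step 2 can be parsed mechanically. In the sketch below, a captured output line is pasted in verbatim (the real command needs a running cluster), and the node names and new subnet are illustrative assumptions; the loop only prints the `srvctl modify nodeapps` commands an operator would run as root.

```shell
# Extract the current VIP address from a captured `srvctl config vip` line.
line='VIP exists: /node03-vip/192.168.2.20/255.255.255.0/eth0'
old_ip=$(printf '%s\n' "$line" | awk -F/ '{print $3}')
echo "old VIP: $old_ip"

# Print the per-node modify commands for a hypothetical new subnet.
host=125
cmds=$(for n in node01 node02 node03; do
  echo "srvctl modify nodeapps -node $n -address 192.168.3.$host/255.255.255.0/eth0"
  host=$((host + 1))
done)
printf '%s\n' "$cmds"
```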
    

Changing Oracle Clusterware Private Network Configuration

This section contains the following topics:

About Private Networks and Network Interfaces

Oracle Clusterware requires that each node is connected through a private network (in addition to the public network). The private network connection is referred to as the cluster interconnect. Table 2-1 describes how the network interface card and the private IP address are stored.

Oracle only supports clusters in which all of the nodes use the same network interface connected to the same subnet (defined as a global interface with the oifcfg command). You cannot use different network interfaces for each node (node-specific interfaces). Refer to Appendix D, "Oracle Interface Configuration Tool (OIFCFG) Command Reference" for more information about global and node-specific interfaces.

Table 2-1 Storage for the Network Interface, Private IP Address, and Private Host Name

Entity | Stored In... | Comments

Network interface name

Operating system

For example: eth1

You can use wildcards when specifying network interface names.

For example: eth*

Private network Interfaces

Oracle Clusterware, in the Grid Plug and Play (GPnP) Profile

Configure an interface for use as a private interface during installation by marking the interface as Private, or use the oifcfg setif command to designate an interface as a private interface.

See Also: "OIFCFG Commands" for more information about the oifcfg setif command


Redundant Interconnect Usage

You can define multiple interfaces for Redundant Interconnect Usage by classifying the role of interfaces as private either during installation or after installation using the oifcfg setif command. When you do, Oracle Clusterware creates from one to four (depending on the number of interfaces you define) highly available IP (HAIP) addresses, which Oracle Database and Oracle ASM instances use to ensure highly available and load balanced communications.

The Oracle software (including Oracle RAC, Oracle ASM, and Oracle ACFS, all 11g release 2 (11.2.0.2), or later), by default, uses the HAIP address of the interfaces designated with the private role as the HAIP address for all of its traffic, enabling load balancing across the provided set of cluster interconnect interfaces. If one of the defined cluster interconnect interfaces fails or becomes non-communicative, then Oracle Clusterware transparently moves the corresponding HAIP address to one of the remaining functional interfaces.

For example, after installation, if you add a new interface to a server named eth3 with the subnet number 172.16.2.0, then use the following command to make this interface available to Oracle Clusterware for use as a private interface:

$ oifcfg setif -global eth3/172.16.2.0:cluster_interconnect

While Oracle Clusterware brings up a HAIP address on eth3 of 169.254.*.* (which is the reserved subnet for HAIP), and the database, Oracle ASM, and Oracle ACFS use that address for communication, Oracle Clusterware also uses the 172.16.2.0 address for its own communication.


Caution:

Do not use OIFCFG to classify HAIP subnets (169.254.*.*). You can use OIFCFG to record the interface name, subnet, and type (public, cluster interconnect, or Oracle ASM) for Oracle Clusterware. However, you cannot use OIFCFG to modify the actual IP address for each interface.


Note:

Oracle Clusterware uses at most four interfaces at any given point, regardless of the number of interfaces defined. If one of the interfaces fails, then the HAIP address moves to another one of the configured interfaces in the defined set.

When there is only a single HAIP address and multiple interfaces from which to select, the interface to which the HAIP address moves is no longer the original interface upon which it was configured. Oracle Clusterware selects the interface with the lowest numeric subnet to which to add the HAIP address.
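
The "lowest numeric subnet" selection described in the note above can be illustrated with a simple sort: treat each candidate private subnet as a dotted-quad number and take the first after a numeric sort. The subnet values below are made up for illustration; this is not Oracle code.

```shell
# Sort candidate interconnect subnets numerically, octet by octet,
# and pick the lowest, mirroring the documented selection rule.
lowest=$(printf '%s\n' 172.16.2.0 10.10.0.0 192.168.0.0 |
  sort -t . -k1,1n -k2,2n -k3,3n -k4,4n | head -n 1)
echo "$lowest"
```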



See Also:

Oracle Grid Infrastructure Installation Guide for your platform for information about defining interfaces

Consequences of Changing Interface Names Using OIFCFG

The consequences of changing interface names depend on which name you are changing, and whether you are also changing the IP address. In cases where you are only changing the interface names, the consequences are minor. If you change the name for the public interface that is stored in OCR, then you also must modify the node applications for the cluster. Therefore, you must stop the node applications for this change to take effect.


See Also:

My Oracle Support (formerly OracleMetaLink) note 276434.1 for more details about changing the node applications to use a new public interface name, available at the following URL:
https://metalink.oracle.com

Changing a Network Interface

You can change a network interface and its associated subnet address using the following procedure. You must perform this change on all nodes in the cluster.

This procedure changes the network interface and IP address on each node in the cluster used previously by Oracle Clusterware and Oracle Database.


Caution:

The interface that the Oracle RAC (RDBMS) interconnect uses must be the same interface that Oracle Clusterware uses with the host name. Do not configure the private interconnect for Oracle RAC on a separate interface that is not monitored by Oracle Clusterware.

  1. Ensure that Oracle Clusterware is running on all of the cluster nodes by running the following command:

    $ olsnodes -s
    

    The command returns output similar to the following, showing that Oracle Clusterware is running on all of the nodes in the cluster:

    ./olsnodes -s
    myclustera Active
    myclusterc Active
    myclusterb Active
    
  2. Ensure that the replacement interface is configured and operational in the operating system on all of the nodes. Use the ifconfig command (or ipconfig on Windows) for your platform. For example, on Linux, use:

    $ /sbin/ifconfig
    
  3. Add the new interface to the cluster as follows, providing the name of the new interface and the subnet address, using the following command:

    $ oifcfg setif -global if_name/subnet:cluster_interconnect
    

    You can use wildcards with the interface name. For example, oifcfg setif -global "eth*/192.168.0.0:cluster_interconnect" is valid syntax. However, be careful to avoid ambiguity with other addresses or masks used with other cluster interfaces. If you use wildcards, then you see a warning similar to the following:

    eth*/192.168.0.0 global cluster_interconnect
    PRIF-29: Warning: wildcard in network parameters can cause mismatch
    among GPnP profile, OCR, and system
    

    Note:

    Legacy network configuration does not support wildcards; thus wildcards are resolved using current node configuration at the time of the update.


    See Also:

    Appendix D, "Oracle Interface Configuration Tool (OIFCFG) Command Reference" for more information about using OIFCFG commands

  4. After the previous step completes, you can remove the former subnet, as follows, by providing the name and subnet address of the former interface:

    oifcfg delif -global if_name/subnet
    

    For example:

    $ oifcfg delif -global eth1/10.10.0.0
    

    Caution:

    This step should be performed only after a replacement interface is committed into the Grid Plug and Play configuration. Simple deletion of cluster interfaces without providing a valid replacement can result in invalid cluster configuration.

  5. Verify the current configuration using the following command:

    oifcfg getif
    

    For example:

    $ oifcfg getif
    eth2 10.220.52.0 global cluster_interconnect
    eth0 10.220.16.0 global public
    
  6. Stop Oracle Clusterware on all nodes by running the following command as root on each node:

    # crsctl stop crs
    

    Note:

    With cluster network configuration changes, the cluster must be fully stopped; do not use rolling stops and restarts.

  7. When Oracle Clusterware stops, you can deconfigure the deleted network interface in the operating system using the ifconfig command. For example:

    # ifconfig eth1 down
    

    At this point, the IP address from network interfaces for the old subnet is deconfigured from Oracle Clusterware. This command does not affect the configuration of the IP address on the operating system.

    You must also update the operating system network configuration, because changes made using ifconfig are not persistent.

  8. Restart Oracle Clusterware by running the following command on each node in the cluster as the root user:

    # crsctl start crs
    

    The changes take effect when Oracle Clusterware restarts.

    If you use the CLUSTER_INTERCONNECTS initialization parameter, then you must update it to reflect the changes.
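
The `oifcfg getif` check in step 5 can be filtered mechanically when verifying many clusters. The output text below is a pasted sample (oifcfg requires a configured cluster), so only the awk filter is the point of the sketch.

```shell
# Filter captured `oifcfg getif` output for the interconnect classification.
getif_out='eth2 10.220.52.0 global cluster_interconnect
eth0 10.220.16.0 global public'
ic=$(printf '%s\n' "$getif_out" | awk '$4 == "cluster_interconnect" {print $1 "/" $2}')
echo "$ic"
```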

Creating a Network Using SRVCTL

Use the following procedure to create a network for a cluster member node, and to add application configuration information:

  1. Log in as root.

  2. Add a node application to the node, using the following syntax:

    srvctl add nodeapps -node node_name -address {vip |
       addr}/netmask[/if1[|if2|...]] [-pingtarget "ping_target_list"]
    

    In the preceding syntax:

    • node_name is the name of the node

    • vip is the VIP name or addr is the IP address

    • netmask is the netmask

    • if1[|if2|...] is a pipe-delimited list of interfaces bonded for use by the application

    • ping_target_list is a comma-delimited list of IP addresses or host names to ping


    Notes:

    • Use the -pingtarget parameter when link status monitoring does not work as it does in a virtual machine environment.

    • Enter the srvctl add nodeapps -help command to review other syntax options.


    In the following example of using srvctl add nodeapps to configure an IPv4 node application, the node name is node1, the netmask is 255.255.252.0, and the interface is eth0:

    # srvctl add nodeapps -node node1 -address node1-vip.mycluster.example.com/255.255.252.0/eth0
    

See Also:

Oracle Real Application Clusters Administration and Deployment Guide for more information about the SRVCTL commands used in this procedure

Changing Network Address Types Using SRVCTL

You can configure a network interface for either IPv4, IPv6, or both types of addresses on a given network. If you configure redundant network interfaces using a third-party technology, then Oracle does not support configuring one interface to support IPv4 addresses and the other to support IPv6 addresses. You must configure network interfaces of a redundant interface pair with the same IP address type. If you use the Oracle Clusterware Redundant Interconnect feature, then you must use IPv4 addresses for the interfaces.

All the nodes in the cluster must use the same IP protocol configuration. Either all the nodes use only IPv4, or all the nodes use only IPv6, or all the nodes use both IPv4 and IPv6. You cannot have some nodes in the cluster configured to support only IPv6 addresses, and other nodes in the cluster configured to support only IPv4 addresses.

The local listener listens on endpoints based on the address types of the subnets configured for the network resource. Possible types are IPV4, IPV6, or both.

Changing Static IPv4 Addresses To Static IPv6 Addresses Using SRVCTL


Note:

If the IPv4 network is in mixed mode with both static and dynamic addresses, then you cannot perform this procedure. You must first transition all addresses to static.

When you change from IPv4 static addresses to IPv6 static addresses, you add an IPv6 address and modify the network to briefly accept both IPv4 and IPv6 addresses, before switching to using only static IPv6 addresses.

To change a static IPv4 address to a static IPv6 address:

  1. Add an IPv6 subnet using the following command as root once for the entire network:

    # srvctl modify network -subnet ipv6_subnet/prefix_length
    

    In the preceding syntax, ipv6_subnet/prefix_length is the subnet of the IPv6 address to which you are changing, along with the prefix length, such as 3001::/64.

  2. Add an IPv6 VIP using the following command as root once on each node:

    # srvctl modify vip -node node_name -netnum network_number -address vip_name/netmask
    

    In the preceding syntax:

    • node_name is the name of the node

    • network_number is the number of the network

    • vip_name/netmask is the name of a local VIP that resolves to both IPv4 and IPv6 addresses

      The IPv4 netmask or IPv6 prefix length that follows the VIP name must satisfy two requirements:

      • If you specify a netmask in IPv4 format (such as 255.255.255.0), then the VIP name resolves to IPv4 addresses (but can also resolve to IPv6 addresses). Similarly, if you specify an IPv6 prefix length (such as 64), then the VIP name resolves to IPv6 addresses (but can also resolve to IPv4 addresses).

      • If you specify an IPv4 netmask, then it should match the netmask of the registered IPv4 network subnet number, regardless of whether the -iptype of the network is IPv6. Similarly, if you specify an IPv6 prefix length, then it must match the prefix length of the registered IPv6 network subnet number, regardless of whether the -iptype of the network is IPv4.

  3. Add the IPv6 network resource to OCR using the following command:

    oifcfg setif -global if_name/subnet:public
    

    See Also:

    "OIFCFG Command Format" for usage information for this command

  4. Update the SCAN in DNS to have as many IPv6 addresses as there are IPv4 addresses. Add IPv6 addresses to the SCAN VIPs using the following command as root once for the entire network:

    # srvctl modify scan -scanname scan_name
    

    scan_name is the name of a SCAN that resolves to both IPv4 and IPv6 addresses.

  5. Convert the network IP type from IPv4 to both IPv4 and IPv6 using the following command as root once for the entire network:

    srvctl modify network -netnum network_number -iptype both
    

    This command brings up the IPv6 static addresses.

  6. Change all clients served by the cluster from IPv4 networks and addresses to IPv6 networks and addresses.

  7. Transition the network from using both protocols to using only IPv6 using the following command:

    # srvctl modify network -iptype ipv6
    
  8. Modify the VIP using a VIP name that resolves to IPv6 by running the following command as root:

    # srvctl modify vip -node node_name -address vip_name -netnum network_number
    

    Do this once for each node.

  9. Modify the SCAN using a SCAN name that resolves to IPv6 by running the following command:

    $ srvctl modify scan -scanname scan_name
    

    Do this once for the entire cluster.
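
The netmask rule in step 2 above can be sketched as a small classifier: a value containing dots is read as an IPv4 netmask, and a bare number as an IPv6 prefix length. This mirrors the documented behavior for illustration only; it is not Oracle code.

```shell
# Classify a VIP netmask/prefix argument the way the rule in step 2
# describes: dotted values are IPv4 netmasks, bare numbers are IPv6
# prefix lengths.
classify() {
  case "$1" in
    *.*) echo "IPv4 netmask" ;;
    *)   echo "IPv6 prefix length" ;;
  esac
}
c1=$(classify 255.255.255.0)
c2=$(classify 64)
echo "$c1 / $c2"
```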


See Also:

Oracle Real Application Clusters Administration and Deployment Guide for more information about the SRVCTL commands used in this procedure

Changing Dynamic IPv4 Addresses To Dynamic IPv6 Addresses Using SRVCTL


Note:

If the IPv4 network is in mixed mode with both static and dynamic addresses, then you cannot perform this procedure. You must first transition all addresses to dynamic.

To change a dynamic IPv4 address to a dynamic IPv6 address:

  1. Add an IPv6 subnet using the srvctl modify network command.

    To add the IPv6 subnet, log in as root and use the following command syntax:

    srvctl modify network -netnum network_number -subnet ipv6_subnet/
       ipv6_prefix_length[/interface] -nettype autoconfig
    

    In the preceding syntax:

    • network_number is the number of the network

    • ipv6_subnet is the subnet of the IPv6 address to which you are changing (for example, 2001:db8:122:344:c0:2:2100::)

    • ipv6_prefix_length is the prefix specifying the IPv6 network address (for example, 64)

    For example, the following command modifies network 3 by adding an IPv6 subnet, 2001:db8:122:344:c0:2:2100::, and the prefix length 64:

    # srvctl modify network -netnum 3 -subnet
         2001:db8:122:344:c0:2:2100::/64 -nettype autoconfig
    
  2. Add the IPv6 network resource to OCR using the following command:

    oifcfg setif -global if_name/subnet:public
    

    See Also:

    "OIFCFG Command Format" for usage information for this command

  3. Start the IPv6 dynamic addresses, as follows:

    srvctl modify network -netnum network_number -iptype both
    

    For example, on network number 3:

    # srvctl modify network -netnum 3 -iptype both
    
  4. Change all clients served by the cluster from IPv4 networks and addresses to IPv6 networks and addresses.

    At this point, the SCAN in the GNS-delegated domain scan_name.gns_domain will resolve to three IPv4 and three IPv6 addresses.

  5. Turn off the IPv4 part of the dynamic addresses on the cluster using the following command:

    # srvctl modify network -iptype ipv6
    

    After you run the preceding command, the SCAN (scan_name.gns_domain) will resolve to only three IPv6 addresses.
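
A quick sanity check of the final state is to confirm that every address the SCAN name resolves to is IPv6. The resolver output below is a pasted sample (an assumption, since resolution depends on your GNS setup); IPv6 addresses are detected by the presence of a colon.

```shell
# Count how many of the resolved SCAN addresses are IPv6; after the
# switch, all of them should be.
addrs='2001:db8::11
2001:db8::12
2001:db8::13'
v6=$(printf '%s\n' "$addrs" | grep -c ':')
total=$(printf '%s\n' "$addrs" | grep -c .)
echo "$v6 of $total are IPv6"
```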

Changing an IPv4 Network to an IPv4 and IPv6 Network

To change an IPv4 network to an IPv4 and IPv6 network, you must add an IPv6 network to an existing IPv4 network, as you do in steps 1 through 5 of the procedure documented in "Changing Static IPv4 Addresses To Static IPv6 Addresses Using SRVCTL".

After you complete those steps, log in as the Grid user, and run the following command:

$ srvctl status scan

Review the output to confirm the changes to the SCAN VIPs.

Transitioning from IPv4 to IPv6 Networks for VIP Addresses Using SRVCTL

Enter the following command to remove an IPv4 address type from a combined IPv4 and IPv6 network:

# srvctl modify network -iptype ipv6

This command starts the removal process of IPv4 addresses configured for the cluster.


C OLSNODES Command Reference

This appendix describes the syntax and command options for the olsnodes command.

This appendix contains the following topics:

Using OLSNODES

This section contains topics which relate to using the OLSNODES command.

Overview

The olsnodes command provides the list of nodes and other information for all nodes participating in the cluster.

You can use this command to quickly check that your cluster is operational, and all nodes are registered as members of the cluster. This command also provides an easy method for obtaining the node numbers.

Operational Notes

Usage Information

This command is used by the Cluster Verification Utility (CLUVFY) to obtain a list of node names when the -n all option is used.

This command utility is located in the $ORA_CRS_HOME/bin directory. You can only use this command if the CRS daemon is started.

Privileges and Security

You can run this command as either the root user, the user that installed Oracle Clusterware, or the user that installed Oracle Database.

Summary of the OLSNODES Command

The olsnodes command does not use keywords, but accepts one or more options. The available options are described in Table C-1.

Syntax

olsnodes [[-n] [-i] [-s] [-t] [node_name | -l [-p]] | [-c]] [-a] [-g] [-v]

If you issue the olsnodes command without any command parameters, the command returns a listing of the nodes in the cluster:

[root@node1]# olsnodes
node1
node2
node3
node4
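
A listing like the one above also lends itself to a quick scripted membership check. The sample output is pasted in here because the real command needs a running Oracle Clusterware stack; only the counting idiom is the point.

```shell
# Count cluster members from captured olsnodes output.
out='node1
node2
node3
node4'
count=$(printf '%s\n' "$out" | grep -c .)
echo "$count nodes"
```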

Table C-1 OLSNODES Command Options

Command | Description

-n

Lists all nodes participating in the cluster and includes the assigned node numbers.

-i

Lists all nodes participating in the cluster and includes the Virtual Internet Protocol (VIP) address (or VIP address with the node name) assigned to each node.

-s

Displays the status of the node: active or inactive.

-t

Displays node type: pinned or unpinned.

node_name

Displays information for a particular node.

-l [-p]

Lists the local node and includes the private interconnect for the local node. The -p option is valid only when specified with the -l option.

-c

Displays the name of the cluster.

-a

Displays only active nodes in the cluster with no duplicates, along with each node's role (Hub or Leaf).

-g

Logs cluster verification information with more details.

-v

Logs cluster verification information in verbose mode. Use in debug mode and only at the direction of My Oracle Support.


Examples

Example 1: List the VIP addresses for all nodes currently in the cluster

To list the VIP addresses for each node that is currently a member of the cluster, use the command:

[root@node1]# olsnodes -i
node1   168.192.1.1
node2   168.192.2.1
node3   168.192.3.1
node4   168.192.4.1

Example 2: List the node names and node numbers for cluster members

To list the node name and the node number for each node in the cluster, use the command:

[root@node1]# olsnodes -n
node1    1
node2    2
node3    3
node4    4

Example 3: Display node roles for cluster members

To list the node roles for each node in the cluster, use the command:

[root@node1]# olsnodes -a
node1    Hub
node2    Hub
node3    Leaf
node4    Leaf
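
The role listing in Example 3 can be tallied with a one-line awk pass. The sample output below is a verbatim copy of the example above, pasted in so the tally is reproducible without a running cluster.

```shell
# Count Hub and Leaf nodes from captured `olsnodes -a` output.
out='node1 Hub
node2 Hub
node3 Leaf
node4 Leaf'
roles=$(printf '%s\n' "$out" | awk '{c[$2]++} END {print "Hub=" c["Hub"] " Leaf=" c["Leaf"]}')
echo "$roles"
```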

Preface

The Oracle Clusterware Administration and Deployment Guide describes the Oracle Clusterware architecture and provides an overview of this product. This book also describes administrative and deployment topics for Oracle Clusterware.

Information in this manual applies to Oracle Clusterware as it runs on all platforms unless otherwise noted. In addition, the content of this manual supplements administrative and deployment topics for Oracle single-instance databases that appear in other Oracle documentation. Where necessary, this manual refers to platform-specific documentation. This Preface contains these topics:

Audience

The Oracle Clusterware Administration and Deployment Guide is intended for database administrators, network administrators, and system administrators who perform the following tasks:

  • Install and configure Oracle Real Application Clusters (Oracle RAC) databases

  • Administer and manage Oracle RAC databases

  • Manage and troubleshoot clusters and networks that use Oracle RAC

Documentation Accessibility

For information about Oracle's commitment to accessibility, visit the Oracle Accessibility Program website at http://www.oracle.com/pls/topic/lookup?ctx=acc&id=docacc.

Access to Oracle Support

Oracle customers have access to electronic support through My Oracle Support. For information, visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=info or visit http://www.oracle.com/pls/topic/lookup?ctx=acc&id=trs if you are hearing impaired.

Related Documents

For more information, see the Oracle resources listed in this section.

Conventions

The following text conventions are used in this document:

Convention | Meaning
boldface | Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic | Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace | Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.


7 Adding and Deleting Cluster Nodes

This chapter describes how to add nodes to an existing cluster, and how to delete nodes from clusters. This chapter provides procedures for these tasks for Linux, UNIX, and Windows systems.


Notes:

  • Unless otherwise instructed, perform all add and delete node steps as the user that installed Oracle Clusterware.

  • Oracle recommends that you use the cloning procedure described in Chapter 8, "Cloning Oracle Clusterware" to create clusters.


The topics in this chapter include the following:

Prerequisite Steps for Adding Cluster Nodes


Note:

Ensure that you perform the preinstallation tasks listed in Oracle Grid Infrastructure Installation Guide for Linux before adding a node to a cluster.

Do not install Oracle Clusterware. The software is copied from an existing node when you add a node to the cluster.


Complete the following steps to prepare nodes to add to the cluster:

  1. Make physical connections.

    Connect the nodes' hardware to the network infrastructure of your cluster. This includes establishing electrical connections, configuring network interconnects, configuring shared disk subsystem connections, and so on. See your hardware vendor documentation for details about this step.

  2. Install the operating system.

    Install a cloned image of the operating system that matches the operating system on the other nodes in your cluster. This includes installing required service patches, updates, and drivers. See your operating system vendor documentation for details about this process.


    Note:

    Oracle recommends that you use a cloned image. However, if the installation fulfills the installation requirements, then install the operating system according to the vendor documentation.

  3. Create Oracle users.

    You must create all Oracle users on the new node that exist on the existing nodes. For example, if you are adding a node to a cluster that has two nodes, and those two nodes have different owners for the Oracle Grid Infrastructure home and the Oracle home, then you must create those owners on the new node, even if you do not plan to install an Oracle home on the new node.


    Note:

    Perform this step only for Linux and UNIX systems.

    As root, create the Oracle users and groups using the same user ID and group ID as on the existing nodes.

  4. Ensure that SSH is configured on the node.


    Note:

    SSH is configured when you install Oracle Clusterware 12c. If SSH is not configured, then see Oracle Grid Infrastructure Installation Guide for information about configuring SSH.

  5. Verify the hardware and operating system installations with the Cluster Verification Utility (CVU).

    After you configure the hardware and operating systems on the nodes you want to add, you can run the following commands to verify that the nodes you want to add are reachable by other nodes in the cluster. You can also use these commands to verify user equivalence to all given nodes from the local node, node connectivity among all of the given nodes, accessibility to shared storage from all of the given nodes, and so on.

    1. From the Grid_home/bin directory on an existing node, run the CVU command to obtain a detailed comparison of the properties of the reference node with all of the other nodes that are part of your current cluster environment. Replace ref_node with the name of a node in your existing cluster against which you want CVU to compare the nodes to be added. Specify a comma-delimited list of nodes after the -n option. In the following example, orainventory_group is the name of the Oracle Inventory group, and osdba_group is the name of the OSDBA group:

      $ cluvfy comp peer [-refnode ref_node] -n node_list
      [-orainv orainventory_group] [-osdba osdba_group] [-verbose]
      
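      For illustration, the comma-delimited list that the -n option expects can be assembled from a newline-separated file of candidate node names. The node names, reference node, and group names below are hypothetical; only the echoed cluvfy line would be run, from the Grid_home/bin directory on an existing node:

      ```shell
      #!/bin/sh
      # Sketch: build cluvfy's comma-delimited -n node list from a
      # newline-separated file. node1, node3, node4, oinstall, and dba
      # are illustrative values, not values from your cluster.
      make_node_list() { paste -sd, "$1"; }

      nodes_file=$(mktemp)
      printf 'node3\nnode4\n' > "$nodes_file"
      echo "cluvfy comp peer -refnode node1 -n $(make_node_list "$nodes_file") -orainv oinstall -osdba dba -verbose"
      rm -f "$nodes_file"
      ```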
    2. Ensure that the Grid Infrastructure Management Repository has at least an additional 500 MB of space for each node added above four, as follows:

      $ oclumon manage -get repsize
      

      Add additional space, if required, as follows:

      $ oclumon manage -repos changerepossize total_in_MB
      
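      The sizing rule above (at least 500 MB extra for each node above four) can be sketched as arithmetic. The base size used here is an assumption, not an Oracle default; substitute the value that oclumon manage -get repsize reports on your cluster:

      ```shell
      #!/bin/sh
      # Sketch of the sizing rule: 500 MB more per node above four.
      # base_mb is an assumed starting size; use the value reported by
      # `oclumon manage -get repsize`, then grow the repository with
      # `oclumon manage -repos changerepossize <total_in_MB>` if needed.
      required_repsize_mb() {
        nodes=$1
        base_mb=${2:-2048}   # assumption, not an Oracle default
        echo $(( base_mb + ( nodes > 4 ? (nodes - 4) * 500 : 0 ) ))
      }
      required_repsize_mb 6   # two nodes above four: base + 2 * 500
      ```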

      See Also:

      "OCLUMON Command Reference" for more information about using OCLUMON


    Note:

    For the reference node, select a cluster node against which you want CVU to compare the nodes that you want to add, which you specify with the -n option.

After completing the procedures in this section, you are ready to add the nodes to the cluster.


Note:

Avoid changing host names after you complete the Oracle Clusterware installation, including adding or deleting domain qualifications. Nodes with changed host names must be deleted from the cluster and added back with the new name.

Adding and Deleting Cluster Nodes on Linux and UNIX Systems

This section explains cluster node addition and deletion on Linux and UNIX systems. The procedure in this section for adding nodes assumes that you have performed the steps in the "Prerequisite Steps for Adding Cluster Nodes" section.

The last step of the node addition process includes extending the Oracle Clusterware home from an Oracle Clusterware home on an existing node to the nodes that you want to add.

This section includes the following topics:


Note:

Beginning with Oracle Clusterware 11g release 2 (11.2), Oracle Universal Installer defaults to silent mode when adding nodes.

Adding a Cluster Node on Linux and UNIX Systems

This procedure describes how to add a node to your cluster. This procedure assumes that:

  • There is an existing cluster with two nodes named node1 and node2

  • You are adding a node named node3 using a virtual node name, node3-vip, that resolves to an IP address, if you are not using DHCP and Grid Naming Service (GNS)

  • You have successfully installed Oracle Clusterware on node1 and node2 in a local (non-shared) home, where Grid_home represents the successfully installed home

To add a node:

  1. Ensure that you have successfully installed Oracle Clusterware on at least one node in your cluster environment. To perform the following procedure, Grid_home must identify your successfully installed Oracle Clusterware home.


    See Also:

    Oracle Grid Infrastructure Installation Guide for Oracle Clusterware installation instructions

  2. Verify the integrity of the cluster and node3:

    $ cluvfy stage -pre nodeadd -n node3 [-fixup] [-verbose]
    

    You can specify the -fixup option to attempt to fix the cluster or node if the verification fails.

  3. To extend the Oracle Grid Infrastructure home to node3, navigate to the Grid_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle Clusterware.

    To run addnode.sh in interactive mode, run addnode.sh from Grid_home/addnode.

    You can also run addnode.sh in silent mode for both Oracle Clusterware standard Clusters and Oracle Flex Clusters.

    For an Oracle Clusterware standard Cluster:

    ./addnode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"
    

    If you are adding node3 to an Oracle Flex Cluster, then you can specify the node role on the command line, as follows:

    ./addnode.sh -silent "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}" "CLUSTER_NEW_NODE_ROLES={hub}"
    

    Note:

    Hub Nodes always have VIPs but Leaf Nodes may not. If you use the preceding syntax to add multiple nodes to the cluster, then you can use syntax similar to the following, where node3 is a Hub Node and node4 is a Leaf Node:
    ./addnode.sh -silent "CLUSTER_NEW_NODES={node3,node4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip,}" "CLUSTER_NEW_NODE_ROLES={hub,leaf}"
    
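    As an illustration of that pairing rule (one VIP slot per node, left empty for a Leaf Node), the silent-mode argument string can be assembled programmatically. The node names and roles are hypothetical, taken from the node3/node4 example above:

    ```shell
    #!/bin/sh
    # Sketch: assemble addnode.sh silent-mode arguments. Note the
    # positional pairing: the VIP list keeps an empty slot for the
    # Leaf Node (node4), as in the manual's node3/node4 example.
    build_addnode_args() {
      printf '%s' "-silent \"CLUSTER_NEW_NODES={$1}\" \"CLUSTER_NEW_VIRTUAL_HOSTNAMES={$2}\" \"CLUSTER_NEW_NODE_ROLES={$3}\""
    }
    build_addnode_args "node3,node4" "node3-vip," "hub,leaf"; echo
    ```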

  4. If prompted, then run the orainstRoot.sh script as root to populate the /etc/oraInst.loc file with the location of the central inventory. For example:

    # /opt/oracle/oraInventory/orainstRoot.sh
    
  5. If you have an Oracle RAC or Oracle RAC One Node database configured on the cluster and you have a local Oracle home, then do the following to extend the Oracle database home to node3:

    1. Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:

      $ ./addnode.sh "CLUSTER_NEW_NODES={node3}"
      
    2. Run the Oracle_home/root.sh script on node3 as root, where Oracle_home is the Oracle RAC home.

    If you have a shared Oracle home that is shared using Oracle Automatic Storage Management Cluster File System (Oracle ACFS), then do the following to extend the Oracle database home to node3:

    1. Run the Grid_home/root.sh script on node3 as root, where Grid_home is the Oracle Grid Infrastructure home.

    2. Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:

      $ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER_NODES={node3}" LOCAL_NODE="node3" ORACLE_HOME_NAME="home_name" -cfs
      
    3. Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:

      $ ./addnode.sh -noCopy "CLUSTER_NEW_NODES={node3}"
      

      Note:

      Use the -noCopy option because the Oracle home on the destination node is already fully populated with software.

    If you have a shared Oracle home on a shared file system that is not Oracle ACFS, then you must first create a mount point for the Oracle RAC database home on the target node, mount and attach the Oracle RAC database home, and update the Oracle Inventory, as follows:

    1. Run the srvctl config database -db db_name command on an existing node in the cluster to obtain the mount point information.

    2. Run the following command as root on node3 to create the mount point:

      # mkdir -p mount_point_path
      
    3. Mount the file system that hosts the Oracle RAC database home.

    4. Run the following command as the user that installed Oracle RAC from the Oracle_home/oui/bin directory on the node you are adding to add the Oracle RAC database home:

      $ ./runInstaller -attachHome ORACLE_HOME="ORACLE_HOME" "CLUSTER_NODES={local_node_name}" LOCAL_NODE="node_name" ORACLE_HOME_NAME="home_name" -cfs
      
    5. Navigate to the Oracle_home/addnode directory on node1 and run the addnode.sh script as the user that installed Oracle RAC using the following syntax:

      $ ./addnode.sh -noCopy "CLUSTER_NEW_NODES={node3}"
      

      Note:

      Use the -noCopy option because the Oracle home on the destination node is already fully populated with software.

  6. Run the Grid_home/root.sh script on node3 as root and run the subsequent script, as instructed.


    Notes:

    • If you ran the root.sh script in step 5, then you do not need to run it again.

    • If you have a policy-managed database, then you must ensure that the Oracle home is cloned to the new node before you run the root.sh script.


  7. Start the Oracle ACFS resource on the new node by running the following command as root from the Grid_home/bin directory:

    # srvctl start filesystem -device volume_device_name -node node3
    

    Note:

    Ensure that the Oracle ACFS resources, including the Oracle ACFS registry resource and the Oracle ACFS file system resource where the Oracle home is located, are online on the newly added node.
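    One hedged way to confirm that a given Oracle home actually sits on Oracle ACFS (and therefore needs these resources online first) is to inspect the file system type of its mount. The Oracle home path below is illustrative, and the df -T column layout assumes Linux:

    ```shell
    #!/bin/sh
    # Sketch: succeed if the given path is on an Oracle ACFS file system.
    # Assumes Linux `df -T` output (file system type in the second column
    # of the second line); the Oracle home path is illustrative.
    is_on_acfs() {
      df -T "$1" 2>/dev/null | awk 'NR == 2 { print $2 }' | grep -qi acfs
    }
    if is_on_acfs /u01/app/oracle/product/12.1.0/dbhome_1; then
      echo "ACFS home: start the ACFS resources on the new node first"
    fi
    ```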

  8. Run the following CVU command as the user that installed Oracle Clusterware to check cluster integrity. This command verifies that any number of specified nodes has been successfully added to the cluster at the network, shared storage, and clusterware levels:

    $ cluvfy stage -post nodeadd -n node3 [-verbose]
    

    See Also:

    "cluvfy stage [-pre | -post] nodeadd" for more information about this CVU command

Check whether either a policy-managed or administrator-managed Oracle RAC database is configured to run on node3 (the newly added node). If you configured an administrator-managed Oracle RAC database, you may need to use DBCA to add an instance to the database to run on this newly added node.


Deleting a Cluster Node on Linux and UNIX Systems

This section describes the procedure for deleting a node from a cluster.


Notes:

  • You can remove the Oracle RAC database instance from the node before removing the node from the cluster but this step is not required. If you do not remove the instance, then the instance is still configured but never runs. Deleting a node from a cluster does not remove a node's configuration information from the cluster. The residual configuration information does not interfere with the operation of the cluster.

    See Also: Oracle Real Application Clusters Administration and Deployment Guide for more information about deleting an Oracle RAC database instance

  • If you delete the last node of a cluster that is serviced by GNS, then you must delete the entries for that cluster from GNS.

  • If you have nodes in the cluster that are unpinned, then Oracle Clusterware ignores those nodes after a time and there is no need for you to remove them.

  • If you create node-specific configuration for a node (such as disabling a service on a specific node, or adding the node to the candidate list for a server pool), then that node-specific configuration is not removed when the node is deleted from the cluster. You must remove such node-specific configuration manually.

  • Voting files are automatically backed up in OCR after any changes you make to the cluster.

  • When you want to delete a Leaf Node from an Oracle Flex Cluster, you need only complete steps 1 through 4 of this procedure.


To delete a node from a cluster:

  1. Ensure that Grid_home correctly specifies the full directory path for the Oracle Clusterware home on each node, where Grid_home is the location of the installed Oracle Clusterware software.

  2. Run the following command as either root or the user that installed Oracle Clusterware to determine whether the node you want to delete is active and whether it is pinned:

    $ olsnodes -s -t
    

    If the node is pinned, then run the crsctl unpin css command. Otherwise, proceed to the next step.
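    The decision in this step can be sketched as follows. The sample output embedded below is illustrative; on a real cluster, the data comes from olsnodes -s -t:

    ```shell
    #!/bin/sh
    # Sketch: decide whether `crsctl unpin css` is needed before deleting
    # a node. The embedded sample mimics `olsnodes -s -t` output; capture
    # that command's real output on your cluster instead.
    sample='node1 Active Pinned
node2 Active Pinned
node3 Active Unpinned'
    node_state() {
      printf '%s\n' "$sample" | awk -v n="$1" '$1 == n { print $3 }'
    }
    if [ "$(node_state node2)" = "Pinned" ]; then
      echo "run as root: crsctl unpin css -n node2"
    fi
    ```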

  3. On the node you want to delete, run the following command as the user that installed Oracle Clusterware from the Grid_home/oui/bin directory where node_to_be_deleted is the name of the node that you are deleting:

    $ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -silent -local
    
  4. On the node that you are deleting, depending on whether you have a shared or local Oracle home, complete one of the following procedures as the user that installed Oracle Clusterware:

    • For a local home, deinstall the Oracle Clusterware home from the node that you want to delete by running the following command, where Grid_home is the path defined for the Oracle Clusterware home:

      $ Grid_home/deinstall/deinstall -local
      

      Caution:

      • If you do not specify the -local flag, then the command removes the Oracle Grid Infrastructure home from every node in the cluster.

      • If you cut and paste the preceding command, then paste it into a text editor before pasting it to the command line to remove any formatting this document might contain.


    • If you have a shared home, then run the following commands in the following order on the node you want to delete.

      Run the following command to deconfigure Oracle Clusterware:

      $ Grid_home/perl/bin/perl Grid_home/crs/install/rootcrs.pl -deconfig -force
      

      Run the following command from the Grid_home/oui/bin directory to detach the Grid home:

      $ ./runInstaller -detachHome ORACLE_HOME=Grid_home -silent -local
      

      Manually delete any configuration files, as prompted by the installation utility.

  5. On any node other than the node you are deleting (except for a Leaf Node in an Oracle Flex Cluster), run the following command from the Grid_home/oui/bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your cluster:

    $ ./runInstaller -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent
    

    Notes:

    • You must run this command a second time from the Oracle RAC home, where ORACLE_HOME is the Oracle RAC home and CRS=TRUE -silent is omitted from the syntax, as follows:

      $ ./runInstaller -updateNodeList ORACLE_HOME=ORACLE_HOME "CLUSTER_NODES={remaining_nodes_list}"
      
    • You do not have to run this command if you are deleting a Leaf Node from an Oracle Flex Cluster; remaining_nodes_list must therefore list only Hub Nodes.

    • If you have a shared Oracle Grid Infrastructure home, then append the -cfs option to the command example in this step and provide a complete path location for the cluster file system.


  6. From any node that you are not deleting, run the following command from the Grid_home/bin directory as root to delete the node from the cluster:

    # crsctl delete node -n node_to_be_deleted
    
  7. Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:

    $ cluvfy stage -post nodedel -n node_list [-verbose]
    

    See Also:

    "cluvfy stage -post nodedel" for more information about this CVU command

Adding and Deleting Cluster Nodes on Windows Systems

This section explains cluster node addition and deletion on Windows systems. This section includes the following topics:


See Also:

Oracle Grid Infrastructure Installation Guide for more information about deleting an entire cluster

Adding a Node to a Cluster on Windows Systems

Ensure that you complete the prerequisites listed in "Prerequisite Steps for Adding Cluster Nodes" before adding nodes.

This procedure describes how to add a node to your cluster. This procedure assumes that:

  • There is an existing cluster with two nodes named node1 and node2

  • You are adding a node named node3

  • You have successfully installed Oracle Clusterware on node1 and node2 in a local home, where Grid_home represents the successfully installed home


Note:

Do not use the procedures described in this section to add cluster nodes in configurations where the Oracle database has been upgraded from Oracle Database 10g release 1 (10.1) on Windows systems.

To add a node:

  1. Verify the integrity of the cluster and node3:

    C:\>cluvfy stage -pre nodeadd -n node3 [-fixup] [-verbose]
    

    You can specify the -fixup option and a directory into which CVU prints instructions to fix the cluster or node if the verification fails.

  2. On node1, go to the Grid_home\addnode directory and run the addnode.bat script, as follows:

    C:\>addnode.bat "CLUSTER_NEW_NODES={node3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"
    
  3. Run the following command on the new node:

    C:\>Grid_home\crs\config\gridconfig.bat
    
  4. The following steps are required only if you have database homes configured to use Oracle ACFS:

    1. For each database configured to use Oracle ACFS, run the following command from the Oracle RAC database home:

      C:\>ORACLE_HOME\bin\srvctl stop database -db database_unique_name -node newly_added_node_name
      

      Note:

      Run the srvctl config database command to list all of the databases configured with Oracle Clusterware. Use the srvctl config database -db database_unique_name command to find the database details. If the ORACLE_HOME path leads to the Oracle ACFS mount path, then the database uses Oracle ACFS. Use the command output to find the database instance name configured to run on the newly added node.

    2. Use Windows Server Manager Control to stop and delete services.

    3. For each of the databases and database homes collected in the first substep, run the following command:

      C:\>ORACLE_HOME\bin\srvctl start database -db database_unique_name -node newly_added_node_name
      
  5. Run the following command to verify the integrity of the Oracle Clusterware components on all of the configured nodes, both the preexisting nodes and the nodes that you have added:

    C:\>cluvfy stage -post crsinst -n all [-verbose]
    

After you complete the procedure in this section for adding nodes, you can optionally extend Oracle Database with Oracle RAC components to the new nodes, making them members of an existing Oracle RAC database.


See Also:

Oracle Real Application Clusters Administration and Deployment Guide for more information about extending Oracle Database with Oracle RAC to new nodes

Deleting a Cluster Node on Windows Systems

This section describes how to delete a cluster node on Windows systems. This procedure assumes that Oracle Clusterware is installed on node1, node2, and node3, and that you are deleting node3 from the cluster.


Notes:

  • Oracle does not support using Oracle Enterprise Manager to delete nodes on Windows systems.

  • If you delete the last node of a cluster that is serviced by GNS, then you must delete the entries for that cluster from GNS.

  • You can remove the Oracle RAC database instance from the node before removing the node from the cluster but this step is not required. If you do not remove the instance, then the instance is still configured but never runs. Deleting a node from a cluster does not remove a node's configuration information from the cluster. The residual configuration information does not interfere with the operation of the cluster.

    See Also: Oracle Real Application Clusters Administration and Deployment Guide for more information about deleting an Oracle RAC database instance


To delete a cluster node on Windows systems:

  1. Only if you have a local home, on the node you want to delete, run the following command with -local option to update the node list:

    C:\>Grid_home\oui\bin\setup.exe -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={node_to_be_deleted}" CRS=TRUE -local
    

    Note:

    If you are deleting a Leaf Node from an Oracle Flex Cluster, then you do not have to run this command.

  2. Run the deinstall tool on the node you want to delete to deinstall and deconfigure the Oracle Clusterware home, as follows:

    C:\Grid_home\deinstall\>deinstall.bat -local
    

    Caution:

    • If you do not specify the -local flag, then the command removes the Oracle Grid Infrastructure home from every node in the cluster.

    • If you cut and paste the preceding command, then paste it into a text editor before pasting it to the command line to remove any formatting this document might contain.


  3. On any node that you are not deleting, run the following command from the Grid_home\oui\bin directory where remaining_nodes_list is a comma-delimited list of the nodes that are going to remain part of your cluster:

    C:\>setup.exe -updateNodeList ORACLE_HOME=Grid_home "CLUSTER_NODES={remaining_nodes_list}" CRS=TRUE -silent
    

    Notes:

    • You must run this command a second time where ORACLE_HOME=ORACLE_HOME, and CRS=TRUE -silent is omitted from the syntax, as follows:

      C:\>setup.exe -updateNodeList ORACLE_HOME=ORACLE_HOME "CLUSTER_NODES={remaining_nodes_list}"
      
    • If you have a shared Oracle Grid Infrastructure home, then append the -cfs option to the command example in this step and provide a complete path location for the cluster file system.


  4. On a node that you are not deleting, run the following command:

    C:\>Grid_home\bin\crsctl delete node -n node_to_be_deleted
    
  5. Run the following CVU command to verify that the specified nodes have been successfully deleted from the cluster:

    C:\>cluvfy stage -post nodedel -n node_list [-verbose]
    

Index

A  B  C  D  E  F  G  H  I  L  M  N  O  P  Q  R  S  T  U  V  W 

A

access control list (ACL), 2.1.2
ACL
resource attribute, B
action entry points
defined, 9.2
action script, 9.1.4
action scripts
actions, 9.2
ACTION_FAILURE_EVENT_TEMPLATE
resource attribute, B
ACTION_SCRIPT
resource attribute, B
ACTION_TIMEOUT
resource attribute, B
ACTIONS
resource attribute, B
ACTIVE_PLACEMENT
resource attribute, B
addresses, configuring manually, 1.2.4.2
administrative tools
overview and concepts, 1.6
ADR, J.2
directory structure, J.2.1
ADR home, J.2.1
directories
incident, J.2.1
trace, J.2.1
agent
defined, 9.1.3
agent framework
defined, 9.1.3
agent programs, 9.1.3
AGENT_FILENAME
resource attribute, B
agents, 9.1.3
appagent, 1.3.1.2, 9.1.3
appagent.exe, 1.3.1.2, 9.1.3
scriptagent, 1.3.1.2, 9.1.3
scriptagent.exe, 1.3.1.2, 9.1.3
alert messages
CRSD, J.7
ALERT_TEMPLATE
resource attribute, B
APIs
agent framework
clsagfw_add_type(), G
clsagfw_check_resource(), G
clsagfw_create_attr_iterator(), G
clsagfw_delete_cookie(), G
clsagfw_exit2(), G
clsagfw_get_attr_from_iterator(), G
clsagfw_get_attrvalue(), G
clsagfw_get_check_type(), G
clsagfw_get_cmdid(), G
clsagfw_get_cookie(), G
clsagfw_get_request_action_name(), G
clsagfw_get_resource_id(), G
clsagfw_get_resource_name(), G
clsagfw_get_retry_count(), G
clsagfw_get_type_name(), G
clsagfw_init(), G
clsagfw_is_cmd_timedout(), G
clsagfw_log(), G
clsagfw_modify_attribute(), G
clsagfw_reset_attr_iterator(), G
clsagfw_send_status2(), G
clsagfw_set_cookie(), G
clsagfw_set_entrypoint(), G
clsagfw_set_exitcb(), G
clsagfw_set_resource_state_label(), G
clsagfw_startup(), G
clscrs_stat2, H
miscellaneous
clscrs_get_error_details, H
clscrs_request_action, H
clscrs_restart_resource, H
clscrs_start_resource_in_pools, H
clscrs_stat3, H
clscrs_stop_resource_in_pools, H
server categorization
clscrs_get_server_by_category, H
clscrs_register_server, H
clscrs_register_servercategory, H
clscrs_servercategory_create, H
clscrs_servercategory_destroy, H
clscrs_unregister_servercategory, H
What-If
clscrs_whatif_add_server, H
clscrs_whatif_delete_server, H
clscrs_whatif_fail_resource, H
clscrs_whatif_register_resource, H
clscrs_whatif_register_serverpool, H
clscrs_whatif_relocate_resource, H
clscrs_whatif_relocate_server, H
clscrs_whatif_set_activepolicy, H
clscrs_whatif_start_resource, H
clscrs_whatif_stop_resource, H
clscrs_whatif_unregister_serverpool, H
application agent, 1.3.1.2, 9.1.3
application VIP
deleting, 9.3.1
applications
defining a VIP address, 1.8
highly available, 1.8
managing with CLSCRS commands, H
appvipcfg create, 9.3.1
attraction start dependency, 9.2.3.1
modifiers, B
AUTO_START
resource attribute, B
autoconfig, 1.2.4, 2.2.2
automatic address configuration, 2.2.2.1
automatic cluster address configuration, 2.2.2
Automatic Diagnostic Repository
See ADR

B

background processes, 1.3.1
Baseboard Management Controller (BMC)
See Intelligent Platform Management Interface (IPMI)
block-by-block checksum operation, I
built-in agents, 9.1.3

C

Cache Fusion
communication, D
CARDINALITY
resource attribute, B
changing a network interface
network interface
changing, 2.8.2.4
changing interface names, consequences, 2.8.2.3
changing the cluster mode, 4.2.1
changing VIP addresses, 2.8.1
CHECK_INTERVAL
resource attribute, B
CHECK_TIMEOUT
resource attribute, B
checksum operation
block-by-block, I
CHM
See Cluster Health Monitor (CHM)
CLEAN_TIMEOUT
resource attribute, B
client cluster, 1.2.4
Client Data File
Using to configure shared GNS, 2.6.2.2
client-side diagnosability infrastructure (CRSD)
alerts, J.7
cloning
Oracle Clusterware, 8
Oracle Grid Infrastructure, 8
cloning, overview for clusterware, 1.7
CLSCRS
What-If APIs, H
described, H
CLSCRS APIs
callback mechanism, H
data structures, H
clscrs_crsentity, H
clscrs_crsentitylist, H
clscrs_entity_type, H
clscrs_sp, H
clscrs_splist, H
deprecated, H
error handling and tracing, H
filters, E, H
comparison filter, H
expression filter, H
initialization and persistence, H
memory management, H
overview, H
threading support, H
CLSCRS commands
overview, H
clscrs_crsentity, H
clscrs_crsentitylist, H
clscrs_entity_type, H
clscrs_sp, H
clscrs_splist, H
CLSD-1009 message
resolving, 6.1.2.5
CLSD-1011 message
resolving, 6.1.2.5
cluster configuration policy, 3.1.7
Cluster Health Monitor (CHM), 1.6
collecting CHM data, J.1.2.2
daemons, J.1.2.1
OCLUMON, J.1.3
services
cluster logger, J.1.2.1
system monitor, J.1.2.1
Cluster Health Monitor (crf)
debugging, E
cluster interconnect
Cache Fusion communication, D
changing private network addresses, 2.8.2.1
cluster mode
changing, 4.2.1
Cluster Ready Services (CRS)
debugging, E
defined, 1.3.1.1
cluster resource, 9.1.2
cluster storage
recording with OCR, 1.2.3
Cluster Synchronization Services (css)
debugging, E
Cluster Synchronization Services (CSS), 1.3.1.1
defined, 1.3.1.1
cluster time management, 1.9
Cluster Time Synchronization Service (CTSS), 1.9
debugging, E
Cluster Verification Utility (CVU)
See CVU
CLUSTER_INTERCONNECT interface
specifying with OIFCFG, D
CLUSTER_INTERCONNECTS initialization parameter, 2.8.2.4
cluster_resource, 9.1.2
cluster-aware resource, 9.1.2
clusters
converting, 2.6.2
converting to an Oracle Flex Cluster, 4.2.1
clusterware, cloning overview, 1.7
cluvfy
See CVU
compatibility
Oracle Clusterware, Oracle ASM, and database, 1.4.1
component parameter
supplying to CRSCTL commands, E
component_name parameter
supplying to crsctl set trace command, E
configurations
reinitializing OCR, 6.1.6
configuring
voting files, 6
converting clusters, 2.6.2
CPUS
node view, J.1.3.2
CRS
See Cluster Ready Services (CRS)
CRSCTL
checking the Oracle Clusterware status, 6.1.2.3
command reference, E
commands
add category, E
add crs administrator, E
add css votedisk, E
add policy, E
add resource, E
add serverpool, E
add type, E
add wallet, E
check cluster, E
check crs, E
check css, E
check ctss, E
check evm, E
check has, E
check resource, E
config crs, E
config has, E
create policyset, E
delete category, E
delete crs administrator, E
delete css votedisk, E
delete node, E
delete policy, E
delete resource, E
delete serverpool, E
delete type, E
delete wallet, E
disable crs, E
disable has, E
discover dhcp, E
enable crs, E
enable has, E
eval activate policy, E
eval add resource, E
eval add server, E
eval add serverpool, E
eval delete server, E
eval delete serverpool, E
eval fail resource, E
eval modify serverpool, E
eval relocate resource, E
eval relocate server, E
eval start resource, E
eval stop resource, E
get clientid dhcp, E
get cluster hubsize, E
get cluster mode, E
get cpu equivalency, E
get css, E
get css ipmiaddr, E
get css leafmisscount, E
get hostname, E
get node role, E
get nodename, E
get resource use, E
get server label, E
getperm resource, E
getperm serverpool, E
getperm type, E
lsmodules, E
modify category, E
modify policy, E
modify policyset, E
modify resource, E, E
modify serverpool, E
modify type, E
modify wallet, E
pin css, E
query crs activeversion, E
query crs administrator, E
query crs autostart, E
query crs releasepatch, E
query crs releaseversion, E
query crs softwarepatch, E
query crs softwareversion, E
query css ipmiconfig, E
query css ipmidevice, E
query css votedisk, E
query dns, E
query has releaseversion, E
query has softwareversion, E
query socket udp, E
query wallet, E
release dhcp, E
relocate resource, E, E
relocate server, E
replace discoverystring, E
replace votedisk, E
request action, E
request dhcp, E
restart resource, E
set cluster hubsize, E
set cluster mode, E
set cpu equivalency, E
set crs autostart, E
set css, E
set css ipmiaddr, E
set css ipmiadmin, E
set css leafmisscount, E
set log, E
set node role, E
set resource use, E
set server label, E
setperm resource, E
setperm serverpool, E
setperm type, E
start cluster, E
start crs, E
start has, E
start ip, E
start resource, E
start rollingpatch, E
start rollingupgrade, E
start testdns, E
status category, E
status ip, E
status policy, E
status policyset, E
status resource, E
status server, E
status serverpool, E
status testdns, E
status type, E
stop cluster, E
stop crs, E
stop has, E
stop ip, E
stop resource, E
stop rollingpatch, E
stop testdns, E
unpin css, E
unset css, E
unset css ipmiconfig, E
unset css leafmisscount, E
dual environment commands, E
Oracle RAC environment commands, E
Oracle Restart environment commands, E
CRSCTL commands
cluster aware, 1.6
component parameter, E
component_name parameter, E
debug log, E
debugging_level parameter, E, E
lsmodules, E
module_name parameter, E
overview, 1.6
resource_name parameter, E
set log, E
set trace, E
tracing_level parameter, E
crsctl set log, E
crsctl set trace, E
CRSD background process
alert messages, J.7
CSS
See Cluster Synchronization Services (CSS)
CVU
about, A
baseline creation, A
commands
cluvfy comp acfs, A
cluvfy comp admprv, A
cluvfy comp asm, A
cluvfy comp baseline, A
cluvfy comp clocksync, A
cluvfy comp clumgr, A
cluvfy comp crs, A
cluvfy comp dns, A
cluvfy comp freespace, A
cluvfy comp gns, A
cluvfy comp gpnp, A
cluvfy comp ha, A
cluvfy comp healthcheck, A
cluvfy comp nodeapp, A
cluvfy comp nodecon, 2.8.1, A
cluvfy comp nodereach, A
cluvfy comp ocr, A
cluvfy comp ohasd, A
cluvfy comp olr, A
cluvfy comp peer, A
cluvfy comp scan, A
cluvfy comp software, A
cluvfy comp space, A
cluvfy comp ssa, A
cluvfy comp sys, A
cluvfy comp vdisk, A
cluvfy stage -post acfscfg, A
cluvfy stage -post cfs, A
cluvfy stage -post crsinst, A
cluvfy stage -post hacfg, A
cluvfy stage -post hwos, A
cluvfy stage -post nodeadd, A
cluvfy stage -pre acfscfg, A
cluvfy stage -pre cfs, A
cluvfy stage -pre crsinst, A
cluvfy stage -pre dbcfg, A
cluvfy stage -pre dbinst, A
cluvfy stage -pre hacfg, A
cluvfy stage -pre nodeadd, A
component verifications
checking Oracle Clusterware and Oracle Database installations, A
Cluster Manager subcomponent, A
connectivity between cluster nodes, A
CTSS integrity, A
Domain Name Service (DNS), A
free space, A
Grid Naming Service (GNS), A
Grid Plug and Play service and profile, A
integrity of high availability, A
integrity of OCR, A
integrity of ohasd, A
integrity of OLR, A
integrity of Oracle ACFS, A
integrity of Oracle ASM, A
integrity of voting files, A
node applications, A
node comparison, A
Oracle Clusterware component, A
reachability of nodes, A
SCAN configuration, A
software distribution across nodes, A
storage, A
system requirements, A
user and permissions, A
difference between runcluvfy.sh and cluvfy, A
installation requirements, A
known issues, A
node list shortcuts, A
online Help system, A
overview and concepts, 1.6
performing verifications, A
runcluvfy.sh, A
stage verifications
database configuration, A
high availability installation, A
network and storage on all nodes, A
node installation, A
Oracle ACFS, A
Oracle ACFS configuration, A
Oracle Clusterware installation, A
Oracle RAC installation, A
UNKNOWN output, A
verbose mode, A

D

debugging
CRS, CSS, and EVM modules, E
Oracle Clusterware resources, E
debugging_level parameter
supplying to CRSCTL commands, E
default application VIP
creating
appvipcfg create, 9.3.1
defining network interfaces
OIFCFG command-line interface, D
DELETE_TIMEOUT
resource attribute, B
delif command
OIFCFG command-line interface, D
deployment scheme
deciding, 9.3.2.1
DESCRIPTION
resource attribute, B
DEVICES
node view, J.1.3.2
DHCP configuration, 2.2.2.1
diagcollection.pl, J.3
diagnostic directories
alert, J.2.3
core, J.2.5
incident, J.2.4
output, J.2.5
trace, J.2.2
diagnostics collection script, J.3
disk group redundancy, 6.2.1
external, 6.2.1
high, 6.2.1
normal, 6.2.1
dispersion start dependency, 9.2.3.1
modifiers, B
DNS, entries example for GNS and SCAN, 2.2.1
DRUID
diagnostic record unique ID, J.7.1
Dynamic Host Configuration Protocol (DHCP), 1.2.4

E

ENABLED
resource attribute, B
enabling debugging for Oracle Clusterware resources, E
enabling debugging for the CRS, CSS, and EVM modules, E
enabling tracing for Oracle Clusterware components, E
entry points
ABORT, 9.1.3
ACTION, 9.1.3
CHECK, 9.1.3
CLSAGFW_FAILED state, 9.1.3
CLSAGFW_ONLINE state, 9.1.3
CLSAGFW_PARTIAL state, 9.1.3
CLSAGFW_PLANNED_OFFLINE state, 9.1.3
CLSAGFW_UNKNOWN state, 9.1.3
CLSAGFW_UNPLANNED_OFFLINE state, 9.1.3
CLEAN, 9.1.3
defined, 9.1.3
DELETE, 9.1.3
MODIFY, 9.1.3
monitor, 9.1.3
START, 9.1.3
STOP, 9.1.3
Event Management (EVM)
defined, 1.3.1.1
Event Manager (evm)
debugging, E
EVM
overview, 1.3.1.1
See Event Management (EVM)
exclusion start dependency, 9.2.3.1
modifiers, B
extending Oracle database home
on non-shared storage, 7.2.1
on shared storage
network-attached storage, 7.2.1
Oracle ACFS, 7.2.1

F

failure groups, 6.2.1
quorum, 6.2.1
failure isolation
configuring IPMI for, 2.4.3.1
FAILURE_INTERVAL
resource attribute, B
FAILURE_THRESHOLD
resource attribute, B
Fast Application Notification (FAN), 1.3.1.1
FILESYSTEMS
node view, J.1.3.2
Free server pool, 3.1.4
described, 3.1.4.1

G

generic server pool
described, 3.1.4.2
Generic server pool, 3.1.4
generic_application, 9.1.2
resource type
creating resources of, 9.3.2.4
getif command
OIFCFG command-line interface, D
global interface
network interface stored as, D
GNS, 1.2.4.1
administering, 2.6
See Grid Naming Service (GNS)
starting, 2.6.1
stopping, 2.6.1
GNS daemon
and GNS VIP, 2.2.1
port number for, 2.2.1
GNS, GNS VIP, 2.2.1
gold image, 5
gold images
adding to Rapid Home Provisioning server, 5.2.2
Grid Interprocess Communication (gipc)
debugging, E
Grid Interprocess Communication (GIPC), 1.3.1.2
Grid Naming Service (GNS)
See GNS
defined, 1.3.1.1
Grid Plug and Play (gpnp)
debugging, E
Grid Plug and Play (GPNPD), 1.3.1.2

H

HAIP
highly available IP address, 2.8.2.2
hard start dependency, 9.2.3.1
modifiers, B
hard stop dependency, 9.2.3.2
modifiers, B
hardware requirements, 1.2.1
high availability
and Oracle Clusterware, 1.3.1.1
application programming interface, 1.8
framework, 1.8
HOSTING_MEMBERS
resource attribute, B
Hub Node, 4.1

I

iflist command
OIFCFG command-line interface, D
importing
OCR, 6.1.6
incident trace files, J.2.4
initialization parameters
CLUSTER_INTERCONNECTS, 2.8.2.4
installation
introduction, 1.4
installations
configuring voting files, 6
INSTANCE_COUNT
resource attribute, B
INSTANCE_FAILOVER
resource attribute, B
Intelligent Platform Management Interface (IPMI), 2.4.1
configuring for failure isolation, 2.4.3.1
modifying IPMI configuration, 2.4.3.2
removing IPMI configuration, 2.4.3.3
CRSCTL commands
get css ipmiaddr, E
query css ipmiconfig, E
query css ipmidevice, E
set css ipmiaddr, E
set css ipmiadmin, E
unset css ipmiconfig, E
Interconnects page
monitoring clusterware with Oracle Enterprise Manager, J.1
interface names, consequences of changing, 2.8.2.3
INTERMEDIATE_TIMEOUT
resource attribute, B
INTERNAL_STATE
resource attribute, B
IPMI
modifying administrator, E
obtaining IP address, E
See Intelligent Platform Management Interface (IPMI)
storing IP address, E
IPv4, 2.5.1
changing to a dynamic IPv6 address, 2.8.6
changing to a static IPv6 address, 2.8.5
network configuration
adding an IPv6 network to, 2.8.7
networks
transitioning to IPv6 networks, 2.8.8
IPv6, 2.5.1
name resolution, 2.5.1.2
IPv6 Stateless Address Autoconfiguration Protocol, 2.2.2.1

L

LAST_SERVER
resource attribute, B
LAST_STATE_CHANGE
resource attribute, B
Leaf Node, 4.1
listeners
in OCR, 1.2.3
LOAD
resource attribute, B
local resource, 9.1.2
local_resource, 9.1.2
log levels
setting for Oracle Clusterware, E
lsmodules parameter
with the CRSCTL command, E

M

managing applications
CLSCRS commands, H
managing Oracle Clusterware
with CRSCTL, 1.6
manual address configuration, 1.2.4.2
mDNS
See Multicast Domain Name Service (mDNS)
mDNSResponder
purpose, 1.3.2
memory pressure, 3.4.1
mirroring
OCR (Oracle Cluster Registry), 6.1.2
MODIFY_TIMEOUT
resource attribute, B
module_name parameter
supplying to CRSCTL commands, E
modules
debugging
crf, E
crs, E
css, E
ctss, E
evm, E
gipc, E
gpnp, E
Multicast Domain Name Service (mDNS)
defined, 1.3.1.2

N

NAME
resource attribute, B
network file system
See NFS
network interface
configuration, 2.5.1
global, D
node-specific, D
OIFCFG syntax, D
network interface card, 2.8.2.1
network interfaces
defining with OIFCFG, D
types, D
updating subnet classification, D
networks
creating, 2.8.3
for Oracle Flex Clusters, 4.1
NFS home client, 5.1.2
NICS
node view, J.1.3.2
node roles
changing, 4.2.2
node view
defined, J.1.3.2
node views
CPUS, J.1.3.2
DEVICES, J.1.3.2
FILESYSTEMS, J.1.3.2
NICS, J.1.3.2
PROCESSES, J.1.3.2
PROTOCOL ERRORS, J.1.3.2
SYSTEM, J.1.3.2
TOP CONSUMERS, J.1.3.2
nodes
adding to a cluster
on Linux or UNIX, 7.2.1
on Windows, 7.3, 7.3.1
deleting from a cluster
on Linux or UNIX, 7.2.2
VIP address, 1.2.4
node-specific interface
network interface stored as, D
non-default application VIP
creating, 9.3.1

O

OCLUMON
commands
debug, J.1.3.1
dumpnodeview, J.1.3.2
manage, J.1.3.3
version, J.1.3.4
OCR (Oracle Cluster Registry)
adding, 6.1.2, 6.1.2.1
automatic backups, 6.1.3
backing up, 6.1.3
changing backup file location, 6.1.3
contents, 6.1
diagnosing problems with OCRDUMP, 6.1.5
downgrading, 6.1.8
exporting, 6.1.6
importing
on Windows systems, 6.1.6.2
importing content
on Linux and UNIX systems, 6.1.6.1
listing backup files, 6.1.3
managing, 6.1
manual backups, 6.1.3
migrating from Oracle ASM, 6.1.1.1
migrating to Oracle ASM, 6.1.1
OCRDUMP utility command examples, I
ocr.loc file, 6.1.2
overriding data loss protection mechanism, 6.1.2.5
recording cluster configuration information, 1.1
recording cluster storage, 1.2.3
removing, 6.1.2, 6.1.2.2
repairing, 6.1.2, 6.1.2.4
replacing, 6.1.2, 6.1.2.3
restoring, 6.1.4
in Oracle Restart, 6.1.4
on Linux and UNIX systems, 6.1.4
on Windows systems, 6.1.4
using automatically generated OCR backups, 6.1.4
troubleshooting, 6.1.5, I
upgrading, 6.1.8
viewing content with OCRDUMP, I
OCR configuration tool
See OCRCONFIG utility
OCRCHECK
commands
-config, I
-local, I
OCRCHECK utility
changing the amount of logging, I
check status of OLR, 6.1.7
diagnosing OCR problems with, 6.1.5
log files, I
sample output, I
OCRCONFIG utility
administering OLR, 6.1.7
commands
-add, I
-backuploc, I
-copy, I
-delete, I
-downgrade, I
-export, I
-import, 6.1.6.1, I
-manualbackup, I
-overwrite, I
-repair, I
-replace, I
-restore, I
-showbackup, I
-showbackuploc, I
-upgrade, I
log files, I
overview and concepts, 1.6
syntax, I
OCRDUMP utility
changing the amount of logging, I
command examples, I
commands, I
backup, I
diagnosing OCR problems with, 6.1.5, I
dump content of OLR, 6.1.7
log files, I
sample output, I
syntax and options, I
SYSTEM.language key output, I
SYSTEM.version key output, I
ocr.loc file, 6.1.2
ocrlog.ini file
editing, I
OFFLINE_CHECK_INTERVAL
resource attribute, B
OIFCFG command-line interface
commands, D
interface types, D
invoking, D
overview and concepts, 1.6, D
syntax, D
OLR (Oracle Local Registry)
administering, 6.1.7
backing up, 6.1.7
check status of, 6.1.7
defined, 6.1.7
dump content of, 6.1.7
exporting to a file, 6.1.7
importing a file, 6.1.7
restoring, 6.1.7
viewing backup files, 6.1.7
viewing content with OCRDUMP, I
OLSNODES command
reference, C
ONS
See Oracle Notification Service (ONS)
operating systems
requirements for Oracle Clusterware, 1.1
oraagent
defined, 1.3.1.1, 1.3.1.2
Oracle agent, 1.3.1.1, 1.3.1.2
Oracle ASM
disk groups
redundancy, 6.2.1
failure group, 6.2.1
migrating OCR locations to, 6.1.1
Oracle Cluster Registry
See OCR (Oracle Cluster Registry)
Oracle Clusterware
adding a home to a new node, 7.2.1
alert log, J.2.3
background processes
on Windows, 1.3.2
debugging
component level, E
dynamic, E
defined, 1
OCR
migrating from Oracle ASM, 6.1.1.1
migrating to Oracle ASM, 6.1.1
processes
Cluster Ready Services (CRS), 1.3.1.1
Cluster Synchronization Services (CSS), 1.3.1.1
Event Management (EVM), 1.3.1.1
Grid Interprocess Communication (GIPC), 1.3.1.2
Grid Naming Service (GNS), 1.3.1.1
Multicast Domain Name Service (mDNS), 1.3.1.2
oraagent, 1.3.1.1, 1.3.1.2
Oracle Notification Service (ONS), 1.3.1.1
orarootagent, 1.3.1.1, 1.3.1.2
upgrade
out-of-place, 1.5
Oracle Clusterware Control (CRSCTL)
See CRSCTL
Oracle Clusterware home
adding, 7.2.1, 7.3.1
deleting manually, 7.2.2
Oracle Database
fault diagnosability infrastructure, J.2
Oracle Enterprise Manager
adding resources with, 9.3.3
adding VIPs with, 9.3.1
overview and concepts, 1.6
using the Interconnects page to monitor Oracle Clusterware, J.1
Oracle Flex Clusters
and Oracle Flex ASM cluster, 4.1
changing from Oracle Clusterware standard Clusters to, 4.2.1.1
CRSCTL commands
get cluster hubsize, E
get cluster mode, E
get css leafmisscount, E
get node role, E
set cluster hubsize, E
set cluster mode, E
set css leafmisscount, E
set node role, E
unset css leafmisscount, E
Hub Node, 4.1
Leaf Node, 4.1
managing, 4.2
Oracle Grid Infrastructure
cloning, 8
cloning Oracle Clusterware in, 8
Configuration Wizard, 2.3
Oracle Grid Infrastructure Management Repository, J.1.2.1
attributes and requirements, J.1.2.1
Oracle Interface Configuration tool
See OIFCFG
Oracle Local Registry
See OLR (Oracle Local Registry)
Oracle Notification Service (ONS)
defined, 1.3.1.1
Oracle Real Application Clusters
overview of administration, 1
Oracle Restart
restoring OCR, 6.1.4
Oracle root agent, 1.3.1.1, 1.3.1.2
Oracle Trace File Analyzer (TFA) Collector
See TFA
Oracle Universal Installer
Client Data File for shared GNS clients, 2.6.2.2
Oracle Clusterware installation, 1.4
OracleHAService
purpose, 1.3.2
orarootagent
defined, 1.3.1.1, 1.3.1.2
out-of-place upgrade, 1.5

P

PLACEMENT
resource attribute, B
policy-based management, 3
adding resources to server pools, 9.3.2.2
private network address
changing, 2.8.2.1
PROCESSES
node view, J.1.3.2
PROFILE_CHANGE_EVENT_TEMPLATE
resource attribute, B
PROTOCOL ERRORS
node view, J.1.3.2
provisioning software, 5.2.3
public interface
specifying with OIFCFG, D
pullup start dependency, 9.2.3.1
modifiers, B

Q

quorum failure groups, 6.2.1

R

Rapid Home Provisioning, 5
architecture, 5.1
components
NFS home client, 5.1.2
Rapid Home Provisioning Client, 5.1.2
Rapid Home Provisioning Server, 5.1.1
image series, 5.1.4
image state, 5.1.4
images, 5.1.4
managing Clients, 5.3
roles, 5.1.3
basic built-in, 5.1.3
composite built-in, 5.1.3
Rapid Home Provisioning Client, 5.1.2
assigning roles to users, 5.3.2
creating, 5.2.4
creating users, 5.3.2
enabling and disabling, 5.3.1
managing, 5.3
managing the password, 5.3.3
Rapid Home Provisioning Server, 5
creating, 5.2.1
recording cluster configuration information, 1.1
recording node membership information
voting file, 1.1
redundancy
voting file, 1.2.3
Redundant Interconnect Usage, 2.8.2.2
registering resources, 9.1.6
RELOCATE_BY_DEPENDENCY
resource attribute, B
resource attributes
ACL, B
ACTION_FAILURE_EVENT_TEMPLATE, B
ACTION_SCRIPT, B
ACTION_TIMEOUT, B
ACTIONS, B
ACTIVE_PLACEMENT, B
AGENT_FILENAME, B
ALERT_TEMPLATE, B
AUTO_START, B
CARDINALITY, B
CHECK_INTERVAL, B
CHECK_TIMEOUT, B
CLEAN_TIMEOUT, B
DELETE_TIMEOUT, B
DESCRIPTION, B
ENABLED, B
FAILURE_INTERVAL, B
FAILURE_THRESHOLD, B
HOSTING_MEMBERS, B
INSTANCE_COUNT, B
INSTANCE_FAILOVER, B
INTERMEDIATE_TIMEOUT, B
INTERNAL_STATE, B
LAST_SERVER, B
LAST_STATE_CHANGE, B
LOAD, B
MODIFY_TIMEOUT, B
NAME, B
OFFLINE_CHECK_INTERVAL, B
PLACEMENT, B
PROFILE_CHANGE_EVENT_TEMPLATE, B
RELOCATE_BY_DEPENDENCY, B
RESTART_ATTEMPTS, B
RESTART_COUNT, B
SCRIPT_TIMEOUT, B
SERVER_CATEGORY, B
SERVER_POOLS, B
START_CONCURRENCY, B
START_DEPENDENCIES, B
START_TIMEOUT, B
STATE_CHANGE_EVENT_TEMPLATE, B
STATE_DETAILS, B
STOP_CONCURRENCY, B
STOP_DEPENDENCIES, B
STOP_TIMEOUT, B
TARGET, B
TARGET_SERVER, B
TYPE, B
UPTIME_THRESHOLD, B
USE_STICKINESS, B
USER_WORKLOAD, B
resource dependencies
defined, 9.2.3
start dependencies, 9.2.3.1
attraction, 9.2.3.1, B
dispersion, 9.2.3.1, B
exclusion, 9.2.3.1, B
hard, 9.2.3.1, B
pullup, 9.2.3.1, B
weak, 9.2.3.1, B
stop dependencies, 9.2.3.2
hard, 9.2.3.2, B
resource permissions
changing, 9.3.4
resource type
cluster_resource, 9.1.2
defined, 9.1.2
generic_application, 9.1.2
local_resource, 9.1.2
resource_name parameter
supplying to CRSCTL commands, E
resources
adding, 9.3.2
with Oracle Enterprise Manager, 9.3.3
adding to a server pool, 9.3.2.2
adding using a server-specific deployment, 9.3.2.3
creating with the generic_application resource type, 9.3.2.4
defined, 9.1.1
registering in Oracle Clusterware, 9.1.6
RESTART_ATTEMPTS
resource attribute, B
RESTART_COUNT
resource attribute, B
restoring
OCR, 6.1.6
restricting service registration, 2.5.3
RHPCTL
command syntax, F
commands
add client, F
add database, F
add image, F
add role, F
add series, F
add workingcopy, F
allow image, F
delete client, F
delete database, F
delete image, F
delete series, F
delete user, F
delete workingcopy, F
deleteimage series, F
disallow image, F
export client, F
grant role, F
import image, F
insertimage series, F
modify client, F
move database, F
promote image, F
query client, F
query image, F
query role, F
query series, F
query server, F
query workingcopy, F
revoke role, F
help, F
role-separated management, 2.1.1
horizontal implementation, 2.1.1
configuring, 2.1.3
vertical implementation, 2.1.1
runcluvfy.sh, A

S

scalability
adding nodes and instances, quick-start format, 7.2
SCAN, 1.2.4.1
about, 2.5.2
SCAN listeners, 2.5.3
SCRIPT_TIMEOUT
resource attribute, B
sequence ID, H
server categorization, 3.2
using attributes to create, 3.1.7
server cluster, 1.2.4
Server Control Utility (SRVCTL), 1.2.3
See SRVCTL
server pools
creating, 9.3.2.3
described, 3.1.3
Free, 3.1.4
Generic, 3.1.4
SERVER_CATEGORY, 3.1.7
resource attribute, B
SERVER_POOLS
resource attribute, B
server-centric resource, 9.1.2
servers
described, 3.4
how Oracle Clusterware assigns, 3.1.6
Oracle Clusterware requirements, 1.1
states, 3.4
services
restricting registration with listeners, 2.5.3
setif command
OIFCFG command-line interface, D
shared GNS, 2.2.2.3
administering, 2.6
generating Client Data file for configuring, 2.6.2.2
See also GNS
starting, 2.6.1
stopping, 2.6.1
Single Client Access Name (SCAN)
See SCAN
singleton, 3.1.3
slew time synchronization, 1.9
software requirements, 1.2.3
SRVCTL
commands
add exportfs, F
add mountfs, F
add rhpclient, F
add rhpserver, F
config exportfs, F
config rhpclient, F
config rhpserver, F
disable exportfs, F
disable mountfs, F
disable rhpclient, F
disable rhpserver, F
enable exportfs, F
enable mountfs, F
enable rhpclient, F
enable rhpserver, F
modify exportfs, F
modify mountfs, F
modify rhpclient, F
modify rhpserver, F
relocate rhpclient, F
relocate rhpserver, F
remove exportfs, F
remove rhpclient, F
remove rhpserver, F
start exportfs, F
start rhpclient, F
start rhpserver, F
status exportfs, F
status mountfs, F
status rhpclient, F
status rhpserver, F
stop exportfs, F
stop mountfs, F
stop rhpclient, F
stop rhpserver, F
overview and concepts, 1.6
srvctl stop nodeapps command, 2.8.1
start effort evaluation
described, 9.2.4
START_CONCURRENCY
resource attribute, B
START_DEPENDENCIES
resource attribute, B
START_TIMEOUT
resource attribute, B
starting the OIFCFG interface, D
STATE_CHANGE_EVENT_TEMPLATE
resource attribute, B
STATE_DETAILS
resource attribute, B
Stateless Address Autoconfiguration Protocol
See autoconfig
step time synchronization, 1.9
STOP_CONCURRENCY
resource attribute, B
STOP_DEPENDENCIES
resource attribute, B
STOP_TIMEOUT
resource attribute, B
subnet
changing, 2.8.2.4
configuring for VIP address, 1.2.4
syntax
OCRDUMP utility, I
SYSTEM
node view, J.1.3.2
SYSTEM.language key
output, I
SYSTEM.version key
output, I

T

TARGET
resource attribute, B
TARGET_SERVER
resource attribute, B
TFA, J.6
daemon
restarting, J.6.1
shutting down, J.6.1
starting, J.6.1
stopping, J.6.1
data redaction feature, J.6.3
managing the TFA daemon, J.6.1
TFACTL
command-line utility, J.6.2
TFACTL
commands
tfactl diagcollect, J.6.2.6
tfactl directory, J.6.2.3
tfactl host, J.6.2.4
tfactl print, J.6.2.1
tfactl purge, J.6.2.2
tfactl set, J.6.2.5
TFA command-line utility, J.6.2
thread safety, H
TOP CONSUMERS
node view, J.1.3.2
trace files, J.2.2
tracing
enabling for Oracle Clusterware, E
enabling for Oracle Clusterware components, E
Oracle Clusterware components, E
tracing_level parameter
supplying to crsctl set trace command, E
troubleshooting
OCR, I
TYPE
resource attribute, B

U

uniform, 3.1.3
upgrade
migrating storage after, 6.1.1, 6.1.1.1
out-of-place, 1.5
upgrades
and SCANs, 2.8.1
UPTIME_THRESHOLD
resource attribute, B
USE_STICKINESS
resource attribute, B
USER_WORKLOAD
resource attribute, B

V

valid node checking, 2.5.3
versions
compatibility for Oracle Clusterware, Oracle ASM, and Oracle Database software, 1.4.1
VIP, 1.2.4, 9.3.1
adding
with Oracle Enterprise Manager, 9.3.1
address
changing, 2.8.1
defining for applications, 1.8
requirements, 1.2.4
virtual internet protocol address (VIP), 9.3.1
virtual IP
See VIP
voting files, 1.2.3
adding, 6.2.4
adding to non-Oracle ASM storage, 6.2.4
adding to Oracle ASM, 6.2.4
administering, 6
backing up, 6.2.2
deleting, 6.2.4
file universal identifier (FUID)
obtaining, 6.2.4
managing, 6.2
migrating, 6.2.4
migrating to Oracle ASM, 6.2.4
modifying not stored on Oracle ASM, 6.2.4
modifying stored on Oracle ASM, 6.2.4
replacing in non-Oracle ASM storage, 6.2.4
restoring, 6.2.3
storing on Oracle ASM, 6.2.1

W

weak start dependency, 9.2.3.1
modifiers, B
What-If APIs, H
sequence ID, H
Windows systems
services for clusterware, 1.3.2
working copy
creating on the Rapid Home Provisioning client, 5.2.3
creating on the Rapid Home Provisioning server, 5.2.3

F Rapid Home Provisioning Command Reference

This appendix contains reference information for Rapid Home Provisioning commands, including the Rapid Home Provisioning Control (RHPCTL) utility and the Server Control (SRVCTL) utility.

This appendix includes the following topics:

RHPCTL Command Reference

This section describes RHPCTL command usage information, and lists and describes RHPCTL commands.

RHPCTL Overview

RHPCTL is a command-line utility with which you perform Rapid Home Provisioning operations and manage Rapid Home Provisioning Servers and Clients. RHPCTL uses the following syntax:

rhpctl command object [parameters]

In RHPCTL syntax:

  • command is a verb, such as add, delete, or query.

  • object (also known as a noun) is the target on which RHPCTL performs the command, such as client or image.

  • parameters supply additional options for the command-object combination. Specify each parameter as -keyword value. If the value field contains a comma-delimited list, then do not use spaces between the items in the list.
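To illustrate the command-object-parameters shape, the following shell sketch assembles and prints sample RHPCTL invocations. The keyword and value names used here (such as -image, myimage, and the cluster names) are hypothetical placeholders for illustration only, not verified RHPCTL options:

```shell
# Sketch of RHPCTL's command object [parameters] syntax.
# NOTE: -image, myimage, clusterA, and clusterB below are
# hypothetical placeholders, not verified RHPCTL options.

command="query"          # verb: add, delete, query, ...
object="image"           # noun: client, image, ...
params="-image myimage"  # parameters given as -keyword value

echo "rhpctl $command $object $params"

# When a value is a comma-delimited list, do not put spaces
# between the items in the list:
echo "rhpctl add image -image myimage -clusters clusterA,clusterB"
```

Note that the comma-delimited list clusterA,clusterB is written without spaces, so the shell passes it to the utility as a single argument.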
