11gR2 Clusterware and Grid Home - What You Need to Know [ID 1053147.1]


 Modified 14-FEB-2011     Type BULLETIN     Status PUBLISHED 

In this Document
  Purpose
  Scope and Application
  11gR2 Clusterware and Grid Home - What You Need to Know
     11gR2 Clusterware Key Facts
     Clusterware Startup Sequence
     Important Log Locations
     Clusterware Resource Status Check
     Clusterware Resource Administration
     OCRCONFIG Options:
     OLSNODES Options
     Cluster Verification Options
  References


Applies to:

Oracle Server - Enterprise Edition - Version: 11.2.0.1 to 11.2.0.1 - Release: 11.2 to 11.2
Information in this document applies to any platform.

Purpose

The 11gR2 Clusterware has undergone numerous changes since the previous release. For information on the previous release(s), see Note: 259301.1 "CRS and 10g Real Application Clusters". This document covers the 11.2 Clusterware, which has both similarities to and differences from the previous version(s). 

Scope and Application


This document is intended for RAC Database Administrators and Oracle support engineers.

11gR2 Clusterware and Grid Home - What You Need to Know

11gR2 Clusterware Key Facts

  • 11gR2 Clusterware is required to be up and running prior to installing an 11gR2 Real Application Clusters database.
  • The GRID home consists of the Oracle Clusterware and ASM.  ASM should not be in a separate home.
  • The 11gR2 Clusterware can be installed in "Standalone" mode for ASM and/or "Oracle Restart" single node support. This clusterware is a subset of the full clusterware described in this document.
  • The 11gR2 Clusterware can be run by itself or on top of vendor clusterware.  See the certification matrix for certified combinations. Ref: Note: 184875.1 "How To Check The Certification Matrix for Real Application Clusters"
  • The GRID Home and the RAC/DB Home must be installed in different locations.
  • The 11gR2 Clusterware requires shared OCR and voting files.  These can be stored on ASM or a cluster filesystem.
  • The OCR is backed up automatically every 4 hours to <GRID_HOME>/cdata/<cluster name>/ and can be restored via ocrconfig. 
  • The voting file is backed up into the OCR at every configuration change and can be restored via crsctl. 
  • The 11gR2 Clusterware requires at least one private network for inter-node communication and at least one public network for external communication.  Several virtual IPs need to be registered with DNS: the node VIPs (one per node) and the SCAN VIPs (three).  These can be registered manually by your network administrator, or you can optionally configure "GNS" (Grid Naming Service) in the Oracle clusterware to handle the registration for you (note that GNS requires its own VIP).  
  • A SCAN (Single Client Access Name) is provided to clients to connect to.  For more info on SCAN see Note: 887522.1
  • The root.sh script at the end of the clusterware installation starts the clusterware stack.  For information on troubleshooting root.sh issues see Note: 1053970.1
  • Only one set of clusterware daemons can be running per node. 
  • On Unix, the clusterware stack is started via the init.ohasd script referenced in /etc/inittab with "respawn".
  • A node can be evicted (rebooted) if a node is deemed to be unhealthy.  This is done so that the health of the entire cluster can be maintained.  For more information on this see: Note: 1050693.1 "Troubleshooting 11.2 Clusterware Node Evictions (Reboots)"
  • Either have vendor time synchronization software (like NTP) fully configured and running, or have it not configured at all and let CTSS handle time synchronization.  See Note: 1054006.1 for more information.
  • If installing DB homes for a lower version, you will need to pin the nodes in the clusterware or you will see ORA-29702 errors.  See Note: 946332.1 and Note: 948456.1 for more info.
  • The clusterware stack can be started by either booting the machine, running "crsctl start crs" to start the clusterware stack, or by running "crsctl start cluster" to start the clusterware on all nodes.  Note that crsctl is in the <GRID_HOME>/bin directory and that "crsctl start cluster" will only work if ohasd is running.  See the example after this list.
  • The clusterware stack can be stopped by either shutting down the machine, running "crsctl stop crs" to stop the clusterware stack, or by running "crsctl stop cluster" to stop the clusterware on all nodes.  Note that crsctl is in the <GRID_HOME>/bin directory.
  • Killing clusterware daemons is not supported.
Note that it is also a good idea to follow the RAC Assurance best practices in Note: 810394.1
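
For example, a minimal sequence for checking and restarting the stack on one node might look like the following (illustrative only; crsctl lives in <GRID_HOME>/bin, and the start/stop commands must be run as root):

$ ./crsctl check crs            (verify the stack on the local node)
$ ./crsctl stop crs             (stop the stack on the local node)
$ ./crsctl start crs            (start the stack on the local node)
$ ./crsctl start cluster -all   (start the clusterware on all nodes; ohasd must already be running)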

Clusterware Startup Sequence

The following is the Clusterware startup sequence (diagram from the "Oracle Clusterware Administration and Deployment Guide"; the image is not reproduced here):


Don't let this diagram scare you too much.  You aren't responsible for managing all of these processes; that is the Clusterware's job!

Short summary of the startup sequence: INIT spawns init.ohasd (with respawn) which in turn starts the OHASD process (Oracle High Availability Services Daemon).  This daemon spawns 4 processes.
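
On Linux, for example, the respawn entry typically looks like the following (shown only as an illustration; the exact entry varies by platform and release):

$ grep init.ohasd /etc/inittab
h1:35:respawn:/etc/init.d/init.ohasd run >/dev/null 2>&1 </dev/null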

Level 1: OHASD Spawns:

  • cssdagent - Agent responsible for spawning CSSD.
  • orarootagent - Agent responsible for managing all root owned ohasd resources.
  • oraagent - Agent responsible for managing all oracle owned ohasd resources.
  • cssdmonitor - Monitors CSSD and node health (along with the cssdagent).

Level 2: OHASD rootagent spawns:

  • CRSD - Primary daemon responsible for managing cluster resources.
  • CTSSD - Cluster Time Synchronization Services Daemon
  • Diskmon
  • ACFS (ASM Cluster File System) Drivers 

Level 2: OHASD oraagent spawns:

  • MDNSD - Used for DNS lookup
  • GIPCD - Used for inter-process and inter-node communication
  • GPNPD - Grid Plug & Play Profile Daemon
  • EVMD - Event Monitor Daemon
  • ASM - Resource for monitoring ASM instances

Level 3: CRSD spawns:

  • orarootagent - Agent responsible for managing all root owned crsd resources.
  • oraagent - Agent responsible for managing all oracle owned crsd resources.

Level 4: CRSD rootagent spawns:

  • Network resource - To monitor the public network
  • SCAN VIP(s) - Single Client Access Name Virtual IPs
  • Node VIPs - One per node
  • ACFS Registry - For mounting ASM Cluster File System
  • GNS VIP (optional) - VIP for GNS

Level 4: CRSD oraagent spawns:

  • ASM Resource - ASM Instance(s) resource
  • Diskgroup - Used for managing/monitoring ASM diskgroups.  
  • DB Resource - Used for monitoring and managing the DB and instances
  • SCAN Listener - Listener for single client access name, listening on SCAN VIP
  • Listener - Node listener listening on the Node VIP
  • Services - Used for monitoring and managing services
  • ONS - Oracle Notification Service
  • eONS - Enhanced Oracle Notification Service
  • GSD - For 9i backward compatibility
  • GNS (optional) - Grid Naming Service - Performs name resolution
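
A quick way to see which of these daemons are actually running on a node is to look for their processes (a sketch; process names and paths vary slightly by platform and configuration):

$ ps -ef | egrep 'ohasd|crsd|ocssd|evmd|gpnpd|gipcd|mdnsd|octssd' | grep -v egrep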



Important Log Locations

Clusterware daemon logs are all under <GRID_HOME>/log/<nodename>.  Structure under <GRID_HOME>/log/<nodename>:

alert<nodename>.log - look here first for most clusterware issues
./admin:
./agent:
./agent/crsd:
./agent/crsd/oraagent_oracle:
./agent/crsd/ora_oc4j_type_oracle:
./agent/crsd/orarootagent_root:
./agent/ohasd:
./agent/ohasd/oraagent_oracle:
./agent/ohasd/oracssdagent_root:
./agent/ohasd/oracssdmonitor_root:
./agent/ohasd/orarootagent_root:
./client:
./crsd:
./cssd:
./ctssd:
./diskmon:
./evmd:
./gipcd:
./gnsd:
./gpnpd:
./mdnsd:
./ohasd:
./racg:
./racg/racgeut:
./racg/racgevtf:
./racg/racgmain:
./srvm:

The cfgtoollogs dir under <GRID_HOME> and $ORACLE_BASE contains other important logfiles, specifically for rootcrs.pl and configuration assistants like ASMCA, etc.

ASM logs live under $ORACLE_BASE/diag/asm/+asm/<ASM instance name>/trace

The diagcollection.pl script under <GRID_HOME>/bin can be used to automatically collect important files for support.  Run this as the root user. 
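
For example, to watch the clusterware alert log or to gather diagnostics for support (a sketch; substitute your own GRID_HOME and node name):

$ tail -f <GRID_HOME>/log/<nodename>/alert<nodename>.log   (clusterware alert log)
$ <GRID_HOME>/bin/diagcollection.pl                        (run as the root user)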

Clusterware Resource Status Check

The following command will display the status of all cluster resources:


$ ./crsctl status resource -t
--------------------------------------------------------------------------------
NAME           TARGET  STATE        SERVER                   STATE_DETAILS
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.DATADG.dg
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.LISTENER.lsnr
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.SYSTEMDG.dg
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.asm
               ONLINE  ONLINE       racbde1                  Started
               ONLINE  ONLINE       racbde2                  Started
ora.eons
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.gsd
               OFFLINE OFFLINE      racbde1
               OFFLINE OFFLINE      racbde2
ora.net1.network
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.ons
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
ora.registry.acfs
               ONLINE  ONLINE       racbde1
               ONLINE  ONLINE       racbde2
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
      1        ONLINE  ONLINE       racbde1
ora.LISTENER_SCAN2.lsnr
      1        ONLINE  ONLINE       racbde2
ora.LISTENER_SCAN3.lsnr
      1        ONLINE  ONLINE       racbde2
ora.oc4j
      1        OFFLINE OFFLINE
ora.rac.db
      1        ONLINE  ONLINE       racbde1                  Open
      2        ONLINE  ONLINE       racbde2                  Open
ora.racbde1.vip
      1        ONLINE  ONLINE       racbde1
ora.racbde2.vip
      1        ONLINE  ONLINE       racbde2
ora.scan1.vip
      1        ONLINE  ONLINE       racbde1
ora.scan2.vip
      1        ONLINE  ONLINE       racbde2
ora.scan3.vip
      1        ONLINE  ONLINE       racbde2
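
In addition to the resource status, the overall health of the clusterware daemons can be checked with crsctl (both commands are available in 11.2):

$ ./crsctl check crs            (checks ohasd, crsd, cssd and evmd on the local node)
$ ./crsctl check cluster -all   (checks the clusterware stack on all nodes)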

Clusterware Resource Administration

Srvctl and crsctl are used to manage clusterware resources.  The general rule is to use srvctl for whatever resource management you can.  Crsctl should only be used for things that you cannot do with srvctl (like start the cluster).  Both have a help feature to see the available syntax.
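
For example, typical day-to-day administration with srvctl might look like the following (a sketch; the database name "rac" and instance name "rac1" simply match the ora.rac.db resource shown above and are examples only):

$ srvctl status database -d rac          (status of all instances of the database)
$ srvctl config database -d rac          (configuration of the database resource)
$ srvctl stop instance -d rac -i rac1    (stop a single instance)
$ srvctl start instance -d rac -i rac1   (start it again)
$ srvctl status nodeapps                 (status of VIPs, ONS, etc.)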


Srvctl syntax:

Usage: srvctl setenv database -d <db_unique_name> {-t <name>=<val>[,<name>=<val>,...] | -T <name>=<val>}
Usage: srvctl unsetenv database -d <db_unique_name> -t "<name_list>"

Usage: srvctl add instance -d <db_unique_name> -i <inst_name> -n <node_name> [-f]
Usage: srvctl start instance -d <db_unique_name> {-n <node_name> [-i <inst_name_list>] | -i <inst_name_list>} [-o <start_options>]
Usage: srvctl stop instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>}  [-o <stop_options>] [-f]
Usage: srvctl status instance -d <db_unique_name> {-n <node_name> | -i <inst_name_list>} [-f] [-v]
Usage: srvctl enable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl disable instance -d <db_unique_name> -i "<inst_name_list>"
Usage: srvctl modify instance -d <db_unique_name> -i <inst_name> { -n <node_name> | -z }
Usage: srvctl remove instance -d <db_unique_name> [-i <inst_name>] [-f] [-y]

Usage: srvctl add service -d <db_unique_name> -s <service_name> {-r "<preferred_list>" [-a "<available_list>"] [-P {BASIC | NONE | PRECONNECT}] | -g <pool_name> [-c {UNIFORM | SINGLETON}] } [-k <net_num>] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}] [-q {TRUE|FALSE}] [-x {TRUE|FALSE}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <failover_retries>] [-w <failover_delay>]
Usage: srvctl add service -d <db_unique_name> -s <service_name> -u {-r "<new_pref_inst>" | -a "<new_avail_inst>"}
Usage: srvctl config service -d <db_unique_name> [-s <service_name>] [-a]
Usage: srvctl enable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl disable service -d <db_unique_name> -s "<service_name_list>" [-i <inst_name> | -n <node_name>]
Usage: srvctl status service -d <db_unique_name> [-s "<service_name_list>"] [-f] [-v]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <old_inst_name> -t <new_inst_name> [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -i <avail_inst_name> -r [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> -n -i "<preferred_list>" [-a "<available_list>"] [-f]
Usage: srvctl modify service -d <db_unique_name> -s <service_name> [-c {UNIFORM | SINGLETON}] [-P {BASIC|PRECONNECT|NONE}] [-l [PRIMARY][,PHYSICAL_STANDBY][,LOGICAL_STANDBY][,SNAPSHOT_STANDBY]] [-y {AUTOMATIC | MANUAL}][-q {true|false}] [-x {true|false}] [-j {SHORT|LONG}] [-B {NONE|SERVICE_TIME|THROUGHPUT}] [-e {NONE|SESSION|SELECT}] [-m {NONE|BASIC}] [-z <failover_retries>] [-w <failover_delay>]
Usage: srvctl relocate service -d <db_unique_name> -s <service_name> {-i <old_inst_name> -t <new_inst_name> | -c <current_node> -n <target_node>} [-f]
       Specify instances for an administrator-managed database, or nodes for a policy managed database
Usage: srvctl remove service -d <db_unique_name> -s <service_name> [-i <inst_name>] [-f]
Usage: srvctl start service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-o <start_options>]
Usage: srvctl stop service -d <db_unique_name> [-s "<service_name_list>" [-n <node_name> | -i <inst_name>] ] [-f]

Usage: srvctl add nodeapps { { -n <node_name> -A <name|ip>/<netmask>/[if1[|if2...]] } | { -S <subnet>/<netmask>/[if1[|if2...]] } } [-p <portnum>] [-m <multicast-ip-address>] [-e <eons-listen-port>] [-l <ons-local-port>]  [-r <ons-remote-port>] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl config nodeapps [-a] [-g] [-s] [-e]
Usage: srvctl modify nodeapps {[-n <node_name> -A <name|ip>/<netmask>[/if1[|if2|...]]] | [-S <subnet>/<netmask>[/if1[|if2|...]]]} [-m <multicast-ip-address>] [-p <portnum>] [-e <eons-listen-port>] [ -l <ons-local-port> ] [-r <ons-remote-port>] [-t <host>[:<port>][,<host>[:<port>]...]] [-v]
Usage: srvctl start nodeapps [-n ] [-v]
Usage: srvctl stop nodeapps [-n ] [-f] [-r] [-v]
Usage: srvctl status nodeapps
Usage: srvctl enable nodeapps [-v]
Usage: srvctl disable nodeapps [-v]
Usage: srvctl remove nodeapps [-f] [-y] [-v]
Usage: srvctl getenv nodeapps [-a] [-g] [-s] [-e] [-t "<name_list>"]
Usage: srvctl setenv nodeapps {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl unsetenv nodeapps -t "<name_list>" [-v]

Usage: srvctl add vip -n <node_name> -k <network_number> -A <name|ip>/<netmask>/[if1[|if2...]] [-v]
Usage: srvctl config vip { -n <node_name> | -i <vip_name> }
Usage: srvctl disable vip -i <vip_name> [-v]
Usage: srvctl enable vip -i <vip_name> [-v]
Usage: srvctl remove vip -i "<vip_name_list>" [-f] [-y] [-v]
Usage: srvctl getenv vip -i <vip_name> [-t "<name_list>"]
Usage: srvctl start vip { -n <node_name> | -i <vip_name> } [-v]
Usage: srvctl stop vip { -n <node_name> | -i <vip_name> } [-f] [-r] [-v]
Usage: srvctl status vip { -n <node_name> | -i <vip_name> }
Usage: srvctl setenv vip -i <vip_name> {-t "<name>=<val>[,<name>=<val>,...]" | -T "<name>=<val>"}
Usage: srvctl unsetenv vip -i <vip_name> -t "<name_list>" [-v]

Usage: srvctl add asm [-l <lsnr_name>]
Usage: srvctl start asm [-n <node_name>] [-o <start_options>]
Usage: srvctl stop asm [-n <node_name>] [-o <stop_options>] [-f]
Usage: srvctl config asm [-a]
Usage: srvctl status asm [-n <node_name>] [-a]
Usage: srvctl enable asm [-n <node_name>]
Usage: srvctl disable asm [-n <node_name>]
Usage: srvctl modify asm [-l <lsnr_name>]
Usage: srvctl remove asm [-f]
Usage: srvctl getenv asm [-t <name>[, ...]]
Usage: srvctl setenv asm -t "<name>=<val> [,...]" | -T "<name>=<val>"
Usage: srvctl unsetenv asm -t "<name>[, ...]"

Usage: srvctl start diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl stop diskgroup -g <dg_name> [-n "<node_list>"] [-f]
Usage: srvctl status diskgroup -g <dg_name> [-n "<node_list>"] [-a]
Usage: srvctl enable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl disable diskgroup -g <dg_name> [-n "<node_list>"]
Usage: srvctl remove diskgroup -g <dg_name> [-f]

Usage: srvctl add listener [-l <lsnr_name>] [-s] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-o <oracle_home>] [-k <net_num>]
Usage: srvctl config listener [-l <lsnr_name>] [-a]
Usage: srvctl start listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl stop listener [-l <lsnr_name>] [-n <node_name>] [-f]
Usage: srvctl status listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl enable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl disable listener [-l <lsnr_name>] [-n <node_name>]
Usage: srvctl modify listener [-l <lsnr_name>] [-o <oracle_home>] [-p "[TCP:]<port>[, ...][/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]"] [-u <oracle_user>] [-k <net_num>]
Usage: srvctl remove listener [-l <lsnr_name> | -a] [-f]
Usage: srvctl getenv listener [-l <lsnr_name>] [-t <name>[, ...]]
Usage: srvctl setenv listener [-l <lsnr_name>] -t "<name>=<val> [,...]" | -T "<name>=<val>"
Usage: srvctl unsetenv listener [-l <lsnr_name>] -t "<name>[, ...]"

Usage: srvctl add scan -n <scan_name> [-k <network_number> [-S <subnet>/<netmask>[/if1[|if2|...]]]]
Usage: srvctl config scan [-i <ordinal_number>]
Usage: srvctl start scan [-i <ordinal_number>] [-n <node_name>]
Usage: srvctl stop scan [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan [-i <ordinal_number>]
Usage: srvctl enable scan [-i <ordinal_number>]
Usage: srvctl disable scan [-i <ordinal_number>]
Usage: srvctl modify scan -n <scan_name>
Usage: srvctl remove scan [-f] [-y]
Usage: srvctl add scan_listener [-l <lsnr_name_prefix>] [-s] [-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]]
Usage: srvctl config scan_listener [-i <ordinal_number>]
Usage: srvctl start scan_listener [-n <node_name>] [-i <ordinal_number>]
Usage: srvctl stop scan_listener [-i <ordinal_number>] [-f]
Usage: srvctl relocate scan_listener -i <ordinal_number> [-n <node_name>]
Usage: srvctl status scan_listener [-i <ordinal_number>]
Usage: srvctl enable scan_listener [-i <ordinal_number>]
Usage: srvctl disable scan_listener [-i <ordinal_number>]
Usage: srvctl modify scan_listener {-u|-p [TCP:]<port>[/IPC:<key>][/NMP:<pipe_name>][/TCPS:<s_port>] [/SDP:<port>]}
Usage: srvctl remove scan_listener [-f] [-y]

Usage: srvctl add srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
Usage: srvctl config srvpool [-g <pool_name>]
Usage: srvctl status srvpool [-g <pool_name>] [-a]
Usage: srvctl status server -n "<server_name_list>" [-a]
Usage: srvctl relocate server -n "<server_name_list>" -g <pool_name> [-f]
Usage: srvctl modify srvpool -g <pool_name> [-l <min>] [-u <max>] [-i <importance>] [-n "<server_list>"]
Usage: srvctl remove srvpool -g <pool_name>

Usage: srvctl add oc4j [-v]
Usage: srvctl config oc4j
Usage: srvctl start oc4j [-v]
Usage: srvctl stop oc4j [-f] [-v]
Usage: srvctl relocate oc4j [-n <node_name>] [-v]
Usage: srvctl status oc4j [-n <node_name>]
Usage: srvctl enable oc4j [-n <node_name>] [-v]
Usage: srvctl disable oc4j [-n <node_name>] [-v]
Usage: srvctl modify oc4j -p <oc4j_rmi_port> [-v]
Usage: srvctl remove oc4j [-f] [-v]

Usage: srvctl start home -o <oracle_home> -s <state_file> -n <node_name>
Usage: srvctl stop home -o <oracle_home> -s <state_file> -n <node_name> [-t <stop_options>] [-f]
Usage: srvctl status home -o <oracle_home> -s <state_file> -n <node_name>

Usage: srvctl add filesystem -d <volume_device> -v <volume_name> -g <dg_name> [-m <mountpoint_path>] [-u <user>]
Usage: srvctl config filesystem -d <volume_device>
Usage: srvctl start filesystem -d <volume_device> [-n <node_name>]
Usage: srvctl stop filesystem -d <volume_device> [-n <node_name>] [-f]
Usage: srvctl status filesystem -d <volume_device>
Usage: srvctl enable filesystem -d <volume_device>
Usage: srvctl disable filesystem -d <volume_device>
Usage: srvctl modify filesystem -d <volume_device> -u <user>
Usage: srvctl remove filesystem -d <volume_device> [-f]

Usage: srvctl start gns [-v] [-l <log_level>] [-n <node_name>]
Usage: srvctl stop gns [-v] [-n <node_name>] [-f]
Usage: srvctl config gns [-v] [-a] [-d] [-k] [-m] [-n <node_name>] [-p] [-s] [-V]
Usage: srvctl status gns -n <node_name>
Usage: srvctl enable gns [-v] [-n <node_name>]
Usage: srvctl disable gns [-v] [-n <node_name>]
Usage: srvctl relocate gns [-v] [-n <node_name>] [-f]
Usage: srvctl add gns [-v] -d <domain> -i <vip_name|ip> [-k <network_number> [-S <subnet>/<netmask>[/<interface>]]]
Usage: srvctl modify gns [-v] [-f] [-l <log_level>] [-d <domain>] [-i <ip_address>] [-N <name> -A <address>] [-D <name> -A <address>] [-c <name> -a <alias>] [-u <alias>] [-r <address>] [-V <name>] [-F <forwarded_domains>] [-R <refused_domains>] [-X <excluded_interfaces>]
Usage: srvctl remove gns [-f] [-d <domain>]

Crsctl Syntax (for further explanation of these commands see the Oracle Documentation)
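
A few of the most commonly used crsctl commands are listed below (all exist in 11.2; run them from <GRID_HOME>/bin, and note that the start/stop/enable/disable commands require root):

$ ./crsctl check crs              (health of the clusterware stack on the local node)
$ ./crsctl check cluster -all     (health of the stack on all nodes)
$ ./crsctl status resource -t     (tabular resource status, as shown above)
$ ./crsctl query css votedisk     (list the voting files)
$ ./crsctl start crs              (start the stack on the local node)
$ ./crsctl stop crs               (stop the stack on the local node)
$ ./crsctl enable crs             (enable automatic startup of the stack at boot)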

OCRCONFIG Options:


$ ./ocrconfig -help
Name:
        ocrconfig - Configuration tool for Oracle Cluster/Local Registry.

Synopsis:
        ocrconfig [option]
        option:
                [-local] -export <filename>
                                                    - Export OCR/OLR contents to a file
                [-local] -import <filename>         - Import OCR/OLR contents from a file
                [-local] -upgrade [<user> [<group>]]
                                                    - Upgrade OCR from previous version
                -downgrade [-version <version string>]
                                                    - Downgrade OCR to the specified version
                [-local] -backuploc <dirname>       - Configure OCR/OLR backup location
                [-local] -showbackup [auto|manual]  - Show OCR/OLR backup information
                [-local] -manualbackup              - Perform OCR/OLR backup
                [-local] -restore <filename>        - Restore OCR/OLR from physical backup
                -replace <current filename> -replacement <new filename>
                                                    - Replace a OCR device/file <current filename> with <new filename>
                -add <filename>                     - Add a new OCR device/file
                -delete <filename>                  - Remove a OCR device/file
                -overwrite                          - Overwrite OCR configuration on disk
                -repair -add <filename> | -delete <filename> | -replace <current filename> -replacement <new filename>
                                                    - Repair OCR configuration on the local node
                -help                               - Print out this help information

Note:
        * A log file will be created in
        $ORACLE_HOME/log/<hostname>/client/ocrconfig_<pid>.log. Please ensure
        you have file creation privileges in the above directory before
        running this tool.
        * Only -local -showbackup [manual] is supported.
        * Use option '-local' to indicate that the operation is to be performed on the Oracle Local Registry
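
For example, to take an on-demand OCR backup and list existing backups (both options appear in the help output above; run as root from <GRID_HOME>/bin):

$ ./ocrconfig -manualbackup
$ ./ocrconfig -showbackup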

OLSNODES Options


$ ./olsnodes -h
Usage: olsnodes [ [-n] [-i] [-s] [-t] [<node> | -l [-p]] | [-c] ] [-g] [-v]
        where
                -n print node number with the node name
                -p print private interconnect address for the local node
                -i print virtual IP address with the node name
                <node> print information for the specified node
                -l print information for the local node
                -s print node status - active or inactive
                -t print node type - pinned or unpinned
                -g turn on logging
                -v Run in debug mode; use at direction of Oracle Support only.
                -c print clusterware name
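
For example, listing all nodes with their node number, status and pin state (output is illustrative only, based on the two-node cluster shown earlier):

$ ./olsnodes -n -s -t
racbde1 1       Active  Unpinned
racbde2 2       Active  Unpinned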

Cluster Verification Options


Component Options:

$ ./cluvfy comp -list

USAGE:
cluvfy comp  <component-name> <component-specific options>  [-verbose]

Valid components are:
        nodereach : checks reachability between nodes
        nodecon   : checks node connectivity
        cfs       : checks CFS integrity
        ssa       : checks shared storage accessibility
        space     : checks space availability
        sys       : checks minimum system requirements
        clu       : checks cluster integrity
        clumgr    : checks cluster manager integrity
        ocr       : checks OCR integrity
        olr       : checks OLR integrity
        ha        : checks HA integrity
        crs       : checks CRS integrity
        nodeapp   : checks node applications existence
        admprv    : checks administrative privileges
        peer      : compares properties with peers
        software  : checks software distribution
        asm       : checks ASM integrity
        acfs       : checks ACFS integrity
        gpnp      : checks GPnP integrity
        gns       : checks GNS integrity
        scan      : checks SCAN configuration
        ohasd     : checks OHASD integrity
        clocksync : checks Clock Synchronization
        vdisk     : checks Voting Disk Udev settings
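
For example, to verify node connectivity across all cluster nodes using the nodecon component listed above:

$ ./cluvfy comp nodecon -n all -verbose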


Stage Options:

$ ./cluvfy stage -list

USAGE:
cluvfy stage {-pre|-post} <stage-name> <stage-specific options>  [-verbose]

Valid stage options and stage names are:
        -post hwos    :  post-check for hardware and operating system
        -pre  cfs     :  pre-check for CFS setup
        -post cfs     :  post-check for CFS setup
        -pre  crsinst :  pre-check for CRS installation
        -post crsinst :  post-check for CRS installation
        -pre  hacfg   :  pre-check for HA configuration
        -post hacfg   :  post-check for HA configuration
        -pre  dbinst  :  pre-check for database installation
        -pre  acfscfg  :  pre-check for ACFS Configuration.
        -post acfscfg  :  post-check for ACFS Configuration.
        -pre  dbcfg   :  pre-check for database configuration
        -pre  nodeadd :  pre-check for node addition.
        -post nodeadd :  post-check for node addition.
        -post nodedel :  post-check for node deletion.
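
For example, to validate the cluster after the clusterware installation, or to pre-check a node addition (the node name "newnode" is hypothetical):

$ ./cluvfy stage -post crsinst -n all -verbose
$ ./cluvfy stage -pre nodeadd -n newnode -verbose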


References

NOTE:1053970.1 - Troubleshooting 11.2 Grid Infrastructure Installation Root.sh Issues
NOTE:1054006.1 - CTSSD Runs in Observer Mode Even Though No Time Sync Software is Running
NOTE:184875.1 - How To Check The Certification Matrix for Real Application Clusters
NOTE:259301.1 - CRS and 10g/11.1 Real Application Clusters
NOTE:810394.1 - RAC Assurance Support Team: RAC and Oracle Clusterware Starter Kit and Best Practices (Generic)
NOTE:887522.1 - 11gR2 Grid Infrastructure Single Client Access Name (SCAN) Explained
NOTE:946332.1 - Unable To Create 10.1 or 10.2 or 11.1 (< 11gR2) ASM RAC Databases (ORA-29702) Using Brand New 11gR2 Grid Infrastructure Installation
Oracle Clusterware Administration and Deployment Guide
http://www.oracle.com/technology/documentation/index.html

Products
  • Oracle Database Products > Oracle Database > Oracle Database > Oracle Server - Enterprise Edition
Keywords
CLUSTER; REAL APPLICATION CLUSTERS; SCAN; CRSCTL; 11GR2; CLUSTERWARE; GRID
Errors
ORA-29702; 29702 ERROR
