Adding a Node to 11.2.0.2 Grid Infrastructure




      by Maclean.liu
            liu.maclean@gmail.com
        www.oracledatabase12g.com
About Me

- Email: liu.maclean@gmail.com
- Blog: www.oracledatabase12g.com
- Oracle Certified Database Administrator Master 10g and 11g
- Over 6 years of experience with Oracle DBA technology
- Over 7 years of experience with Linux technology
- Member, Independent Oracle Users Group
- Member, All China Users Group
- Presenter on advanced Oracle topics: RAC, Data Guard, Performance Tuning, and Oracle internals.
In an earlier article I described the concrete steps for adding a node to a 10g RAC cluster. In 11gR2, Oracle CRS has been upgraded to Grid Infrastructure (GI); through GI we can manage CRS resources such as the VIP and ASM far more conveniently, and this also means that adding a node to an 11.2 GI cluster differs considerably from the 10gR2 procedure.
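
For instance, the full set of clusterware-managed resources (VIPs, ASM, listeners, databases and so on) can be listed from any node with a single command (a sketch; output omitted):

[grid@vrh1 ~]$ crsctl stat res -t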

Here are the key points of an ADD NODE operation for GI in 11.2:


I. Preparation


The preparation work must not be skipped. In the 10g RAC cluster add-node article I listed the prerequisites that have to be satisfied; they still apply to 11.2 GI, but pay attention to the following points:

1. Configure user equivalence (passwordless SSH) not only for the oracle user but also for the grid user (the GI installation owner), unless you install both GI and the RDBMS as oracle, which is not recommended. A minimal sketch of the SSH setup follows.
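
A minimal sketch of setting up passwordless SSH for the grid user, assuming the existing node is vrh1 and the new node is vrh3 (the host names are just this example's; adapt as needed):

[grid@vrh1 ~]$ ssh-keygen -t rsa              # accept the defaults, empty passphrase
[grid@vrh1 ~]$ ssh-copy-id grid@vrh3          # append the public key to vrh3's authorized_keys
[grid@vrh1 ~]$ ssh grid@vrh3 date             # must return a date with no password prompt
(repeat in the other direction and for any remaining node pairs)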




2. 11.2 GI introduces octssd (the Oracle Cluster Time Synchronization Service daemon) for time synchronization. If you plan to rely on octssd, it is advisable to disable the ntpd time service, as follows:
# service ntpd stop
Shutting down ntpd:                                      [   OK   ]
# chkconfig ntpd off
# mv /etc/ntp.conf /etc/ntp.conf.orig
# rm /var/run/ntpd.pid
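
Once GI is running, one way to confirm that the cluster time synchronization service has taken over from ntpd is (a sketch; output omitted):

# crsctl check ctss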




3. Use the cluster verify utility to check whether the new node meets the cluster's requirements:
cluvfy stage -pre nodeadd -n <NEW NODE>
For example:

su - grid

[grid@vrh1 ~]$ cluvfy stage -pre nodeadd -n vrh3

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "vrh1"
Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"

Node connectivity check passed

Checking CRS integrity...

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all
nodes
Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"

Node connectivity check passed

Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "vrh3:/tmp"
Free disk space check passed for "vrh1:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Run level check passed
Hard limits check failed for "maximum open file descriptors"
Check failed on nodes:
    vrh3
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81( x86_64)"
Package existence check passed for "binutils-2.17.50.0.6( x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)"
Package existence check passed for "glibc-common-2.5( x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)"
Package existence check passed for "glibc-headers-2.5( x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "sysstat-7.0.2( x86_64)"
Package existence check passed for "ksh-20060214( x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed

User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on
following nodes: vrh3

File "/etc/resolv.conf" is not consistent across nodes

Pre-check for node addition was unsuccessful on all the nodes.


Generally speaking, if we do not rely on DNS for name resolution, the resolv.conf inconsistency can be ignored. In silent install mode, however, it may prevent the operation from completing; this is covered later. A quick manual check is sketched below.
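
A quick way to compare /etc/resolv.conf by hand across the nodes (a sketch; node names are those of this example):

[grid@vrh1 ~]$ for h in vrh1 vrh2 vrh3; do echo "== $h =="; ssh $h cat /etc/resolv.conf; done
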
II. Adding the new node to the GI cluster

Note that addNode.sh, the key script for adding a node to 11.2.0.2 GI, may be affected by a bug. According to the official documentation, when you want to add a node in interactive mode (bringing up the OUI GUI) you simply run the addNode.sh script; the actual behaviour is different.

The documentation says:
Go to CRS_home/oui/bin and run the addNode.sh script on one of the existing
nodes.
Oracle Universal Installer runs in add node mode and the Welcome page displays.
Click Next and the Specify Cluster Nodes for Node Addition page displays.

What we actually got:

addNode.sh must be run as the GI owner (normally the grid user), and it must be launched on one of the existing nodes where GI is already running:

[grid@vrh1 ~]$ cd $ORA_CRS_HOME/oui/bin

[grid@vrh1 bin]$ ./addNode.sh
ERROR:
Value for CLUSTER_NEW_NODES not specified.

USAGE:
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl   {-pre|-post}

/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] CLUSTER_NEW_NODES={}
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] CLUSTER_NEW_NODES={}
CLUSTER_NEW_VIRTUAL_HOSTNAMES={}

/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] -responseFile
/g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -post [-silent]


Our intention was to add the node through the graphical, interactive OUI (runInstaller -addNode), yet addNode.sh demands that we supply parameters, and the check_nodeadd.pl script it calls runs in silent mode.
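
For reference, the runInstaller call that addNode.sh ultimately wraps looks roughly like this (reconstructed from the script listed further below; the paths belong to this environment and launching it directly like this is untested):

[grid@vrh1 ~]$ cd $ORA_CRS_HOME/oui/bin
[grid@vrh1 bin]$ ./runInstaller -addNode -invPtrLoc $ORA_CRS_HOME/oraInst.loc ORACLE_HOME=$ORA_CRS_HOME "CLUSTER_NEW_NODES={vrh3}"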

A search on MOS and Google showed that essentially every document recommends adding the node in silent mode, so we had no choice but to fall back to a silent add. In fact, a silent add does not require many parameters, which is probably one reason this approach is so widely recommended. But here we hit another problem:
Syntax:
./addNode.sh -silent
"CLUSTER_NEW_NODES={node2}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={node2-priv}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}"
In our example the exact command is as follows:

./addNode.sh -silent
"CLUSTER_NEW_NODES={vrh3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}"
Because the command above runs in silent mode, it produces no window output at all (the output actually goes to the /tmp/silentInstall.log log file). Removing the -silent parameter:

./addNode.sh "CLUSTER_NEW_NODES={vrh3}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}"

Performing pre-checks for node addition

Checking node reachability...
Node reachability check passed from node "vrh1"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"

Node connectivity check passed

Checking CRS integrity...

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all
nodes
Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"

Node connectivity check passed

Total memory check passed
Available memory check passed
Swap space check passed
Free disk space check passed for "vrh3:/tmp"
Free disk space check passed for "vrh1:/tmp"
Check for multiple users with UID value 54322 passed
User existence check passed for "grid"
Run level check passed
Hard limits check failed for "maximum open file descriptors"
Check failed on nodes:
    vrh3
Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81( x86_64)"
Package existence check passed for "binutils-2.17.50.0.6( x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)"
Package existence check passed for "glibc-common-2.5( x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)"
Package existence check passed for "glibc-headers-2.5( x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "sysstat-7.0.2( x86_64)"
Package existence check passed for "ksh-20060214( x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Checking OCR integrity...

OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...

Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...

NTP Configuration file check started...
No NTP Daemons or Services were found to be running

Clock synchronization check using Network Time Protocol(NTP) passed

User "grid" is not part of "root" group. Check passed
Checking consistency of file "/etc/resolv.conf" across nodes

File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on
following nodes: vrh3
File "/etc/resolv.conf" is not consistent across nodes

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Pre-check for node addition was unsuccessful on all the nodes.


Before addNode.sh actually adds the node, it also calls the cluvfy utility to verify that the new node meets the requirements, and refuses to proceed if it does not. Since we already verified the new node earlier, we can safely skip addNode.sh's own validation. Let's look at the content of the addNode.sh script:
[grid@vrh1 bin]$ cat addNode.sh

#!/bin/sh
OHOME=/g01/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC
ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f
"$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
     $ADDNODE
else
     CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre
$*"
     $CHECK_NODEADD
     if [ $? -eq 0 ]
     then
     $ADDNODE
     fi
fi


As you can see, there is an IGNORE_PREADDNODE_CHECKS environment variable that controls whether the pre-add-node check is performed. We set this variable manually and then run addNode.sh again:
export IGNORE_PREADDNODE_CHECKS=Y
./addNode.sh "CLUSTER_NEW_NODES={vrh3}"
"CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}"
"CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}"
> add_node.log 2>&1

In another window you can monitor the progress log of the node addition:

tail -f add_node.log

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5951 MB     Passed
Checking monitor: must be configured to display at least 256 colors.     Actual
16777216    Passed
Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.
Performing tests to see whether nodes vrh2,vrh3 are available
............................................................... 100% Done.

.
-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
  Source: /g01/11.2.0/grid
  New Nodes
Space Requirements
  New Nodes
    vrh3
      /: Required 6.66GB : Available 32.40GB
Installed Products
  Product Names
    Oracle Grid Infrastructure 11.2.0.2.0
    Sun JDK 1.5.0.24.08
    Installer SDK Component 11.2.0.2.0
    Oracle One-Off Patch Installer 11.2.0.0.2
    Oracle Universal Installer 11.2.0.2.0
    Oracle USM Deconfiguration 11.2.0.2.0
    Oracle Configuration Manager Deconfiguration 10.3.1.0.0
    Enterprise Manager Common Core Files 10.2.0.4.3
    Oracle DBCA Deconfiguration 11.2.0.2.0
    Oracle RAC Deconfiguration 11.2.0.2.0
    Oracle Quality of Service Management (Server) 11.2.0.2.0
    Installation Plugin Files 11.2.0.2.0
    Universal Storage Manager Files 11.2.0.2.0
    Oracle Text Required Support Files 11.2.0.2.0
    Automatic Storage Management Assistant 11.2.0.2.0
    Oracle Database 11g Multimedia Files 11.2.0.2.0
    Oracle Multimedia Java Advanced Imaging 11.2.0.2.0
    Oracle Globalization Support 11.2.0.2.0
    Oracle Multimedia Locator RDBMS Files 11.2.0.2.0
    Oracle Core Required Support Files 11.2.0.2.0
    Bali Share 1.1.18.0.0
    Oracle Database Deconfiguration 11.2.0.2.0
    Oracle Quality of Service Management (Client) 11.2.0.2.0
    Expat libraries 2.0.1.0.1
    Oracle Containers for Java 11.2.0.2.0
    Perl Modules 5.10.0.0.1
    Secure Socket Layer 11.2.0.2.0
    Oracle JDBC/OCI Instant Client 11.2.0.2.0
    Oracle Multimedia Client Option 11.2.0.2.0
    LDAP Required Support Files 11.2.0.2.0
    Character Set Migration Utility 11.2.0.2.0
    Perl Interpreter 5.10.0.0.1
    PL/SQL Embedded Gateway 11.2.0.2.0
    OLAP SQL Scripts 11.2.0.2.0
    Database SQL Scripts 11.2.0.2.0
    Oracle Extended Windowing Toolkit 3.4.47.0.0
SSL Required Support Files for InstantClient 11.2.0.2.0
SQL*Plus Files for Instant Client 11.2.0.2.0
Oracle Net Required Support Files 11.2.0.2.0
Oracle Database User Interface 2.2.13.0.0
RDBMS Required Support Files for Instant Client 11.2.0.2.0
RDBMS Required Support Files Runtime 11.2.0.2.0
XML Parser for Java 11.2.0.2.0
Oracle Security Developer Tools 11.2.0.2.0
Oracle Wallet Manager 11.2.0.2.0
Enterprise Manager plugin Common Files 11.2.0.2.0
Platform Required Support Files 11.2.0.2.0
Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
RDBMS Required Support Files 11.2.0.2.0
Oracle Ice Browser 5.2.3.6.0
Oracle Help For Java 4.2.9.0.0
Enterprise Manager Common Files 10.2.0.4.3
Deinstallation Tool 11.2.0.2.0
Oracle Java Client 11.2.0.2.0
Cluster Verification Utility Files 11.2.0.2.0
Oracle Notification Service (eONS) 11.2.0.2.0
Oracle LDAP administration 11.2.0.2.0
Cluster Verification Utility Common Files 11.2.0.2.0
Oracle Clusterware RDBMS Files 11.2.0.2.0
Oracle Locale Builder 11.2.0.2.0
Oracle Globalization Support 11.2.0.2.0
Buildtools Common Files 11.2.0.2.0
Oracle RAC Required Support Files-HAS 11.2.0.2.0
SQL*Plus Required Support Files 11.2.0.2.0
XDK Required Support Files 11.2.0.2.0
Agent Required Support Files 10.2.0.4.3
Parser Generator Required Support Files 11.2.0.2.0
Precompiler Required Support Files 11.2.0.2.0
Installation Common Files 11.2.0.2.0
Required Support Files 11.2.0.2.0
Oracle JDBC/THIN Interfaces 11.2.0.2.0
Oracle Multimedia Locator 11.2.0.2.0
Oracle Multimedia 11.2.0.2.0
HAS Common Files 11.2.0.2.0
Assistant Common Files 11.2.0.2.0
PL/SQL 11.2.0.2.0
HAS Files for DB 11.2.0.2.0
Oracle Recovery Manager 11.2.0.2.0
Oracle Database Utilities 11.2.0.2.0
Oracle Notification Service 11.2.0.2.0
SQL*Plus 11.2.0.2.0
Oracle Netca Client 11.2.0.2.0
Oracle Net 11.2.0.2.0
Oracle JVM 11.2.0.2.0
Oracle Internet Directory Client 11.2.0.2.0
Oracle Net Listener 11.2.0.2.0
   Cluster Ready Services Files 11.2.0.2.0
   Oracle Database 11g 11.2.0.2.0
-----------------------------------------------------------------------------

Instantiating scripts for add node (Monday, August 15, 2011 10:15:35 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Monday, August 15, 2011 10:15:38 PM CST)
................................................................................
...............                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Monday, August 15, 2011 10:21:02 PM CST)
.                                                               100% Done.
Save inventory complete
WARNING:A new inventory has been created on one or more nodes in this session.
However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at
'/g01/oraInventory/orainstRoot.sh'
with root privileges on nodes 'vrh3'.
If you do not register the inventory, you may not be able to update or
patch the products you installed.
The following configuration scripts need to be executed as the "root" user in
each cluster node.
/g01/oraInventory/orainstRoot.sh #On nodes vrh3
/g01/11.2.0/grid/root.sh #On nodes vrh3
To execute the configuration scripts:
  1. Open a terminal window
  2. Log in as "root"
  3. Run the scripts in each cluster node

The Cluster Node Addition of /g01/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.


The GI software installation above succeeded. Next we still have to run two key scripts on the newly added node; do not forget this step!
The orainstRoot.sh and root.sh scripts must be run as root:
su - root

[root@vrh3]# cat /etc/oraInst.loc
inventory_loc=/g01/oraInventory                     -- this is the oraInventory location
inst_group=asmadmin

[root@vrh3 ~]# cd /g01/oraInventory

[root@vrh3 oraInventory]# ./orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /g01/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /g01/oraInventory to asmadmin.
The execution of the script is complete.

Run the root.sh script under CRS_HOME; there may be warnings, but they are not a problem:

[root@vrh3 ~]# cd $ORA_CRS_HOME
[root@vrh3 g01]# /g01/11.2.0/grid/root.sh
Running Oracle 11g root script...

The following environment variables are set as:
  ORACLE_OWNER= grid
  ORACLE_HOME= /g01/11.2.0/grid

Enter the   full pathname of the local bin directory: [/usr/local/bin]:
  Copying   dbhome to /usr/local/bin ...
  Copying   oraenv to /usr/local/bin ...
  Copying   coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.

Using configuration parameter file:
/g01/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS
daemon on node vrh1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the
cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/g01/11.2.0/grid/bin/srvctl start listener -n vrh3 ... failed
Failed to perform new node configuration at
/g01/11.2.0/grid/crs/install/crsconfig_lib.pm line 8255.
/g01/11.2.0/grid/perl/bin/perl -I/g01/11.2.0/grid/perl/lib
-I/g01/11.2.0/grid/crs/install
/g01/11.2.0/grid/crs/install/rootcrs.pl execution failed


Two small errors show up above:

1. The LISTENER startup failure on the new node can be ignored: the RDBMS_HOME has not been installed yet, but CRS nevertheless tries to start the corresponding listener (a sketch of retrying later appears after the output below).
[root@vrh3 g01]# /g01/11.2.0/grid/bin/srvctl start listener -n vrh3
PRCR-1013 : Failed to start resource ora.CRS_LISTENER.lsnr
PRCR-1064 : Failed to start resource ora.CRS_LISTENER.lsnr on node vrh3
CRS-5010: Update of configuration file
"/s01/orabase/product/11.2.0/dbhome_1/network/admin/listener.ora" failed:
details at "(:CLSN00014:)" in
"/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process
"/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "check": details
at "(:CLSN00008:)" in
"/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-2674: Start of 'ora.CRS_LISTENER.lsnr' on 'vrh3' failed
CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process
"/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "clean": details
at "(:CLSN00008:)" in
"/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process
"/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "check": details
at "(:CLSN00008:)" in
"/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log"
CRS-2678: 'ora.CRS_LISTENER.lsnr' on 'vrh3' has experienced an unrecoverable
failure
CRS-0267: Human intervention required to resume its availability.
PRCC-1015 : LISTENER was already running on vrh3
PRCR-1004 : Resource ora.LISTENER.lsnr is already running
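
Once the RDBMS home has been installed on vrh3, the failed listener resource should come up normally; a minimal sketch of retrying and checking its state afterwards (run as the grid user; the resource name is taken from the output above):

[grid@vrh3 ~]$ srvctl start listener -n vrh3
[grid@vrh3 ~]$ crsctl stat res ora.CRS_LISTENER.lsnr -t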


2. If the rootcrs.pl script fails, re-running it once is usually enough:
[root@vrh3 bin]# /g01/11.2.0/grid/perl/bin/perl -I/g01/11.2.0/grid/perl/lib
-I/g01/11.2.0/grid/crs/install /g01/11.2.0/grid/crs/install/rootcrs.pl

Using configuration parameter file:
/g01/11.2.0/grid/crs/install/crsconfig_params
PRKO-2190 : VIP exists for node vrh3, VIP name vrh3-vip
PRKO-2420 : VIP is already started on node(s): vrh3
Preparing packages for installation...
cvuqdisk-1.0.9-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded


3. It is recommended to restart CRS on the new node and verify with cluvfy that the node addition completed cleanly:
[root@vrh3 ~]# crsctl stop crs

[root@vrh3 ~]# crsctl start crs

[root@vrh3 ~]# su - grid

[grid@vrh3 ~]$   cluvfy stage -post nodeadd -n vrh1,vrh2,vrh3
Performing post-checks for node addition

Checking node reachability...
Node reachability check passed from node "vrh1"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"
Node connectivity check passed

Checking cluster integrity...

Cluster integrity check passed

Checking CRS integrity...

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all
nodes
Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"

Node connectivity check passed

Checking node application existence...

Checking existence of VIP node application (required)
VIP node application check passed

Checking existence of NETWORK node application (required)
NETWORK node application check passed

Checking existence of GSD node application (optional)
GSD node application is offline on nodes "vrh3,vrh2,vrh1"

Checking existence of ONS node application (optional)
ONS node application check passed

Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "vrh.cluster.oracle.com"...

ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name
"vrh.cluster.oracle.com"

ERROR:
PRVF-4657 : Name resolution setup check for "vrh.cluster.oracle.com" (IP
address: 192.168.1.190) failed

ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name
"vrh.cluster.oracle.com"
Verification of SCAN VIP and Listener setup failed

User "grid" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
CTSS resource check passed

Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all
nodes...
Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was successful.


© 2011, www.oracledatabase12g.com. All rights reserved. This article may be reposted, but only with a link crediting the original source; otherwise legal liability will be pursued.

Weitere ähnliche Inhalte

Was ist angesagt?

Virtualized network with openvswitch
Virtualized network with openvswitchVirtualized network with openvswitch
Virtualized network with openvswitch
Sim Janghoon
 
Docker network Present in VietNam DockerDay 2015
Docker network Present in VietNam DockerDay 2015Docker network Present in VietNam DockerDay 2015
Docker network Present in VietNam DockerDay 2015
Van Phuc
 

Was ist angesagt? (20)

Deep Dive in Docker Overlay Networks
Deep Dive in Docker Overlay NetworksDeep Dive in Docker Overlay Networks
Deep Dive in Docker Overlay Networks
 
Discovering OpenBSD on AWS
Discovering OpenBSD on AWSDiscovering OpenBSD on AWS
Discovering OpenBSD on AWS
 
青云CoreOS虚拟机部署kubernetes
青云CoreOS虚拟机部署kubernetes 青云CoreOS虚拟机部署kubernetes
青云CoreOS虚拟机部署kubernetes
 
Deep dive in Docker Overlay Networks
Deep dive in Docker Overlay NetworksDeep dive in Docker Overlay Networks
Deep dive in Docker Overlay Networks
 
Virtualized network with openvswitch
Virtualized network with openvswitchVirtualized network with openvswitch
Virtualized network with openvswitch
 
Hyperledger composer
Hyperledger composerHyperledger composer
Hyperledger composer
 
Ansible ex407 and EX 294
Ansible ex407 and EX 294Ansible ex407 and EX 294
Ansible ex407 and EX 294
 
[2019.03] 멀티 노드에서 Hyperledger Fabric 네트워크 구성하기
[2019.03] 멀티 노드에서 Hyperledger Fabric 네트워크 구성하기[2019.03] 멀티 노드에서 Hyperledger Fabric 네트워크 구성하기
[2019.03] 멀티 노드에서 Hyperledger Fabric 네트워크 구성하기
 
K8s上の containerized cloud foundryとcontainerized open stackをprometheusで監視してみる
K8s上の containerized cloud foundryとcontainerized open stackをprometheusで監視してみるK8s上の containerized cloud foundryとcontainerized open stackをprometheusで監視してみる
K8s上の containerized cloud foundryとcontainerized open stackをprometheusで監視してみる
 
KubeCon EU 2016: Creating an Advanced Load Balancing Solution for Kubernetes ...
KubeCon EU 2016: Creating an Advanced Load Balancing Solution for Kubernetes ...KubeCon EU 2016: Creating an Advanced Load Balancing Solution for Kubernetes ...
KubeCon EU 2016: Creating an Advanced Load Balancing Solution for Kubernetes ...
 
Deeper dive in Docker Overlay Networks
Deeper dive in Docker Overlay NetworksDeeper dive in Docker Overlay Networks
Deeper dive in Docker Overlay Networks
 
How to send DNS over anything encrypted
How to send DNS over anything encryptedHow to send DNS over anything encrypted
How to send DNS over anything encrypted
 
Hyperledger Fabric v2.0: 새로운 기능
Hyperledger Fabric v2.0: 새로운 기능Hyperledger Fabric v2.0: 새로운 기능
Hyperledger Fabric v2.0: 새로운 기능
 
Docker network Present in VietNam DockerDay 2015
Docker network Present in VietNam DockerDay 2015Docker network Present in VietNam DockerDay 2015
Docker network Present in VietNam DockerDay 2015
 
Content Caching with NGINX and NGINX Plus
Content Caching with NGINX and NGINX PlusContent Caching with NGINX and NGINX Plus
Content Caching with NGINX and NGINX Plus
 
Container Orchestration Integration: OpenStack Kuryr & Apache Mesos
Container Orchestration Integration: OpenStack Kuryr & Apache MesosContainer Orchestration Integration: OpenStack Kuryr & Apache Mesos
Container Orchestration Integration: OpenStack Kuryr & Apache Mesos
 
"Enabling Googley microservices with gRPC" at JEEConf 2017
"Enabling Googley microservices with gRPC" at JEEConf 2017"Enabling Googley microservices with gRPC" at JEEConf 2017
"Enabling Googley microservices with gRPC" at JEEConf 2017
 
Anatomy of neutron from the eagle eyes of troubelshoorters
Anatomy of neutron from the eagle eyes of troubelshoortersAnatomy of neutron from the eagle eyes of troubelshoorters
Anatomy of neutron from the eagle eyes of troubelshoorters
 
Call Of Duty 2 Cheats
Call Of Duty 2 CheatsCall Of Duty 2 Cheats
Call Of Duty 2 Cheats
 
Kubernetes: Beyond Baby Steps
Kubernetes: Beyond Baby StepsKubernetes: Beyond Baby Steps
Kubernetes: Beyond Baby Steps
 

Andere mochten auch

Major research presentation E.O.I.
Major research presentation E.O.I.Major research presentation E.O.I.
Major research presentation E.O.I.
The City of Toronto
 
Gebruiksaanwijzing platformweegschaal
Gebruiksaanwijzing platformweegschaalGebruiksaanwijzing platformweegschaal
Gebruiksaanwijzing platformweegschaal
laurenztack
 
Nomensa iof 110706
Nomensa iof 110706Nomensa iof 110706
Nomensa iof 110706
Jason Potts
 
Que hago y_como_vivo
Que hago y_como_vivoQue hago y_como_vivo
Que hago y_como_vivo
almeri1595
 
Planning
PlanningPlanning
Planning
Ranolph
 
在Oel5上安装配置oracle gird control 10.2.0.5
在Oel5上安装配置oracle gird control 10.2.0.5在Oel5上安装配置oracle gird control 10.2.0.5
在Oel5上安装配置oracle gird control 10.2.0.5
maclean liu
 
Validation of User Intentions in Process Models
Validation of User Intentions in Process ModelsValidation of User Intentions in Process Models
Validation of User Intentions in Process Models
Gerd Groener
 

Andere mochten auch (20)

1
11
1
 
Iof panel microsoft final_clean
Iof panel microsoft final_cleanIof panel microsoft final_clean
Iof panel microsoft final_clean
 
Nbm112
Nbm112Nbm112
Nbm112
 
The life in the Cloud
The life in the CloudThe life in the Cloud
The life in the Cloud
 
Marianna Iannone - Petrolio: Quanto siamo disposti a pagare?
Marianna Iannone - Petrolio: Quanto siamo disposti a pagare?Marianna Iannone - Petrolio: Quanto siamo disposti a pagare?
Marianna Iannone - Petrolio: Quanto siamo disposti a pagare?
 
Introduction SEAL programme
Introduction SEAL programmeIntroduction SEAL programme
Introduction SEAL programme
 
Major research presentation E.O.I.
Major research presentation E.O.I.Major research presentation E.O.I.
Major research presentation E.O.I.
 
Gebruiksaanwijzing platformweegschaal
Gebruiksaanwijzing platformweegschaalGebruiksaanwijzing platformweegschaal
Gebruiksaanwijzing platformweegschaal
 
了解Oracle在线重定义online redefinition
了解Oracle在线重定义online redefinition了解Oracle在线重定义online redefinition
了解Oracle在线重定义online redefinition
 
Austur Evrópa
Austur EvrópaAustur Evrópa
Austur Evrópa
 
Nomensa iof 110706
Nomensa iof 110706Nomensa iof 110706
Nomensa iof 110706
 
Rafelrand van Amsterdam
Rafelrand van AmsterdamRafelrand van Amsterdam
Rafelrand van Amsterdam
 
Que hago y_como_vivo
Que hago y_como_vivoQue hago y_como_vivo
Que hago y_como_vivo
 
Planning
PlanningPlanning
Planning
 
dbdao.com 汪伟华 my-sql-replication复制高可用配置方案
dbdao.com 汪伟华 my-sql-replication复制高可用配置方案dbdao.com 汪伟华 my-sql-replication复制高可用配置方案
dbdao.com 汪伟华 my-sql-replication复制高可用配置方案
 
Eco Tourism
Eco TourismEco Tourism
Eco Tourism
 
在Oel5上安装配置oracle gird control 10.2.0.5
在Oel5上安装配置oracle gird control 10.2.0.5在Oel5上安装配置oracle gird control 10.2.0.5
在Oel5上安装配置oracle gird control 10.2.0.5
 
Antonio Bavusi - Petrolio: Quanto siamo disposti a pagare?
Antonio Bavusi - Petrolio: Quanto siamo disposti a pagare?Antonio Bavusi - Petrolio: Quanto siamo disposti a pagare?
Antonio Bavusi - Petrolio: Quanto siamo disposti a pagare?
 
Validation of User Intentions in Process Models
Validation of User Intentions in Process ModelsValidation of User Intentions in Process Models
Validation of User Intentions in Process Models
 
Miten toteutan informaation visualisoinnin?
Miten toteutan informaation visualisoinnin?Miten toteutan informaation visualisoinnin?
Miten toteutan informaation visualisoinnin?
 

Ähnlich wie 为11.2.0.2 grid infrastructure添加节点

[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN
[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN
[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN
OpenStack Korea Community
 
20151222_Interoperability with ML2: LinuxBridge, OVS and SDN
20151222_Interoperability with ML2: LinuxBridge, OVS and SDN20151222_Interoperability with ML2: LinuxBridge, OVS and SDN
20151222_Interoperability with ML2: LinuxBridge, OVS and SDN
Sungman Jang
 

Ähnlich wie 为11.2.0.2 grid infrastructure添加节点 (20)

Rac on NFS
Rac on NFSRac on NFS
Rac on NFS
 
Harmonia open iris_basic_v0.1
Harmonia open iris_basic_v0.1Harmonia open iris_basic_v0.1
Harmonia open iris_basic_v0.1
 
Docker Multi Host Networking, Rachit Arora, IBM
Docker Multi Host Networking, Rachit Arora, IBMDocker Multi Host Networking, Rachit Arora, IBM
Docker Multi Host Networking, Rachit Arora, IBM
 
Fabric8: Better Software Faster with Docker, Kubernetes, Jenkins
Fabric8: Better Software Faster with Docker, Kubernetes, JenkinsFabric8: Better Software Faster with Docker, Kubernetes, Jenkins
Fabric8: Better Software Faster with Docker, Kubernetes, Jenkins
 
Metal-k8s presentation by Julien Girardin @ Paris Kubernetes Meetup
Metal-k8s presentation by Julien Girardin @ Paris Kubernetes MeetupMetal-k8s presentation by Julien Girardin @ Paris Kubernetes Meetup
Metal-k8s presentation by Julien Girardin @ Paris Kubernetes Meetup
 
Container & kubernetes
Container & kubernetesContainer & kubernetes
Container & kubernetes
 
Puppet at Opera Sofware - PuppetCamp Oslo 2013
Puppet at Opera Sofware - PuppetCamp Oslo 2013Puppet at Opera Sofware - PuppetCamp Oslo 2013
Puppet at Opera Sofware - PuppetCamp Oslo 2013
 
Orchestrating Docker with Terraform and Consul by Mitchell Hashimoto
Orchestrating Docker with Terraform and Consul by Mitchell Hashimoto Orchestrating Docker with Terraform and Consul by Mitchell Hashimoto
Orchestrating Docker with Terraform and Consul by Mitchell Hashimoto
 
Docker.io
Docker.ioDocker.io
Docker.io
 
RHCE (RED HAT CERTIFIED ENGINEERING)
RHCE (RED HAT CERTIFIED ENGINEERING)RHCE (RED HAT CERTIFIED ENGINEERING)
RHCE (RED HAT CERTIFIED ENGINEERING)
 
Networking in Docker EE 2.0 with Kubernetes and Swarm
Networking in Docker EE 2.0 with Kubernetes and SwarmNetworking in Docker EE 2.0 with Kubernetes and Swarm
Networking in Docker EE 2.0 with Kubernetes and Swarm
 
Networking in docker ee with kubernetes and swarm
Networking in docker ee with kubernetes and swarmNetworking in docker ee with kubernetes and swarm
Networking in docker ee with kubernetes and swarm
 
Dockerizing the Hard Services: Neutron and Nova
Dockerizing the Hard Services: Neutron and NovaDockerizing the Hard Services: Neutron and Nova
Dockerizing the Hard Services: Neutron and Nova
 
Erik Skytthe - Monitoring Mesos, Docker, Containers with Zabbix | ZabConf2016
Erik Skytthe - Monitoring Mesos, Docker, Containers with Zabbix | ZabConf2016Erik Skytthe - Monitoring Mesos, Docker, Containers with Zabbix | ZabConf2016
Erik Skytthe - Monitoring Mesos, Docker, Containers with Zabbix | ZabConf2016
 
Helm @ Orchestructure
Helm @ OrchestructureHelm @ Orchestructure
Helm @ Orchestructure
 
Exploring the Future of Helm
Exploring the Future of HelmExploring the Future of Helm
Exploring the Future of Helm
 
Compliance as Code with InSpec - DevOps Melbourne 2017
Compliance as Code with InSpec - DevOps Melbourne 2017Compliance as Code with InSpec - DevOps Melbourne 2017
Compliance as Code with InSpec - DevOps Melbourne 2017
 
20240415 [Container Plumbing Days] Usernetes Gen2 - Kubernetes in Rootless Do...
20240415 [Container Plumbing Days] Usernetes Gen2 - Kubernetes in Rootless Do...20240415 [Container Plumbing Days] Usernetes Gen2 - Kubernetes in Rootless Do...
20240415 [Container Plumbing Days] Usernetes Gen2 - Kubernetes in Rootless Do...
 
[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN
[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN
[OpenStack 하반기 스터디] Interoperability with ML2: LinuxBridge, OVS and SDN
 
20151222_Interoperability with ML2: LinuxBridge, OVS and SDN
20151222_Interoperability with ML2: LinuxBridge, OVS and SDN20151222_Interoperability with ML2: LinuxBridge, OVS and SDN
20151222_Interoperability with ML2: LinuxBridge, OVS and SDN
 

Mehr von maclean liu

基于Oracle 12c data guard & far sync的低资源消耗两地三数据中心容灾方案
基于Oracle 12c data guard & far sync的低资源消耗两地三数据中心容灾方案基于Oracle 12c data guard & far sync的低资源消耗两地三数据中心容灾方案
基于Oracle 12c data guard & far sync的低资源消耗两地三数据中心容灾方案
maclean liu
 
Shoug at apouc2015 4min pitch_biotwang_v2
Shoug at apouc2015 4min pitch_biotwang_v2Shoug at apouc2015 4min pitch_biotwang_v2
Shoug at apouc2015 4min pitch_biotwang_v2
maclean liu
 
Apouc 4min pitch_biotwang_v2
Apouc 4min pitch_biotwang_v2Apouc 4min pitch_biotwang_v2
Apouc 4min pitch_biotwang_v2
maclean liu
 
使用Oracle osw analyzer工具分析oswbb日志,并绘制系统性能走势图1
使用Oracle osw analyzer工具分析oswbb日志,并绘制系统性能走势图1使用Oracle osw analyzer工具分析oswbb日志,并绘制系统性能走势图1
使用Oracle osw analyzer工具分析oswbb日志,并绘制系统性能走势图1
maclean liu
 

Mehr von maclean liu (20)

Mysql企业备份发展及实践
Mysql企业备份发展及实践Mysql企业备份发展及实践
Mysql企业备份发展及实践
 
Oracle専用データ復旧ソフトウェアprm dulユーザーズ・マニュアル
Oracle専用データ復旧ソフトウェアprm dulユーザーズ・マニュアルOracle専用データ復旧ソフトウェアprm dulユーザーズ・マニュアル
Oracle専用データ復旧ソフトウェアprm dulユーザーズ・マニュアル
 
【诗檀软件 郭兆伟-技术报告】跨国企业级Oracle数据库备份策略
【诗檀软件 郭兆伟-技术报告】跨国企业级Oracle数据库备份策略【诗檀软件 郭兆伟-技术报告】跨国企业级Oracle数据库备份策略
【诗檀软件 郭兆伟-技术报告】跨国企业级Oracle数据库备份策略
 
基于Oracle 12c data guard & far sync的低资源消耗两地三数据中心容灾方案
基于Oracle 12c data guard & far sync的低资源消耗两地三数据中心容灾方案基于Oracle 12c data guard & far sync的低资源消耗两地三数据中心容灾方案
基于Oracle 12c data guard & far sync的低资源消耗两地三数据中心容灾方案
 
TomCat迁移步骤简述以及案例
TomCat迁移步骤简述以及案例TomCat迁移步骤简述以及案例
TomCat迁移步骤简述以及案例
 
PRM DUL Oracle Database Health Check
PRM DUL Oracle Database Health CheckPRM DUL Oracle Database Health Check
PRM DUL Oracle Database Health Check
 
Vbox virtual box在oracle linux 5 - shoug 梁洪响
Vbox virtual box在oracle linux 5 - shoug 梁洪响Vbox virtual box在oracle linux 5 - shoug 梁洪响
Vbox virtual box在oracle linux 5 - shoug 梁洪响
 
【诗檀软件】Mysql高可用方案
【诗檀软件】Mysql高可用方案【诗檀软件】Mysql高可用方案
【诗檀软件】Mysql高可用方案
 
Shoug at apouc2015 4min pitch_biotwang_v2
Shoug at apouc2015 4min pitch_biotwang_v2Shoug at apouc2015 4min pitch_biotwang_v2
Shoug at apouc2015 4min pitch_biotwang_v2
 
Apouc 4min pitch_biotwang_v2
Apouc 4min pitch_biotwang_v2Apouc 4min pitch_biotwang_v2
Apouc 4min pitch_biotwang_v2
 
使用Oracle osw analyzer工具分析oswbb日志,并绘制系统性能走势图1
使用Oracle osw analyzer工具分析oswbb日志,并绘制系统性能走势图1使用Oracle osw analyzer工具分析oswbb日志,并绘制系统性能走势图1
使用Oracle osw analyzer工具分析oswbb日志,并绘制系统性能走势图1
 
诗檀软件 Oracle开发优化基础
诗檀软件 Oracle开发优化基础 诗檀软件 Oracle开发优化基础
诗檀软件 Oracle开发优化基础
 
Orclrecove 1 pd-prm-dul testing for oracle database recovery_20141030_biot_wang
Orclrecove 1 pd-prm-dul testing for oracle database recovery_20141030_biot_wangOrclrecove 1 pd-prm-dul testing for oracle database recovery_20141030_biot_wang
Orclrecove 1 pd-prm-dul testing for oracle database recovery_20141030_biot_wang
 
诗檀软件 – Oracle数据库修复专家 oracle数据块损坏知识2014-10-24
诗檀软件 – Oracle数据库修复专家 oracle数据块损坏知识2014-10-24诗檀软件 – Oracle数据库修复专家 oracle数据块损坏知识2014-10-24
诗檀软件 – Oracle数据库修复专家 oracle数据块损坏知识2014-10-24
 
追求Jdbc on oracle最佳性能?如何才好?
追求Jdbc on oracle最佳性能?如何才好?追求Jdbc on oracle最佳性能?如何才好?
追求Jdbc on oracle最佳性能?如何才好?
 
使用Virtual box在oracle linux 5.7上安装oracle database 11g release 2 rac的最佳实践
使用Virtual box在oracle linux 5.7上安装oracle database 11g release 2 rac的最佳实践使用Virtual box在oracle linux 5.7上安装oracle database 11g release 2 rac的最佳实践
使用Virtual box在oracle linux 5.7上安装oracle database 11g release 2 rac的最佳实践
 
Prm dul is an oracle database recovery tool database
Prm dul is an oracle database recovery tool   databasePrm dul is an oracle database recovery tool   database
Prm dul is an oracle database recovery tool database
 
Oracle prm dul, jvm and os
Oracle prm dul, jvm and osOracle prm dul, jvm and os
Oracle prm dul, jvm and os
 
Oracle dba必备技能 使用os watcher工具监控系统性能负载
Oracle dba必备技能   使用os watcher工具监控系统性能负载Oracle dba必备技能   使用os watcher工具监控系统性能负载
Oracle dba必备技能 使用os watcher工具监控系统性能负载
 
Parnassus data recovery manager for oracle database user guide v0.3
Parnassus data recovery manager for oracle database user guide v0.3Parnassus data recovery manager for oracle database user guide v0.3
Parnassus data recovery manager for oracle database user guide v0.3
 

Kürzlich hochgeladen

Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Safe Software
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
?#DUbAI#??##{{(☎️+971_581248768%)**%*]'#abortion pills for sale in dubai@
 

Kürzlich hochgeladen (20)

A Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source MilvusA Beginners Guide to Building a RAG App Using Open Source Milvus
A Beginners Guide to Building a RAG App Using Open Source Milvus
 
Ransomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdfRansomware_Q4_2023. The report. [EN].pdf
Ransomware_Q4_2023. The report. [EN].pdf
 
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
Connector Corner: Accelerate revenue generation using UiPath API-centric busi...
 
presentation ICT roal in 21st century education
presentation ICT roal in 21st century educationpresentation ICT roal in 21st century education
presentation ICT roal in 21st century education
 
AWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of TerraformAWS Community Day CPH - Three problems of Terraform
AWS Community Day CPH - Three problems of Terraform
 
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers:  A Deep Dive into Serverless Spatial Data and FMECloud Frontiers:  A Deep Dive into Serverless Spatial Data and FME
Cloud Frontiers: A Deep Dive into Serverless Spatial Data and FME
 
GenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdfGenAI Risks & Security Meetup 01052024.pdf
GenAI Risks & Security Meetup 01052024.pdf
 
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
+971581248768>> SAFE AND ORIGINAL ABORTION PILLS FOR SALE IN DUBAI AND ABUDHA...
 
2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...2024: Domino Containers - The Next Step. News from the Domino Container commu...
2024: Domino Containers - The Next Step. News from the Domino Container commu...
 
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ..."I see eyes in my soup": How Delivery Hero implemented the safety system for ...
"I see eyes in my soup": How Delivery Hero implemented the safety system for ...
 
MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024MINDCTI Revenue Release Quarter One 2024
MINDCTI Revenue Release Quarter One 2024
 
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
Emergent Methods: Multi-lingual narrative tracking in the news - real-time ex...
 
MS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectorsMS Copilot expands with MS Graph connectors
MS Copilot expands with MS Graph connectors
 
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
Apidays New York 2024 - Accelerating FinTech Innovation by Vasa Krishnan, Fin...
 
FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024FWD Group - Insurer Innovation Award 2024
FWD Group - Insurer Innovation Award 2024
 
DBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor PresentationDBX First Quarter 2024 Investor Presentation
DBX First Quarter 2024 Investor Presentation
 
Corporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptxCorporate and higher education May webinar.pptx
Corporate and higher education May webinar.pptx
 
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
Apidays Singapore 2024 - Scalable LLM APIs for AI and Generative AI Applicati...
 
Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024Manulife - Insurer Transformation Award 2024
Manulife - Insurer Transformation Award 2024
 
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin WoodPolkadot JAM Slides - Token2049 - By Dr. Gavin Wood
Polkadot JAM Slides - Token2049 - By Dr. Gavin Wood
 

为11.2.0.2 grid infrastructure添加节点

  • 1. 为 11.2.0.2 Grid Infrastructure 添加节 点 by Maclean.liu liu.maclean@gmail.com www.oracledatabase12g.com
  • 2. About Me l Email:liu.maclean@gmail.com l Blog:www.oracledatabase12g.com l Oracle Certified Database Administrator Master 10g and 11g l Over 6 years experience with Oracle DBA technology l Over 7 years experience with Linux technology l Member Independent Oracle Users Group l Member All China Users Group l Presents for advanced Oracle topics: RAC, DataGuard, Performance Tuning and Oracle Internal.
  • 3. 在之前的文章中我介绍了为 10g RAC Cluster 添加节点的具体步骤。在 11gr2 中 Oracle CRS 升级为 Grid Infrastructure,通过 GI 我们可以更方便地控制 CRS 资源如:VIP、ASM 等等,这 也导致了在为 11.2 中的 GI 添加节点时,同 10gr2 相比有着较大的差异。 这里我们要简述在 11.2 中为 GI ADD NODE 的几个要点: 一、准备工作 准备工作是不可忽略的,在 10g RAC Cluster 添加节点中我列举了必须完成的先决条件,在 11.2 GI 中这些条件依然有效,但请注意以下 2 点: 1.不仅要为 oracle 用户配置用户等价性,也要为 grid(GI 安装用户)用户配置;除非你同时使 用 oracle 安装 GI 和 RDBMS,这是不推荐的 2.在 11.2 GI 中推出了 octssd(Oracle Cluster Synchronization Service Daemon)时间同步服务,如 果打算使用 octssd 的话那么建议禁用 ntpd 事件服务,具体方法如下: # service ntpd stop Shutting down ntpd: [ OK ] # chkconfig ntpd off # mv /etc/ntp.conf /etc/ntp.conf.orig # rm /var/run/ntpd.pid 3.使用 cluster verify 工具验证新增节点是否满足 cluster 的要求: cluvfy stage -pre nodeadd -n <NEW NODE> 具体用法如: su - grid [grid@vrh1 ~]$ cluvfy stage -pre nodeadd -n vrh3 Performing pre-checks for node addition Checking node reachability... Node reachability check passed from node "vrh1"
  • 4. Checking user equivalence... User equivalence check passed for user "grid" Checking node connectivity... Checking hosts config file... Verification of the hosts config file successful Check: Node connectivity for interface "eth0" Node connectivity passed for interface "eth0" Node connectivity check passed Checking CRS integrity... CRS integrity check passed Checking shared resources... Checking CRS home location... The location "/g01/11.2.0/grid" is not shared but is present/creatable on all nodes Shared resources check for node addition passed Checking node connectivity... Checking hosts config file... Verification of the hosts config file successful Check: Node connectivity for interface "eth0" Node connectivity passed for interface "eth0" Check: Node connectivity for interface "eth1" Node connectivity passed for interface "eth1" Node connectivity check passed Total memory check passed Available memory check passed Swap space check passed Free disk space check passed for "vrh3:/tmp" Free disk space check passed for "vrh1:/tmp" Check for multiple users with UID value 54322 passed User existence check passed for "grid" Run level check passed Hard limits check failed for "maximum open file descriptors" Check failed on nodes: vrh3 Soft limits check passed for "maximum open file descriptors" Hard limits check passed for "maximum user processes" Soft limits check passed for "maximum user processes" System architecture check passed Kernel version check passed Kernel parameter check passed for "semmsl" Kernel parameter check passed for "semmns" Kernel parameter check passed for "semopm" Kernel parameter check passed for "semmni" Kernel parameter check passed for "shmmax" Kernel parameter check passed for "shmmni" Kernel parameter check passed for "shmall" Kernel parameter check passed for "file-max"
  • 5. Kernel parameter check passed for "ip_local_port_range" Kernel parameter check passed for "rmem_default" Kernel parameter check passed for "rmem_max" Kernel parameter check passed for "wmem_default" Kernel parameter check passed for "wmem_max" Kernel parameter check passed for "aio-max-nr" Package existence check passed for "make-3.81( x86_64)" Package existence check passed for "binutils-2.17.50.0.6( x86_64)" Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)" Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)" Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)" Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)" Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)" Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)" Package existence check passed for "glibc-common-2.5( x86_64)" Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)" Package existence check passed for "glibc-headers-2.5( x86_64)" Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)" Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)" Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)" Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)" Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)" Package existence check passed for "sysstat-7.0.2( x86_64)" Package existence check passed for "ksh-20060214( x86_64)" Check for multiple users with UID value 0 passed Current group ID check passed Checking OCR integrity... OCR integrity check passed Checking Oracle Cluster Voting Disk configuration... Oracle Cluster Voting Disk configuration check passed Time zone consistency check passed Starting Clock synchronization checks using Network Time Protocol(NTP)... NTP Configuration file check started... No NTP Daemons or Services were found to be running Clock synchronization check using Network Time Protocol(NTP) passed User "grid" is not part of "root" group. Check passed Checking consistency of file "/etc/resolv.conf" across nodes File "/etc/resolv.conf" does not have both domain and search entries defined domain entry in file "/etc/resolv.conf" is consistent across nodes search entry in file "/etc/resolv.conf" is consistent across nodes All nodes have one search entry defined in file "/etc/resolv.conf" PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: vrh3 File "/etc/resolv.conf" is not consistent across nodes Pre-check for node addition was unsuccessful on all the nodes. 一般来说如果我们不使用 DNS 解析域名方式的话,那么 resolv.conf 不一直的问题可以忽 略,但在 slient 安装模式下可能造成我们的操作无法完成,这个后面会介绍。
  • 6. 二、向 GI 中加入新的节点 注意 11.2.0.2 GI 添加节点的关键脚本 addNode.sh 可能存在 Bug,如官方文档所述当希望使用 Interactive Mode 交互模式启动 OUI 界面添加节点时,只要运行 addNode.sh 脚本即可,实际 情况则不是这样: documentation said: Go to CRS_home/oui/bin and run the addNode.sh script on one of the existing nodes. Oracle Universal Installer runs in add node mode and the Welcome page displays. Click Next and the Specify Cluster Nodes for Node Addition page displays. we done: 运行 addNode.sh 要求以 GI 拥有者身份运行该脚本,一般为 grid 用户,要求在已有的正运行 GI 的节点上启 动脚本 [grid@vrh1 ~]$ cd $ORA_CRS_HOME/oui/bin [grid@vrh1 bin]$ ./addNode.sh ERROR: Value for CLUSTER_NEW_NODES not specified. USAGE: /g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl {-pre|-post} /g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] CLUSTER_NEW_NODES={} /g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] CLUSTER_NEW_NODES={} CLUSTER_NEW_VIRTUAL_HOSTNAMES={} /g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -pre [-silent] -responseFile /g01/11.2.0/grid/cv/cvutl/check_nodeadd.pl -post [-silent] 我们的本意是期望使用图形化的交互界面的 OUI(runInstaller -addnode)来新增节点,然而 addNode.sh 居然让我们输入一些参量,而且其调用的 check_nodeadd.pl 脚本使用的是 silent 模式。 在 MOS 和 GOOGLE 上搜了一圈,基本所有的文档都推荐使用 silent 模式来添加节点,无法 只好转到静默添加上来。实际上静默添加所需要提供的参数并不多,这可能是这种方式得到 推崇的原因之一,但是这里又碰到问题了: 语法 SYNTAX: ./addNode.sh –silent "CLUSTER_NEW_NODES={node2}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={node2-priv}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node2-vip}" 在我们的例子中具体命令如下 ./addNode.sh -silent "CLUSTER_NEW_NODES={vrh3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}"
  • 7. 以上命令因为采用 silent 模式所以没有任何窗口输出(实际上会输出到 /tmp/silentInstall.log 日志 文件中),去掉-silent 参数 ./addNode.sh "CLUSTER_NEW_NODES={vrh3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}" Performing pre-checks for node addition Checking node reachability... Node reachability check passed from node "vrh1" Checking user equivalence... User equivalence check passed for user "grid" Checking node connectivity... Checking hosts config file... Verification of the hosts config file successful Check: Node connectivity for interface "eth0" Node connectivity passed for interface "eth0" Node connectivity check passed Checking CRS integrity... CRS integrity check passed Checking shared resources... Checking CRS home location... The location "/g01/11.2.0/grid" is not shared but is present/creatable on all nodes Shared resources check for node addition passed Checking node connectivity... Checking hosts config file... Verification of the hosts config file successful Check: Node connectivity for interface "eth0" Node connectivity passed for interface "eth0" Check: Node connectivity for interface "eth1" Node connectivity passed for interface "eth1" Node connectivity check passed Total memory check passed Available memory check passed Swap space check passed Free disk space check passed for "vrh3:/tmp" Free disk space check passed for "vrh1:/tmp" Check for multiple users with UID value 54322 passed User existence check passed for "grid" Run level check passed Hard limits check failed for "maximum open file descriptors" Check failed on nodes: vrh3 Soft limits check passed for "maximum open file descriptors"
Hard limits check passed for "maximum user processes"
Soft limits check passed for "maximum user processes"
System architecture check passed
Kernel version check passed
Kernel parameter check passed for "semmsl"
Kernel parameter check passed for "semmns"
Kernel parameter check passed for "semopm"
Kernel parameter check passed for "semmni"
Kernel parameter check passed for "shmmax"
Kernel parameter check passed for "shmmni"
Kernel parameter check passed for "shmall"
Kernel parameter check passed for "file-max"
Kernel parameter check passed for "ip_local_port_range"
Kernel parameter check passed for "rmem_default"
Kernel parameter check passed for "rmem_max"
Kernel parameter check passed for "wmem_default"
Kernel parameter check passed for "wmem_max"
Kernel parameter check passed for "aio-max-nr"
Package existence check passed for "make-3.81( x86_64)"
Package existence check passed for "binutils-2.17.50.0.6( x86_64)"
Package existence check passed for "gcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "glibc-2.5-24 (x86_64)( x86_64)"
Package existence check passed for "compat-libstdc++-33-3.2.3 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-0.125 (x86_64)( x86_64)"
Package existence check passed for "elfutils-libelf-devel-0.125( x86_64)"
Package existence check passed for "glibc-common-2.5( x86_64)"
Package existence check passed for "glibc-devel-2.5 (x86_64)( x86_64)"
Package existence check passed for "glibc-headers-2.5( x86_64)"
Package existence check passed for "gcc-c++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libaio-devel-0.3.106 (x86_64)( x86_64)"
Package existence check passed for "libgcc-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "libstdc++-devel-4.1.2 (x86_64)( x86_64)"
Package existence check passed for "sysstat-7.0.2( x86_64)"
Package existence check passed for "ksh-20060214( x86_64)"
Check for multiple users with UID value 0 passed
Current group ID check passed

Checking OCR integrity...
OCR integrity check passed

Checking Oracle Cluster Voting Disk configuration...
Oracle Cluster Voting Disk configuration check passed
Time zone consistency check passed

Starting Clock synchronization checks using Network Time Protocol(NTP)...
NTP Configuration file check started...
No NTP Daemons or Services were found to be running
Clock synchronization check using Network Time Protocol(NTP) passed

User "grid" is not part of "root" group. Check passed

Checking consistency of file "/etc/resolv.conf" across nodes
File "/etc/resolv.conf" does not have both domain and search entries defined
domain entry in file "/etc/resolv.conf" is consistent across nodes
search entry in file "/etc/resolv.conf" is consistent across nodes
All nodes have one search entry defined in file "/etc/resolv.conf"
PRVF-5636 : The DNS response time for an unreachable node exceeded "15000" ms on following nodes: vrh3
File "/etc/resolv.conf" is not consistent across nodes

Checking VIP configuration.
Checking VIP Subnet configuration.
Check for VIP Subnet configuration passed.
Checking VIP reachability
Check for VIP reachability passed.

Pre-check for node addition was unsuccessful on all the nodes.

Before actually adding the node, addNode.sh also calls the cluvfy tool to verify that the new node meets the requirements, and it refuses to go any further if that check fails. Since we already verified the new node earlier, we can safely skip addNode.sh's own validation. Let's look at what the addNode.sh script contains:

[grid@vrh1 bin]$ cat addNode.sh
#!/bin/sh
OHOME=/g01/11.2.0/grid
INVPTRLOC=$OHOME/oraInst.loc
ADDNODE="$OHOME/oui/bin/runInstaller -addNode -invPtrLoc $INVPTRLOC ORACLE_HOME=$OHOME $*"
if [ "$IGNORE_PREADDNODE_CHECKS" = "Y" -o ! -f "$OHOME/cv/cvutl/check_nodeadd.pl" ]
then
        $ADDNODE
else
        CHECK_NODEADD="$OHOME/perl/bin/perl $OHOME/cv/cvutl/check_nodeadd.pl -pre $*"
        $CHECK_NODEADD
        if [ $? -eq 0 ]
        then
                $ADDNODE
        fi
fi

As you can see, there is an IGNORE_PREADDNODE_CHECKS environment variable that controls whether the pre-add check is performed. We set this variable manually and run the addNode.sh script again:

export IGNORE_PREADDNODE_CHECKS=Y
./addNode.sh "CLUSTER_NEW_NODES={vrh3}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={vrh3-vip}" "CLUSTER_NEW_PRIVATE_NODE_NAMES={vrh3-priv}" > add_node.log 2>&1

In another window we can monitor the progress of the node addition through its log:

tail -f add_node.log

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 5951 MB    Passed
Checking monitor: must be configured to display at least 256 colors.    Actual 16777216    Passed

Oracle Universal Installer, Version 11.2.0.2.0 Production
Copyright (C) 1999, 2010, Oracle. All rights reserved.
Performing tests to see whether nodes vrh2,vrh3 are available
............................................................... 100% Done.

-----------------------------------------------------------------------------
Cluster Node Addition Summary
Global Settings
   Source: /g01/11.2.0/grid
   New Nodes
Space Requirements
   New Nodes
      vrh3
         /: Required 6.66GB : Available 32.40GB
Installed Products
   Product Names
      Oracle Grid Infrastructure 11.2.0.2.0
      Sun JDK 1.5.0.24.08
      Installer SDK Component 11.2.0.2.0
      Oracle One-Off Patch Installer 11.2.0.0.2
      Oracle Universal Installer 11.2.0.2.0
      Oracle USM Deconfiguration 11.2.0.2.0
      Oracle Configuration Manager Deconfiguration 10.3.1.0.0
      Enterprise Manager Common Core Files 10.2.0.4.3
      Oracle DBCA Deconfiguration 11.2.0.2.0
      Oracle RAC Deconfiguration 11.2.0.2.0
      Oracle Quality of Service Management (Server) 11.2.0.2.0
      Installation Plugin Files 11.2.0.2.0
      Universal Storage Manager Files 11.2.0.2.0
      Oracle Text Required Support Files 11.2.0.2.0
      Automatic Storage Management Assistant 11.2.0.2.0
      Oracle Database 11g Multimedia Files 11.2.0.2.0
      Oracle Multimedia Java Advanced Imaging 11.2.0.2.0
      Oracle Globalization Support 11.2.0.2.0
      Oracle Multimedia Locator RDBMS Files 11.2.0.2.0
      Oracle Core Required Support Files 11.2.0.2.0
      Bali Share 1.1.18.0.0
      Oracle Database Deconfiguration 11.2.0.2.0
      Oracle Quality of Service Management (Client) 11.2.0.2.0
      Expat libraries 2.0.1.0.1
      Oracle Containers for Java 11.2.0.2.0
      Perl Modules 5.10.0.0.1
      Secure Socket Layer 11.2.0.2.0
      Oracle JDBC/OCI Instant Client 11.2.0.2.0
      Oracle Multimedia Client Option 11.2.0.2.0
      LDAP Required Support Files 11.2.0.2.0
      Character Set Migration Utility 11.2.0.2.0
      Perl Interpreter 5.10.0.0.1
      PL/SQL Embedded Gateway 11.2.0.2.0
      OLAP SQL Scripts 11.2.0.2.0
      Database SQL Scripts 11.2.0.2.0
      Oracle Extended Windowing Toolkit 3.4.47.0.0
      SSL Required Support Files for InstantClient 11.2.0.2.0
      SQL*Plus Files for Instant Client 11.2.0.2.0
      Oracle Net Required Support Files 11.2.0.2.0
      Oracle Database User Interface 2.2.13.0.0
      RDBMS Required Support Files for Instant Client 11.2.0.2.0
      RDBMS Required Support Files Runtime 11.2.0.2.0
      XML Parser for Java 11.2.0.2.0
      Oracle Security Developer Tools 11.2.0.2.0
      Oracle Wallet Manager 11.2.0.2.0
      Enterprise Manager plugin Common Files 11.2.0.2.0
      Platform Required Support Files 11.2.0.2.0
      Oracle JFC Extended Windowing Toolkit 4.2.36.0.0
      RDBMS Required Support Files 11.2.0.2.0
      Oracle Ice Browser 5.2.3.6.0
      Oracle Help For Java 4.2.9.0.0
      Enterprise Manager Common Files 10.2.0.4.3
      Deinstallation Tool 11.2.0.2.0
      Oracle Java Client 11.2.0.2.0
      Cluster Verification Utility Files 11.2.0.2.0
      Oracle Notification Service (eONS) 11.2.0.2.0
      Oracle LDAP administration 11.2.0.2.0
      Cluster Verification Utility Common Files 11.2.0.2.0
      Oracle Clusterware RDBMS Files 11.2.0.2.0
      Oracle Locale Builder 11.2.0.2.0
      Oracle Globalization Support 11.2.0.2.0
      Buildtools Common Files 11.2.0.2.0
      Oracle RAC Required Support Files-HAS 11.2.0.2.0
      SQL*Plus Required Support Files 11.2.0.2.0
      XDK Required Support Files 11.2.0.2.0
      Agent Required Support Files 10.2.0.4.3
      Parser Generator Required Support Files 11.2.0.2.0
      Precompiler Required Support Files 11.2.0.2.0
      Installation Common Files 11.2.0.2.0
      Required Support Files 11.2.0.2.0
      Oracle JDBC/THIN Interfaces 11.2.0.2.0
      Oracle Multimedia Locator 11.2.0.2.0
      Oracle Multimedia 11.2.0.2.0
      HAS Common Files 11.2.0.2.0
      Assistant Common Files 11.2.0.2.0
      PL/SQL 11.2.0.2.0
      HAS Files for DB 11.2.0.2.0
      Oracle Recovery Manager 11.2.0.2.0
      Oracle Database Utilities 11.2.0.2.0
      Oracle Notification Service 11.2.0.2.0
      SQL*Plus 11.2.0.2.0
      Oracle Netca Client 11.2.0.2.0
      Oracle Net 11.2.0.2.0
      Oracle JVM 11.2.0.2.0
      Oracle Internet Directory Client 11.2.0.2.0
      Oracle Net Listener 11.2.0.2.0
      Cluster Ready Services Files 11.2.0.2.0
      Oracle Database 11g 11.2.0.2.0
-----------------------------------------------------------------------------

Instantiating scripts for add node (Monday, August 15, 2011 10:15:35 PM CST)
.                                                                 1% Done.
Instantiation of add node scripts complete

Copying to remote nodes (Monday, August 15, 2011 10:15:38 PM CST)
...............................................................................                                 96% Done.
Home copied to new nodes

Saving inventory on nodes (Monday, August 15, 2011 10:21:02 PM CST)
.                                                               100% Done.
Save inventory complete

WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.
To register the new inventory please run the script at '/g01/oraInventory/orainstRoot.sh' with root privileges on nodes 'vrh3'.
If you do not register the inventory, you may not be able to update or patch the products you installed.

The following configuration scripts need to be executed as the "root" user in each cluster node.
/g01/oraInventory/orainstRoot.sh #On nodes vrh3
/g01/11.2.0/grid/root.sh #On nodes vrh3
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts in each cluster node

The Cluster Node Addition of /g01/11.2.0/grid was successful.
Please check '/tmp/silentInstall.log' for more details.

The GI software itself has now been installed successfully, but we still need to run two key scripts on the newly added node. Do not forget this step!

orainstRoot.sh and root.sh must be run as root:

su - root

[root@vrh3]# cat /etc/oraInst.loc
inventory_loc=/g01/oraInventory        -- this is the location of the oraInventory
inst_group=asmadmin

[root@vrh3 ~]# cd /g01/oraInventory
[root@vrh3 oraInventory]# ./orainstRoot.sh
Creating the Oracle inventory pointer file (/etc/oraInst.loc)
Changing permissions of /g01/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /g01/oraInventory to asmadmin.
The execution of the script is complete.
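Before moving on, it is easy to confirm that orainstRoot.sh really registered the central inventory on vrh3. A quick sketch (the oraInventory path is the one used in this example; ContentsXML/inventory.xml is the standard layout of an Oracle central inventory):

# On the new node, as root or grid:
cat /etc/oraInst.loc                                               # pointer should reference /g01/oraInventory
grep -i 'HOME NAME' /g01/oraInventory/ContentsXML/inventory.xml    # the GI home should be listed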
Next, run the root.sh script from the CRS_HOME as root; it may print some warnings, but that is not a problem:

[root@vrh3 ~]# cd $ORA_CRS_HOME
[root@vrh3 g01]# /g01/11.2.0/grid/root.sh
Running Oracle 11g root script...

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /g01/11.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /g01/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
OLR initialization - successful
Adding daemon to inittab
ACFS-9200: Supported
ACFS-9300: ADVM/ACFS distribution files found.
ACFS-9307: Installing requested ADVM/ACFS software.
ACFS-9308: Loading installed ADVM/ACFS drivers.
ACFS-9321: Creating udev for ADVM/ACFS.
ACFS-9323: Creating module dependencies - this may take some time.
ACFS-9327: Verifying ADVM/ACFS devices.
ACFS-9309: ADVM/ACFS installation correctness verified.
CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node vrh1, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
/g01/11.2.0/grid/bin/srvctl start listener -n vrh3 ... failed
Failed to perform new node configuration at /g01/11.2.0/grid/crs/install/crsconfig_lib.pm line 8255.
/g01/11.2.0/grid/perl/bin/perl -I/g01/11.2.0/grid/perl/lib -I/g01/11.2.0/grid/crs/install /g01/11.2.0/grid/crs/install/rootcrs.pl execution failed

Two minor errors show up here:

1. The failure to start the LISTENER on the new node can be ignored. It happens because the RDBMS_HOME has not been installed yet, while CRS nevertheless tries to start the associated listener:

[root@vrh3 g01]# /g01/11.2.0/grid/bin/srvctl start listener -n vrh3
PRCR-1013 : Failed to start resource ora.CRS_LISTENER.lsnr
PRCR-1064 : Failed to start resource ora.CRS_LISTENER.lsnr on node vrh3
CRS-5010: Update of configuration file
  • 14. "/s01/orabase/product/11.2.0/dbhome_1/network/admin/listener.ora" failed: details at "(:CLSN00014:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log" CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process "/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "check": details at "(:CLSN00008:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log" CRS-2674: Start of 'ora.CRS_LISTENER.lsnr' on 'vrh3' failed CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process "/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "clean": details at "(:CLSN00008:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log" CRS-5013: Agent "/g01/11.2.0/grid/bin/oraagent.bin" failed to start process "/s01/orabase/product/11.2.0/dbhome_1/bin/lsnrctl" for action "check": details at "(:CLSN00008:)" in "/g01/11.2.0/grid/log/vrh3/agent/crsd/oraagent_oracle/oraagent_oracle.log" CRS-2678: 'ora.CRS_LISTENER.lsnr' on 'vrh3' has experienced an unrecoverable failure CRS-0267: Human intervention required to resume its availability. PRCC-1015 : LISTENER was already running on vrh3 PRCR-1004 : Resource ora.LISTENER.lsnr is already running 2.rootcrs.pl 脚本运行失败的话,一般重新运行一次即可: [root@vrh3 bin]# /g01/11.2.0/grid/perl/bin/perl -I/g01/11.2.0/grid/perl/lib -I/g01/11.2.0/grid/crs/install /g01/11.2.0/grid/crs/install/rootcrs.pl Using configuration parameter file: /g01/11.2.0/grid/crs/install/crsconfig_params PRKO-2190 : VIP exists for node vrh3, VIP name vrh3-vip PRKO-2420 : VIP is already started on node(s): vrh3 Preparing packages for installation... cvuqdisk-1.0.9-1 Configure Oracle Grid Infrastructure for a Cluster ... succeeded 3.建议在新增节点上重启 crs,并使用 cluvfy 验证 nodeadd 顺利完成 : [root@vrh3 ~]# crsctl stop crs [root@vrh3 ~]# crsctl start crs [root@vrh3 ~]# su - grid [grid@vrh3 ~]$ cluvfy stage -post nodeadd -n vrh1,vrh2,vrh3 Performing post-checks for node addition Checking node reachability... Node reachability check passed from node "vrh1" Checking user equivalence... User equivalence check passed for user "grid" Checking node connectivity... Checking hosts config file... Verification of the hosts config file successful Check: Node connectivity for interface "eth0" Node connectivity passed for interface "eth0"
3. It is recommended to restart CRS on the new node and then use cluvfy to verify that the node addition completed cleanly:

[root@vrh3 ~]# crsctl stop crs
[root@vrh3 ~]# crsctl start crs
[root@vrh3 ~]# su - grid
[grid@vrh3 ~]$ cluvfy stage -post nodeadd -n vrh1,vrh2,vrh3

Performing post-checks for node addition

Checking node reachability...
Node reachability check passed from node "vrh1"

Checking user equivalence...
User equivalence check passed for user "grid"

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"

Node connectivity check passed

Checking cluster integrity...

Cluster integrity check passed

Checking CRS integrity...

CRS integrity check passed

Checking shared resources...

Checking CRS home location...
The location "/g01/11.2.0/grid" is not shared but is present/creatable on all nodes
Shared resources check for node addition passed

Checking node connectivity...

Checking hosts config file...

Verification of the hosts config file successful

Check: Node connectivity for interface "eth0"
Node connectivity passed for interface "eth0"

Check: Node connectivity for interface "eth1"
Node connectivity passed for interface "eth1"

Node connectivity check passed

Checking node application existence...

Checking existence of VIP node application (required)
VIP node application check passed

Checking existence of NETWORK node application (required)
NETWORK node application check passed

Checking existence of GSD node application (optional)
GSD node application is offline on nodes "vrh3,vrh2,vrh1"

Checking existence of ONS node application (optional)
ONS node application check passed

Checking Single Client Access Name (SCAN)...

Checking TCP connectivity to SCAN Listeners...
TCP connectivity to SCAN Listeners exists on all cluster nodes

Checking name resolution setup for "vrh.cluster.oracle.com"...

ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "vrh.cluster.oracle.com"

ERROR:
PRVF-4657 : Name resolution setup check for "vrh.cluster.oracle.com" (IP address: 192.168.1.190) failed

ERROR:
PRVF-4664 : Found inconsistent name resolution entries for SCAN name "vrh.cluster.oracle.com"
Verification of SCAN VIP and Listener setup failed

User "grid" is not part of "root" group. Check passed

Checking if Clusterware is installed on all nodes...
Check of Clusterware install passed

Checking if CTSS Resource is running on all nodes...
CTSS resource check passed

Querying CTSS for time offset on all nodes...
Query of CTSS for time offset passed

Check CTSS state started...
CTSS is in Active state. Proceeding with check of clock time offsets on all nodes...
Check of clock time offsets passed

Oracle Cluster Time Synchronization Services check passed

Post-check for node addition was successful.

© 2011, www.oracledatabase12g.com. All rights reserved. This article may be reposted, but only with a link back to the original source address; otherwise legal action may be pursued.