A Worked Example of Re-adding a Node to Oracle 11g RAC



This article walks through a worked example of removing and re-adding a node in an Oracle 11g RAC cluster; the steps are concise and should be easy to follow.


Environment:

    SUSE Linux 11 SP4

    Oracle 11.2.0.4.180116 RAC

During the Oracle 11g RAC installation, a hardware problem on the host meant the operating system on node 1 had to be reinstalled. At that point the clusterware had already been fully installed on both nodes, but no database had been created yet.

The detailed procedure is as follows:

grid@XXXXXrac2:~> olsnodes -s -t

XXXXXrac1       Inactive        Unpinned

XXXXXrac2       Active  Unpinned

grid@XXXXXrac2:~> crsctl stat res -t

--------------------------------------------------------------------------------

NAME           TARGET  STATE        SERVER                   STATE_DETAILS

--------------------------------------------------------------------------------

Local Resources

--------------------------------------------------------------------------------

ora.LISTENER.lsnr

               ONLINE  ONLINE       XXXXXrac2                                    

ora.DATA.dg

               ONLINE  ONLINE       XXXXXrac2                                    

ora.asm

ONLINE  ONLINE       XXXXXrac2                Started

ora.gsd

               OFFLINE OFFLINE      XXXXXrac2                                    

ora.net1.network

               ONLINE  ONLINE       XXXXXrac2

ora.ons

               ONLINE  ONLINE       XXXXXrac2                                    

--------------------------------------------------------------------------------

Cluster Resources

--------------------------------------------------------------------------------

ora.LISTENER_SCAN1.lsnr

      1        ONLINE  ONLINE       XXXXXrac2                                    

ora.cvu

      1        ONLINE  ONLINE       XXXXXrac2                                    

ora.XXXXXrac1.vip

1        ONLINE  INTERMEDIATE XXXXXrac2                FAILED OVER

ora.XXXXXrac2.vip

      1        ONLINE  ONLINE       XXXXXrac2                                    

ora.oc4j

1        ONLINE  ONLINE       XXXXXrac2

ora.scan1.vip

      1        ONLINE  ONLINE       XXXXXrac2                                    

--Remove node 1:

/oracle/xxxx/grid/bin/crsctl delete node -n XXXXXrac1    (run as root on node 2)

XXXXXrac2:~ # /oracle/xxxx/grid/bin/crsctl delete node -n XXXXXrac1

CRS-4661: Node XXXXXrac1 successfully deleted.

XXXXXrac2:~ # 

grid@XXXXXrac2:~> olsnodes -s -t

XXXXXrac2       Active  Unpinned
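Note: crsctl delete node only succeeds against an unpinned node. The olsnodes output above already shows both nodes as Unpinned, so no extra step was needed here; if the node had been pinned, it would first have to be unpinned, for example (a hedged sketch, run as root on the surviving node):

/oracle/xxxx/grid/bin/crsctl unpin css -n XXXXXrac1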

--Remove node 1's VIP

XXXXXrac2:~ # /oracle/xxxx/grid/bin/srvctl stop vip -i XXXXXrac1 -f

XXXXXrac2:~ # /oracle/xxxx/grid/bin/srvctl remove vip -i XXXXXrac1 -f
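To double-check that the VIP is really gone, the remaining VIP configuration can be listed (a small sketch using the same grid home; only node 2's VIP should still be reported):

/oracle/xxxx/grid/bin/srvctl config vip -n XXXXXrac2

/oracle/xxxx/grid/bin/crsctl stat res -t | grep -i vip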

Update the inventory node list on node 2 (for both the grid home and the database home)

grid@XXXXXrac2:~> /oracle/xxxx/grid/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/oracle/xxxx/grid  "CLUSTER_NODES=XXXXXrac2" CRS=TRUE -silent

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 32767 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /oracle/app/oraInventory

UpdateNodeList was successful.

oracle@XXXXXrac2:~>  /oracle/app/oracle/product/11.2.0/db_1/oui/bin/runInstaller -updateNodeList ORACLE_HOME=/oracle/app/oracle/product/11.2.0/db_1 "CLUSTER_NODES=XXXXXrac2"

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 32767 MB    Passed

The inventory pointer is located at /etc/oraInst.loc

The inventory is located at /oracle/app/oraInventory

UpdateNodeList was successful.
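The effect of the two updateNodeList runs can be spot-checked in the central inventory (path taken from the output above); after the update, only XXXXXrac2 should appear under each home's node list:

grep -A2 NODE_LIST /oracle/app/oraInventory/ContentsXML/inventory.xml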

Verify the node removal

grid@XXXXXrac2:~> cluvfy stage -post nodedel -n XXXXXrac1 -verbose

Performing post-checks for node removal

Checking CRS integrity...

Clusterware version consistency passed

The Oracle Clusterware is healthy on node "XXXXXrac2"

CRS integrity check passed

Result: 

Node removal check passed

Post-check for node removal was successful.

========== Re-add the node

Reinstall the operating system on the host.

Configure the basic environment required for the cluster installation.

--Configure SSH user equivalence

Set up passwordless SSH for both the grid and oracle users (the commands below are run on the reinstalled node 1):

mkdir ~/.ssh

chmod 755 ~/.ssh

/usr/bin/ssh-keygen -t rsa

/usr/bin/ssh-keygen -t dsa

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

ssh XXXXXrac2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

ssh XXXXXrac2 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

scp ~/.ssh/authorized_keys XXXXXrac2:~/.ssh
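Before continuing, it is worth confirming that equivalence works in both directions for both the grid and oracle users; the first connection also caches the host keys, so addNode.sh is not interrupted by prompts later. A minimal check (run from each node, as each user; no password prompt should appear):

ssh XXXXXrac1 date

ssh XXXXXrac2 date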

--Add the Grid Infrastructure home (addNode.sh is run from node 2 as the grid user)

[export IGNORE_PREADDNODE_CHECKS=Y   # optional: skip the pre-add node checks]
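Instead of (or in addition to) skipping the pre-add checks, cluvfy can be run explicitly so that any problem surfaces before addNode.sh is started (a sketch, run as grid on node 2):

cluvfy stage -pre nodeadd -n XXXXXrac1 -fixup -verbose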

grid@XXXXXrac2:~> /oracle/xxxx/grid/oui/bin/addNode.sh "CLUSTER_NEW_NODES={XXXXXrac1}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={XXXXXrac1-vip}"

Performing pre-checks for node addition 

Checking node reachability...

Node reachability check passed from node "XXXXXrac2"

Checking user equivalence...

User equivalence check passed for user "grid"

......

Saving inventory on nodes (Monday, March 19, 2018 4:13:26 PM CST)

.                                                               100% Done.

Save inventory complete

WARNING:A new inventory has been created on one or more nodes in this session. However, it has not yet been registered as the central inventory of this system.

To register the new inventory please run the script at /oracle/app/oraInventory/orainstRoot.sh with root privileges on nodes XXXXXrac1.

If you do not register the inventory, you may not be able to update or patch the products you installed.

The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.

/oracle/app/oraInventory/orainstRoot.sh #On nodes XXXXXrac1

/oracle/xxxx/grid/root.sh #On nodes XXXXXrac1

To execute the configuration scripts:

1. Open a terminal window

    2. Log in as "root"

    3. Run the scripts in each cluster node

The Cluster Node Addition of /oracle/xxxx/grid was successful.

Please check /tmp/silentInstall.log for more details.

--Run the root scripts on node 1

XXXXXrac1:~ # /oracle/app/oraInventory/orainstRoot.sh

Creating the Oracle inventory pointer file (/etc/oraInst.loc)

Changing permissions of /oracle/app/oraInventory.

Adding read,write permissions for group.

Removing read,write,execute permissions for world.

Changing groupname of /oracle/app/oraInventory to oinstall.

The execution of the script is complete.

XXXXXrac1:~ # 

XXXXXrac1:~ # 

XXXXXrac1:~ # /oracle/xxxx/grid/root.sh

Performing root user operation for Oracle 11g 

The following environment variables are set as:

    ORACLE_OWNER= grid

    ORACLE_HOME=  /oracle/xxxx/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:

   Copying dbhome to /usr/local/bin ...

Copying oraenv to /usr/local/bin ...

   Copying coraenv to /usr/local/bin ...

Creating /etc/oratab file...

Entries will be added to the /etc/oratab file as needed by

Database Configuration Assistant when a database is created

Finished running generic part of root script.

Now product-specific root actions will be performed.

Using configuration parameter file: /oracle/xxxx/grid/crs/install/crsconfig_params

Creating trace directory

User ignored Prerequisites during installation

Installing Trace File Analyzer

OLR initialization - successful

Adding Clusterware entries to /etc/inittab

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node XXXXXrac2, number 2, and is terminating

An active cluster was found during exclusive startup, restarting to join the cluster

clscfg: EXISTING configuration version 5 detected.

clscfg: version 5 is 11g Release 2.

Successfully accumulated necessary OCR keys.

Creating OCR keys for user root, privgrp root..

Operation successful.

Preparing packages for installation...

cvuqdisk-1.0.9-1

Configure Oracle Grid Infrastructure for a Cluster ... succeeded
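At this point the clusterware on node 1 has joined the existing cluster. A quick sanity check (run as grid from either node) before the formal cluvfy verification below:

crsctl check cluster -all      # CRS, CSS and EVM should report online on both nodes

olsnodes -s -t                 # both nodes should now show Active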

--Verify that the Grid Infrastructure node addition succeeded

grid@XXXXXrac2:~> cluvfy stage -post nodeadd -n XXXXXrac1

Performing post-checks for node addition 

Checking node reachability...

Node reachability check passed from node "XXXXXrac2"

Checking user equivalence...

User equivalence check passed for user "grid"

....

--Add the RDBMS (database) home (addNode.sh is run from node 2 as the oracle user)

oracle@XXXXXrac2:~> /oracle/app/oracle/product/11.2.0/db_1/oui/bin/addNode.sh "CLUSTER_NEW_NODES={XXXXXrac1}"

Performing pre-checks for node addition

Checking node reachability...

Node reachability check passed from node "XXXXXrac2"

Checking user equivalence...

User equivalence check passed for user "oracle"

WARNING: 

Node "XXXXXrac1" already appears to be part of cluster

Pre-check for node addition was successful. 

Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 32767 MB    Passed

Oracle Universal Installer, Version 11.2.0.4.0 Production

Copyright (C) 1999, 2013, Oracle. All rights reserved.

Performing tests to see whether nodes XXXXXrac1 are available

............................................................... 100% Done.

.

-----------------------------------------------------------------------------

Cluster Node Addition Summary

Global Settings

   Source: /oracle/app/oracle/product/11.2.0/db_1

New Nodes

Space Requirements

   New Nodes

   ......

WARNING:

The following configuration scripts need to be executed as the "root" user in each new cluster node. Each script in the list below is followed by a list of nodes.

/oracle/app/oracle/product/11.2.0/db_1/root.sh #On nodes XXXXXrac1

To execute the configuration scripts:

    1. Open a terminal window

    2. Log in as "root"

    3. Run the scripts in each cluster node

The Cluster Node Addition of /oracle/app/oracle/product/11.2.0/db_1 was successful.

Please check /tmp/silentInstall.log for more details.

Finally, switch to root on node 1 and run /oracle/app/oracle/product/11.2.0/db_1/root.sh.
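With the RDBMS home extended to node 1 and its root.sh executed, the node is fully back in the cluster; since no database had been created yet, the remaining work is simply to create one with dbca as usual. A final spot check (same paths and hostnames as above):

olsnodes -s -t

grep -A3 NODE_LIST /oracle/app/oraInventory/ContentsXML/inventory.xml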

This concludes the worked example of removing and re-adding a node in an Oracle 11g RAC cluster.

