Continuing from the previous article, we start with an introduction to the server environment.
Server environment

| Sequence Number | Name | Version |
| --- | --- | --- |
| 1 | Oracle Linux | Enterprise-R6-U8-Server-x86_64 |
| 2 | Grid Infrastructure | 112040_Linux-x86-64 |
| 3 | Oracle 11g | 112040_Linux-x86-64 |
Network configuration

| Host Name | Interface | Address Type | IP Address | Alias / Domain | Remarks |
| --- | --- | --- | --- | --- | --- |
| host01 | eth0 | Public | 10.0.1.101 | example.com | |
| host01 | eth1 | Private | 192.168.56.101 | host01-priv | |
| host01 | vip | Virtual | 10.0.1.105 | host01-vip | |
| host02 | eth0 | Public | 10.0.1.102 | example.com | |
| host02 | eth1 | Private | 192.168.56.102 | host02-priv | |
| host02 | vip | Virtual | 10.0.1.106 | host02-vip | |
| host03 | eth0 | Public | 10.0.1.103 | example.com | node to be deleted |
| host03 | eth1 | Private | 192.168.56.103 | host03-priv | |
| host03 | vip | Virtual | 10.0.1.107 | host03-vip | |
| cluster01-scan | virtual address | Public | 10.0.1.201 | | SCAN address |
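For reference, here is a minimal /etc/hosts sketch consistent with the table above. This is an assumption about the lab setup, not taken from the original article; in a production 11.2 cluster the SCAN name would normally resolve through DNS to three addresses rather than a single one.

```
# Illustrative /etc/hosts for this lab (assumed, not from the original article)
# Public
10.0.1.101      host01.example.com       host01
10.0.1.102      host02.example.com       host02
10.0.1.103      host03.example.com       host03
# Virtual (VIP)
10.0.1.105      host01-vip.example.com   host01-vip
10.0.1.106      host02-vip.example.com   host02-vip
10.0.1.107      host03-vip.example.com   host03-vip
# Private interconnect
192.168.56.101  host01-priv
192.168.56.102  host02-priv
192.168.56.103  host03-priv
# SCAN (single address here; DNS resolving to three addresses is the usual practice)
10.0.1.201      cluster01-scan.example.com cluster01-scan
```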
http://docs.oracle.com/cd/E11882_01/rac.112/e41959/adddelclusterware.htm#CWADD90989
1. As root, check whether any node is pinned (ORACLE_HOME=/u01/app/11.2.0/grid):

```
[root@host01 ~]# olsnodes -s -t
host01  Active  Unpinned
host02  Active  Unpinned
host03  Active  Unpinned
```
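All three nodes report Unpinned here, so removal can proceed directly. If a node were reported as Pinned, it would need to be unpinned first; the 11.2 command for that is crsctl unpin css, for example:

```
# only needed if olsnodes had reported the node as Pinned
[root@host01 ~]# crsctl unpin css -n host03
```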
2. As root, on the node to be deleted (host03), run the deconfiguration script from the $ORACLE_HOME/crs/install directory (ORACLE_HOME=/u01/app/11.2.0/grid). Be absolutely sure which node you are on before running this: I intended to delete host03, but unfortunately ran the command on host01. The command takes effect the moment it is executed, so use it carefully and think twice first.

```
[root@host03 ~]# $ORACLE_HOME/crs/install/rootcrs.pl -deconfig -force
Using configuration parameter file: /u01/app/11.2.0/grid/crs/install/crsconfig_params
PRCR-1119 : Failed to look up CRS resources of ora.cluster_vip_net1.type type
PRCR-1068 : Failed to query resources
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.gsd is registered
Cannot communicate with crsd
PRCR-1070 : Failed to check if resource ora.ons is registered
Cannot communicate with crsd
CRS-4544: Unable to connect to OHAS
CRS-4000: Command Stop failed, or completed with errors.
Removing Trace File Analyzer
Successfully deconfigured Oracle clusterware stack on this node
```
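Given that mishap, a cheap safeguard (my suggestion, not part of the original procedure) is to confirm which host the shell is actually on immediately before firing the deconfig:

```
# sanity check: make sure this really is the node to be deleted
[root@host03 ~]# hostname
host03
```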
3. As root, on any node that has not been deleted, remove the node from the cluster:

```
[root@host02 ~]# crsctl delete node -n host03
CRS-4661: Node host03 successfully deleted.
```
4. Check whether the node has been removed from the cluster. Here you can see that host03 is completely gone:

```
[root@host02 ~]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.DATA.dg    ora....up.type ONLINE    ONLINE    host02
ora.FRA.dg     ora....up.type ONLINE    ONLINE    host02
ora....ER.lsnr ora....er.type ONLINE    ONLINE    host02
ora....N1.lsnr ora....er.type ONLINE    ONLINE    host02
ora....N2.lsnr ora....er.type ONLINE    ONLINE    host01
ora....N3.lsnr ora....er.type ONLINE    ONLINE    host02
ora.asm        ora.asm.type   ONLINE    ONLINE    host02
ora.cvu        ora.cvu.type   ONLINE    ONLINE    host02
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE
ora....SM2.asm application    ONLINE    ONLINE    host02
ora....02.lsnr application    ONLINE    ONLINE    host02
ora.host02.gsd application    OFFLINE   OFFLINE
ora.host02.ons application    ONLINE    ONLINE    host02
ora.host02.vip ora....t1.type ONLINE    ONLINE    host02
ora....SM3.asm application    ONLINE    ONLINE    host01
ora....01.lsnr application    ONLINE    ONLINE    host01
ora.host01.gsd application    OFFLINE   OFFLINE
ora.host01.ons application    ONLINE    ONLINE    host01
ora.host01.vip ora....t1.type ONLINE    ONLINE    host01
ora....network ora....rk.type ONLINE    ONLINE    host02
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    host01
ora.ons        ora.ons.type   ONLINE    ONLINE    host02
ora.racdb.db   ora....se.type OFFLINE   OFFLINE
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    host02
ora.scan2.vip  ora....ip.type ONLINE    ONLINE    host01
ora.scan3.vip  ora....ip.type ONLINE    ONLINE    host02
```
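As an aside, crs_stat is deprecated in 11.2; the same resource view is available through the supported crsctl interface:

```
# non-deprecated equivalent of crs_stat -t
[grid@host02 ~]$ crsctl stat res -t
```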
5. As the grid and oracle users, on the remaining nodes host01 and host02, update the inventory node list. Note that the first attempt below, which names only the deleted node, fails on a surviving node; per the referenced documentation that form is meant to be run on the node being deleted itself, while on the remaining nodes CLUSTER_NODES must list the nodes that are left:

```
[grid@host02 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={host03}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3957 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' failed.
[grid@host02 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={host02,host01}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3957 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
[oracle@host02 ~]$ $ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={host02,host01}"
Starting Oracle Universal Installer...

Checking swap space: must be greater than 500 MB.   Actual 3957 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /u01/app/oraInventory
'UpdateNodeList' was successful.
```
6. Verify that the node has been completely removed from the cluster, checking as both the grid and oracle users:

```
[grid@host02 ~]$ cluvfy stage -post nodedel -n host03 -verbose

Performing post-checks for node removal

Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "host02"
CRS integrity check passed

Result: Node removal check passed
Post-check for node removal was successful.
[oracle@host02 ~]$ cluvfy stage -post nodedel -n host03 -verbose

Performing post-checks for node removal

Checking CRS integrity...
Clusterware version consistency passed
The Oracle Clusterware is healthy on node "host01"
The Oracle Clusterware is healthy on node "host02"
CRS integrity check passed

Result: Node removal check passed
Post-check for node removal was successful.
```

Finally, the software left behind on the deleted host can be removed with $ORACLE_HOME/deinstall/deinstall -local, run by the oracle and grid users for their respective homes.
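For completeness, that invocation on the deleted host would look like the following (a sketch only; deinstall runs interactively and prompts for confirmation before removing anything):

```
# run on host03 as grid for the Grid home; repeat as oracle for the database home
[grid@host03 ~]$ $ORACLE_HOME/deinstall/deinstall -local
```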