In this article, we will walk through the step-by-step procedure to delete a node from a standard 12c cluster. A standard cluster in 12c is the same as the cluster we had in pre-12c releases. This procedure also holds good for RAC environments running release 11.2.0.2 and later.
- 1. About the environment
- 2. Removing a node
- a. Removing an Oracle database instance
- b. Removing RDBMS software
- c. Removing the node from the cluster
- d. Verification
- e. Removing remaining components
1. About the environment :
This environment is a three-node 12c RAC Grid Infrastructure with ASM as the common storage. The installation uses role separation, i.e. separate users for Grid and Oracle: Grid Infrastructure is installed under the grid user and the RDBMS under the oracle user. The three nodes in the clusterware are askmrac1/2/3 and the database instances are orcl1/2/3. In this demo, we are going to delete askmrac3 from the environment.
2. Removing a Node :
Before you start removing a node, make sure the following tasks are completed.
- Remove OEM DB Console if you have DB Console installed in your environment.
- Back up the OCR.
- If any services list the instance being deleted as a preferred or available instance, modify those services (see the sketch below).
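For reference, here is a minimal sketch of the OCR backup and the service modification, run from a surviving node. The service name app_svc and the instance lists are hypothetical placeholders; adjust them to your environment.
# Take a manual OCR backup (as root, from the Grid home)
/u01/app/12.1.0/grid_1/bin/ocrconfig -manualbackup
# List existing OCR backups to confirm
/u01/app/12.1.0/grid_1/bin/ocrconfig -showbackup
# Hypothetical service: drop orcl3 from its preferred instance list (as oracle)
srvctl modify service -d orcl -s app_svc -n -i orcl1,orcl2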
[root@askmrac1 ~]# su - grid
[grid@askmrac1 ~]$ olsnodes
askmrac1
askmrac2
askmrac3
[grid@askmrac1 ~]$ crsctl get cluster mode status
Cluster is running in "standard" mode
[grid@askmrac1 ~]$ srvctl config gns
PRKF-1110 : Neither GNS server nor GNS client is configured on this cluster
[grid@askmrac1 ~]$ oifcfg getif
eth0 192.168.1.0 global public
eth1 10.10.10.0 global cluster_interconnect
[grid@askmrac1 ~]$ crsctl get node role config
Node 'askmrac1' configured role is 'hub'
[grid@askmrac1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode disabled
[grid@askmrac1 ~]$ asmcmd showclusterstate
Normal
[grid@askmrac1 ~]$ srvctl status asm -detail
ASM is running on askmrac3,askmrac2,askmrac1
ASM is enabled.
[grid@askmrac1 ~]$ crsctl get node role config -all
Node 'askmrac1' configured role is 'hub'
Node 'askmrac2' configured role is 'hub'
Node 'askmrac3' configured role is 'hub'
[grid@askmrac1 ~]$ crsctl get node role status -all
Node 'askmrac1' active role is 'hub'
Node 'askmrac2' active role is 'hub'
Node 'askmrac3' active role is 'hub'
[grid@askmrac1 ~]$ clear
[grid@askmrac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE askmrac1 STABLE
ONLINE ONLINE askmrac2 STABLE
ONLINE ONLINE askmrac3 STABLE
ora.OCR_VOTE.dg
ONLINE ONLINE askmrac1 STABLE
ONLINE ONLINE askmrac2 STABLE
ONLINE ONLINE askmrac3 STABLE
ora.asm
ONLINE ONLINE askmrac1 Started,STABLE
ONLINE ONLINE askmrac2 Started,STABLE
ONLINE ONLINE askmrac3 Started,STABLE
ora.net1.network
ONLINE ONLINE askmrac1 STABLE
ONLINE ONLINE askmrac2 STABLE
ONLINE ONLINE askmrac3 STABLE
ora.ons
ONLINE ONLINE askmrac1 STABLE
ONLINE ONLINE askmrac2 STABLE
ONLINE ONLINE askmrac3 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE askmrac2 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE askmrac2 169.254.200.16 10.10
.10.232,STABLE
ora.askmrac1.vip
1 ONLINE ONLINE askmrac1 STABLE
ora.askmrac2.vip
1 ONLINE ONLINE askmrac2 STABLE
ora.askmrac3.vip
1 ONLINE ONLINE askmrac3 STABLE
ora.cvu
1 ONLINE ONLINE askmrac2 STABLE
ora.oc4j
1 ONLINE ONLINE askmrac3 STABLE
ora.orcl.db
1 ONLINE ONLINE askmrac1 Open,STABLE
2 ONLINE ONLINE askmrac2 Open,STABLE
3 ONLINE ONLINE askmrac3 Open,STABLE
ora.scan1.vip
1 ONLINE ONLINE askmrac2 STABLE
--------------------------------------------------------------------------------
[grid@askmrac1 ~]$ clear
[grid@askmrac1 ~]$ exit
logout
[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/olsnodes -s
askmrac1 Active
askmrac2 Active
askmrac3 Active
[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/crsctl query css votedisk
## STATE File Universal Id File Name Disk group
-- ----- ----------------- --------- ---------
1. ONLINE a6aeac6ed4274f91bfe7e12d4592e4f0 (/dev/xvdc1) [OCR_VOTE]
Located 1 voting disk(s).
[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 4
Total space (kbytes) : 409568
Used space (kbytes) : 7784
Available space (kbytes) : 401784
ID : 2080220530
Device/File Name : +OCR_VOTE
Device/File integrity check succeeded
Device/File not configured
Device/File not configured
Device/File not configured
Device/File not configured
Cluster registry integrity check succeeded
Logical corruption check succeeded
[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/srvctl status database -d orcl
Instance ORCL1 is running on node askmrac1
Instance ORCL2 is running on node askmrac2
Instance ORCL3 is running on node askmrac3
[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/srvctl config service -d ORCL
[root@askmrac1 ~]# /u01/app/12.1.0/grid_1/bin/srvctl status service -d orcl
2.a Removing an Oracle database instance :
As the Oracle software owner, run the Oracle Database Configuration Assistant (DBCA) in silent mode from a node that will remain in the cluster to remove the orcl3 instance from the existing cluster database. The instance that's being removed by DBCA must be up and running.
[root@askmrac1 ~]# su - oracle
[oracle@askmrac1 ~]$ which dbca
/u01/app/oracle/product/12.1.0/dbhome_1/bin/dbca
[oracle@askmrac1 ~]$ dbca -silent -deleteInstance -nodeList askmrac3 -gdbName ORCL -instanceName ORCL3 -sysDBAUserName sys -sysDBAPassword oracle
Deleting instance
1% complete
2% complete
6% complete
13% complete
20% complete
26% complete
33% complete
40% complete
46% complete
53% complete
60% complete
66% complete
Completing instance management.
100% complete
Look at the log file "/u01/app/oracle/cfgtoollogs/dbca/ORCL.log" for further details.
[oracle@askmrac1 ~]$ srvctl status database -d orcl
Instance ORCL1 is running on node askmrac1
Instance ORCL2 is running on node askmrac2
[oracle@askmrac1 ~]$ srvctl config database -d orcl -v
Database unique name: ORCL
Database name: ORCL
Oracle home: /u01/app/oracle/product/12.1.0/dbhome_1
Oracle user: oracle
Spfile: +OCR_VOTE/orcl/spfileorcl.ora
Password file:
Domain:
Start options: open
Stop options: immediate
Database role: PRIMARY
Management policy: AUTOMATIC
Server pools: ORCL
Database instances: ORCL1,ORCL2
Disk Groups: OCR_VOTE
Mount point paths:
Services:
Type: RAC
Start concurrency:
Stop concurrency:
Database is administrator managed
[oracle@askmrac1 ~]$ clear
[oracle@askmrac1 ~]$ sqlplus '/as sysdba'
SQL*Plus: Release 12.1.0.1.0 Production on Sat Mar 8 10:51:53 2014
Copyright (c) 1982, 2013, Oracle. All rights reserved.
Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> select inst_id, instance_name, status, to_char(startup_time,'DD-MON-YYYY HH24:MI:SS') as "START_TIME" from gv$instance order by inst_id;
INST_ID INSTANCE_NAME STATUS START_TIME
---------- ---------------- ------------ --------------------
1 ORCL1 OPEN 08-MAR-2014 07:25:23
2 ORCL2 OPEN 08-MAR-2014 07:25:26
Check whether the redo log thread and UNDO tablespace of the deleted instance have been removed (in my example, they were removed successfully). If not, remove them manually.
SQL> select thread# from v$thread where instance='ORCL';
no rows selected
SQL> select thread# from v$thread where upper(instance) = upper('orcl');
no rows selected
SQL> select group# from v$log where thread# = 3;
no rows selected
SQL> select member from v$logfile ;
MEMBER
--------------------------------------------------------------------------------
+OCR_VOTE/orcl/onlinelog/group_2.262.841375503
+OCR_VOTE/orcl/onlinelog/group_1.261.841375497
+OCR_VOTE/orcl/onlinelog/group_3.268.841375693
+OCR_VOTE/orcl/onlinelog/group_4.269.841375695
SQL> exit
Disconnected from Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
[oracle@askmrac1 ~]$ clear
[oracle@askmrac1 ~]$ srvctl config listener -a
Name: LISTENER
Network: 1, Owner: grid
Home: <CRS home>
/u01/app/12.1.0/grid_1 on node(s) askmrac3,askmrac2,askmrac1
End points: TCP:1521
[oracle@askmrac1 ~]$ ssh 192.168.1.233
Last login: Sat Mar 8 06:17:06 2014 from askmrac1.localdomain
[oracle@askmrac3 ~]$ exit
If you find any redo log or undo references of the deleted instance left in the database, use commands like the following to remove them (adjust the thread number, log file groups, and tablespace name to match your environment).
alter database disable thread 3;
alter database drop logfile group 5;
alter database drop logfile group 6;
drop tablespace undotbs3 including contents and datafiles;
alter system reset undo_tablespace scope=spfile sid = 'orcl3';
alter system reset instance_number scope=spfile sid = 'orcl3';
2.b Removing RDBMS software :
On the node that is to be deleted from the cluster, run the following command as the oracle user to update the node list in its local inventory.
[root@askmrac3 ~]# su - oracle
[oracle@askmrac3 ~]$ echo $ORACLE_HOME
/u01/app/oracle/product/12.1.0/dbhome_1
[oracle@askmrac3 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@askmrac3 bin]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1/oui/bin
[oracle@askmrac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={askmrac3}" -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2044 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@askmrac3 bin]$
Now run the following command on node 3 to deinstall the Oracle home from that node.
[oracle@askmrac3 ~]$ cd $ORACLE_HOME/deinstall
[oracle@askmrac3 deinstall]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1/deinstall
[oracle@askmrac3 deinstall]$ ./deinstall -local
On any node that remains in the cluster, run the following command to update the inventory with the remaining nodes.
[oracle@askmrac1 ~]$ cd $ORACLE_HOME/oui/bin
[oracle@askmrac1 bin]$ pwd
/u01/app/oracle/product/12.1.0/dbhome_1/oui/bin
[oracle@askmrac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=$ORACLE_HOME "CLUSTER_NODES={askmrac1,askmrac2}"
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2038 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[oracle@askmrac1 bin]$
Now verify the inventory and make sure that node 3 has been removed from the database home's node list.
On askmrac1 or askmrac2 :
[oracle@askmrac1 bin]$ cd /u01/app/oraInventory/ContentsXML/
[oracle@askmrac1 ContentsXML]$ ls
comps.xml inventory.xml libs.xml
[oracle@askmrac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>12.1.0.1.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGrid11gR2" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraRAC11gR2" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0/grid_1" TYPE="O" IDX="3" CRS="true">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="4">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
</NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@askmrac1 ContentsXML]$
On askmrac3 :
[oracle@askmrac3 bin]$ cd /u01/app/oraInventory/ContentsXML
[oracle@askmrac3 ContentsXML]$ ls
comps.xml inventory.xml libs.xml
[oracle@askmrac3 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>12.1.0.1.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGrid11gR2" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraRAC11gR2" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0/grid_1" TYPE="O" IDX="3" CRS="true">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="4">
<NODE_LIST>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[oracle@askmrac3 ContentsXML]$
2.c Removing Node From Cluster :
Run the following command as root to determine whether the node you want to delete is active and whether it is pinned.
[root@askmrac1 ~]# export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
[root@askmrac1 ~]# export GRID_HOME=/u01/app/12.1.0/grid_1
[root@askmrac1 ~]# $GRID_HOME/bin/olsnodes -s -t
askmrac1 Active Unpinned
askmrac2 Active Unpinned
askmrac3 Active Unpinned
[root@askmrac1 ~]#
[root@askmrac3 ~]# export ORACLE_HOME=/u01/app/oracle/product/12.1.0/dbhome_1
[root@askmrac3 ~]# export GRID_HOME=/u01/app/12.1.0/grid_1
[root@askmrac3 ~]# $GRID_HOME/bin/olsnodes -s -t
askmrac1 Active Unpinned
askmrac2 Active Unpinned
askmrac3 Active Unpinned
[root@askmrac3 ~]#
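All nodes report Unpinned here, so no extra step is needed. If the node being deleted were pinned (for example, to support a pre-11.2 database), you would first unpin it as root from a surviving node; a minimal sketch:
[root@askmrac1 ~]# $GRID_HOME/bin/crsctl unpin css -n askmrac3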
Disable the Oracle Clusterware applications and daemons running on the node to be deleted from the cluster. Run the rootcrs.pl script as root from the Grid_home/crs/install directory on the node to be deleted.
[root@askmrac3 ~]# cd $GRID_HOME/crs/install
[root@askmrac3 install]# ./rootcrs.pl -deconfig -force
Using configuration parameter file: ./crsconfig_params
Network 1 exists
Subnet IPv4: 192.168.1.0/255.255.255.0/eth0, static
Subnet IPv6:
VIP exists: network number 1, hosting node askmrac1
VIP Name: askmrac1-vip
VIP IPv4 Address: 192.168.1.234
VIP IPv6 Address:
VIP exists: network number 1, hosting node askmrac2
VIP Name: askmrac2-vip
VIP IPv4 Address: 192.168.1.235
VIP IPv6 Address:
VIP exists: network number 1, hosting node askmrac3
VIP Name: askmrac3-vip
VIP IPv4 Address: 192.168.1.236
VIP IPv6 Address:
ONS exists: Local port 6100, remote port 6200, EM port 2016
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'askmrac3'
CRS-2673: Attempting to stop 'ora.crsd' on 'askmrac3'
CRS-2790: Starting shutdown of Cluster Ready Services-managed resources on 'askmrac3'
CRS-2673: Attempting to stop 'ora.OCR_VOTE.dg' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.oc4j' on 'askmrac3'
CRS-2677: Stop of 'ora.oc4j' on 'askmrac3' succeeded
CRS-2672: Attempting to start 'ora.oc4j' on 'askmrac1'
CRS-2677: Stop of 'ora.OCR_VOTE.dg' on 'askmrac3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'askmrac3'
CRS-2677: Stop of 'ora.asm' on 'askmrac3' succeeded
CRS-2676: Start of 'ora.oc4j' on 'askmrac1' succeeded
CRS-2792: Shutdown of Cluster Ready Services-managed resources on 'askmrac3' has completed
CRS-2677: Stop of 'ora.crsd' on 'askmrac3' succeeded
CRS-2673: Attempting to stop 'ora.storage' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.evmd' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.mdnsd' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.ctssd' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.crf' on 'askmrac3'
CRS-2673: Attempting to stop 'ora.gpnpd' on 'askmrac3'
CRS-2677: Stop of 'ora.storage' on 'askmrac3' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'askmrac3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'askmrac3' succeeded
CRS-2677: Stop of 'ora.crf' on 'askmrac3' succeeded
CRS-2677: Stop of 'ora.gpnpd' on 'askmrac3' succeeded
CRS-2677: Stop of 'ora.mdnsd' on 'askmrac3' succeeded
CRS-2677: Stop of 'ora.evmd' on 'askmrac3' succeeded
CRS-2677: Stop of 'ora.ctssd' on 'askmrac3' succeeded
CRS-2677: Stop of 'ora.asm' on 'askmrac3' succeeded
CRS-2673: Attempting to stop 'ora.cluster_interconnect.haip' on 'askmrac3'
CRS-2677: Stop of 'ora.cluster_interconnect.haip' on 'askmrac3' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'askmrac3'
CRS-2677: Stop of 'ora.cssd' on 'askmrac3' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'askmrac3'
CRS-2677: Stop of 'ora.gipcd' on 'askmrac3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'askmrac3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
2014/03/08 11:22:19 CLSRSC-336: Successfully deconfigured Oracle clusterware stack on this node
From a node that is to remain a member of the Oracle RAC, run the following command from the Grid_home/bin directory as root to update the Clusterware configuration to delete the node from the cluster.
[root@askmrac1 ~]# $GRID_HOME/bin/crsctl delete node -n askmrac3
CRS-4661: Node askmrac3 successfully deleted.
[root@askmrac1 ~]# $GRID_HOME/bin/olsnodes -s -t
askmrac1 Active Unpinned
askmrac2 Active Unpinned
[root@askmrac1 ~]#
As the Oracle Grid Infrastructure owner, execute runInstaller from Grid_home/oui/bin on the node being removed to update the inventory.
[grid@askmrac3 ~]$ cd $ORACLE_HOME/oui/bin
[grid@askmrac3 bin]$ pwd
/u01/app/12.1.0/grid_1/oui/bin
[grid@askmrac3 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid_1 "CLUSTER_NODES={askmrac3}" CRS=TRUE -silent -local
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2047 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[grid@askmrac3 bin]$
Run deinstall as the Grid Infrastructure software owner from the node to be removed in order to delete the Oracle Grid Infrastructure software.
Take extra care while responding to the prompts: when supplying listener values, give only the local listener and do not specify the SCAN listener for deletion.
On node 3 (as the Grid software owner):
[grid@askmrac3 ~]$ cd /u01/app/12.1.0/grid_1/deinstall/
[grid@askmrac3 deinstall]$ pwd
/u01/app/12.1.0/grid_1/deinstall
[grid@askmrac3 deinstall]$ ./deinstall -local
Checking for required files and bootstrapping ...
Please wait ...
Location of logs /u01/app/oraInventory/logs/
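The deinstall session is interactive and its full output is omitted here. If you would like to preview what the tool will remove before committing, deinstall supports a check-only run; this is a hedged sketch, so verify the flag with ./deinstall -help on your version:
[grid@askmrac3 deinstall]$ ./deinstall -local -checkonly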
From one of the nodes that is to remain part of the cluster, execute runInstaller (without the -local option) as the Grid Infrastructure software owner to update the inventories with a list of the nodes that are to remain in the cluster.
On Node1 or Node2:
[grid@askmrac1 ~]$ cd $ORACLE_HOME/oui/bin
[grid@askmrac1 bin]$ ./runInstaller -updateNodeList ORACLE_HOME=/u01/app/12.1.0/grid_1 "CLUSTER_NODES={askmrac1,askmrac2}" CRS=TRUE -silent
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 2038 MB Passed
The inventory pointer is located at /etc/oraInst.loc
'UpdateNodeList' was successful.
[grid@askmrac1 bin]$
2.d Verification :
[grid@askmrac1 bin]$ cd /u01/app/oraInventory/ContentsXML/
[grid@askmrac1 ContentsXML]$ clear
[grid@askmrac1 ContentsXML]$ ls
comps.xml inventory.xml libs.xml
[grid@askmrac1 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>12.1.0.1.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGrid11gR2" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraRAC11gR2" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0/grid_1" TYPE="O" IDX="3" CRS="true">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="4">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
</NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[grid@askmrac1 ContentsXML]$ clear
[grid@askmrac1 ContentsXML]$ cd
[grid@askmrac1 ~]$ which cluvfy
/u01/app/12.1.0/grid_1/bin/cluvfy
[grid@askmrac1 ~]$ cluvfy stage -post nodedel -n askmrac3 -verbose
Performing post-checks for node removal
Checking CRS integrity...
The Oracle Clusterware is healthy on node "askmrac1"
CRS integrity check passed
Clusterware version consistency passed.
Result:
Node removal check passed
Post-check for node removal was successful.
[grid@askmrac1 ~]$ olsnodes -s -t
askmrac1 Active Unpinned
askmrac2 Active Unpinned
[grid@askmrac1 ~]$ crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.LISTENER.lsnr
ONLINE ONLINE askmrac1 STABLE
ONLINE ONLINE askmrac2 STABLE
ora.OCR_VOTE.dg
ONLINE ONLINE askmrac1 STABLE
ONLINE ONLINE askmrac2 STABLE
ora.asm
ONLINE ONLINE askmrac1 Started,STABLE
ONLINE ONLINE askmrac2 Started,STABLE
ora.net1.network
ONLINE ONLINE askmrac1 STABLE
ONLINE ONLINE askmrac2 STABLE
ora.ons
ONLINE ONLINE askmrac1 STABLE
ONLINE ONLINE askmrac2 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE askmrac2 STABLE
ora.MGMTLSNR
1 ONLINE ONLINE askmrac2 169.254.200.16 10.10
.10.232,STABLE
ora.askmrac1.vip
1 ONLINE ONLINE askmrac1 STABLE
ora.askmrac2.vip
1 ONLINE ONLINE askmrac2 STABLE
ora.cvu
1 ONLINE ONLINE askmrac2 STABLE
ora.oc4j
1 ONLINE ONLINE askmrac1 STABLE
ora.orcl.db
1 ONLINE ONLINE askmrac1 Open,STABLE
2 ONLINE ONLINE askmrac2 Open,STABLE
ora.scan1.vip
1 ONLINE ONLINE askmrac2 STABLE
--------------------------------------------------------------------------------
[grid@askmrac1 ~]$ clear
[grid@askmrac1 ~]$ crsctl status res -t | grep -i askmrac3
[grid@askmrac1 ~]$
On Node 3:
[grid@askmrac3 bin]$ cd /u01/app/oraInventory/ContentsXML/
[grid@askmrac3 ContentsXML]$ ls
comps.xml inventory.xml libs.xml
[grid@askmrac3 ContentsXML]$ clear
[grid@askmrac3 ContentsXML]$ cat inventory.xml
<?xml version="1.0" standalone="yes" ?>
<!-- Copyright (c) 1999, 2013, Oracle and/or its affiliates.
All rights reserved. -->
<!-- Do not modify the contents of this file by hand. -->
<INVENTORY>
<VERSION_INFO>
<SAVED_WITH>12.1.0.1.0</SAVED_WITH>
<MINIMUM_VER>2.1.0.6.0</MINIMUM_VER>
</VERSION_INFO>
<HOME_LIST>
<HOME NAME="OraGrid11gR2" LOC="/u01/app/11.2.0/grid" TYPE="O" IDX="1">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraRAC11gR2" LOC="/u01/app/oracle/product/11.2.0/dbhome_1" TYPE="O" IDX="2">
<NODE_LIST>
<NODE NAME="askmrac1"/>
<NODE NAME="askmrac2"/>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraGI12Home1" LOC="/u01/app/12.1.0/grid_1" TYPE="O" IDX="3" CRS="true">
<NODE_LIST>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
<HOME NAME="OraDB12Home1" LOC="/u01/app/oracle/product/12.1.0/dbhome_1" TYPE="O" IDX="4">
<NODE_LIST>
<NODE NAME="askmrac3"/>
</NODE_LIST>
</HOME>
</HOME_LIST>
<COMPOSITEHOME_LIST>
</COMPOSITEHOME_LIST>
</INVENTORY>
[grid@askmrac3 ContentsXML]$
2.e Removing remaining components :
- Remove ASMLib if you are using ASMLib for ASM storage.
- Remove udev rules if you are using udev rules for ASM storage.
- Remove the oracle and grid users along with their corresponding groups (a cleanup sketch follows this list).
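Below is a minimal cleanup sketch for the deleted node, run as root. The udev rules file name, the ASMLib package names, and the group names are typical choices, not taken from this environment; adjust them before use, and remove /u01/app only if no other Oracle software remains on the node.
# If ASMLib is in use, stop it and remove its packages (names vary by release)
service oracleasm stop
rpm -e oracleasmlib oracleasm-support
# Remove the udev rules file used for ASM disks (hypothetical file name)
rm -f /etc/udev/rules.d/99-oracle-asmdevices.rules
# Remove the OS users along with their home directories
userdel -r oracle
userdel -r grid
# Remove the corresponding groups (typical names)
groupdel dba
groupdel asmadmin
groupdel asmdba
groupdel oinstall
# Remove leftover Oracle directories
rm -rf /u01/app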
Hope this helps
SRI