Written By askMLabs on Tuesday, April 1, 2014 | 12:57 PM
In this document, we explain, step by step, how to duplicate a database from a previous incarnation without connecting to the target database.
I have seen many articles about this on Google, but none with a practical approach. Many articles describe duplicating a database without connecting to the target database, but they assume the backups are copied to the auxiliary database server. In this article, I will show the practical approach: how a backup that lives on TSM should be configured so that it is available to the auxiliary server.
1. Environment :

Environment   Database  Host                 Version  TSM configuration  RMAN Catalog
TARGET        CRPROD    dbprl.askmlabs.com   11gR2    tsm5               rcatprod
AUXILIARY     CRPERF    dbrfl.askmlabs.com   11gR2    tsm6               rcatdev
Please note that the target environment is the source we duplicate from, and the auxiliary database is the database to be created from the target database backups.
Note the complexity of the environment: the target and auxiliary databases are configured against different TSM servers. The target database backups are available on tsm5, and we need to present these backups to the dbrfl server.
2. Task :
We need to create a new duplicate database CRPERF from the backups of CRPROD, as of a point in time before CRPROD was opened with RESETLOGS, i.e. we need to duplicate CRPROD to CRPERF from the parent incarnation of CRPROD. There are different approaches to this task. In this document, we use "duplicating a database without connecting to the target database" as described in the Oracle documentation here.
RMAN> list incarnation of database crprod;

List of Database Incarnations
DB Key   Inc Key    DB Name  DB ID       STATUS   Reset SCN      Reset Time
-------  ---------  -------  ----------  -------  -------------  ----------
2721859  233687610  CRPROD   1190017710  CURRENT  2321471232053  21-MAR-14
CRPROD was opened with RESETLOGS on 21-MAR-2014, which started a new incarnation. Our aim is to restore and recover CRPROD as of 13-MAR-2014. That time is not in the current incarnation; it is in the parent incarnation. We get the following error if we try to duplicate the database while connected to the target database:
RMAN-06004: ORACLE error from recovery catalog
database: RMAN-20207: UNTIL TIME or RECOVERY WINDOW is before RESETLOGS time
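With the backups visible to the auxiliary host, the targetless form of DUPLICATE avoids this error, because RMAN resolves the incarnation from the recovery catalog instead of the mounted target. A sketch of the command shapes (the catalog connect string and password are placeholders, not values from this environment):

```
rman AUXILIARY / CATALOG <rcat_user>/<password>@<rcatprod_tns>

RMAN> DUPLICATE DATABASE CRPROD DBID 1190017710 TO CRPERF
      UNTIL TIME "TO_DATE('13-MAR-2014','DD-MON-YYYY')";
```

The DBID clause disambiguates the source database in the catalog, and UNTIL TIME drives the restore back into the parent incarnation.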
3. Procedure :
3.1 Prepare auxiliary environment :
Calculate the space requirements and make sure you have enough space available for the duplicate database.
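One rough way to estimate the space needed (a sketch; run this against the source CRPROD, and budget for temp files and redo on top):

```
SQL> SELECT ROUND(SUM(bytes)/1024/1024/1024) AS datafile_gb FROM v$datafile;
```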
Prepare an init.ora file for the auxiliary database, and make sure you include the following two parameters in it.
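The original post does not reproduce the two parameters. For a duplicate to a new database name, the auxiliary init.ora typically carries at least the new db_name plus file-name conversion parameters; the paths below are purely hypothetical:

```ini
# Minimal auxiliary init.ora sketch (illustrative, not the original post's file)
db_name=CRPERF
db_file_name_convert='/u01/oradata/CRPROD','/u01/oradata/CRPERF'
log_file_name_convert='/u01/oradata/CRPROD','/u01/oradata/CRPERF'
```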
The TSM backups for production are available on tsm5, whereas the CRPERF environment is configured to keep its backups on tsm6. So we need the following configuration to make the production backups on tsm5 available to the CRPERF server.
Create a temporary directory on dbrfl.askmlabs.com to hold all the tsm5 configuration files.
mkdir $HOME/dup_perf
Copy the following files from the server dbprl.askmlabs.com to the directory created above:
dsm.opt.tsm5 (this configuration file specifies whether TSM backups go to tsm5 or tsm6)
CRPROD_tdpo.opt (TSM tape configuration file used to connect to tsm5)
TDPO.tdpdbprl (password file from production, dbprl.askmlabs.com)
Now create the following symlinks to point to these configuration files.
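The symlink names were not shown in the original post. The sketch below is an assumption based on the standard TSM / TDP for Oracle client file names (dsm.opt, tdpo.opt); adjust the link names to whatever paths your tdpo.opt and media-management configuration actually read.

```shell
#!/bin/sh
# Sketch only: link generic TSM client names to the tsm5 copies staged
# in $HOME/dup_perf. Link names are illustrative assumptions.
DUP_DIR="$HOME/dup_perf"
mkdir -p "$DUP_DIR"
ln -sf "$DUP_DIR/dsm.opt.tsm5"    "$DUP_DIR/dsm.opt"
ln -sf "$DUP_DIR/CRPROD_tdpo.opt" "$DUP_DIR/tdpo.opt"
ln -sf "$DUP_DIR/TDPO.tdpdbprl"   "$DUP_DIR/TDPO"
ls -l "$DUP_DIR"
```

With links like these in place, the RMAN channel on the auxiliary host can be pointed at the tsm5 configuration rather than the default tsm6 one.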
Written By askMLabs on Sunday, March 16, 2014 | 2:25 PM
In this article, we will see how to apply a PSU patch to a flex cluster. I have a 12c RAC flex cluster installed with 4 nodes: 3 hub nodes and 1 leaf node. Environment Details :
RAC Version    12c (12.1.0.1.0)
Cluster Type   Flex Cluster
Hub Nodes      rac12cnode1/2/3
Leaf Nodes     rac12cnode4
DB Running on  All Hub Nodes
Task           Applying JAN2014 PSU Patch
Environment Configuration :

Type             Path                                     Owner   Version   Shared
Grid Infra Home  /u01/app/12.1.0/grid                     Grid    12.1.0.1  False
Database Home    /u01/app/oracle/product/12.1.0/dbhome_1  Oracle  12.1.0.1  False
[root@rac12cnode1 ~]# su - grid
[grid@rac12cnode1 ~]$ olsnodes -s -t
rac12cnode1 Active Unpinned
rac12cnode2 Active Unpinned
rac12cnode3 Active Unpinned
rac12cnode4 Active Unpinned
[grid@rac12cnode1 ~]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
Node 'rac12cnode3' configured role is 'hub'
Node 'rac12cnode4' configured role is 'leaf'
[grid@rac12cnode1 ~]$ crsctl get node role status -all
Node 'rac12cnode1' active role is 'hub'
Node 'rac12cnode2' active role is 'hub'
Node 'rac12cnode3' active role is 'hub'
Node 'rac12cnode4' active role is 'leaf'
[grid@rac12cnode1 ~]$
Step By Step
1. Download and Unzip the Latest OPatch to all cluster nodes
2. Validate and Record Pre-Patch information
3. Create OCM response file if it does not exist
4. Download and Unzip the JAN2014 PSU patch for GI 12c ie 12.1.0.1.2
5. One-off patch Conflicts detection and Resolution
6. Patch application
7. Verification of Patch application
8. Issues and Resolutions
1. Download and Unzip the Latest OPatch to all cluster nodes:
You must use OPatch utility version 12.1.0.1.1 or later to apply this patch. Oracle recommends that you use the latest released OPatch for 12.1 releases, which is available for download from My Oracle Support patch 6880880.
2. Validate and Record Pre-Patch information :
Validate using the following commands :
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/12.1.0/grid/
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.1.0/dbhome_1
Login to each node in RAC as grid user and execute the following command.
$GRID_ORACLE_HOME/OPatch/opatch lsinventory
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'
Login to each node in RAC as oracle user and execute the following command.
$ORACLE_HOME/OPatch/opatch lsinventory
$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'
Connect to each instance and record registry information.
SQL> select comp_name,version,status from dba_registry;
[grid@rac12cnode1 bin]$ opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/12.1.0/grid
Central Inventory : /u01/app/oraInventory
from : /u01/app/12.1.0/grid/oraInst.loc
OPatch version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Log file location : /u01/app/12.1.0/grid/cfgtoollogs/opatch/opatch2014-03-13_02-41-32AM_1.log
Lsinventory Output file location : /u01/app/12.1.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2014-03-13_02-41-32AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Grid Infrastructure 12c 12.1.0.1.0
There are 1 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
Patch level status of Cluster nodes :
Patching Level Nodes
-------------- -----
0 rac12cnode4,rac12cnode1,rac12cnode2,rac12cnode3
--------------------------------------------------------------------------------
OPatch succeeded.
[grid@rac12cnode1 bin]$
3. Create OCM Response File If It Does Not Exist :
Create ocm response file using the following command and provide appropriate values for the prompts.
NOTE: The OPatch utility will prompt for your OCM (Oracle Configuration Manager) response file when it is run; without it we cannot proceed further.
[root@rac12cnode1 ~]# su - grid
[grid@rac12cnode1 ~]$ cd $ORACLE_HOME/OPatch/ocm/bin
[grid@rac12cnode1 bin]$ ls
emocmrsp
[grid@rac12cnode1 bin]$ ./emocmrsp
OCM Installation Response Generator 10.3.7.0.0 - Production
Copyright (c) 2005, 2012, Oracle and/or its affiliates. All rights reserved.
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:
You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y
The OCM configuration response file (ocm.rsp) was successfully created.
[grid@rac12cnode1 bin]$ ls
emocmrsp ocm.rsp
[grid@rac12cnode1 bin]$ ls -lrt
total 16
-rwxr-----. 1 grid oinstall 9063 Nov 27 2009 emocmrsp
-rw-r--r--. 1 grid oinstall 621 Mar 13 02:38 ocm.rsp
[grid@rac12cnode1 bin]$ chmod 777 ocm.rsp
[grid@rac12cnode1 bin]$
Copy this response file ocm.rsp to all the nodes in the cluster to the same location. Or you can create a new response file on each node using the same method above.
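A dry-run sketch of distributing the response file: it prints the copy commands for the remaining nodes instead of executing them. For a real run, drop the "echo" (and note it assumes the usual passwordless SSH user equivalence for the grid user, which RAC requires anyway).

```shell
#!/bin/sh
# Dry run: print one scp command per remaining cluster node.
RSP=/u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp
for node in rac12cnode2 rac12cnode3 rac12cnode4; do
  echo scp "$RSP" "grid@${node}:${RSP%/*}/"
done
```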
4. Download and Unzip the JAN2014 PSU patch : (as grid user)
Patch 17735306 is the JAN2014 PSU patch. It is downloaded and unzipped to the location "/mnt/software/RAC/1201_PSU/JAN2014_PSU/"
5. One-off patch Conflicts detection and Resolution :
Determine whether any currently installed one-off patches conflict with the PSU patch.
But I don't have any patches applied to my home yet, so I can ignore this step. If you do have conflicts identified in the GI home or the DB home, follow MOS note 1061295.1 to resolve them.
6. Patch Application :
While applying the patch to a flex cluster, please note the following points:
a. The first node must be a hub node. The last node can be either a hub or a leaf node.
b. Make sure the GI stack is up on the first hub node and at least one other hub node.
c. Make sure the stack is up on all the leaf nodes.
d. The opatchauto command will restart the stack on the local node and restart the databases on the local node.
Now the patch can be applied using the following syntax:
[root@rac12cnode1 ~]# export PATH=/u01/app/12.1.0/grid/OPatch:$PATH
[root@rac12cnode1 ~]# which opatchauto
/u01/app/12.1.0/grid/OPatch/opatchauto
[root@rac12cnode1 ~]# opatchauto apply /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306 -ocmrf /u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation. All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Running from : /u01/app/12.1.0/grid
opatchauto log file: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/17735306/opatch_gi_2014-03-13_02-55-54_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid
RAC home(s):
/u01/app/oracle/product/12.1.0/dbhome_1
Configuration Validation: Successful
Patch Location: /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306
Grid Infrastructure Patch(es): 17077442 17303297 17552800
RAC Patch(es): 17077442 17552800
Patch Validation: Successful
Stopping RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: orcl
Applying patch(es) to "/u01/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442" successfully applied to "/u01/app/oracle/product/12.1.0/dbhome_1".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17552800" successfully applied to "/u01/app/oracle/product/12.1.0/dbhome_1".
Stopping CRS ... Successful
Applying patch(es) to "/u01/app/12.1.0/grid" ...
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442" successfully applied to "/u01/app/12.1.0/grid".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17303297" successfully applied to "/u01/app/12.1.0/grid".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17552800" successfully applied to "/u01/app/12.1.0/grid".
Starting CRS ... Successful
Starting RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
[WARNING] SQL changes, if any, could not be applied on the following database(s): ORCL ... Please refer to the log file for more details.
Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/12.1.0/grid: 17077442, 17303297, 17552800
RAC Home: /u01/app/oracle/product/12.1.0/dbhome_1: 17077442, 17552800
opatchauto succeeded.
[root@rac12cnode1 ~]#
While the patch was being applied to node 1 (rac12cnode1), the services running on it were relocated to the other nodes in the cluster.
Repeat the above step to all the hub nodes in the flex cluster.
On Leaf Node :
The leaf node is not connected to storage directly, and no Oracle database instance runs on it, so opatchauto applies the patch only to the grid home. You can see the difference in the session log below from node 4 (rac12cnode4), the leaf node in my flex cluster.
[root@rac12cnode4 ~]# export PATH=/u01/app/12.1.0/grid/OPatch:$PATH
[root@rac12cnode4 ~]# which opatchauto
/u01/app/12.1.0/grid/OPatch/opatchauto
[root@rac12cnode4 ~]# opatchauto apply /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306 -ocmrf /u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation. All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Running from : /u01/app/12.1.0/grid
opatchauto log file: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/17735306/opatch_gi_2014-03-13_10-00-11_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid
Configuration Validation: Successful
Patch Location: /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306
Grid Infrastructure Patch(es): 17077442 17303297 17552800
RAC Patch(es): 17077442 17552800
Patch Validation: Successful
Stopping CRS ... Successful
Applying patch(es) to "/u01/app/12.1.0/grid" ...
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442" successfully applied to "/u01/app/12.1.0/grid".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17303297" successfully applied to "/u01/app/12.1.0/grid".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17552800" successfully applied to "/u01/app/12.1.0/grid".
Starting CRS ... Successful
Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/12.1.0/grid: 17077442, 17303297, 17552800
opatchauto succeeded.
[root@rac12cnode4 ~]#
NOTE: Compare the hub-node and leaf-node sessions above to see the difference in what gets patched on each node type.
7. Verification of Patch Application :
Login to each node in RAC as grid user and execute the following command.
$GRID_ORACLE_HOME/OPatch/opatch lsinventory
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'
Login to each node in RAC as oracle user and execute the following command.
$ORACLE_HOME/OPatch/opatch lsinventory
$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'
Connect to each instance and verify registry information.
SQL> select comp_name,version,status from dba_registry;
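In 12c you can also cross-check the SQL-level patch state in each database, in addition to dba_registry; dba_registry_sqlpatch is the standard view for this:

```
SQL> select patch_id, version, action, status from dba_registry_sqlpatch;
```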
8. Issues and Resolution :
Issue 1:
While applying the patch, I got an error saying "OUI-67124:Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'..."
Content From Logs:
ERROR:
UtilSession failed: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
UtilSession failed: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
'
Log file Location for the failed command: /u01/app/12.1.0/grid/cfgtoollogs/opatch/opatch2014-03-13_04-41-37AM_1.log
==
[Mar 13, 2014 4:53:26 AM] The following actions have failed:
[Mar 13, 2014 4:53:26 AM] OUI-67124:Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
[Mar 13, 2014 4:53:26 AM] Do you want to proceed? [y|n]
[Mar 13, 2014 4:53:29 AM] N (auto-answered by -silent)
[Mar 13, 2014 4:53:29 AM] User Responded with: N
[Mar 13, 2014 4:53:29 AM] OUI-67124:ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
'
[Mar 13, 2014 4:53:29 AM] Restoring "/u01/app/12.1.0/grid" to the state prior to running NApply...
[Mar 13, 2014 5:10:09 AM] Checking if OPatch needs to invoke 'make' to restore some binaries...
[Mar 13, 2014 5:10:09 AM] OPatch was able to restore your system. Look at log file and timestamp of each file to make sure your system is in the state prior to applying the patch.
[Mar 13, 2014 5:10:09 AM] OUI-67124:
NApply restored the home. Please check your ORACLE_HOME to make sure:
- files are restored properly.
- binaries are re-linked correctly.
(use restore.[sh,bat] and make.txt (Unix only) as a reference. They are located under
[Mar 13, 2014 5:10:10 AM] The following warnings have occurred during OPatch execution:
[Mar 13, 2014 5:10:10 AM] 1) OUI-67124:Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
[Mar 13, 2014 5:10:10 AM] 2) OUI-67124:ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
'
[Mar 13, 2014 5:10:10 AM] 3) OUI-67124:
NApply restored the home. Please check your ORACLE_HOME to make sure:
- files are restored properly.
- binaries are re-linked correctly.
(use restore.[sh,bat] and make.txt (Unix only) as a reference. They are located under
[Mar 13, 2014 5:10:10 AM] Stack Description: java.lang.RuntimeException: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
Resolution :
I verified permissions, and there was no issue accessing or copying the file manually to the target location. I then re-executed the command without any modification, and it completed fine. This may be specific to my environment, since the PSU patch was located on an NFS file system. But if you hit the same issue, it may be a common, intermittent one across environments.
Logfile Location For PSU Patch :
/u01/app/12.1.0/grid/cfgtoollogs/opatch/
/u01/app/12.1.0/grid/cfgtoollogs/opatchauto/
Written By askMLabs on Saturday, March 15, 2014 | 11:57 PM
In this article, we will see how to clone a RAC environment to extend an existing cluster. Cloning can also be used to prepare a new cluster environment, but here we extend an existing cluster using the clone method. A cluster can also be extended using the addnode method; cloning is a different way to achieve the same goal.
Environment Details :

RAC Version    12c (12.1.0.1.0)
Cluster Type   Flex Cluster
Hub Nodes      rac12cnode1/2
Leaf Nodes     None
DB Running on  All Hub Nodes
Task           Clone cluster to extend from 2 nodes to 4 nodes
[grid@rac12cnode1 ~]$ olsnodes -s -t
rac12cnode1 Active Unpinned
rac12cnode2 Active Unpinned
[grid@rac12cnode1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode
[grid@rac12cnode1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
[grid@rac12cnode1 ~]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
[grid@rac12cnode1 ~]$
Step By Step :
1. Prepare the new cluster nodes
2. Prepare the existing cluster nodes
3. Deploy the Grid Infrastructure to new cluster nodes
4. Run clone.pl on all the cluster nodes.
5. Run orainstRoot.sh script on all target cluster nodes.
6. Execute addnode.sh in silent mode to add hub node and leaf node
7. Copy config files from source system to all target cluster nodes
8. Run root.sh on all the target nodes to configure cluster.
9. Verify the cloned cluster with cluvfy
1. Prepare the new cluster nodes:
Our task is to add two nodes to the existing flex cluster using the clone method. First, we need to prepare the nodes that are to be added to the cluster.
The following points are taken from the documentation, but you can also follow my other articles/videos on RAC to prepare nodes for cluster addition.
On each destination node, perform the following preinstallation steps:
Specify the kernel parameters
Configure block devices for Oracle Clusterware devices
Ensure that you have set the block device permissions correctly
Use short, nondomain-qualified names for all of the names in the /etc/hosts file
Test whether the interconnect interfaces are reachable using the ping command
Verify that the VIP addresses are not active at the start of the cloning process by using the ping command (the ping command of the VIP address must fail)
On AIX systems, and on Solaris x86-64-bit systems running vendor clusterware, if you add a node to the cluster, then you must run the rootpre.sh script (located at the mount point if you install Oracle Clusterware from a DVD, or in the directory where you unzip the tar file if you download the software) on the node before you add the node to the cluster
Run CVU to verify your hardware and operating system environment
Complete all above steps , so that the nodes are ready for adding / clone to cluster node.
Verify that the nodes are ready to be added to the cluster using the following commands.
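The commands themselves are not reproduced in the post; the standard pre-addnode check with cluvfy looks like this (node names taken from this environment):

```
[grid@rac12cnode1 ~]$ cluvfy stage -pre nodeadd -n rac12cnode3,rac12cnode4 -verbose
```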
2. Prepare the existing cluster nodes:
In this step, we create a copy of the existing Oracle Grid Infrastructure home and remove the unnecessary files from the copy. You can perform this step while the clusterware is up and running.
Create an exclusion list to be excluded while creating the tar backup.
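The contents of the exclusion list are not shown in the post. The sketch below is illustrative: typical candidates are node-specific logs and audit files under the grid home. Adjust the paths to your environment before tarring.

```shell
#!/bin/sh
# Illustrative exclusion list for the grid home tar backup.
# Paths are assumptions, not the original post's list.
mkdir -p /tmp/askm
cat > /tmp/askm/excl_list.txt <<'EOF'
/u01/app/12.1.0/grid/log
/u01/app/12.1.0/grid/cfgtoollogs
/u01/app/12.1.0/grid/rdbms/audit
/u01/app/12.1.0/grid/rdbms/log
EOF
cat /tmp/askm/excl_list.txt
```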
Create compressed copy of the oracle grid infrastructure home using tar utility. Execute the following command on any existing node in the cluster :
[root@rac12cnode1 12.1.0]# tar -czf gridHome.tar.gz -X /tmp/askm/excl_list.txt /u01/app/12.1.0/grid
tar: Removing leading `/' from member names
tar: /u01/app/12.1.0/grid/log/rac12cnode1/gipcd/gipcd.log: file changed as we read it
tar: /u01/app/12.1.0/grid/log/rac12cnode1/agent/crsd/oraagent_grid/oraagent_grid.log: file changed as we read it
tar: /u01/app/12.1.0/grid/log/rac12cnode1/ctssd/octssd.log: file changed as we read it
tar: /u01/app/12.1.0/grid/log/rac12cnode1/cssd/ocssd.log: file changed as we read it
tar: /u01/app/12.1.0/grid/rdbms/audit: file changed as we read it
[root@rac12cnode1 12.1.0]# ls -lrt
total 3100700
drwxr-xr-x. 74 root oinstall 4096 Mar 13 03:41 grid
-rw-r--r--. 1 root root 3172006616 Mar 14 04:53 gridHome.tar.gz
[root@rac12cnode1 12.1.0]#
3. Deploy the Grid Infrastructure to new cluster nodes:
Now copy the compressed backup of the Oracle Grid Infrastructure home created in step 2 above to all the target nodes, i.e. rac12cnode3/4.
IMPORTANT: Do not run root.sh at this point.
5. Run orainstRoot.sh script on all target cluster nodes:(as root user)
This script populates the /etc/oraInst.loc file with the location of the central inventory.
[root@rac12cnode3 ~]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac12cnode3 ~]#
[root@rac12cnode4 oraInventory]# /u01/app/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /u01/app/oraInventory to oinstall.
The execution of the script is complete.
[root@rac12cnode4 oraInventory]#
6. Execute addnode.sh in silent mode to add hub node and leaf node:
Run the addnode.sh script from $GRID_HOME/addnode/
[grid@rac12cnode1 addnode]$ ./addnode.sh -silent -noCopy ORACLE_HOME=/u01/app/12.1.0/grid "CLUSTER_NEW_NODES={rac12cnode3,rac12cnode4}" "CLUSTER_NEW_VIRTUAL_HOSTNAMES={rac12cnode3-vip}" "CLUSTER_NEW_NODE_ROLES={HUB,LEAF}"
Starting Oracle Universal Installer...
Checking Temp space: must be greater than 120 MB. Actual 7444 MB Passed
Checking swap space: must be greater than 150 MB. Actual 407 MB Passed
[WARNING] [INS-13014] Target environment does not meet some optional requirements.
CAUSE: Some of the optional prerequisites are not met. See logs for details. /u01/app/oraInventory/logs/addNodeActions2014-03-15_11-16-57AM.log
ACTION: Identify the list of failed prerequisite checks from the log: /u01/app/oraInventory/logs/addNodeActions2014-03-15_11-16-57AM.log. Then either from the log file or from installation manual find the appropriate configuration to meet the prerequisites and fix it manually.
Prepare Configuration in progress.
Prepare Configuration successful.
..................................................   40% Done.
As a root user, execute the following script(s):
1. /u01/app/12.1.0/grid/root.sh
Execute /u01/app/12.1.0/grid/root.sh on the following nodes:
[rac12cnode3, rac12cnode4]
The scripts can be executed in parallel on all the nodes. If there are any policy managed databases managed by cluster, proceed with the addnode procedure without executing the root.sh script. Ensure that root.sh script is executed after all the policy managed databases managed by clusterware are extended to the new nodes.
..................................................   60% Done.
Update Inventory in progress.
..................................................   100% Done.
Update Inventory successful.
Successfully Setup Software.
[grid@rac12cnode1 addnode]$
In the preceding syntax example, Node 4 is designated as a Leaf Node and does not require that a VIP be included.
7. Copy config files from source system to all target cluster nodes:
Copy the following files from Node 1, on which you ran addnode.sh, to Node3 and Node4.
8. Run root.sh on all the target nodes to configure cluster:
On Node 3 and Node 4, run the Grid_home/root.sh script.
[root@rac12cnode3 grid]# ./root.sh
Check /u01/app/12.1.0/grid/install/root_rac12cnode3_2014-03-15_11-55-20.log for the output of root script
[root@rac12cnode3 grid]# cat /u01/app/12.1.0/grid/install/root_rac12cnode3_2014-03-15_11-55-20.log
Performing root user operation for Oracle 12c
The following environment variables are set as:
ORACLE_OWNER= grid
ORACLE_HOME= /u01/app/12.1.0/grid
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
To configure Grid Infrastructure for a Stand-Alone Server run the following command as the root user:
/u01/app/12.1.0/grid/perl/bin/perl -I/u01/app/12.1.0/grid/perl/lib -I/u01/app/12.1.0/grid/crs/install /u01/app/12.1.0/grid/crs/install/roothas.pl
To configure Grid Infrastructure for a Cluster execute the following command as grid user:
/u01/app/12.1.0/grid/crs/config/config.sh
This command launches the Grid Infrastructure Configuration Wizard. The wizard also supports silent operation, and the parameters can be passed through the response file that is available in the installation media.
[root@rac12cnode3 grid]#
If you get this output after executing root.sh, then your cluster has not been configured. You need to modify the script "/u01/app/12.1.0/grid/crs/config/rootconfig.sh" to uncomment part of the script, as shown in the diff below. Please refer to my video if you need more details on this part.
[root@rac12cnode4 ~]# diff /u01/app/12.1.0/grid/crs/config/rootconfig.sh /u01/app/12.1.0/grid/crs/config/rootconfig.sh_bak_askm
33,37c33,37
< if [ "$ADDNODE" = "true" ]; then
<   SW_ONLY=false
<   HA_CONFIG=false
< fi
---
> #if [ "$ADDNODE" = "true" ]; then
> #  SW_ONLY=false
> #  HA_CONFIG=false
> #fi
[root@rac12cnode4 ~]#
Now run root.sh again. This time it executes successfully and configures the cluster.
[root@rac12cnode3 grid]# ./root.sh
Check /u01/app/12.1.0/grid/install/root_rac12cnode3_2014-03-15_11-57-18.log for the output of root script
[root@rac12cnode3 grid]#
[root@rac12cnode4 grid]# ./root.sh
Check /u01/app/12.1.0/grid/install/root_rac12cnode4_2014-03-15_12-24-46.log for the output of root script
[root@rac12cnode4 grid]#
The output of the above executions, confirming that the clusterware was configured correctly:
[root@rac12cnode3 ~]# tail -f /u01/app/12.1.0/grid/install/root_rac12cnode3_2014-03-15_11-57-18.log
ORACLE_HOME= /u01/app/12.1.0/grid
Copying dbhome to /usr/local/bin ...
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Entries will be added to the /etc/oratab file as needed by Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
OLR initialization - successful
2014/03/15 11:59:52 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cnode3'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cnode3'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cnode3' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cnode3' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12cnode3'
CRS-2672: Attempting to start 'ora.evmd' on 'rac12cnode3'
CRS-2676: Start of 'ora.mdnsd' on 'rac12cnode3' succeeded
CRS-2676: Start of 'ora.evmd' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12cnode3'
CRS-2676: Start of 'ora.gpnpd' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac12cnode3'
CRS-2676: Start of 'ora.gipcd' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12cnode3'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac12cnode3'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac12cnode3'
CRS-2676: Start of 'ora.diskmon' on 'rac12cnode3' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rac12cnode3'
CRS-2676: Start of 'ora.cssd' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac12cnode3'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac12cnode3'
CRS-2676: Start of 'ora.ctssd' on 'rac12cnode3' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac12cnode3'
CRS-2676: Start of 'ora.asm' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac12cnode3'
CRS-2676: Start of 'ora.storage' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac12cnode3'
CRS-2676: Start of 'ora.crsd' on 'rac12cnode3' succeeded
CRS-6017: Processing resource auto-start for servers: rac12cnode3
CRS-2672: Attempting to start 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac12cnode3'
CRS-2672: Attempting to start 'ora.ons' on 'rac12cnode3'
CRS-2676: Start of 'ora.ons' on 'rac12cnode3' succeeded
CRS-2676: Start of 'ora.ASMNET1LSNR_ASM.lsnr' on 'rac12cnode3' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'rac12cnode3'
CRS-2676: Start of 'ora.asm' on 'rac12cnode3' succeeded
CRS-2664: Resource 'ora.DATA.dg' is already running on 'rac12cnode1'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac12cnode1'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac12cnode2'
CRS-2664: Resource 'ora.DATA.dg' is already running on 'rac12cnode2'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac12cnode1'
CRS-2664: Resource 'ora.proxy_advm' is already running on 'rac12cnode2'
CRS-6016: Resource auto-start has completed for server rac12cnode3
CRS-2672: Attempting to start 'ora.proxy_advm' on 'rac12cnode3'
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/03/15 12:10:16 CLSRSC-343: Successfully started Oracle clusterware stack
clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/03/15 12:11:15 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac12cnode3 ~]#
[root@rac12cnode4 ~]# tail -f /u01/app/12.1.0/grid/install/root_rac12cnode4_2014-03-15_12-24-46.log
Copying oraenv to /usr/local/bin ...
Copying coraenv to /usr/local/bin ...
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /u01/app/12.1.0/grid/crs/install/crsconfig_params
OLR initialization - successful
2014/03/15 12:27:14 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rac12cnode4'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'rac12cnode4'
CRS-2677: Stop of 'ora.drivers.acfs' on 'rac12cnode4' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rac12cnode4' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'rac12cnode4'
CRS-2672: Attempting to start 'ora.evmd' on 'rac12cnode4'
CRS-2676: Start of 'ora.evmd' on 'rac12cnode4' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'rac12cnode4'
CRS-2676: Start of 'ora.gpnpd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'rac12cnode4'
CRS-2676: Start of 'ora.gipcd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'rac12cnode4'
CRS-2676: Start of 'ora.cssdmonitor' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'rac12cnode4'
CRS-2672: Attempting to start 'ora.diskmon' on 'rac12cnode4'
CRS-2676: Start of 'ora.diskmon' on 'rac12cnode4' succeeded
CRS-2789: Cannot stop resource 'ora.diskmon' as it is not running on server 'rac12cnode4'
CRS-2676: Start of 'ora.cssd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'rac12cnode4'
CRS-2672: Attempting to start 'ora.ctssd' on 'rac12cnode4'
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'rac12cnode4' succeeded
CRS-2676: Start of 'ora.ctssd' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'rac12cnode4'
CRS-2676: Start of 'ora.storage' on 'rac12cnode4' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'rac12cnode4'
CRS-2676: Start of 'ora.crsd' on 'rac12cnode4' succeeded
CRS-6017: Processing resource auto-start for servers: rac12cnode4
CRS-6016: Resource auto-start has completed for server rac12cnode4
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
2014/03/15 12:31:56 CLSRSC-343: Successfully started Oracle clusterware stack
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2014/03/15 12:32:05 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@rac12cnode4 ~]#
9. Verify the cloned cluster with cluvfy:
[root@rac12cnode1 grid]# su - grid
[grid@rac12cnode1 ~]$ olsnodes -s -t
rac12cnode1     Active  Unpinned
rac12cnode2     Active  Unpinned
rac12cnode3     Active  Unpinned
rac12cnode4     Active  Unpinned
[grid@rac12cnode1 ~]$ crsctl get cluster mode status
Cluster is running in "flex" mode
[grid@rac12cnode1 ~]$ srvctl config gns
GNS is enabled.
[grid@rac12cnode1 ~]$ crsctl get node role config
Node 'rac12cnode1' configured role is 'hub'
[grid@rac12cnode1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
[grid@rac12cnode1 ~]$ srvctl status asm -detail
ASM is running on rac12cnode1,rac12cnode2,rac12cnode3
ASM is enabled.
[grid@rac12cnode1 ~]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
Node 'rac12cnode3' configured role is 'hub'
Node 'rac12cnode4' configured role is 'leaf'
[grid@rac12cnode1 ~]$ cluvfy stage -post nodeadd -n rac12cnode3,rac12cnode4 -verbose
...
....
Post-check for node addition was successful.
[grid@rac12cnode1 ~]$
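Beyond the cluvfy post-check above, a few standard Clusterware commands can be run from any node as a quick health sweep of the extended cluster. This is only a sketch of one possible verification sequence (the exact output depends on your environment); it assumes the same grid user and the 12.1 Grid home used throughout this article.

```shell
# Check the clusterware stack (CRS, CSS, EVM) on every node in one shot
crsctl check cluster -all

# Full resource picture: confirm the new nodes appear under the
# ora.* resources and that nothing is in an OFFLINE/UNKNOWN state
crsctl stat res -t

# Node numbers and states, including the newly added nodes
olsnodes -n -s

# Confirm the SCAN listeners survived the node addition
srvctl status scan_listener
```

Anything reported OFFLINE here for the new nodes is worth chasing down before handing the cluster over, even though cluvfy's post-nodeadd check passed.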