In this article, we will see how to apply a PSU patch to a flex cluster. I have a 4-node RAC 12c flex cluster installed: 3 nodes are hub nodes and 1 node is a leaf node.
Environment Details :

RAC Version   : 12c 12.1.0.1.0
Cluster Type  : Flex Cluster
Hub Nodes     : rac12cnode1/2/3
Leaf Nodes    : rac12cnode4
DB Running on : All Hub Nodes
Task          : Applying JAN2014 PSU Patch
Environment Configuration :

Type            | Path                                    | Owner  | Version  | Shared
----------------|-----------------------------------------|--------|----------|-------
Grid Infra Home | /u01/app/12.1.0/grid                    | Grid   | 12.1.0.1 | False
Database Home   | /u01/app/oracle/product/12.1.0/dbhome_1 | Oracle | 12.1.0.1 | False
[root@rac12cnode1 ~]# su - grid
[grid@rac12cnode1 ~]$ olsnodes -s -t
rac12cnode1 Active Unpinned
rac12cnode2 Active Unpinned
rac12cnode3 Active Unpinned
rac12cnode4 Active Unpinned
[grid@rac12cnode1 ~]$ crsctl get node role config -all
Node 'rac12cnode1' configured role is 'hub'
Node 'rac12cnode2' configured role is 'hub'
Node 'rac12cnode3' configured role is 'hub'
Node 'rac12cnode4' configured role is 'leaf'
[grid@rac12cnode1 ~]$ crsctl get node role status -all
Node 'rac12cnode1' active role is 'hub'
Node 'rac12cnode2' active role is 'hub'
Node 'rac12cnode3' active role is 'hub'
Node 'rac12cnode4' active role is 'leaf'
[grid@rac12cnode1 ~]$
Step By Step
- 1. Download and Unzip the Latest OPatch to all cluster nodes
- 2. Validate and Record Pre-Patch information
- 3. Create OCM response file if it does not exist
- 4. Download and Unzip the JAN2014 PSU patch for GI 12c, i.e. 12.1.0.1.2
- 5. One-off patch Conflicts detection and Resolution
- 6. Patch application
- 7. Verification of Patch application
- 8. Issues and Resolutions
1. Download and Unzip the Latest OPatch to all cluster nodes:
You must use OPatch utility version 12.1.0.1.1 or later to apply this patch. Oracle recommends using the latest released OPatch for 12.1 releases, which is available for download from My Oracle Support as patch 6880880.
[grid@rac12cnode1 1201_PSU]$ cd $ORACLE_HOME
[grid@rac12cnode1 grid]$ ls -ld OPatch
drwxr-xr-x. 7 grid oinstall 4096 Mar 7 08:36 OPatch
[grid@rac12cnode1 grid]$ which opatch
/u01/app/12.1.0/grid/OPatch/opatch
[grid@rac12cnode1 grid]$ opatch version
OPatch Version: 12.1.0.1.0
OPatch succeeded.
[grid@rac12cnode1 grid]$ pwd
/u01/app/12.1.0/grid
[grid@rac12cnode1 grid]$ mv OPatch OPatch_bak
mv: cannot move `OPatch' to `OPatch_bak': Permission denied
[grid@rac12cnode1 grid]$ exit
logout
[root@rac12cnode1 ~]# cd /u01/app/12.1.0/grid/
[root@rac12cnode1 grid]# mv OPatch OPatch_bak
[root@rac12cnode1 grid]# unzip /mnt/software/RAC/1201_PSU/p6880880_121010_Linux-x86-64.zip
Archive: /mnt/software/RAC/1201_PSU/p6880880_121010_Linux-x86-64.zip
..
..
[root@rac12cnode1 grid]# ls -ld OPatch
drwxr-xr-x. 7 root root 4096 Oct 9 20:25 OPatch
[root@rac12cnode1 grid]# chown -R grid:oinstall OPatch
[root@rac12cnode1 grid]# ls -ld OPatch
drwxr-xr-x. 7 grid oinstall 4096 Oct 9 20:25 OPatch
[root@rac12cnode1 grid]# pwd
/u01/app/12.1.0/grid
[root@rac12cnode1 grid]# cd OPatch
[root@rac12cnode1 OPatch]# ls
datapatch docs jlib opatch opatch.bat opatch.pl operr operr_readme.txt README.txt
datapatch.bat emdpatch.pl ocm opatchauto opatch.ini opatchprereqs operr.bat oplan version.txt
[root@rac12cnode1 OPatch]# ./opatch version
OPatch Version: 12.1.0.1.2
OPatch succeeded.
[root@rac12cnode1 OPatch]#
It is always a best practice to keep both the GRID_HOME and the DATABASE_HOME at the same OPatch level, so update OPatch in the DATABASE_HOME as well.
[root@rac12cnode1 OPatch]# cd /u01/app/oracle/product/12.1.0/
[root@rac12cnode1 12.1.0]# ls
dbhome_1
[root@rac12cnode1 12.1.0]# cd dbhome_1/
[root@rac12cnode1 dbhome_1]# ls -ld OPatch
drwxr-xr-x. 7 oracle oinstall 4096 May 27 2013 OPatch
[root@rac12cnode1 dbhome_1]# mv OPatch OPatch_bak
[root@rac12cnode1 dbhome_1]# unzip /mnt/software/RAC/1201_PSU/p6880880_121010_Linux-x86-64.zip
Archive: /mnt/software/RAC/1201_PSU/p6880880_121010_Linux-x86-64.zip
..
..
[root@rac12cnode1 dbhome_1]# chown -R oracle:oinstall OPatch
[root@rac12cnode1 dbhome_1]# cd Opatch
-bash: cd: Opatch: No such file or directory
[root@rac12cnode1 dbhome_1]# ./opatch version
-bash: ./opatch: No such file or directory
[root@rac12cnode1 dbhome_1]# cd OPatch
[root@rac12cnode1 OPatch]# ./opatch version
OPatch Version: 12.1.0.1.2
OPatch succeeded.
[root@rac12cnode1 OPatch]#
Repeat the above steps on all the remaining nodes in the cluster.
[root@rac12cnode2 OPatch]# ./opatch version
OPatch Version: 12.1.0.1.2
OPatch succeeded.
[root@rac12cnode2 OPatch]#
[root@rac12cnode3 OPatch]# ./opatch version
OPatch Version: 12.1.0.1.2
OPatch succeeded.
[root@rac12cnode3 OPatch]#
[root@rac12cnode4 OPatch]# ./opatch version
OPatch Version: 12.1.0.1.2
OPatch succeeded.
[root@rac12cnode4 OPatch]#
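Instead of repeating the moves by hand on every node, the OPatch refresh can also be scripted. A hedged sketch, assuming root SSH equivalence between the nodes and that the /mnt/software NFS mount is visible everywhere; the loop body simply mirrors the manual steps above:

for node in rac12cnode2 rac12cnode3 rac12cnode4; do
  ssh root@"$node" '
    # refresh OPatch in both homes, preserving the old copy as OPatch_bak
    for spec in "/u01/app/12.1.0/grid grid" "/u01/app/oracle/product/12.1.0/dbhome_1 oracle"; do
      set -- $spec; home=$1; owner=$2
      mv "$home/OPatch" "$home/OPatch_bak"
      unzip -q /mnt/software/RAC/1201_PSU/p6880880_121010_Linux-x86-64.zip -d "$home"
      chown -R "$owner":oinstall "$home/OPatch"
      "$home/OPatch/opatch" version
    done'
done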
2. Validate and Record Pre-Patch information :
Validate using the following commands :
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/12.1.0/grid/
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.1.0/dbhome_1
Login to each node in RAC as grid user and execute the following command.
$GRID_ORACLE_HOME/OPatch/opatch lsinventory
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'
Login to each node in RAC as oracle user and execute the following command.
$ORACLE_HOME/OPatch/opatch lsinventory
$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'
Connect to each instance and record registry information.
SQL> select comp_name,version,status from dba_registry;
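To keep these records for later comparison, the collection can be scripted. A minimal hedged sketch, assuming passwordless SSH as the grid user across the nodes; the output file names are illustrative:

for node in rac12cnode1 rac12cnode2 rac12cnode3 rac12cnode4; do
  # save each node's GI inventory to a dated pre-patch record
  ssh "$node" "/u01/app/12.1.0/grid/OPatch/opatch lsinventory" \
      > /tmp/prepatch_grid_lsinv_${node}_$(date +%Y%m%d).log
done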
[grid@rac12cnode1 bin]$ opatch lsinventory
Oracle Interim Patch Installer version 12.1.0.1.2
Copyright (c) 2013, Oracle Corporation. All rights reserved.
Oracle Home : /u01/app/12.1.0/grid
Central Inventory : /u01/app/oraInventory
from : /u01/app/12.1.0/grid/oraInst.loc
OPatch version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Log file location : /u01/app/12.1.0/grid/cfgtoollogs/opatch/opatch2014-03-13_02-41-32AM_1.log
Lsinventory Output file location : /u01/app/12.1.0/grid/cfgtoollogs/opatch/lsinv/lsinventory2014-03-13_02-41-32AM.txt
--------------------------------------------------------------------------------
Installed Top-level Products (1):
Oracle Grid Infrastructure 12c 12.1.0.1.0
There are 1 products installed in this Oracle Home.
There are no Interim patches installed in this Oracle Home.
Patch level status of Cluster nodes :
Patching Level Nodes
-------------- -----
0 rac12cnode4,rac12cnode1,rac12cnode2,rac12cnode3
--------------------------------------------------------------------------------
OPatch succeeded.
[grid@rac12cnode1 bin]$
3. Create OCM Response File If It Does Not Exist :
Create the OCM response file using the following command and provide appropriate values for the prompts.
$GRID_ORACLE_HOME/OPatch/ocm/bin/emocmrsp
Verify the created file using:
$GRID_ORACLE_HOME/OPatch/ocm/bin/emocmrsp -verbose ocm.rsp
NOTE: The OPatch utility prompts for the OCM (Oracle Configuration Manager) response file when it runs; without it, we cannot proceed further.
[root@rac12cnode1 ~]# su - grid
[grid@rac12cnode1 ~]$ cd $ORACLE_HOME/OPatch/ocm/bin
[grid@rac12cnode1 bin]$ ls
emocmrsp
[grid@rac12cnode1 bin]$ ./emocmrsp
OCM Installation Response Generator 10.3.7.0.0 - Production
Copyright (c) 2005, 2012, Oracle and/or its affiliates. All rights reserved.
Provide your email address to be informed of security issues, install and
initiate Oracle Configuration Manager. Easier for you if you use your My
Oracle Support Email address/User Name.
Visit http://www.oracle.com/support/policies.html for details.
Email address/User Name:
You have not provided an email address for notification of security issues.
Do you wish to remain uninformed of security issues ([Y]es, [N]o) [N]: Y
The OCM configuration response file (ocm.rsp) was successfully created.
[grid@rac12cnode1 bin]$ ls
emocmrsp ocm.rsp
[grid@rac12cnode1 bin]$ ls -lrt
total 16
-rwxr-----. 1 grid oinstall 9063 Nov 27 2009 emocmrsp
-rw-r--r--. 1 grid oinstall 621 Mar 13 02:38 ocm.rsp
[grid@rac12cnode1 bin]$ chmod 777 ocm.rsp
[grid@rac12cnode1 bin]$
Copy this response file ocm.rsp to the same location on all the nodes in the cluster, or create a new response file on each node using the same method as above.
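A hedged sketch of the copy, assuming passwordless SSH for the grid user:

for node in rac12cnode2 rac12cnode3 rac12cnode4; do
  # push the response file to the same path on every other node
  scp /u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp \
      ${node}:/u01/app/12.1.0/grid/OPatch/ocm/bin/
done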
4. Download and Unzip the JAN2014 PSU patch : (as grid user)
Patch 17735306 is the JAN2014 PSU patch. It was downloaded and unzipped to the location "/mnt/software/RAC/1201_PSU/JAN2014_PSU/".
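For reference, a hedged sketch of the staging step; the zip file name is assumed from the standard patch naming convention and may differ:

cd /mnt/software/RAC/1201_PSU/JAN2014_PSU/
unzip p17735306_121010_Linux-x86-64.zip   # assumed file name
ls -d 17735306                            # unzipped patch directory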
5. One-off patch Conflicts detection and Resolution :
Determine whether any currently installed one-off patches conflict with the PSU patch.
$GRID_HOME/OPatch/opatchauto apply <UNZIPPED_PATCH_LOCATION>/17735306 analyze
I don't have any patches applied to my homes yet, so I can skip this step. But if any conflicts are identified in the GI home or in the DB home, follow MOS note 1061295.1 to resolve them.
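If you do need to check, OPatch also offers a standard per-home conflict check. A hedged sketch pointing -phBaseDir at one of the sub-patches of the PSU as an example:

$GRID_ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail \
    -phBaseDir /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442 \
    -oh /u01/app/12.1.0/grid
$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail \
    -phBaseDir /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442 \
    -oh /u01/app/oracle/product/12.1.0/dbhome_1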
6. Patch Application :
While applying the patch to a flex cluster, please note the following points (a quick pre-flight check is sketched after this list):
- a. The first node must be a HUB node. The last node can be either a hub or a leaf node.
- b. Make sure the GI stack is up on the first hub node and on at least one other hub node.
- c. Make sure the stack is up on all the leaf nodes.
- d. The opatchauto command will restart the stack and the databases on the local node.
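A minimal pre-flight sketch (run as grid) to confirm the state the notes above require; all three commands appear elsewhere in this article:

/u01/app/12.1.0/grid/bin/crsctl check cluster -all          # stack up on hub and leaf nodes
/u01/app/12.1.0/grid/bin/crsctl get node role status -all   # confirm hub/leaf roles
/u01/app/12.1.0/grid/bin/srvctl status database -d orcl     # database instances running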
Now the patch can be applied using the following syntax:
# opatchauto apply <PATH_TO_PATCH_DIRECTORY> -ocmrf <ocm response file>
On all HUB nodes (rac12cnode1/2/3):
[root@rac12cnode1 ~]# export PATH=/u01/app/12.1.0/grid/OPatch:$PATH
[root@rac12cnode1 ~]# which opatchauto
/u01/app/12.1.0/grid/OPatch/opatchauto
[root@rac12cnode1 ~]# opatchauto apply /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306 -ocmrf /u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation. All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Running from : /u01/app/12.1.0/grid
opatchauto log file: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/17735306/opatch_gi_2014-03-13_02-55-54_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid
RAC home(s):
/u01/app/oracle/product/12.1.0/dbhome_1
Configuration Validation: Successful
Patch Location: /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306
Grid Infrastructure Patch(es): 17077442 17303297 17552800
RAC Patch(es): 17077442 17552800
Patch Validation: Successful
Stopping RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
Following database(s) were stopped and will be restarted later during the session: orcl
Applying patch(es) to "/u01/app/oracle/product/12.1.0/dbhome_1" ...
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442" successfully applied to "/u01/app/oracle/product/12.1.0/dbhome_1".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17552800" successfully applied to "/u01/app/oracle/product/12.1.0/dbhome_1".
Stopping CRS ... Successful
Applying patch(es) to "/u01/app/12.1.0/grid" ...
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442" successfully applied to "/u01/app/12.1.0/grid".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17303297" successfully applied to "/u01/app/12.1.0/grid".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17552800" successfully applied to "/u01/app/12.1.0/grid".
Starting CRS ... Successful
Starting RAC (/u01/app/oracle/product/12.1.0/dbhome_1) ... Successful
[WARNING] SQL changes, if any, could not be applied on the following database(s): ORCL ... Please refer to the log file for more details.
Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/12.1.0/grid: 17077442, 17303297, 17552800
RAC Home: /u01/app/oracle/product/12.1.0/dbhome_1: 17077442, 17552800
opatchauto succeeded.
[root@rac12cnode1 ~]#
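The [WARNING] above means the SQL portion of the PSU could not be loaded into the orcl database during the session. A hedged follow-up sketch, assuming the ORCL1 instance is back up on this node, using the datapatch tool shipped in the refreshed OPatch directory:

[root@rac12cnode1 ~]# su - oracle
[oracle@rac12cnode1 ~]$ export ORACLE_SID=ORCL1
[oracle@rac12cnode1 ~]$ $ORACLE_HOME/OPatch/datapatch -verbose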
While the patch is being applied, the services running on node 1 are relocated to the other nodes in the cluster. While patching node 1 (rac12cnode1), the clusterware services had the following status:
[grid@rac12cnode4 ~]$ /u01/app/12.1.0/grid/bin/srvctl status database -d orcl
Instance ORCL1 is not running on node rac12cnode1
Instance ORCL2 is running on node rac12cnode2
Instance ORCL3 is running on node rac12cnode3
[grid@rac12cnode4 ~]$ /u01/app/12.1.0/grid/bin/crsctl status res -t
--------------------------------------------------------------------------------
Name Target State Server State details
--------------------------------------------------------------------------------
Local Resources
--------------------------------------------------------------------------------
ora.ASMNET1LSNR_ASM.lsnr
ONLINE ONLINE rac12cnode2 STABLE
ONLINE ONLINE rac12cnode3 STABLE
ora.DATA.dg
ONLINE ONLINE rac12cnode2 STABLE
ONLINE ONLINE rac12cnode3 STABLE
ora.LISTENER.lsnr
ONLINE ONLINE rac12cnode2 STABLE
ONLINE ONLINE rac12cnode3 STABLE
ora.LISTENER_LEAF.lsnr
OFFLINE OFFLINE rac12cnode4 STABLE
ora.net1.network
ONLINE ONLINE rac12cnode2 STABLE
ONLINE ONLINE rac12cnode3 STABLE
ora.ons
ONLINE ONLINE rac12cnode2 STABLE
ONLINE ONLINE rac12cnode3 STABLE
ora.proxy_advm
ONLINE ONLINE rac12cnode2 STABLE
ONLINE ONLINE rac12cnode3 STABLE
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.LISTENER_SCAN1.lsnr
1 ONLINE ONLINE rac12cnode2 STABLE
ora.asm
1 ONLINE OFFLINE STABLE
2 ONLINE ONLINE rac12cnode2 STABLE
3 ONLINE ONLINE rac12cnode3 STABLE
ora.cvu
1 ONLINE ONLINE rac12cnode2 STABLE
ora.gns
1 ONLINE ONLINE rac12cnode3 STABLE
ora.gns.vip
1 ONLINE ONLINE rac12cnode3 STABLE
ora.oc4j
1 OFFLINE OFFLINE STABLE
ora.orcl.db
1 OFFLINE OFFLINE STABLE
2 ONLINE ONLINE rac12cnode2 Open,STABLE
3 ONLINE ONLINE rac12cnode3 Open,STABLE
ora.rac12cnode1.vip
1 ONLINE INTERMEDIATE rac12cnode3 FAILED OVER,STABLE
ora.rac12cnode2.vip
1 ONLINE ONLINE rac12cnode2 STABLE
ora.rac12cnode3.vip
1 ONLINE ONLINE rac12cnode3 STABLE
ora.scan1.vip
1 ONLINE ONLINE rac12cnode2 STABLE
--------------------------------------------------------------------------------
[grid@rac12cnode4 ~]$
Repeat the above step on all the hub nodes in the flex cluster.
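A hedged automation sketch for the remaining hub nodes, assuming root SSH equivalence; run strictly one node at a time, since opatchauto restarts the stack and the databases on the local node:

for node in rac12cnode2 rac12cnode3; do
  # same opatchauto command as on node 1, executed on each remaining hub node
  ssh root@"$node" 'export PATH=/u01/app/12.1.0/grid/OPatch:$PATH && \
    opatchauto apply /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306 \
      -ocmrf /u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp'
done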
On Leaf Node :
The leaf node is not connected to the storage directly and no Oracle database instance runs on it, so opatchauto applies the patch only to the grid home. You can see the difference in the session log below from node 4 (rac12cnode4), the leaf node in my flex cluster.
[root@rac12cnode4 ~]# export PATH=/u01/app/12.1.0/grid/OPatch:$PATH
[root@rac12cnode4 ~]# which opatchauto
/u01/app/12.1.0/grid/OPatch/opatchauto
[root@rac12cnode4 ~]# opatchauto apply /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306 -ocmrf /u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp
OPatch Automation Tool
Copyright (c) 2013, Oracle Corporation. All rights reserved.
OPatchauto version : 12.1.0.1.2
OUI version : 12.1.0.1.0
Running from : /u01/app/12.1.0/grid
opatchauto log file: /u01/app/12.1.0/grid/cfgtoollogs/opatchauto/17735306/opatch_gi_2014-03-13_10-00-11_deploy.log
Parameter Validation: Successful
Grid Infrastructure home:
/u01/app/12.1.0/grid
Configuration Validation: Successful
Patch Location: /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306
Grid Infrastructure Patch(es): 17077442 17303297 17552800
RAC Patch(es): 17077442 17552800
Patch Validation: Successful
Stopping CRS ... Successful
Applying patch(es) to "/u01/app/12.1.0/grid" ...
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442" successfully applied to "/u01/app/12.1.0/grid".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17303297" successfully applied to "/u01/app/12.1.0/grid".
Patch "/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17552800" successfully applied to "/u01/app/12.1.0/grid".
Starting CRS ... Successful
Apply Summary:
Following patch(es) are successfully installed:
GI Home: /u01/app/12.1.0/grid: 17077442, 17303297, 17552800
opatchauto succeeded.
[root@rac12cnode4 ~]#
NOTE : Notice the difference between patching a hub node and a leaf node in the outputs above: on the leaf node, only the GI home is patched.
7. Verification of Patch application :
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/12.1.0/grid/
$ORACLE_HOME/OPatch/opatch lsinventory -detail -oh /u01/app/oracle/product/12.1.0/dbhome_1
Login to each node in RAC as grid user and execute the following command.
$GRID_ORACLE_HOME/OPatch/opatch lsinventory
$GRID_ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'
Login to each node in RAC as oracle user and execute the following command.
$ORACLE_HOME/OPatch/opatch lsinventory
$ORACLE_HOME/OPatch/opatch lsinventory -bugs_fixed | grep -i 'PSU'
Connect to each instance and verify registry information.
SQL> select comp_name,version,status from dba_registry;
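The post-patch check can be scripted the same way as the pre-patch recording in step 2. A hedged sketch, assuming passwordless SSH as the grid user; compare the output with the records taken before patching:

for node in rac12cnode1 rac12cnode2 rac12cnode3 rac12cnode4; do
  echo "== $node =="
  # list PSU-related fixes now present in the GI home on each node
  ssh "$node" "/u01/app/12.1.0/grid/OPatch/opatch lsinventory -bugs_fixed" | grep -i 'PSU'
done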
8. Issues and Resolution :
Issue 1:
While applying the patch, I got an error: "OUI-67124:Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'..."
Content From Logs:
ERROR:
UtilSession failed: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
'
2014-03-13_05-10-11 :
Failed to run this command :
/u01/app/12.1.0/grid/OPatch/opatch napply -phBaseFile /tmp/OraGrid12c_patchList -local -invPtrLoc /u01/app/12.1.0/grid/oraInst.loc -oh /u01/app/12.1.0/grid -silent -ocmrf /u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp
oracle.opatch.gi.RunExecutionSteps.runGenericShellCommands(RunExecutionSteps.java:724)
oracle.opatch.gi.RunExecutionSteps.processAllSteps(RunExecutionSteps.java:183)
oracle.opatch.gi.GIPatching.processPatchingSteps(GIPatching.java:747)
oracle.opatch.gi.OPatchautoExecution.main(OPatchautoExecution.java:101)
Command "/u01/app/12.1.0/grid/OPatch/opatch napply -phBaseFile /tmp/OraGrid12c_patchList -local -invPtrLoc /u01/app/12.1.0/grid/oraInst.loc -oh /u01/app/12.1.0/grid -silent -ocmrf /u01/app/12.1.0/grid/OPatch/ocm/bin/ocm.rsp" execution failed:
UtilSession failed: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
'
Log file Location for the failed command: /u01/app/12.1.0/grid/cfgtoollogs/opatch/opatch2014-03-13_04-41-37AM_1.log
==
[Mar 13, 2014 4:53:26 AM] The following actions have failed:
[Mar 13, 2014 4:53:26 AM] OUI-67124:Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
[Mar 13, 2014 4:53:26 AM] Do you want to proceed? [y|n]
[Mar 13, 2014 4:53:29 AM] N (auto-answered by -silent)
[Mar 13, 2014 4:53:29 AM] User Responded with: N
[Mar 13, 2014 4:53:29 AM] OUI-67124:ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
'
[Mar 13, 2014 4:53:29 AM] Restoring "/u01/app/12.1.0/grid" to the state prior to running NApply...
[Mar 13, 2014 5:10:09 AM] Checking if OPatch needs to invoke 'make' to restore some binaries...
[Mar 13, 2014 5:10:09 AM] OPatch was able to restore your system. Look at log file and timestamp of each file to make sure your system is in the state prior to applying the patch.
[Mar 13, 2014 5:10:09 AM] OUI-67124:
NApply restored the home. Please check your ORACLE_HOME to make sure:
- files are restored properly.
- binaries are re-linked correctly.
(use restore.[sh,bat] and make.txt (Unix only) as a reference. They are located under
"/u01/app/12.1.0/grid/.patch_storage/NApply/2014-03-13_04-41-37AM"
[Mar 13, 2014 5:10:10 AM] OUI-67073:UtilSession failed: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
'
[Mar 13, 2014 5:10:10 AM] --------------------------------------------------------------------------------
[Mar 13, 2014 5:10:10 AM] The following warnings have occurred during OPatch execution:
[Mar 13, 2014 5:10:10 AM] 1) OUI-67124:Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
[Mar 13, 2014 5:10:10 AM] 2) OUI-67124:ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
'
[Mar 13, 2014 5:10:10 AM] 3) OUI-67124:
NApply restored the home. Please check your ORACLE_HOME to make sure:
- files are restored properly.
- binaries are re-linked correctly.
(use restore.[sh,bat] and make.txt (Unix only) as a reference. They are located under
"/u01/app/12.1.0/grid/.patch_storage/NApply/2014-03-13_04-41-37AM"
[Mar 13, 2014 5:10:10 AM] --------------------------------------------------------------------------------
[Mar 13, 2014 5:10:10 AM] Finishing UtilSession at Thu Mar 13 05:10:10 EDT 2014
[Mar 13, 2014 5:10:10 AM] Log file location: /u01/app/12.1.0/grid/cfgtoollogs/opatch/opatch2014-03-13_04-41-37AM_1.log
[Mar 13, 2014 5:10:10 AM] Stack Description: java.lang.RuntimeException: ApplySession failed in system modification phase... 'ApplySession::apply failed: Copy failed from '/mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin' to '/u01/app/12.1.0/grid/bin/olsnodes.bin'...
Resolution :
I verified the permissions and found no issue accessing or copying the file manually to the target location. I then re-executed the command without any modification, and it completed successfully. This may be specific to my environment, since the PSU patch is located on an NFS file system, but if you hit the same error it may well be a common, intermittent issue.
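The manual checks boiled down to something like this hedged sketch, confirming the source file is readable from NFS and copyable as root:

ls -l /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin
# trial copy to a scratch location, then clean up
cp /mnt/software/RAC/1201_PSU/JAN2014_PSU/17735306/17077442/files/bin/olsnodes.bin /tmp/ \
  && rm /tmp/olsnodes.bin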
Logfile Location For PSU Patch :
/u01/app/12.1.0/grid/cfgtoollogs/opatch/
/u01/app/12.1.0/grid/cfgtoollogs/opatchauto/
Hope This Helps
SRI