2015-07-14 02:44 AM - last edited on 2015-08-25 05:40 AM by alissa
We are running into the following problem when testing the NetApp Plug-in for Oracle RMAN with databases hosted on a clustered ONTAP setup:
We are using the plugin to duplicate an existing database and then present it to a host. This involves a volume clone operation on the NetApp storage, followed by a mount of the cloned volume on the host before it is able to mount the duplicated/cloned database.
However, we are following best practices on our clustered ONTAP setup, which means we have created load-sharing (LS) mirror replicas of the SVM's root volume across all the nodes in the cluster.
With the following setup:
- Oracle database volume sitting on an aggregate on node 01 of the cluster
- SVM root volume sitting on node 02 of the cluster
- LS mirror replicas of the SVM root volume on nodes 01 and 02 of the cluster, with a 15-minute LS mirror update interval
the duplication action fails, and we are in fact running into a fairly common problem described here:
The LS mirrors of the root volume have not yet received the new volume's junction information in the namespace.
This is the specific error message that is logged in sbtio.log:
SBT-23806 07/06/15 10:37:59 popen_cmd: The command failed to exectue error=mount.nfs: mounting vsnasnp02-473-1:/TAG20150706T103756_NetApp_clone_x_01qb261m_6_1_20150702172822_CL failed, reason given by server: No such file or directory
If we wait a number of minutes and then execute the mount manually (by then the LS mirrors of the root volume have received the namespace updates), it works without problems.
I think the best solution the author of this plugin could implement for clustered ONTAP setups is to perform the mount with the /.admin/ prefix, which forces the cluster to serve the path from the SVM's real (read-write) root volume rather than an LS mirror:
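As a sketch, the manual equivalent would look like the following (the data LIF hostname and export path are taken from the error message above; the local mount point is hypothetical):

```shell
# Mount the cloned volume via the /.admin prefix so the NFS request is
# served by the SVM's read-write root volume instead of a possibly stale
# LS mirror. Hostname/export path are from our failing run; /mnt/clone
# is a hypothetical mount point.
mount -t nfs vsnasnp02-473-1:/.admin/TAG20150706T103756_NetApp_clone_x_01qb261m_6_1_20150702172822_CL /mnt/clone
```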
Alternatively, an LS mirror update could be launched after the clone is created, with a small timeout built in to wait for the update to finish.
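On the cluster side that would correspond to something like the following (the cluster hostname, SVM name and root volume name here are assumptions, not our real values):

```shell
# Push the root volume namespace change to all LS mirrors immediately,
# then pause briefly before attempting the mount.
# cluster01, svm1 and svm1_root are hypothetical names.
ssh admin@cluster01 "snapmirror update-ls-set -source-path svm1:svm1_root"
sleep 30
```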
Is this something the authors of this plugin are aware of?