I'm unable to clone from secondary storage to a remote host without granting the remote host some kind of access to primary storage.
I'm a satisfied user of:
Snapdrive for Linux 5.3P4
NetApp Release 9.1P13
Oracle is a RAC cluster mounting three volumes through NFS:
redo and archived logs: MY_LOG_FS
Backups are correctly protected using the SnapManager_cDOT_Vault protection policy.
I want to create a standalone database running on a remote host using only the secondary storage, so my expected requirements are:
1- Create a FlexVol clone on the secondary storage to be used by the clone database instance
2- Avoid an NFS export policy from the primary storage to the remote host
3- Avoid a Vserver user for the primary storage configured in SnapDrive on the remote host
So far I'm only able to obtain (1), which works easily with --from-secondary.
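For reference, my clone-from-secondary invocation looks roughly like the following. All names are placeholders, and the exact flags vary by SMO version (some versions may also require a -copy-id argument, as they do for restore); check the command help on your host:

```shell
# Hedged sketch only - profile, label, database name and clonespec
# are placeholders, not values from my environment.
smo clone create -profile <MY_PROFILE> \
    -backup-label <MY_BACKUP_LABEL> \
    -newdbname <CLONE_DB_NAME> \
    -clonespec <clonespec.xml> \
    -from-secondary
```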
I can obtain (2) only starting from a FREED backup.
When I try to create a clone from a non-FREED backup, SMO tries to mount the snapshot of MY_LOG_FS from the primary storage and I get an error like this:
... [DEBUG]: SD-00027: Error connecting filesystem(s) [<MY_LOG_FS>] from snapshot <MY_SMO_SNAPSHOT>: ... SD-10016: Error executing snapdrive command "/usr/sbin/snapdrive snap connect ... -fs <MY_FS> /opt/NetApp/smo/mnt/-<MY_SNAPDRIVE_GENERATED_TEMPORARY_MOUNT_DIRECTORY> ... -destfv <MY_PRIMARY_LIF_NAME>:<MY_PRIMARY_VOLUME_NAME> <MY_SNAPDRIVE_GENERATED_STRING> ... -snapname <MY_PRIMARY_LIF_NAME>:<MY_PRIMARY_VOLUME_NAME>:<MY_SMO_SNAPSHOT> -autorename -noreserve": ... 0001-859 Admin error: The host's <MY_REMOTE_HOST> interfaces, <MY_REMOTE_HOST_NIC_NAME> are not allowed to access the path <MY_PRIMARY_VOLUME_NAME> on the storage system <MY_PRIMARY_LIF_NAME>. ... To resolve this problem, please configure the export permission for path <MY_PRIMARY_VOLUME_NAME> on the storage system <MY_PRIMARY_LIF_NAME> so that host <MY_REMOTE_HOST> can access the path.
... [ERROR]: FLOW-11010: Operation transitioning to abort due to prior failure.
As the error message states, I cannot avoid adding an export policy on the primary storage for the volume containing the Oracle redo and archived logs: while the backup still exists on primary storage, the SnapDrive command generated by SMO tries to mount the primary storage rather than the secondary storage, contrary to my requirement.
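For completeness, the workaround the error message points to would be an export-policy rule on the primary SVM like the one below. This is exactly what I want to avoid, since it grants the remote clone host NFS access to the primary volume. SVM, policy and client names are placeholders:

```shell
# On the primary cluster (ONTAP CLI) - placeholder names throughout.
# Grants the remote clone host read/write NFS access to the primary
# log volume, which defeats requirement (2).
vserver export-policy rule create -vserver <primary_vserver> \
    -policyname <log_volume_policy> \
    -clientmatch <remote_host_ip> \
    -rorule sys -rwrule sys -protocol nfs -superuser sys
```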
The datafiles volume is still correctly cloned on the secondary storage, but the recovery cannot complete because the redo and archived logs are missing.
I think this would be a useful requirement to fulfill. Is it not supported, or am I missing something in the configuration, SMO options, or best practices?
In the SMO Administration Guide I cannot find an explicit reference to this limitation; I can only read the following about restoring backups:
Restoring backups from secondary storage
You cannot use the -from-secondary option if the backup exists on primary storage; the primary
backup must be freed before a backup can be restored from secondary storage.
For requirement (3), it seems I always need the primary storage system to be accessible from the remote host: I always have to configure a user for it in SnapDrive on the remote host, which is a security issue because the remote clone host can then access the primary storage. Is this avoidable?
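Concretely, this is the SnapDrive credential configuration I'd like to avoid on the remote host. User and LIF names are placeholders:

```shell
# On the remote clone host - SnapDrive for Unix requires login
# credentials for every storage system it will mount from,
# including, in my case, the primary Vserver.
snapdrive config set <vsadmin_user> <primary_vserver_mgmt_lif>

# List the storage systems currently configured in SnapDrive
snapdrive config list
```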
There are a couple of bugs that were fixed around SMO contacting the primary storage when cloning from secondary. Your best bet is to upgrade to 3.4.1P3, where the software was fixed to no longer look at the primary storage when cloning from secondary.
Note that since there was no repository change between 3.4.0 and 3.4.1 there is no need to run an upgrade command. You can upgrade the software only and keep running.
Sounds like you are in a tough spot then. The fixes you want are already in the newer SMO code, but your database version isn't supported with the newer SMO code. If you can upgrade the database you could get to where you want to be, but I'm guessing there are other factors keeping you on Oracle 10g. It looks like you are going to need to continue to free backups from primary in order to use the clones on secondary.
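Until you can upgrade, freeing the primary-side copy before cloning would look roughly like this. Profile and label are placeholders, and the exact syntax may differ by SMO version, so verify against the command help:

```shell
# Release the primary-side backup resources so that only the
# vaulted (secondary) copy remains; after this, clone creation
# with --from-secondary should no longer touch primary storage.
smo backup free -profile <MY_PROFILE> -label <MY_BACKUP_LABEL>
```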