Data Backup and Recovery

Trouble connecting to snapshot during local cloning

ACHOU_SIMG

Hi,

Oracle DB in a Solaris 10 sparse root zone

Filer running Data ONTAP 8.0.3 7-Mode

SDU 5.1

SMO 3.2

NFS environment; no SAN

I've been trying to get cloning to work in SMO. Backups have been fine, but cloning fails with the error messages below. It seems SnapDrive could not connect to the snapshot taken by the successful backup I made earlier today. I've checked that the snapshot in question, "smo_uat01_uat01_f_h_1_2c998110450a649701450a649d6a0001_0", is there. I use my own OS account to run SMO, and it has *all* SD.Snapshot capabilities plus the SD.Storage Read capability.
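
For reference, this is roughly how I verified the snapshot on the filer console (7-Mode syntax; the volume name is taken from the error output below, so adjust it if yours differs):

filer1> snap list erp_uat01_data

The smo_uat01_uat01_f_h_1_2c998110450a649701450a649d6a0001_0 snapshot does show up in that listing.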

In the error messages below, the line highlighted in blue seems to tell me that, because this is a Solaris zone, the mount option "zone=" is unknown. However, I am not sure whether that is just a warning or the root cause of the failure, because the next line (highlighted in red) says the NFS mount of the temporary SMO volume (/vol/SnapManager_20140328165404589_erp_uat01_data/uat01data) failed with "Permission denied".

This permission-denied error makes me wonder: how can a non-root user on the DB server mount the volume that SnapManager creates?

I will try to do it as a root user to see if it makes a difference. But if that works, it defeats the purpose of letting the DBA do the cloning, since he does not have root privileges.
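
For what it's worth, the manual test I have in mind as root looks roughly like this (standard Solaris NFS mount syntax, against the parent volume rather than the temporary clone, which gets destroyed during cleanup, so it is only an approximation of what SnapDrive does):

mkdir -p /mnt/sdtest
mount -F nfs -o vers=3 filer1:/vol/erp_uat01_data /mnt/sdtest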

If you have any thoughts, please let me know. I've tried so many things to make this work and am running out of ideas.

Thank you very much.

Here are the error messages:

--[ERROR] FLOW-11019: Failure in ExecuteConnectionSteps: SD-00027: Error connecting filesystem(s) [/netapp01/uat01data] from snapshot smo_uat01_uat01_f_h_1_2c998110450a649701450a649d6a0001_0: SD-10016: Error executing snapdrive command "/usr/sbin/snapdrive snap connect -fs /netapp01/uat01data /CLONETEST/DATA -destfv filer1:/vol/erp_uat01_data SnapManager_20140328165404589_erp_uat01_data -snapname filer1:/vol/erp_uat01_data:smo_uat01_uat01_f_h_1_2c998110450a649701450a649d6a0001_0 -autorename -noreserve":

connecting /CLONETEST/DATA

            to filer directory: filer1:/vol/SnapManager_20140328165404589_erp_uat01_data/uat01data

          Volume copy filer1:/vol/SnapManager_20140328165404589_erp_uat01_data ... created

                     (original: erp_uat01_data)

          Cleaning up ...

destroying empty snapdrive-generated flexclone filer1:/vol/SnapManager_20140328165404589_erp_uat01_data ... done

0001-034 Command error: mount failed: mount: filer1:/vol/SnapManager_20140328165404589_erp_uat01_data/uat01data on /CLONETEST/DATA - WARNING unknown option "zone=erpuat01d_130309"

nfs mount: filer1:/vol/SnapManager_20140328165404589_erp_uat01_data/uat01data: Permission denied

.

--[ERROR] FLOW-11010: Operation transitioning to abort due to prior failure.
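
For completeness, I also plan to re-run the same snapdrive command by hand (copied verbatim from the error above) to rule out SMO itself:

/usr/sbin/snapdrive snap connect -fs /netapp01/uat01data /CLONETEST/DATA -destfv filer1:/vol/erp_uat01_data SnapManager_20140328165404589_erp_uat01_data -snapname filer1:/vol/erp_uat01_data:smo_uat01_uat01_f_h_1_2c998110450a649701450a649d6a0001_0 -autorename -noreserve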


grahn

Any reason why you are on SMO 3.2 and SDU 5.1?

Also, I thought there was some info in the Solaris IAG that went over setting up zones with SDU... have you looked through that guide?

Sent from my iPhone

ACHOU_SIMG

Freddy,

Thanks for the questions. I use these specific versions of SMO and SDU because that is the only combination supported in my Solaris 10 environment according to the Interoperability Matrix. I would be more than happy to go to the latest release, but sadly it only supports Solaris 11.

The SDU IAG briefly mentions setting things up in Solaris zones, but it apparently is not complete. I hit additional errors while setting it up, but eventually got SDU and SMO installed and configured in the zone, hence the successful backups. It is the cloning where I am stuck.

I have dug into the SD trace log and confirmed that SMO grabs the NFS mount options from the existing mounts, so I expect the NFS "zone=" option got picked up. However, I am not sure it failed because of that option; I suspect "permission denied" is the root cause. But then, I thought only root can mount? If that is the case, how can a normal user such as the DBA mount the snapshot?
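
For anyone curious, this is how I looked at the mount options SDU inherits (standard Solaris commands, run on the DB host; the zone name is the one from the error above, so yours will differ):

grep zone= /etc/mnttab
nfsstat -m

I was checking whether the zone=erpuat01d_130309 option that SDU passed to mount shows up against the existing /netapp01/uat01data mount there.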

Thank you.

James_Pohl

What does your filer have for this option?

options nfs.export.auto-update

If you are like us, you have turned off the automatic creation of exports for new volumes:

options nfs.export.auto-update off

If you set it back on,

options nfs.export.auto-update on

it will create an export that is available to all hosts, but your system would then be able to connect via NFS.
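
If you want to check, something like this on the filer console should show it (7-Mode syntax; the volume name is taken from the error in the original post, so treat this as a sketch rather than exact commands for your setup):

filer1> options nfs.export.auto-update
filer1> exportfs -q /vol/erp_uat01_data
filer1> options nfs.export.auto-update on

The first two just show the current setting and the export options on the parent volume; the last one turns auto-update back on so the temporary SnapManager_* clone volumes get an export when SnapDrive creates them.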

Hope this helps.