
SMO Clone Issue After Server Move

KenO

Greetings,

 

We are running into a cloning issue and looking for advice/thoughts.

 

Using SMO, we are cloning to a secondary destination server. The process "was" working before the secondary server was rebuilt.

The same settings were applied to the new secondary server.

The SOURCE and DESTINATION servers are on different subnets for data access, but both can communicate with the MGMT IP (the primary storage system IP).

 

A couple of things that I am unclear on:

1. What exactly is the datapath IP used for, and do we really need it?

2. Where is SMO obtaining the "192.168.134.114" address from? The SnapDrive config? The hosts file?
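For context, the datapath mapping in question is the one shown below under CONFIGS. As far as I understand it, it can be viewed and changed with something like the following, although the exact "config set -mgmtpath" syntax and argument order may differ by SnapDrive version, so treat this as a sketch rather than gospel:

# snapdrive config list -mgmtpath
# snapdrive config set -mgmtpath 10.2.1.14 192.168.134.114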

 

We have tried multiple configuration changes to no avail.

SNAPDRIVE communication with the storage systems has been verified on all hosts and works correctly.

 

Any help would be greatly appreciated.

 

 

[ERROR] SMO-13032: Cannot perform operation: Clone Create.  Root cause: SMO-11007: Error cloning from Snapshot copy: FLOW-11019: Failure in ExecuteConnectionSteps: SD-00027: Error connecting filesystem(s) [/data] from snapshot smo_ebst_ebst3_f_h_1_8a829413552fe27401552fe2787b0001_0: SD-10016: Error executing snapdrive command "/usr/sbin/snapdrive snap connect -fs /data /data_AUTOCLONE -destfv 192.168.134.114:/vol/oranfs_ebsdbuatrac_data SnapManager_20160608093343286_oranfs_ebsdbuatrac_data -snapname 192.168.134.114:/vol/oranfs_ebsdbuatrac_data:smo_ebst_ebst3_f_h_1_8a829413552fe27401552fe2787b0001_0 -autorename -noreserve": 0001-136 Admin error: Unable to log on to storage system: 192.168.134.114
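For what it is worth, the failing step can be reproduced outside of SMO by re-running the snapdrive command from the error by hand on the destination host, which should show whether the problem sits with SMO or with SnapDrive itself (command copied verbatim from the error above):

# /usr/sbin/snapdrive snap connect -fs /data /data_AUTOCLONE \
    -destfv 192.168.134.114:/vol/oranfs_ebsdbuatrac_data SnapManager_20160608093343286_oranfs_ebsdbuatrac_data \
    -snapname 192.168.134.114:/vol/oranfs_ebsdbuatrac_data:smo_ebst_ebst3_f_h_1_8a829413552fe27401552fe2787b0001_0 \
    -autorename -noreserve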

 

CONFIGS

-------------

 

SOURCE

# snapdrive config list
username     appliance name   appliance type
-----------------------------------------------
sdora-user   houfiler4a       StorageSystem
svc-sdora    houvntapoc1      DFM

 

# snapdrive config list -mgmtpath
system name   management interface   datapath interface
-------------------------------------------------------
houfiler4a    10.2.1.14              192.168.133.44

 

DESTINATION (Secondary)

# snapdrive config list
username     appliance name   appliance type
-----------------------------------------------
sdora-user   houfiler4a       StorageSystem
svc-sdora    houvntapoc1      DFM

# snapdrive config list -mgmtpath
system name   management interface   datapath interface
-------------------------------------------------------
houfiler4a    10.2.1.14              192.168.134.114
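One thing that stands out when pasting these in: on both hosts the SnapDrive login is stored against the storage system name "houfiler4a", while SMO is addressing the clone by the raw datapath IP 192.168.134.114, which is exactly the name in the "Unable to log on" error. If SnapDrive keys its credentials by the literal name it is handed, registering a login for the IP as well might be worth a try, along the lines of the following (syntax from memory, it prompts for the password, so double-check against the SnapDrive for UNIX docs):

# snapdrive config set sdora-user 192.168.134.114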

 

Cheers!

 

Ken

 


Jeff_Yao

Try the solution below; if it doesn't work, you might need to open a case.

 

Make a change in the snapdrive.conf file:

snapcreate-check-nonpersistent-nfs=off

Check that entries exist in /etc/fstab for the specified NFS filesystems.
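For example (the snapdrive.conf path and the mount options below are only illustrative, adjust them for your install):

# grep nonpersistent-nfs /opt/NetApp/snapdrive/snapdrive.conf
snapcreate-check-nonpersistent-nfs=off

# grep /data /etc/fstab
houfiler4a:/vol/oranfs_ebsdbuatrac_data  /data  nfs  rw,bg,hard,vers=3,tcp  0 0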

KenO

Thanks,

 

The snapdrive configs are the same pre/post migration and the fstab is good.

 

The problem is that each server is on a different storage VLAN.

The target server receiving the clone is trying to mount with the source server's storage VLAN IP.

It worked before the source server was migrated to a new VM.

 

We did notice that the fstab on the original source server had the data volume mounted using the filer hostname, whereas on the new source server it is mounted using the IP.

Will test this afternoon by mounting using the filer hostname.
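Roughly, that means changing the /data line in /etc/fstab on the new source server from the IP to the filer hostname, along these lines (mount options are placeholders, ours differ):

<current-filer-IP>:/vol/oranfs_ebsdbuatrac_data  /data  nfs  rw,bg,hard,vers=3,tcp  0 0
  becomes
houfiler4a:/vol/oranfs_ebsdbuatrac_data          /data  nfs  rw,bg,hard,vers=3,tcp  0 0

followed by a umount/mount of /data before retrying the clone.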

 

Ken

 

 

 

 
