Data Backup and Recovery
Hi
We have been testing SnapDrive 5.3 for Linux with clustered Data ONTAP (cDOT) 8.3.1P1.
The server OS is RHEL 6.4.
We have discovered that snapdrive snap restore fails when using NFS mounted volumes.
It looks like snapdrive initially unmounts the filesystem and then tries to mount it on a mountpoint created in the /tmp directory.
The command that is invoked looks like:
mount -t <filer:/volume> /tmp/SDU_<datetimestamp>_<pid>
This fails because the filesystem type is not specified.
I ran the command under strace and could not see any attempt to determine the fstype.
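For context, on RHEL 6 mount(8) cannot guess the filesystem type of a remote NFS export, so the type has to be passed explicitly with -t nfs. A minimal sketch of what a working temporary mount would look like, and of how the failing mount call can be confirmed with strace (the filer name, export path, and mountpoint below are placeholders, not values from this environment):

```shell
# Create a temporary mountpoint (SnapDrive names its own /tmp/SDU_<timestamp>_<pid>)
mkdir -p /tmp/SDU_example

# Without "-t nfs", mount cannot infer the type of a remote export and
# prints its usage text -- the failure described above. With the type
# given explicitly, the same mount succeeds:
mount -t nfs filer01:/vol/myvolume /tmp/SDU_example

# To see exactly what SnapDrive executes, trace child processes and
# filter the log for the mount invocations:
strace -f -e trace=execve -o /tmp/sdu.trace snapdrive snap restore ...
grep '/bin/mount' /tmp/sdu.trace
```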
Is this a bug or a configuration issue?
Thanks.
Hi,
It could be caused by this bug: http://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=826616
Hi Sahana,
The problem is not related to the bug you referred to: we have both A and PTR records for the LIF used to access the volume and for the SVM that owns the volume.
The error message we get is:
0002-890 Admin error: Error mounting temp directory /tmp/SDU_05202016_122529_7049: Unexpected error from mount:
This is followed by the mount 'usage' output.
It looks like snapdrive is unable to determine the filesystem type (in this case 'nfs').
It turned out that the problem was in the way the snapdrive command was structured.
We were doing:
snapdrive snap restore -fs /path/to/filesystem -snapname <SVMname>:/path/to/volume:snap_name
It should be:
snapdrive snap restore -fs /path/to/filesystem -snapname snap_name
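In other words, -snapname takes the bare snapshot name; SnapDrive resolves the SVM and volume from the -fs path itself. A hedged sketch of the working pattern (the mountpoint and snapshot name below are placeholders, not values from this environment):

```shell
# List the snapshots available for the filesystem to find the short
# snapshot name (snapdrive snap list enumerates them per -fs path)
snapdrive snap list -fs /mnt/nfs_data

# Restore using only the short name -- not the <SVMname>:/vol/...:snap_name form
snapdrive snap restore -fs /mnt/nfs_data -snapname hourly.2016-05-20_1205
```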