2015-05-07 09:17 AM
This may be a basic question, but I haven't managed to find anyone who knows the answer where I am, and it's causing problems.
See below - when I run snapdrive storage list for a filesystem, it reports one host (filer1-mgmt), but mount shows a different one (filer1-stg). filer1-mgmt is not actually accessible, and I can't work out where snapdrive is picking it up from.
# snapdrive storage list -fs /u02/oradata/data1
NFS device: filer1-mgmt:/vol/oradata01_data1 mount point: /u02/oradata/data1 (non-persistent)
filer1-stg:/vol/oradata01_data1 on /u02/oradata/data1 type nfs (rw,hard,bg,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600,addr=10.75.183.13)
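In case it helps anyone checking the same mismatch: SnapDrive for UNIX keeps its own storage-system configuration, which can be listed and compared against what the OS actually has mounted. A rough sketch of the checks (the filer names are from the output above; whether a management-path mapping shows up depends on your SDU config):

```shell
# Show the storage systems (and any management-path mappings) that
# SnapDrive itself knows about - a mapping here can explain why it
# reports filer1-mgmt while the OS mounted filer1-stg.
snapdrive config list

# Compare against what the kernel actually has mounted.
mount | grep /u02/oradata/data1

# Check whether the name SnapDrive reports even resolves / answers.
getent hosts filer1-mgmt
ping -c 1 filer1-mgmt
```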
Problems happen on snapdrive restore, for which I'm doing:
# snapdrive snap restore -snapname filer1-stg:/vol/oradata01_data1:test -fs /u02/oradata/data1 -vbsr execute -force -noprompt
this unmounts /u02/oradata/data1 at the start (and removes the entry from fstab), does the restore, but then fails to remount, as it tries to use filer1-mgmt, which isn't accessible.
0001-029 Command error: FS /u02/oradata01/data1 is not found
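When it dies at this point the restore itself has completed and the filesystem is just left unmounted, so one recovery step that should work is remounting by hand against the reachable interface, reusing the options from the original mount entry above (a sketch, not something snapdrive does for you):

```shell
# Remount by hand against the reachable data interface, with the same
# options as the original NFS mount entry.
mount -t nfs \
    -o rw,hard,bg,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600 \
    filer1-stg:/vol/oradata01_data1 /u02/oradata/data1
```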
I've worked around it for now by adding a hosts entry for filer1-mgmt pointing at filer1-stg's address, but this is not actually the correct IP address for filer1-mgmt and is bound to cause confusion.
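i.e. something like this in /etc/hosts, where 10.75.183.13 (from the mount output above) is really filer1-stg's address, not filer1-mgmt's:

```shell
# /etc/hosts workaround - deliberately points filer1-mgmt at
# filer1-stg's data address, which is why it's misleading.
10.75.183.13   filer1-mgmt
```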
So I just need to work out how to tell snapdrive not to look at filer1-mgmt.
2015-05-08 09:12 AM
Well, thanks! I wasn't aware of that, and our NetApp guy here noticed that the -mgmt config differed between the node that failed and the one that works.
The one that works has no -mgmt config, whereas the one that fails does (and it looked odd: the mgmt and storage interfaces appeared to be switched).
So I've removed the -mgmt config on the failing node, and now the restore works.
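For anyone hitting the same thing, a sketch of the kind of commands involved, assuming the mapping was created with SDU's management-path feature (the exact -mgmtpath syntax can vary by SnapDrive version, so check `snapdrive config list` output on your own nodes first):

```shell
# List the current config, including any management-path mappings.
snapdrive config list

# Remove the offending management-path entry (sketch - verify the
# argument your SDU version expects for config delete -mgmtpath).
snapdrive config delete -mgmtpath filer1-mgmt
```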
Now I need to read up on it...