This may be a basic question, but I haven't found anyone here who knows the answer, and it's causing real problems.
See below: when I run snapdrive storage list for a filesystem, it reports one host (filer1-mgmt), but mount shows a different one (filer1-stg). filer1-mgmt is not actually accessible, and I can't work out where snapdrive is picking it up from.
# snapdrive storage list -fs /u02/oradata/data1
NFS device: filer1-mgmt:/vol/oradata01_data1    mount point: /u02/oradata/data1 (non-persistent)
filer1-stg:/vol/oradata01_data1 on /u02/oradata/data1 type nfs (rw,hard,bg,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600,addr=10.75.183.13)
The problems happen on snapdrive snap restore, which I'm running as:

# snapdrive snap restore -snapname filer1-stg:/vol/oradata01_data1:test -fs /u02/oradata/data1 -vbsr execute -force -noprompt
This unmounts /u02/oradata/data1 at the start (and removes its entry from fstab) and completes the restore, but then fails to remount, because it tries to use filer1-mgmt, which isn't accessible. It fails with:

0001-029 Command error: FS /u02/oradata01/data1 is not found
I've worked around it for now by adding a hosts entry that points the filer1-mgmt name at filer1-stg's address, but that is not filer1-mgmt's real IP address, and it's bound to cause confusion later.
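For reference, the stopgap hosts entry looks like this (10.75.183.13 is filer1-stg's address taken from the addr= field in the mount output above, not filer1-mgmt's real IP):

# /etc/hosts - temporary workaround only
10.75.183.13    filer1-mgmt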
So what I really need is a way to tell snapdrive to use filer1-stg, and stop it resolving and mounting via filer1-mgmt.
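The kind of thing I'm hoping exists is a management-path-to-data-path mapping in SnapDrive itself, something along these lines (I haven't been able to confirm whether my SnapDrive version supports this, so treat it as a guess rather than a known-good fix):

# untested guess: map the management interface name to the data interface name,
# then check what snapdrive thinks the paths are
# snapdrive config set -mgmtpath filer1-mgmt filer1-stg
# snapdrive config list -mgmtpath

If anyone can confirm whether that's the right mechanism, or where snapdrive gets the filer1-mgmt name from in the first place, that would be much appreciated.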