
snapdrive storage list - where does the NFS device name come from?

Hi,

 

This may be a basic question, but I haven't managed to find anyone who knows the answer where I am, and it's causing problems.

 

See below - when I run snapdrive storage list for a filesystem, it reports one host (filer1-mgmt), but when I look at mount it's a different one (filer1-stg). filer1-mgmt is not actually accessible, and I can't work out where snapdrive is picking it up from.

 

# snapdrive storage list -fs /u02/oradata/data1
NFS device: filer1-mgmt:/vol/oradata01_data1 mount point: /u02/oradata/data1 (non-persistent)

 

# mount

filer1-stg:/vol/oradata01_data1 on /u02/oradata/data1 type nfs (rw,hard,bg,nointr,rsize=32768,wsize=32768,tcp,actimeo=0,vers=3,timeo=600,addr=10.75.183.13)

 

Problems happen on snapdrive restore, for which I'm doing:
# snapdrive snap restore -snapname filer1-stg:/vol/oradata01_data1:test -fs /u02/oradata/data1 -vbsr execute -force -noprompt


This unmounts /u02/oradata/data1 at the start (and removes its entry from fstab) and does the restore, but then fails to remount, as it tries to use filer1-mgmt, which isn't accessible.
It fails with:
0001-029 Command error: FS /u02/oradata01/data1 is not found

I've fixed it for now by adding a hosts entry for filer1-mgmt pointing at filer1-stg's address, but this is not actually the correct IP address for filer1-mgmt and is bound to cause confusion.

So I just need to work out how to tell snapdrive not to look at filer1-mgmt.

Cheers

 

Re: snapdrive storage list - where does the NFS device name come from?

Did you try setting the filer management interface in SD (snapdrive config set -mgmtpath)? See "Multiple subnet configuration" in the SD manual.
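
Inspecting and correcting that mapping might look roughly like this. This is only a sketch based on the SnapDrive for UNIX CLI: the interface names are the ones from this thread, and the exact `-mgmtpath` argument order (management interface vs. data interface) should be confirmed against your version's manual before running it:

```shell
# Show the current SnapDrive configuration, including any
# management/data interface pairings (-mgmtpath entries)
snapdrive config list

# Pair the management interface with the data (storage) interface,
# so SnapDrive uses filer1-mgmt only for management traffic and
# filer1-stg for the NFS data path.
# NOTE: argument order is an assumption here - verify it in the
# "Multiple subnet configuration" section of the SD manual.
snapdrive config set -mgmtpath filer1-mgmt filer1-stg
```

If the entries are simply wrong (as turned out to be the case below), removing or re-setting the pairing is the fix; the manual's config section covers how to delete an entry.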

Re: snapdrive storage list - where does the NFS device name come from?

Well, thanks! I wasn't aware of that, and our NetApp guy here noticed that the -mgmtpath config was different on the node that fails vs the one that works.

The one that works has no -mgmtpath config, whereas the one that fails does (and it looked odd - the management and storage interfaces appeared to be switched).

So I've removed the -mgmtpath config on the failing node, and now the restore works.

Now I need to read up on it...