ONTAP Discussions

NAS volume to LIF mapping

kennvarg
6,858 Views

Hello All,

 

I have a quick question. How can I determine which LIF presents which volume? Here's the scenario: I have a few volumes in the "SVM1" vserver, and I also have a few LIFs in the same vserver. The catch here is that we bind the LIFs to ifgrps. So how can I see which LIF (e.g. svm1_10) maps to a certain volume (e.g. nfs_volume_1) when they're not bound to a physical port but to an ifgrp? Is there a command?

 

-------------------------- Volumes ---------------------------------------

 

cluster1::*> vol show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
SVM1      nfs_volume_1 n01_ssd      online     RW      10.52TB     2.16TB   79%
SVM1      nfs_volume_2 n02_ssd      online     RW       5.24TB    716.3GB   86%
SVM1      nfs_volume_3 n03_ssd      online     RW       9.79TB     1.64TB   83%
SVM1      nfs_volume_4 n04_ssd      online     RW       7.24TB    861.3GB   88%
SVM1      nfs_volume_5 n05_ssd      online     RW       8.01TB     1.38TB   82%

---------------------------- LIFs ---------------------------------------------------------

cluster1::*> net int show
  (network interface show)
            Logical    Status     Network            Current  Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node     Port    Home
----------- ---------- ---------- ------------------ -------- ------- ----
SVM1
            svm1_10    up/up      10.1.12.100/22     node-02  a0a     true
            svm1_11    up/up      10.1.12.101/22     node-01  a0a     true
            svm1_12    up/up      10.1.12.102/22     node-02  a0a     true
            svm1_13    up/up      10.1.12.103/22     node-01  a0a     true
            svm1_15    up/up      10.1.12.15/22      node-01  a0a     true

7 REPLIES

SpindleNinja
6,833 Views

What version of ONTAP are you running? This might be helpful:

 

https://whyistheinternetbroken.wordpress.com/2019/11/08/ontap97-feature-sneak-peek-nfs-client-to-volume-mapping/
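If you do end up on 9.7+, the feature in that post is exposed through the connected-clients view. Roughly (this is from memory, so double-check the exact command name and columns on your release):

cluster1::*> vserver nfs connected-clients show -vserver SVM1

That lists each NFS client IP along with the LIF and the volume it's hitting, which is basically the volume-to-LIF mapping you're asking about.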

 

Curious though, any idea why you're configured that way? It just sounds like lots of VLANs connected to a single SVM.

paul_stejskal
6,774 Views

LIFs are mapped to an SVM, not a volume. The only exception is SAN, where LIFs are more node-specific for direct access. You can get to all volumes in an SVM from any of its LIFs thanks to the cluster interconnect.
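For example (the mount points below are just placeholders, and I'm assuming nfs_volume_1 is junctioned at /nfs_volume_1): you can check the junction path with

cluster1::*> volume show -vserver SVM1 -volume nfs_volume_1 -fields junction-path

and then mount that same export through any of the SVM's data LIFs from a client:

mount -t nfs 10.1.12.100:/nfs_volume_1 /mnt/nfs_volume_1
mount -t nfs 10.1.12.101:/nfs_volume_1 /mnt/nfs_volume_1

Both reach the same volume; ONTAP routes the request to the node that owns it.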

SpindleNinja
6,770 Views

I'm thinking he means he has a volume and a LIF per VLAN, all in the same SVM.

Something like..  

 

SVM -> Volume1 -> Lif_vlan1 -> client

SVM -> Volume2 -> Lif_vlan2 -> client

SVM -> Volume3 -> Lif_vlan3 -> client

SVM -> Volume4 -> Lif_vlan4 -> client

etc.

paul_stejskal
6,768 Views

So if you want to know which client is accessing which LIF, the NFS stats in the blog posted above will work on ONTAP 9.7; otherwise, the only way to know (for CIFS) would be a trace.
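Pre-9.7 you can at least see which clients are connected to which LIF (though not which volume) with something along these lines (may need advanced privilege; exact columns vary by release):

cluster1::*> network connections active show -vserver SVM1

That shows the remote client address against the local LIF address for each open connection.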

kennvarg
6,754 Views

Hi Guys

 

I appreciate your thoughts on this. 

 

This is an NFS-only environment. No CIFS, no iSCSI, no FC. We are running 9.6P5. So far, the only way I've found to get what I need is to connect to the vCenter server; from the datastore summary I can see which IP and junction path were used to mount the datastore. That way I can determine which LIF is being used to reach that volume.

 

I am looking for a way to see which LIF I need to use to mount a certain volume, but the more I dig into it, the more I believe I can use any LIF within the same SVM as long as I specify the correct junction path. The traffic will then go through the cluster interconnect and hit the home node for the volume.
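I guess the other way to line this up is to check which node owns the volume's aggregate and which node a LIF currently sits on, e.g. (field names may differ slightly by release):

cluster1::*> volume show -vserver SVM1 -volume nfs_volume_1 -fields aggregate
cluster1::*> storage aggregate show -aggregate n01_ssd -fields node
cluster1::*> network interface show -vserver SVM1 -fields curr-node,address

If the LIF's current node matches the aggregate's node, the mount stays local; otherwise it crosses the cluster interconnect.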

 

PS1: This was already configured before I joined the company.

PS2: I edited the image so it doesn't show private company info, but the concept should be clear.

 

[image: kennvarg_0-1589910250561.png]

Thanks!

 

SpindleNinja
6,743 Views

Got ya. I've seen setups like that, and I do see the benefit of them (for example, you can move the LIF with the volume if you move it to a separate node), but it depends on the environment. For setups like that, your best bet is to give the volume and the LIF the same name, just with _lif or something appended to the LIF. And besides the new feature in 9.7, your best bet is the VMware side: run RVTools against it for an output.

unixnation
6,499 Views

Hi,

 

You're correct: you will be able to mount the volumes via any LIF assigned to the SVM that has the NFS data protocol enabled. You can see these interfaces in the relevant SVM by running a command like:

 

network interface show -data-protocol nfs -vserver <svm>

 

The man pages on the system are also a good source of information for this sort of thing. If you run the command on your filer you should see what I mean:

man network interface show

 

We do exactly what a previous poster mentioned: creating an interface per volume or service, depending on what's appropriate, with a naming scheme that indicates which volume we have configured systems to access via that interface. That way we can move the interface with the volume and don't have to add the latency of traversing the cluster network, etc.
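As a rough sketch of what "move the interface with the volume" looks like (the names below are just the ones from this thread, and the destination aggregate and port are placeholders):

cluster1::*> volume move start -vserver SVM1 -volume nfs_volume_1 -destination-aggregate n02_ssd
cluster1::*> network interface modify -vserver SVM1 -lif svm1_10 -home-node node-02 -home-port a0a
cluster1::*> network interface revert -vserver SVM1 -lif svm1_10

That keeps the client-facing IP the same while re-homing it to the node that now owns the volume.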

 

Cheers,

Steve
