I have a quick question: how can I determine which LIF presents which volume? Here's the scenario. I have a few volumes in the "SVM1" vserver, and a few LIFs in the same vserver. The catch is that the LIFs are bound to ifgrps. So how can I see which LIF (e.g. svm1_10) maps to a certain volume (e.g. nfs_volume_1) when they're not bound to a physical port but instead to an ifgrp? Is there a command?
cluster1::*> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
LIFs are mapped to an SVM, not a volume. The only exception is SAN, where access is more node-specific for direct paths. You are able to reach all volumes in an SVM from any of its LIFs thanks to the cluster interconnect.
This is an NFS-only environment. No CIFS, no iSCSI, no FC. We are running 9.6P5. So far the only way I found to get what I need is to connect to the vCenter server; from the datastore summary I can see which IP and junction path was used to mount the datastore. That way I can determine which LIF is being used to reach that volume.
I am looking for a way to see which LIF I need to use to mount a certain volume, but the more I dig into this, the more I believe I can use any LIF within the same SVM as long as I specify the correct junction path. The traffic will then go through the cluster interconnect and hit the home node of the volume.
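If you want to pick a LIF that avoids the cluster interconnect, one way (a sketch; substitute your own names, and note that the node name in the second command is just whatever the first command returned) is to find the volume's home node and then list the NFS LIFs currently on that node:

cluster1::> volume show -vserver SVM1 -volume nfs_volume_1 -fields node
cluster1::> network interface show -vserver SVM1 -data-protocol nfs -curr-node <node_from_previous_output>

Any LIF the second command returns is local to the volume, so mounts through it won't traverse the cluster network.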
PS1: I joined the company and this was already configured
PS2: I edited the image to not show private company info but the concept should be clear.
Got ya. I've seen setups like that, and I do see the benefit (for example, you can move the LIF along with the volume if you move it to a different node), but it depends on the environment. For setups like that, your best bet is to give the volume and the LIF the same name, with something like _lif appended to the interface. Beyond the new feature in 9.7, your best bet is the VMware side: run RVTools against it for an output.
You're correct, you will be able to mount the volumes via any LIF assigned to the SVM with the NFS data protocol enabled. You could see these interfaces in the relevant SVM by running a command like:
network interface show -data-protocol nfs -vserver <svm>
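Given the ifgrp setup in the question, it may also help to narrow the output to where each NFS LIF currently lives. Something like this (the -fields list is one reasonable choice, not the only one):

network interface show -vserver <svm> -data-protocol nfs -fields address,curr-node,curr-port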
The man pages on the system are also a good source of information for this sort of thing. If you run the command on your filer you should see what I mean:
man network interface show
We do exactly what a previous poster mentioned: creating an interface per volume or service, depending on what's appropriate, with a naming scheme that indicates which volume systems are configured to access via that interface. That way we can move the interface with the volume, don't add the latency of traversing the cluster network, etc.
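A rough sketch of that pattern, assuming hypothetical node, port, and address values (and note that a volume move does not move the LIF for you; you migrate it to follow the volume):

cluster1::> network interface create -vserver SVM1 -lif nfs_volume_1_lif -role data -data-protocol nfs -home-node cluster1-01 -home-port a0a-100 -address 192.0.2.10 -netmask 255.255.255.0
cluster1::> volume move start -vserver SVM1 -volume nfs_volume_1 -destination-aggregate aggr1_node02
cluster1::> network interface migrate -vserver SVM1 -lif nfs_volume_1_lif -destination-node cluster1-02 -destination-port a0a-100

If the move is permanent, you would also update the LIF's home with network interface modify -home-node / -home-port so a later network interface revert doesn't send it back to the old node.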