Network and Storage Protocols

conflicting info on VIF settings

sigmajdblock

I set up a new FAS2040 with the setup script and configured e0d and e0b as an LACP VIF.  When we look at the interface through FilerView, it shows up as a single-mode VIF.  I have a theory as to why that is, but I want to see what others think.  Also, when the 'vif status' command is run, the load isn't being balanced across both interfaces.  I have the load balancing set to IP, but we have 4 ESX servers with sequential IPs hitting the NFS share for the VMware datastore, so from what I know the load should be close to balanced unless one node is doing MUCH more activity than the others.
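
For reference, I believe the setup script effectively ran something like the following behind the scenes (the 'NFS' VIF name and interfaces match the output below; the IP address is just a placeholder):

host> vif create lacp NFS -b ip e0b e0d
host> ifconfig NFS 192.168.0.10 netmask 255.255.255.0 up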

Here is the output of the 'vif status' command for the NFS VIF.

host> vif status NFS

default: transmit 'IP Load balancing', VIF Type 'multi_mode', fail 'log'

NFS: 2 links, transmit 'IP Load balancing', VIF Type 'lacp' fail 'default'
     VIF Status    Up     Addr_set
    up:
        e0d: state up, since 15Jun2010 10:04:05 (6+00:28:46)
                mediatype: auto-1000t-fd-up
                flags: enabled
                active aggr, aggr port: e0b
                input packets 161082107, input bytes 222587680735
                input lacp packets 20920, output lacp packets 19740
                output packets 78716661, output bytes 58320748457
                up indications 9, broken indications 5
                drops (if) 0, drops (link) 0
                indication: up at 15Jun2010 10:04:05
                        consecutive 0, transitions 14
        e0b: state up, since 15Jun2010 10:03:58 (6+00:28:53)
                mediatype: auto-1000t-fd-up
                flags: enabled
                active aggr, aggr port: e0b
                input packets 578535, input bytes 52371877
                input lacp packets 20924, output lacp packets 19736
                output packets 45087045, output bytes 15449186252
                up indications 8, broken indications 4
                drops (if) 0, drops (link) 0
                indication: up at 15Jun2010 10:03:58
                        consecutive 0, transitions 12

I want to make sure that this is set up correctly, but it doesn't look like it is at the moment.

Thanks,

John

3 REPLIES

spence

It's a limitation with FilerView.

Please read http://www.netapp.com/us/library/technical-reports/tr-3749.html. Add an IP alias on the LACP VIF so that there are two IP addresses representing the VIF, then manually balance your NFS datastores across the two addresses; that is the best practice. If you have only one datastore, an LACP VIF will buy you little more than failover redundancy, because of how we use the IP address hash for load balancing.
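
For example, assuming the VIF already carries 192.168.0.10 and picking 192.168.0.11 as the (hypothetical) alias, it would look something like this on the console, with the same line added to /etc/rc so it persists across reboots:

host> ifconfig NFS alias 192.168.0.11 netmask 255.255.255.0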

sigmajdblock

The way I understand it, it should work like this: ESX1, say 192.168.0.3 -> 192.168.0.10 (NetApp NFS), will use path 1; ESX2, say 192.168.0.4 -> 192.168.0.10, should use path 2; ESX3 (.5) should be back on path 1, and so on.  There are only 5 ESX servers hitting the same datastore on the NetApp through the same IP, and the datastore is small (500 GB).
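
If I'm reading TR-3749 right, the IP hash is roughly an XOR of the last octets of source and destination, modulo the number of links, which for the addresses above would give:

    192.168.0.3 -> .10 :  3 XOR 10 =  9,  9 mod 2 = 1  -> link 1
    192.168.0.4 -> .10 :  4 XOR 10 = 14, 14 mod 2 = 0  -> link 0
    192.168.0.5 -> .10 :  5 XOR 10 = 15, 15 mod 2 = 1  -> link 1
    192.168.0.6 -> .10 :  6 XOR 10 = 12, 12 mod 2 = 0  -> link 0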

Is that correct?

Thanks,

John

Sebastian_Goetze

Hi John,

the OUTgoing traffic looks pretty balanced. That means the NetApp is doing its job...

The INcoming traffic is not balanced. You've got to give the ESXs a little help:

Give the VIF(s) alias(es). General rule of thumb: for n interfaces combined, use n-1 aliases (plus the 1 default address = n addresses).

Distribute the datastores to balance the load. Alternatively, give the ESXs two IP addresses to balance over.

Main thing: force ESX to use 2 different source/destination pairs, so that the INcoming paths (from a NetApp perspective) will be balanced.
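
On the ESX side that just means mounting some datastores through one filer address and some through the other, e.g. from the service console (the datastore and volume names are placeholders, 192.168.0.11 being the alias):

# esxcfg-nas -a -o 192.168.0.10 -s /vol/vol_ds1 datastore1
# esxcfg-nas -a -o 192.168.0.11 -s /vol/vol_ds2 datastore2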

Check TR-3749, currently chapter 3.5, p. 34, where it's explained and illustrated.

Hope that helps

Sebastian    
