
Load balancing does not work with NIC Teaming on ESX 4.0 update 2

Environment :

ESX 4.0 update 2

FAS 3140

Nortel Switches

We have teamed 4 NICs on a vSwitch on the ESX host and configured "IP Hash" as the load-balancing policy.

On the Nortel side, these 4 ports have been aggregated together using LACP.

NetApp is also using LACP

During testing, we found that traffic from the VMware host to the Nortel switch uses only 1 of the 4 configured ports. If we disable that interface, all the traffic fails over to another interface, but again only one interface carries traffic. What should be done to enable load balancing across the NICs that have been teamed together? Any suggestions?
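(Editor's note: to confirm the vSwitch really has all four vmnics attached before digging further, the standard ESX 4 service-console tools can be used. This is a CLI fragment for reference only, to be run on the ESX host itself.)

```shell
# List all vSwitches with their attached uplinks, port groups, and
# teaming configuration; confirm all four vmnics appear as uplinks
esxcfg-vswitch -l

# List physical NICs with their link state and speed; confirm all
# four are up and negotiated at the expected speed
esxcfg-nics -l
```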

Thanks

Amit

Re: Load balancing does not work with NIC Teaming on ESX 4.0 update 2

Hi Amit,

Please take a look at TR-3749, NetApp and VMware vSphere Storage Best Practices. It deals with the exact issue you are experiencing.

Cheers,

-Eric

Re: Load balancing does not work with NIC Teaming on ESX 4.0 update 2

Thanks, Eric. I did use the recommendations outlined in TR-3749 when designing the environment, but it appears something is still missing.

The traffic should be going over multiple NICs (the ones teamed together in the vSwitch), but that is not happening. Only one NIC is being used.

Thanks

Re: Load balancing does not work with NIC Teaming on ESX 4.0 update 2

In order for ESX to take advantage of "Route based on IP hash" as the load-balancing policy, you'll need multiple IP addresses (aliases) on the VIF on the controller. If you want traffic over all 4 ports, you'll need 4 datastores, each mounted using a different IP address (an alias on the VIF). If you use RCU or the provisioning plug-in in VSC 2.0, this balancing happens automatically for you (as long as the aliases are on the VIF).

Re: Load balancing does not work with NIC Teaming on ESX 4.0 update 2

Thanks again, Eric.

Here is how my storage controllers are configured:

(output from /etc/rc )

...

...

vif create single vifa e0a e0b

vif create lacp vifb -b ip e4a e4c

vif create lacp vifc -b ip e4b e4d

vif create single svifa vifb vifc

....

....

The IP is configured on svifa, and I do not have any aliases configured.

I am, however, using VSC 2.0 for provisioning.

Thanks

Re: Load balancing does not work with NIC Teaming on ESX 4.0 update 2

Let's say you have 192.168.0.201 configured as the IP for svifa. It might look something like this:

ifconfig svifa 192.168.0.201 netmask 255.255.255.0 up


You can then add some additional ip addresses like this:

ifconfig svifa alias 192.168.0.202
ifconfig svifa alias 192.168.0.203
ifconfig svifa alias 192.168.0.204

When VSC provisions new datastores, it will mount them like this:

192.168.0.201:/vol/newDatastoreA

192.168.0.202:/vol/newDatastoreB

192.168.0.203:/vol/newDatastoreC

192.168.0.204:/vol/newDatastoreD

If you start a VM on newDatastoreA and one on newDatastoreB, you should see ESX using 2 different ethernet interfaces.
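(Editor's note: the effect Eric describes can be modeled with a little arithmetic. The sketch below assumes the commonly cited approximation of ESX's "Route based on IP hash" policy: XOR the last octets of the source and destination IPs and take the result modulo the number of active uplinks. The VMkernel IP 192.168.0.50 is hypothetical; the exact hash is internal to ESX, so treat this as illustrative only.)

```shell
#!/bin/sh
# Model of the IP-hash uplink choice for the four alias mounts above.
# Assumption: uplink = (src last octet XOR dst last octet) mod uplinks.
SRC=50          # hypothetical ESX VMkernel IP: 192.168.0.50
UPLINKS=4       # four teamed vmnics in the vSwitch
for DST in 201 202 203 204; do      # the four svifa addresses/aliases
  echo "192.168.0.$DST -> vmnic$(( (SRC ^ DST) % UPLINKS ))"
done
```

With these particular addresses, each of the four datastore mounts happens to hash to a different uplink, which is why spreading the mounts across aliases spreads the traffic across NICs.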

Re: Load balancing does not work with NIC Teaming on ESX 4.0 update 2

I did configure an alias for svifa and mounted a datastore from the new IP, but the traffic is still going over just one NIC on the VMware side.

Amit

Re: Load balancing does not work with NIC Teaming on ESX 4.0 update 2

Your rc file says svifa is single-mode. That's part of the problem: only one NIC will be active at a time on the NetApp side. That, along with your general symptoms, leads me to my next question:

Are your switches, and the ports on the ESX and filer sides, configured with some stacking technology that allows link aggregation using active ports on both switches? I believe Nortel calls this SMLT or DMLT, depending on the switch family/model.

Peter

Re: Load balancing does not work with NIC Teaming on ESX 4.0 update 2

The svif is made up of vifb and vifc, each of which has 2 active interfaces.

But in any case, Eric's solution worked. We just had to remount the NFS datastores using the alias IPs.
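(Editor's note: for anyone following along, the remount can be done from the ESX 4 service console with esxcfg-nas. The datastore and volume names below are hypothetical placeholders; substitute your own. This is a CLI fragment to be run on the ESX host, not a script to execute elsewhere.)

```shell
# Remove the existing mount that points at the base svifa address
esxcfg-nas -d datastoreB

# Re-add it using one of the alias addresses, so the IP-hash policy
# can place this datastore's traffic on a different uplink
esxcfg-nas -a -o 192.168.0.202 -s /vol/newDatastoreB datastoreB

# Verify the resulting NFS mounts
esxcfg-nas -l
```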

Thanks

Amit

Re: Load balancing does not work with NIC Teaming on ESX 4.0 update 2

I did have a question for Eric: do the load-balancing address assignments get lopsided over time, or does it work out evenly? I am worried that the math "starts over" every time I open an instance of vSphere, and too many datastores will end up assigned to a single address. Thanks!