VMware Solutions Discussions

2 questions: "Multi-switch link aggregation configuration" with aliases and conflicting design recommendations

jamesconawaywwt
4,818 Views

I have configured a FAS2020 at a client's site for "Multi-switch link aggregation configuration," based on recommendations in the document "NetApp and VMware vSphere Storage Best Practices" (TR-3749). On page 34 you will see that the vifs are configured with alias IP addresses.

I ran into a problem implementing this configuration: after I added the aliases to the vifs, the ESX hosts started showing each NFS export twice.

As in:

ESX01 Server Storage list:

NFSEXPORT 1
NFSEXPORT 2
NFSEXPORT 1 (1)  duplicate
NFSEXPORT 2 (1)  duplicate

The only way I could get the NFS exports on the ESX host to show up as a single datastore again was to remove the alias.

My understanding of the purpose of adding the aliases was to enable two IP paths for the NFS traffic to travel on between the ESX host and the storage controller.
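For context, this is the alias setup in question. A minimal sketch of adding a second address to a vif on a 7-Mode controller; the interface name and addresses are assumptions, substitute your own:

```shell
# Assumed vif name (vif0) and 192.168.1.0/24 storage subnet.
ifconfig vif0 192.168.1.10 netmask 255.255.255.0        # primary address
ifconfig vif0 alias 192.168.1.11 netmask 255.255.255.0  # alias on the same vif
# Add the same lines to /etc/rc so the alias survives a reboot.
```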

Secondary question:

I also referred to the "Best Practices for Running VMware vSphere on Network Attached Storage" white paper from VMware, which recommends four NICs per server, teamed in pairs, with each pair going to a separate physical switch. (See Image #1 attached.)

This is not the design I adopted. I used the following from TR-3749, p. 34:

(See Image #2 attached.) Two pNICs per host, cross-connected to two stacked 3750s: the first pNIC goes to the first 3750 and the second pNIC to the second 3750. Because the switches are stacked, LACP can be configured and the ports can be EtherChanneled together.
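The switch side of that design looks roughly like the sketch below (Cisco IOS; port, VLAN, and channel-group numbers are assumptions). Note that on older 3750 stack IOS releases, cross-stack EtherChannel only supported mode "on" (static); LACP across stack members requires a newer release:

```shell
! Cross-stack port-channel spanning both stack members.
interface range GigabitEthernet1/0/1 , GigabitEthernet2/0/1
 switchport mode access
 switchport access vlan 100
 channel-group 10 mode active   ! LACP; use "mode on" with a static multimode vif
interface Port-channel10
 switchport mode access
 switchport access vlan 100
```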

Which is the better/right method?

3 REPLIES

aborzenkov

My understanding of the purpose of adding the aliases was to enable two IP paths for the NFS traffic to travel on between the ESX host and the storage controller.

The purpose of adding IP aliases is static load balancing across the aggregated ports; this is explained on the next page, 35, of TR-3749. You should never access a single NFS export via more than one IP address, and you must access the same NFS export using the same IP address on all ESX servers, otherwise they won't treat it as the same datastore. As shown on p. 35, half of the available datastores are accessed via the first alias and half via the second. Assuming proper switch configuration, this distributes the load between the two available links.
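The scheme above can be sketched with the classic ESX service-console syntax; the datastore names and addresses here are assumptions:

```shell
# Run identically on every ESX host, so each export is always reached
# through one and the same controller IP.
esxcfg-nas -a -o 192.168.1.10 -s /vol/datastore1 NFSEXPORT1   # via first alias
esxcfg-nas -a -o 192.168.1.11 -s /vol/datastore2 NFSEXPORT2   # via second alias
```

Because every host mounts a given export from the same address, vMotion and HA still see a single shared datastore.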

Which is the better/right method?

Personally I would favour a stack if the switch supports it. Keep in mind that in the case of NFS you will never have truly separate physical switches, because NFS has no failover capability of its own and relies on the underlying TCP/IP infrastructure to handle any transmission-path errors. So what you actually have is interconnected switches that still form a single fabric; IMHO that is more error-prone and harder to configure correctly than a (logically) single stacked switch.

jamesconawaywwt

I was able to figure out why the duplicate NFS exports were showing up: I was mounting the NFS shares on the ESX hosts using the DNS name of the FAS2020. When they are mounted by IP address, it works fine.

Regarding the alias IP addresses enabling separate paths to different datastores:

At what level do the alias IP addresses get assigned to the datastores? Aggregate or volume?

Where in the management utilities do you configure which IP address is assigned to what?

aborzenkov

At what level do the alias IP addresses get assigned to the datastores? Aggregate or volume?


You do it on the ESX server, when you configure the NFS datastore. The binding is not an aggregate or volume property at all.
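In other words, the datastore-to-address mapping lives only in the ESX mount definition; the controller simply answers on all of the vif's addresses. A rough sketch of checking both sides:

```shell
# On the controller (7-Mode): list the addresses the vif answers on.
ifconfig -a
# On each ESX host: each NFS datastore is listed with the host/IP it was mounted from.
esxcfg-nas -l
```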
