FAS2050 + VMware Following TR-3428 doesn't make any sense???

Hello All,

I am trying to configure my NetApp FAS2050 for VMware. I am trying to follow best practice, but it just isn't making sense to me, and I wondered if someone can help.

Here is my kit

2 X DL380 G7 Server

2 X Cisco Catalyst 3560 Switches

NetApp FAS2050 with dual controllers and extra PCI cards.

Now the configuration I want to set up is the one in section 9.3 of the NetApp technical report TR-3428.

My issue is this: my switches don't support cross-switch (multi-chassis) EtherChannel, so what is the best practice actually asking me to configure between the switches in the diagram at the bottom of section 9.3, Figure 30?

I hope someone can enlighten me.

Thanks

Re: FAS2050 + VMware Following TR-3428 doesn't make any sense???

Since your switches do not support stacking, you may look into Cisco switch clustering. Otherwise you will have to simplify your design. If you have enough interfaces on the 2050, you can configure a single-mode VIF of two multimode VIFs, one to each switch, to cover switch failures. The hosts will all be failover-redundant, and there will be no load balancing across switches.
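For what it's worth, a layered VIF like that can be sketched in the Data ONTAP 7-Mode CLI roughly as follows (the port names e0a–e0d, the VIF names, and the IP address are assumptions; substitute your controller's actual ports and addressing):

```
# Two multimode (static EtherChannel) VIFs, one cabled to each switch
# (assumed ports: e0a/e0b to switch 1, e0c/e0d to switch 2)
vif create multi mvif-sw1 -b ip e0a e0b
vif create multi mvif-sw2 -b ip e0c e0d

# Single-mode VIF layered on top: only one of the two multimode
# VIFs carries traffic at a time, giving switch-failure protection
vif create single svif0 mvif-sw1 mvif-sw2

# Storage IP goes on the top-level VIF (address is an example)
ifconfig svif0 192.168.1.10 netmask 255.255.255.0 up
```

Remember the switch-side ports for each multimode VIF need a matching static EtherChannel, and the config has to go in /etc/rc to survive a reboot.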

Re: FAS2050 + VMware Following TR-3428 doesn't make any sense???

Hello Aron,

Many Thanks for your help and reply.

I have tried to draw out the latter part of your suggestion:

"If you have enough interfaces on the 2050, you can configure a single-mode VIF of two multimode VIFs, one to each switch, to cover switch failures. The hosts will all be failover-redundant, and there will be no load balancing across switches."

Re: FAS2050 + VMware Following TR-3428 doesn't make any sense???

You have to be very careful with this "active-passive" style config, though. Because you end up configuring one switch as active and the other as passive (failover network teaming from the controllers), you need to make sure the ESX servers obey this too.

For example, if the top switch is the active path from the controller, the bottom switch is passive, so the NetApp controller will effectively ignore that network port (it has to; it's failover teaming). ESX, however, is configured by default to use all ports actively, so you'll get connectivity issues if it ever tries to initiate a connection via the bottom switch. Set your ESX server to active/passive as well.
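On later ESXi versions (5.x onward) the active/standby NIC order can be set from the command line; a sketch, assuming uplinks vmnic0 (to the active switch) and vmnic1 (to the passive switch) on vSwitch0 (on classic ESX 3.x/4.x you would set the same failover order in the vSphere Client's NIC Teaming tab instead):

```
# Make vmnic0 the active uplink and vmnic1 the standby uplink on vSwitch0
esxcli network vswitch standard policy failover set \
    --vswitch-name vSwitch0 \
    --active-uplinks vmnic0 \
    --standby-uplinks vmnic1
```

This mirrors the controller's teaming, so host and storage agree on which switch carries traffic during normal operation.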

However, this further complicates things. If you have ESX and the NetApp using the top switch as active and the bottom as passive, what happens when someone accidentally unplugs the ESX server from the top switch? ESX happily fails over to the bottom switch, but the NetApp sees no issue, so it does not fail over its networking, because all of its own connections are still up.

You may want to configure an ISL between the two switches to make sure cross-switch traffic is possible in the event of an isolated failure like that. This can create an I/O bottleneck, but that's better than disconnecting all your storage!
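On Catalyst 3560s, an ISL is just an 802.1Q trunk between the two switches; a sketch for each side (the port number and VLAN IDs are assumptions for your environment):

```
! Dedicated trunk port for the inter-switch link (port number assumed)
interface GigabitEthernet0/48
 description ISL to peer switch
 switchport trunk encapsulation dot1q
 switchport mode trunk
 ! Carry only the storage/VMkernel VLANs across the ISL (example IDs)
 switchport trunk allowed vlan 10,20
```

A two-port EtherChannel between the switches would relieve the bottleneck somewhat, at the cost of more ports.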

Two non-stackable switches are a very difficult configuration to make resilient. You may be better off simply configuring the two switches identically and leaving one on the shelf: connect everything to one switch with all the fancy teaming protocols, and make your failover mechanism a manual swap to the spare switch. Not ideal, but certainly less complicated, and for the 99% of the time when there is no failure you will have a lot more bandwidth and capacity.