VMware Solutions Discussions

FAS2040 with CIFS, NFS and iSCSI - NIC failover and VLANs help needed!

monstermunch
5,428 Views

Hi there,

Long time lurker here who needs a bit of advice.

I have a dual controller FAS2040 in an active/active configuration, so 4 NICs per controller.  I plan to serve NFS for VMware from the first controller and iSCSI for Exchange and SQL from the second.  I will also be serving CIFS shares out from one of the controllers.

Originally I wanted to segregate traffic using 4 subnets as follows:

1 for CIFS
1 for NFS (VMware)
2 for iSCSI (SQL and Exchange)

I want two separate subnets for iSCSI as I use MPIO for SQL and Exchange and I understand that it is best practice to keep the iSCSI targets in separate subnets when using multipathing.
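
On the controller serving iSCSI, that means one NIC per subnet rather than a vif, along these lines (interface names match my filer; the addresses are invented purely for illustration):

```
# One iSCSI interface per subnet for MPIO (addresses invented for illustration)
ifconfig e0b 192.168.20.10 netmask 255.255.255.0   # iSCSI subnet Y
ifconfig e0d 192.168.30.10 netmask 255.255.255.0   # iSCSI subnet Z
```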

So, on to my question...

My issue is with the NIC configuration and what will happen in the event of a failover.  All networks are critical, although CIFS slightly less so.  Currently I have configured things as follows:

2 x Cisco 3750-X in a stack to allow LACP for the NFS vif across two physical switches.

Filer1

e0a and e0c in a dynamic multimode vif for CIFS in subnet W (partnered with CIFS vif on Filer2)
e0b and e0d in a dynamic multimode vif for NFS in subnet X

Filer2

e0a and e0c in a dynamic multimode vif for CIFS in subnet W (partnered with CIFS vif on Filer1)
e0b for iSCSI in subnet Y
e0d for iSCSI in subnet Z

This works fine in normal operation, as expected.  The problem is that NFS and iSCSI will currently not work if either of the controllers fails over.  CIFS will be fine because its vifs have partners in the same subnet, but since the NFS and iSCSI interfaces live in different subnets, I have not been able to set up partners for them.

I've read many very useful posts on this forum about using VLANs, which I think will solve my problem, but I am not a networking guy and am having trouble figuring out exactly how to set things up to get the result I want.  I have set up the LACP OK and I am familiar with Cisco switches to some extent, but I don't know how to configure the switch ports to carry multiple VLANs, or how to configure the filer NICs to complement this.

I have tried, but I just can't get it to work.  I think I know what I need to do; it's actually getting it working that's the problem.

Does anyone have any specific switch configurations that should allow me to do what I need to do?  I'm happy to give more information if required...

Any advice appreciated!

6 REPLIES

monstermunch

OK, I think I have made some progress...

Pretty much everything seems to work, but I am getting error messages when I fail over.  If I fail filer2 over to filer1, I get the following:

ifconfig: Unable to determine IP address of partner interface vif_iSCSI-1.
ifconfig: 'vif_iSCSI-1-10' cannot be configured: Address does not match any partner interface.

If I refresh the view in System Manager, I get the following:

ifconfig: Unable to determine primary for interface e0d.
ifconfig: Unable to determine primary for interface e0b.
ifconfig: Unable to determine primary for interface e0P.
ifconfig: Unable to determine primary for interface vif_NFS-10.
ifconfig: Unable to determine primary for interface vif_NFS-5.

If I fail filer1 over to filer2, I get the following:

ifconfig: Unable to determine IP address of partner interface vif_NFS.

If I refresh the view in System Manager, I get the following:

ifconfig: Unable to determine primary for interface vif_iSCSI-2-6.
ifconfig: Unable to determine primary for interface e0d.
ifconfig: Unable to determine primary for interface vif_iSCSI-1-5.
ifconfig: Unable to determine primary for interface e0c.
ifconfig: Unable to determine primary for interface e0b.
ifconfig: Unable to determine primary for interface e0P.
ifconfig: Unable to determine primary for interface vif_iSCSI-2-10.
ifconfig: Unable to determine primary for interface vif_iSCSI-1-10.

However, when in a failed over state, all the IP addresses are available and they do all work.  Should I not worry about the error messages too much?

I haven't configured the CIFS vifs yet as I am concentrating on getting the iSCSI/NFS failover working first.  Here is the config so far...

Filer1

hostname (correct hostname)
vif create lacp vif_NFS -b ip e0b e0d
vlan create vif_NFS 5 6 10
ifconfig e0a (Subnet A IP) netmask 255.255.255.0 mediatype auto flowcontrol full partner e0a wins
ifconfig e0P down
ifconfig e0c  partner e0c mtusize 1500 trusted wins mediatype auto flowcontrol full down
ifconfig vif_NFS-10 (Subnet B IP) netmask 255.255.255.0 partner vif_iSCSI-2-10 mtusize 1500 trusted -wins up
ifconfig vif_NFS-10 alias (Subnet B IP) netmask 255.255.255.0
ifconfig vif_NFS-10 alias (Subnet B IP) netmask 255.255.255.0
ifconfig vif_NFS-10 alias (Subnet B IP) netmask 255.255.255.0
ifconfig vif_NFS-5 (Subnet C IP) netmask 255.255.255.0 partner vif_iSCSI-1-5 mtusize 1500 trusted -wins up
ifconfig vif_NFS-6 (Subnet D IP) netmask 255.255.255.0 partner vif_iSCSI-2-6 mtusize 1500 trusted -wins up
route add default (correct default route here) 1
routed on
options dns.domainname domain.co.uk
options dns.enable on
options nis.enable off
savecore

Filer2

hostname (correct hostname)
vif create single vif_iSCSI-1 e0b
vif create single vif_iSCSI-2 e0d
vlan create vif_iSCSI-2 6 10
vlan create vif_iSCSI-1 5 10
ifconfig e0a (Subnet A IP) netmask 255.255.255.0 mtusize 1500 mediatype auto flowcontrol full partner e0a wins
ifconfig e0c `hostname`-e0c netmask 255.255.255.0 partner e0c mtusize 1500 trusted -wins mediatype auto flowcontrol full down
ifconfig e0P down
ifconfig vif_iSCSI-2-10 (Subnet B IP) netmask 255.255.255.0 partner vif_NFS-10 mtusize 1500 trusted -wins up
ifconfig vif_iSCSI-2-6 (Subnet D IP) netmask 255.255.255.0 partner vif_NFS-6 mtusize 1500 trusted -wins up
ifconfig vif_iSCSI-1-5 (Subnet C IP) netmask 255.255.255.0 partner vif_NFS-5 mtusize 1500 trusted -wins up
ifconfig vif_iSCSI-1-10 (Subnet B IP) netmask 255.255.255.0 partner vif_NFS-10 mtusize 1500 trusted -wins up
route add default (correct default route here) 1
routed on
options dns.domainname domain.co.uk
options dns.enable on
options nis.enable off
savecore

3750 LACP config

interface Port-channel1
switchport access vlan 10
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 5,6,10
switchport mode trunk
flowcontrol receive on

interface GigabitEthernet1/0/1
switchport access vlan 10
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 5,6,10
switchport mode trunk
flowcontrol receive on
channel-protocol lacp
channel-group 1 mode active

interface GigabitEthernet2/0/1
switchport access vlan 10
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 5,6,10
switchport mode trunk
flowcontrol receive on
channel-protocol lacp
channel-group 1 mode active

3750 iSCSI ports config

interface GigabitEthernet1/0/2
description cir1fas02 vif_iSCSI-1-5
switchport access vlan 5
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 5,10
switchport mode trunk

interface GigabitEthernet2/0/2
description cir1fas02 vif_iSCSI-2-6
switchport access vlan 6
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 6,10
switchport mode trunk

Does anyone see anything obviously wrong with the config?  Sorry for posting so much stuff...

Many thanks

aborzenkov

Filer1

ifconfig vif_NFS-10 (Subnet B IP) netmask 255.255.255.0 partner vif_iSCSI-2-10 mtusize 1500 trusted -wins up

Filer2

ifconfig vif_iSCSI-2-10 (Subnet B IP) netmask 255.255.255.0 partner vif_NFS-10 mtusize 1500 trusted -wins up

ifconfig vif_iSCSI-1-10 (Subnet B IP) netmask 255.255.255.0 partner vif_NFS-10 mtusize 1500 trusted -wins up

I am not sure whether such a many-to-one configuration is valid; in general it is advisable to keep the configuration as symmetrical as possible unless there are strong reasons to do otherwise.  At the very least, the first error message is likely caused by this.
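
One option might be to partner only one of the iSCSI vifs with vif_NFS-10 and leave the other unpartnered, roughly like this (a sketch only, untested, with the IPs omitted as in your post):

```
# Filer2 - only one interface in subnet B claims vif_NFS-10 as its partner (sketch, untested)
ifconfig vif_iSCSI-2-10 (Subnet B IP) netmask 255.255.255.0 partner vif_NFS-10 mtusize 1500 trusted -wins up
ifconfig vif_iSCSI-1-10 (Subnet B IP) netmask 255.255.255.0 mtusize 1500 trusted -wins up
```

The trade-off is that on takeover only one of the subnet B addresses survives, but the partner mapping is then one-to-one.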

monstermunch

Ah right, that would indeed explain the message.  The reason I did this was because I need 2 x individual iSCSI NICs on one controller but 1 x LACP interface for NFS (made of 2 NICs) on the other.  I don't necessarily need both iSCSI interfaces up in the event of a failover, although it would be nice.

I will go and change things so that only one of the iSCSI interfaces fails over and see how it goes...

monstermunch

Quick update for anyone who is interested: I made some changes based on aborzenkov's advice and now have a configuration I am happy with.  I've tested failover both ways and all services remain available.  I have had to make some compromises, but I guess that is to be expected in this situation.

If anyone is interested I can post the config.  I'm not promising that it is exactly correct, but it works and gives me what I need, and that's good enough for me.

theinze2378

Hey monster, I'd be interested in seeing your working config if you wouldn't mind sharing it.  I'm looking to do something similar.

Thanks!

monstermunch

Hi theinze,

You're more than welcome to have a look.  I've attached the files to this post, along with a little diagram in which the arrows show what will (or should) happen if a controller fails over.  I've also added my RC files from the filers and the switch config from a 3750 stack showing the relevant port configs.  Load balancing is set to src-dst-ip on the 3750.
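
For completeness, the load-balancing method is a global setting on the 3750, not per port-channel, i.e. something like:

```
! Global setting on the 3750 stack, applies to all EtherChannels
port-channel load-balance src-dst-ip
```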

I'm using NFS for VMware on fas01 and MPIO for iSCSI on fas02 and CIFS on both.  You'll notice that there is no config for the CIFS etherchannels in the 3750 config, that's because they are connected to different switches.

I'm not promising anything, but it works for me, and in the (hopefully unlikely) event of a failover I know I can still access all my interfaces and protocols, albeit with fewer NICs than on the primary controller.

If anyone has any comments about the configs, please reply as I'm more than happy to be corrected and can't claim to be an expert!

Hope it helps, let me know how you get on
