
Using VIFs with iSCSI MPIO

BriggsCorp

I think this is a rather simple question, but I am struggling to find the answer in the NetApp documentation. I am changing the connectivity for our NetApp 3040 active/active SAN. Currently each filer has four GbE connections, and each connection is on a different subnet. I would like to reconfigure the NICs to be on a single subnet so that I don't have to route all the iSCSI traffic. My hosts have two GbE connections and are using MPIO with the NetApp DSM, so my layout now looks like this:

Host NIC 1 --> NetApp filer 1 IP 1
Host NIC 1 --> NetApp filer 1 IP 2
Host NIC 1 --> NetApp filer 1 IP 3
Host NIC 1 --> NetApp filer 1 IP 4
Host NIC 1 --> NetApp filer 2 IP 1
Host NIC 1 --> NetApp filer 2 IP 2
Host NIC 1 --> NetApp filer 2 IP 3
Host NIC 1 --> NetApp filer 2 IP 4
Host NIC 2 --> NetApp filer 1 IP 1
Host NIC 2 --> NetApp filer 1 IP 2
Host NIC 2 --> NetApp filer 1 IP 3
Host NIC 2 --> NetApp filer 1 IP 4
Host NIC 2 --> NetApp filer 2 IP 1
Host NIC 2 --> NetApp filer 2 IP 2
Host NIC 2 --> NetApp filer 2 IP 3
Host NIC 2 --> NetApp filer 2 IP 4

I was assuming that I would set up a VIF on each filer using the 4 connections and LACP.  That would give me an MPIO setup like this:

Host NIC 1 --> NetApp filer 1 vif
Host NIC 1 --> NetApp filer 2 vif
Host NIC 2 --> NetApp filer 1 vif
Host NIC 2 --> NetApp filer 2 vif

After reading through the VIF survival guide (http://communities.netapp.com/blogs/ethernetstorageguy/2009/04/04/multimode-vif-survival-guide), I am confused; it seems like the configuration should instead be:

Host NIC 1 --> NetApp filer 1 vif
Host NIC 1 --> NetApp filer 1 alias 1
Host NIC 1 --> NetApp filer 1 alias 2
Host NIC 1 --> NetApp filer 1 alias 3
Host NIC 1 --> NetApp filer 2 vif
Host NIC 1 --> NetApp filer 2 alias 1
Host NIC 1 --> NetApp filer 2 alias 2
Host NIC 1 --> NetApp filer 2 alias 3
Host NIC 2 --> NetApp filer 1 vif
Host NIC 2 --> NetApp filer 1 alias 1
Host NIC 2 --> NetApp filer 1 alias 2
Host NIC 2 --> NetApp filer 1 alias 3
Host NIC 2 --> NetApp filer 2 vif
Host NIC 2 --> NetApp filer 2 alias 1
Host NIC 2 --> NetApp filer 2 alias 2
Host NIC 2 --> NetApp filer 2 alias 3

After thinking through this, I'm not sure that I need a VIF at all since I am doing iSCSI MPIO; the VIF configuration seems more applicable to NFS or CIFS services. If I don't use a VIF, I can see that the incoming requests would be load-balanced across multiple NICs appropriately. However, my network engineer said they tried this configuration initially and the return traffic from the NetApp all came through one NIC, so they were not getting the performance we needed. Can anybody shed any light on this for me? Thank you!

1 ACCEPTED SOLUTION

treyl

You are headed down the right path. Some of the overlapping concepts contribute to confusion about how to deploy the best solution for your scenario.

The multimode VIF survival guide was written to explain the following:

a.) How load-balancing really works with link aggregation, and thus how to exploit it to your advantage.

b.) The differences between static and LACP aggregation, and why people should use LACP.

c.) The configuration requirements on the switch side, specifically with Cisco (a rough switch-side sketch follows below).
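As a point of reference, a minimal sketch of that switch-side configuration on a Cisco IOS switch might look like the following; the port range, channel number, and VLAN here are hypothetical, so check the survival guide and your switch documentation for the exact settings:

! Bundle the four filer-facing ports into one LACP port-channel
interface range GigabitEthernet0/1 - 4
 channel-group 10 mode active
!
interface Port-channel10
 description LACP channel to filer1 VIF
 switchport access vlan 90
!
! Hash on source and destination IP so flows are spread across the member links
port-channel load-balance src-dst-ip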

You are correct that link aggregation's load-balancing policies are best suited for protocols that do not have some means of providing load-balancing somewhere else in the communications stack.

NFSv4, and iSCSI when using an MPIO DSM, can perform load-balancing natively, more or less.

Admittedly, most people don't use MPIO for performance in iSCSI environments, but for availability. So it looks like you have four interfaces per controller, and the goal is to collapse the four interfaces onto one subnet while ensuring that you are utilizing all of the ports for availability and, I suspect, performance from your hosts.

The diagram depicts two hosts, but I'm not sure if there are more, so let's walk through some logic about why the advice may change based on your situation.

The first question I would ask is how many switches you have in the Ethernet storage fabric that connects the hosts to the filers.

If you have one, then I would tell you to bond all four interfaces into a single 4-port LACP link.

If you have two that don't support some type of multi-switch link aggregation technology, then I would tell you to create two 2-port LACP links and terminate one in each switch.

If you have two that do support a multi-switch link aggregation technology, then I would tell you to create a 4-port LACP link spanning both switches.

In the second scenario you would have two active LACP port-channels running, and we would want MPIO running so that if one LACP VIF failed, your iSCSI clients could remain connected because they are dual-pathed to the second LACP VIF. That covers availability. The next piece is performance. Remember that we are going to load-balance within the VIF based on a hash of the source and destination IP addresses. If I only had one host sourcing a session to a single IP address on the LACP iSCSI target, then I would always be load-balanced onto just one of the links in the 2-port LACP channel. That is sub-optimal, because I would want to use both. In this case I would create an alias on that LACP VIF, which would introduce another iSCSI target IP on each LACP VIF. Then I would let my DSM software, running on top of the iSCSI software initiator, load-balance across the four iSCSI target IP addresses.
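To make that concrete, here is a rough 7-Mode sketch of the second scenario on one controller; the interface names, VIF names, and IP addresses below are just placeholders for illustration, not your actual values:

Two 2-port LACP VIFs, one terminating in each switch:
vif create lacp vif_sw1 -b ip e0a e0b
vif create lacp vif_sw2 -b ip e0c e0d

One base iSCSI target IP per VIF:
ifconfig vif_sw1 192.168.90.110 netmask 255.255.255.0
ifconfig vif_sw2 192.168.90.111 netmask 255.255.255.0

An alias on each VIF, giving the host DSM four target IPs to spread sessions across, which in turn exercises both member links in each channel:
ifconfig vif_sw1 alias 192.168.90.112 netmask 255.255.255.0
ifconfig vif_sw2 alias 192.168.90.113 netmask 255.255.255.0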

Trying not to write a book here, but in short, there are two primary considerations you want to cover: availability and performance. You would only use the alias concept if you wanted to exploit or manipulate the load-balancing algorithm for performance reasons.

Shoot me an email at treyl@netapp.com and I can help you with your particular scenario, and then we can follow up this question with the elements of the answer that solved your specific problem.

Sorry for the long-winded answer.

Trey


2 REPLIES

BriggsCorp

Here's my first shot at how this will be configured:

Filer1
vif create lacp vif1 -b ip e0a e0b e0c e0d
ifconfig vif1 192.168.90.110 netmask 255.255.255.0 flowcontrol full mtusize 9000 partner 192.168.90.120
ifconfig vif1 alias 192.168.90.111 netmask 255.255.255.0
ifconfig vif1 alias 192.168.90.112 netmask 255.255.255.0
ifconfig vif1 alias 192.168.90.113 netmask 255.255.255.0


Filer2
vif create lacp vif1 -b ip e0a e0b e0c e0d
ifconfig vif1 192.168.90.120 netmask 255.255.255.0 flowcontrol full mtusize 9000 partner 192.168.90.110
ifconfig vif1 alias 192.168.90.121 netmask 255.255.255.0
ifconfig vif1 alias 192.168.90.122 netmask 255.255.255.0
ifconfig vif1 alias 192.168.90.123 netmask 255.255.255.0

My concern is about the partner parameter... I would normally set partner to be the IP of the other filer, but in your post you mention using the "partner-vif-name", so it seems like I should use vif1 for the parameter.
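If that is the right reading, then a minimal sketch of the change (assuming both controllers keep the VIF name vif1, as above) would be to reference the partner's VIF name rather than its IP:

Filer1: ifconfig vif1 192.168.90.110 netmask 255.255.255.0 flowcontrol full mtusize 9000 partner vif1
Filer2: ifconfig vif1 192.168.90.120 netmask 255.255.255.0 flowcontrol full mtusize 9000 partner vif1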
