NetApp + Server 2008 R2 ALUA/Multipathing FC

dkkelly

NetApp FAS2050 7.3.3

fcp show cfmode: single_image

Please see attached Microsoft Visio diagram. You can obtain Microsoft's free Visio Viewer here: http://www.microsoft.com/en-us/download/details.aspx?id=21701

The red lines indicate paths to FCP target interfaces.

The black lines indicate paths to FCP initiator interfaces.

Scenario: I have two physical Windows Server 2008 R2 servers that will be connected via fiber. These two servers will be in a cluster. Server A will be the active primary node. Server B will be the passive secondary failover node.

The LUNs needed by these two servers are owned by controller 2.

Server A will have a single-port fiber card that will be cabled to FSW01.

Server B will have a single-port fiber card that will be cabled to FSW02.

(Yes, I know it would be better to have two FC connections on each server, cabled to each switch, for maximum redundancy and the ability to use round robin (RR) for load balancing.) Anyway...

As of right now I do not have ALUA enabled on the igroup for Servers A & B. This is because the LUNs in question are also shared with an iSCSI igroup. The iSCSI part of that will be phased out shortly, and then I will enable ALUA...

Servers A & B are cabled to different fiber switches...

Each switch has a path to each controller. With that, there should never be a scenario where a server is using partner ops to get to its LUN unless a path, switch, or the controller itself fails. This is ideal...
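
From what I've read, partner traffic shows up in the 7-Mode LUN statistics, so I'm assuming something like this would confirm whether a host is crossing the interconnect (the LUN path here is made up):

     # 7-Mode console; -o adds extended stats, including partner ops/KB
     lun stats -o /vol/vol_cluster/lun0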

I'm reaching out to get some input for this scenario.

1. Once I can enable ALUA, will that make it so the servers know which path is preferred/optimized vs. un-preferred/un-optimized, and keep partner ops out of the picture except in the event of a path, switch, or controller failure? This is what ALUA does, right? (I haven't quite grasped this part yet...)

2. Should I have the NetApp FC Host Utilities installed on servers A & B? What do they actually do?

3. This whole multipathing thing... For any scenario, does there need to be at least two paths from start to finish (server - FC switch - NetApp)? Can multipathing (RR) also be leveraged in a scenario where there is only a single path from the server to the FC switch and two paths from the FC switch to the NetApp? (Haven't quite grasped this part yet either...) Multipathing isn't ideal for my scenario because one of the two paths I have available is un-preferred/un-optimized...

4. Is there a need for any of these Microsoft MPIO DSM plugins? http://technet.microsoft.com/en-us/library/cc725907.aspx

     Here's what Server 2008 does by default:

     The Microsoft DSM preserves load balance settings even after the computer is restarted. When no policy has been set by a management application, the default policy that is used by the DSM is either Round Robin, when the storage controller follows the true Active/Active model, or simple failover in the case of storage controllers that support the SPC-3 ALUA model. With simple Failover, any one of the available paths can be used as the primary path, and remaining paths are used as standby paths.

     New MPIO features in Windows Server 2008 include a Device Specific Module (DSM) designed to work with storage arrays that support the asymmetric logical unit access (ALUA) controller model (as defined in SPC-3), as well as storage arrays that follow the Active/Active controller model.

I guess with that, my FAS2050 supports ALUA and Microsoft will know what to do (yes, I know, shocking) and set the preferred path and use the other path as a standby path.
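
Once the LUNs are actually mapped, I assume I can verify what the Microsoft DSM decided with mpclaim (disk numbers below are placeholders):

     rem List MPIO disks and the load-balance policy in effect for each
     mpclaim -s -d
     rem Path detail (including ALUA/TPG state) for one disk, e.g. disk 0
     mpclaim -s -d 0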

Once I have these few questions answered I believe I will have a solid understanding of this. All input appreciated. Please let me know if any further information is needed. I don't think I missed any major factors of the picture here...

Thank you,

Caleb Meadows

1 ACCEPTED SOLUTION

bsti
1. Once I can enable ALUA, will that make it so the servers know which path is preferred/optimized vs. un-preferred/un-optimized, and keep partner ops out of the picture except in the event of a path, switch, or controller failure? This is what ALUA does, right? (I haven't quite grasped this part yet...)
Yes, but you will need to reboot your Windows servers before they recognize the change to ALUA.  Basically, enable ALUA on your igroups, then reboot your hosts.  It appears you understand ALUA correctly.  It basically communicates to your MPIO software which paths are optimized (direct to the controller owning the LUN) and which are non-optimized (through the partner controller to the controller owning the LUN via the interconnect).
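Something like this on the controller should do it (the igroup name is hypothetical; check yours with igroup show):
     # 7-Mode: enable ALUA per igroup, then verify the setting
     igroup set cluster_fc_ig alua yes
     igroup show -v cluster_fc_ig
After the reboot, the Windows DSM should report the two paths with different TPG (target port group) states.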
2. Should I have the NetApp FC Host Utilities installed on servers A & B? What do they actually do?
It's not 100% necessary anymore.  My understanding is it sets some best-practice HBA settings (queue depths, timeouts) and such on the Windows hosts.  This is now bundled in the NetApp DSM MPIO software package.
Per the MPIO installation instructions:
     The Windows Host Utilities are no longer required. The Windows Host Utilities components that enable you to configure Hyper-V systems (mbralign.exe and LinuxGuestConfig.iso) are now included with the DSM. While no longer required, installing the Windows Host Utilities on the same host as the DSM is still supported.
3. This whole multipathing thing... For any scenario, does there need to be at least two paths from start to finish (server - FC switch - NetApp)? Can multipathing (RR) also be leveraged in a scenario where there is only a single path from the server to the FC switch and two paths from the FC switch to the NetApp? (Haven't quite grasped this part yet either...) Multipathing isn't ideal for my scenario because one of the two paths I have available is un-preferred/un-optimized...
Probably not very well.  You have 2 paths to any LUN with your current config:
HBA 1 -> Switch 1 -> Controller A -> LUN A
HBA 1 -> Switch 1 -> Controller B -> LUN A
The default Load Balance Policy is Least Queue Depth, which will only leverage the one optimized path you have unless you lose that path.  Round Robin  will use both, but that's not optimal because you will be hitting that slower, unoptimized path.  This isn't best practice, and you may pay a performance penalty, but it will work.  Prepare to see a lot of FCP_PATH_MISCONFIGURED ASUPs...
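If you ever need to set the policy by hand, mpclaim can do that too (0 is a placeholder MPIO disk number from mpclaim -s -d; in mpclaim's numbering 4 = Least Queue Depth, 2 = Round Robin):
     rem Set Least Queue Depth on MPIO disk 0
     mpclaim -l -d 0 4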
Still, multipathing IS useful to you because of redundancy.  You can still take down one controller (via a takeover) and still get to your data on path 2.  Or if something happens to the target port on Controller A or the FC cable, you will still have access as well.  Obviously, with a second HBA connected to switch 2, you gain a LOT more redundancy.
I'm not sure you will gain any performance benefit given your current configuration, so I'd stick with LQD for your load balance policy.
4. Is there a need for any of these Microsoft MPIO DSM plugins? http://technet.microsoft.com/en-us/library/cc725907.aspx
MS MPIO is actually required.  NetApp's MPIO does NOT replace it.  It simply sits atop it and adds some NetApp-specific mojo.  If you watch the installer for MPIO very carefully, you can see the first thing it does is add the MS MPIO feature.  One thing you gain with NetApp's MPIO is the DSM GUI.  Also, before ALUA was widely supported on NetApp controllers, I'm pretty sure the NetApp DSM provided the intelligence to determine optimized/unoptimized paths.  Now ALUA does that for you (which is why ALUA is REQUIRED on DSM 3.5 and higher).
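You can add the inbox feature yourself on 2008 R2 (the NetApp installer does the equivalent), and mpclaim will show which vendor/product IDs MPIO has been told to claim:
     rem Enable the inbox Multipath I/O feature (2008 R2 feature name)
     dism /online /enable-feature /featurename:MultipathIo
     rem List the device hardware IDs MPIO is configured to claim
     mpclaim -e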
Alternatively, you can remove the NetApp DSM and just use MS MPIO.  I wouldn't do so, though, unless you have a specific reason to.
Hope that helps.


2 REPLIES

VKALVEMULA

In your scenario, my suggestion (and the best practice) is to install the SnapDrive software on the server.

Once you install SnapDrive, create the LUNs from within SnapDrive so that it has more control over the LUNs, rather than provisioning them from the filer side.

Also, you can manage the paths easily with this software; it will show you which paths are active, passive, preferred, optimal, etc.
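
For example, once SnapDrive is installed, its command-line tool can list the connected LUNs along with their path information (output details vary by SnapDrive version):

     rem SnapDrive for Windows CLI: list connected LUNs and path info
     sdcli disk list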

As you mentioned above, Servers A and B are in a cluster, and it is always recommended to have multiple paths to the server so as to overcome path issues in the future.

* If you need HA, then go for multiple paths.

My suggestion would be to go with the Least Queue Depth policy rather than RR.
