2009-07-14 12:47 AM
We predominantly use our NetApp filer for VMware via NFS, so my hands-on SAN experience in our setup has been rather limited.
I've recently discovered that the consultant who helped me connect a physical Windows host to our NetApp box for SnapDrive and SMSQL functions made a few mistakes.
First, he never had me install the Windows Host Utilities Kit. My understanding is that this needs to be installed in active/active setups to configure the correct timeout settings.
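From what I can tell, the main host-side change the kit automates is raising the disk class driver I/O timeout in the registry. Roughly something like this -- note the 190-second value is purely illustrative on my part; the kit sets whatever value your Data ONTAP version and configuration actually require, and may touch other settings as well:

```shell
rem Sketch of the disk timeout change the Host Utilities Kit automates.
rem The 190-second value is an assumption for illustration only; the kit
rem applies the value appropriate to your configuration.
reg add HKLM\SYSTEM\CurrentControlSet\Services\Disk /v TimeOutValue /t REG_DWORD /d 190 /f
```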
Second, he had me create a NIC team with the Intel PROSet tools for the dedicated iSCSI NICs. Microsoft's initiator user guide states that link aggregation is not a supported setup with the iSCSI software initiator. NetApp confirms this in TR-3441.
Our setup is very similar to the multi-network active/active configuration listed in the Fibre Channel and iSCSI Configuration Guide:
Since this is purely iSCSI via the software initiator, my understanding is that I can use either MCS or MPIO. I'm leaning towards MPIO due to its maturity, popularity, and full support with SnapDrive, but I'm curious to hear feedback from others.
What do others consider better for iSCSI with a software initiator -- MPIO or MCS?
Which load balancing policy is best -- round robin, least queue depth, etc.?
TR-3441 mentions least queue depth being the best with the Data Ontap DSM, but I would be using the free Microsoft DSM.
Interestingly, Microsoft's iSCSI software initiator guide recommends using MCS if the target supports it and the software initiator is being used. Anyone care to disagree?
2009-07-14 01:35 AM
There's probably no firm answer to your question -- just opinions varying from person to person.
However, reading this document is a good starting point:
The conclusion on page 18 is fantastic:
"It is not possible to make a single recommendation as to which multipathing solution to use."
(Having said that, there is a handy table as well to help you make your choice.)
2009-07-14 06:35 AM
Thanks for the feedback. I have read TR-3441, which helped me determine that I can use MCS or MPIO in my situation. What none of the documentation is clear on is whether one method is considered more efficient or more preferred than the other. The only statement I've come across in documentation is from Microsoft's iSCSI software initiator user guide, which simply states:
"If your target does support MCS and you are using the Microsoft software initiator driver then MCS is the best option. There may be some exceptions where you desire a consistent management interface among multipathing solutions and already have other Microsoft MPIO solutions installed that may make Microsoft MPIO an alternate choice in this configuration."
But it provides no further explanation as to why MCS is recommended. Is it more efficient? Is it more stable? No explanation.
I'm curious...is there more CPU overhead with MPIO as sessions are added for each path (versus additional connections within a single session for MCS)?
2009-07-14 02:21 PM
I've come across two resources that I believe suitably answer my question for my particular setup.
From the Microsoft paper:
For Windows 2008:
1. MCS can provide higher throughput than MPIO, especially at four paths or fewer. At around six to eight paths, MPIO begins to edge out MCS.
2. Contrary to my thoughts in the previous post, MCS appears to tax the CPU harder. In one Microsoft paper, it consumed 25% more CPU than MPIO.
For Windows 2003:
MPIO seems to be even closer to MCS in throughput while remaining more efficient on the CPU.
From Mike Richardson's blog:
"MPIO is most common across all OS vendors and what I would recommend...There are few applicable differences between MPIO and MCS in terms redundancy or performance. The additional complexity of MCS, some MCS limitations with iSCSI HBAs, and the aforementioned OS commonality of MPIO are the basis of my MPIO recommendation."
Add in the fact that MPIO is more mature, more popular, and fully supported by SnapDrive, and I believe I have my answer.
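For anyone else going this route, a quick sanity check after cutting over to MPIO is to confirm that each path shows up as its own iSCSI session (as opposed to MCS, which would present one session with multiple connections). A rough sketch using the built-in command-line tool that ships with the Microsoft initiator:

```shell
rem List active iSCSI sessions. With MPIO, each path appears as a
rem separate session; with MCS you would see one session carrying
rem multiple connections instead.
iscsicli SessionList
```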