Network and Storage Protocols

iSCSI Network Design - Windows Physical Server

terryzolinski
9,499 Views

FAS8040, cDOT 8.2.2, 10GbE, Windows Server 2012 R2.

 

We are currently working on an Exchange server deployment with a "consultant". To put it bluntly, these guys can't answer a simple question about a key piece of the puzzle, the fabric from the hosts to the storage.

 

We're building an iSCSI design using a single subnet and dedicated NICs on the servers. The NetApp side is configured properly, with LIFs on LACP interface groups, giving two target IP addresses.

 

I initially tried to set this up using a single subnet and MPIO, with dual NICs not teamed (each NIC was standalone on the same subnet).

Ex. NetApp Targets: LIF1: 192.168.110.180, LIF2: 192.168.110.181

Windows NICs: NIC1: 192.168.110.235, NIC2: 192.168.110.237
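
For reference, the sessions are bound one per initiator NIC, roughly like this (a sketch using the in-box iSCSI cmdlets on 2012 R2; the target IQN below is just a placeholder):

# Register each target portal against a specific initiator NIC, then connect a
# persistent, MPIO-enabled session for each pairing (the cross connections are
# added the same way to get four sessions).
New-IscsiTargetPortal -TargetPortalAddress 192.168.110.180 -InitiatorPortalAddress 192.168.110.235
New-IscsiTargetPortal -TargetPortalAddress 192.168.110.181 -InitiatorPortalAddress 192.168.110.237
Connect-IscsiTarget -NodeAddress iqn.1992-08.com.netapp:sn.example -TargetPortalAddress 192.168.110.180 -InitiatorPortalAddress 192.168.110.235 -IsMultipathEnabled $true -IsPersistent $true
Connect-IscsiTarget -NodeAddress iqn.1992-08.com.netapp:sn.example -TargetPortalAddress 192.168.110.181 -InitiatorPortalAddress 192.168.110.237 -IsMultipathEnabled $true -IsPersistent $true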

 

In the world I come from, I'm used to using two separate subnets, e.g. 192.168.110.180 and 192.168.111.180.

 

With the config initially set up this way, if I disable a port on the switch (or unplug the cable), there is a 30-second timeout before Windows flips over to use the other NIC. So even though MPIO is set up properly, we still have to wait for Windows to flip to the other NIC. To me this config doesn't have two active paths, but rather one active and one standby. The 30-second convergence time on the Windows host is what I'm trying to solve; the timeout only occurs when simulating a NIC failure on the Windows server.

 

I haven't found a design document which discusses this exact config, using a single subnet without teaming, which is why I'm coming to the forum. I'm still waiting for the consultant to come back with what their clients have done, but that was weeks ago.

 

Now, I can reconfigure this as LACP, or I can configure it to use two subnets. But I'd also like to find out what this consultant is talking about, to see if they are even on the right track and there is just a small piece missing.

 

So I assume LACP will solve this problem, but I just want to see if anyone else has built a design like this, in case there's something I'm missing. It's rather disappointing that our consultants recommend a design they can't explain, nor can they provide any information on how to build out the fabric to our Windows hosts.

 

Thanks in advance.

6 REPLIES

ekashpureff
9,473 Views

 

Terry -

 

Using the same subnet shouldn't be an issue here. We do it in our NetApp class labs regularly.

 

LIF setup should be split between partner nodes for HA failover.

Windows should be using ALUA for MPIO.

BCP is to install the NetApp Host Utilities Kit (HUK) for Windows.

You may want to use the NetApp DSM (Device Specific Module) instead of the default Windows DSM.

The 'vserver iscsi session/connection' commands are useful for verifying the session connections from the cluster side (example below).

Host use of Link Aggregation is not BCP - If LACP hasn't configured the ports before you try to access the LUN ... ?
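
For example, from the clustershell (the SVM name here is a placeholder):

vserver iscsi session show -vserver svm_iscsi
vserver iscsi connection show -vserver svm_iscsi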

 

Some references for you:

 

  • Clustered Data ONTAP SAN Administration Guide
  • Clustered Data ONTAP SAN Configuration Guide
  • Clustered Data ONTAP SAN Express Setup Guide
  • Clustered Data ONTAP iSCSI Configuration for Windows Express Guide
  • TR-4080: Best Practices for Scalable SAN in Clustered Data ONTAP

I hope this response has been helpful to you.

At your service,

Eugene E. Kashpureff, Sr.
Independent NetApp Consultant http://www.linkedin.com/in/eugenekashpureff
Senior NetApp Instructor, IT Learning Solutions http://sg.itls.asia/netapp
(P.S. I appreciate 'kudos' on any helpful posts.)

 

terryzolinski
9,460 Views

Hi Eugene and thanks for your reply.

 

I am using the NetApp DSM and HUK, and ALUA is configured. There are 4 sessions to the SVM, and HA is set up properly on the cluster; I have no concerns there, as I have tested failover on both nodes and switch reboots with no issues. There is also an NFS SVM on this cluster, and failover is 100% from the ESX hosts since we are using LACP. I'm very confident the configuration on the Windows hosts is correct as well, if I can just figure out this one kink, which is not related to the storage at all.
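
For what it's worth, this is how I've been checking it from the host side (a sketch; mpclaim and the iSCSI/MPIO cmdlets are all in-box on 2012 R2):

# The four iSCSI sessions (2 initiator NICs x 2 target LIFs) and their connections
Get-IscsiSession
Get-IscsiConnection
# MPIO view of the LUNs and path states (Active/Optimized vs. Unoptimized with ALUA)
mpclaim -s -d
# Current MPIO timers and path verification settings
Get-MPIOSetting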

 

The problem is that ping drops for 30 seconds to both IP targets of the SVM (one LACP ifgrp on each node in the 2-node cluster, spread across two Nexus 3548s using vPC), so I don't think it's an iSCSI issue. If the network connection isn't passing traffic, no multipath configuration in the world is going to help. The problem is that Windows waits 30 seconds, I assume to make sure the network is really down, before flipping over to the other NIC on the system. So for that 30 seconds iSCSI has no path to the storage. It seems like Windows reaches both 192.168.110.180 and .181 out of NIC1; then when the port drops, it waits 30 seconds to confirm the network is down before giving up and finding another path.
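
You can actually see that routing decision from the host (a sketch using the in-box NetTCPIP module):

# Ask the stack which interface and source address it would use for each
# target; with both NICs on one subnet, both answers typically come back
# on the same (lowest-metric) interface.
Find-NetRoute -RemoteIPAddress 192.168.110.180
Find-NetRoute -RemoteIPAddress 192.168.110.181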

 

TR-3441 Windows Multipathing Options with Data ONTAP: Fibre Channel and iSCSI

 

Since Server 2012 R2, when using LBFO and not 3rd party teaming software, LACP is supported for iSCSI connectivity (Sec. 5). I have tested this in pre-prod and it works 100%.
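
For reference, the team I tested was along these lines (a sketch; the adapter and team names are placeholders, and the switch ports need a matching LACP port channel):

# Build an LACP team from the two dedicated iSCSI NICs, then put the iSCSI
# IP on the resulting team interface.
New-NetLbfoTeam -Name "iSCSI-Team" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic
New-NetIPAddress -InterfaceAlias "iSCSI-Team" -IPAddress 192.168.110.235 -PrefixLength 24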

 

The above document also states that when using iSCSI MPIO, NetApp recommends using 2 separate subnets (Sec. 7.1).

 

TR-4080, which you pointed me to, does not specifically discuss a Windows host with multiple NICs; it only covers a NetApp iSCSI target with multiple IPs on the same subnet. The screenshots don't show any source info.

 

TR-3441 is the only document which discusses the issue I am facing, and it indicates this is not a recommended configuration.

 

Following the logic in MS KB 175767, having two adapters on the same subnet in any situation will not load balance and may cause issues.

http://support.microsoft.com/en-us/kb/175767

 

The SAN Configuration Guide gives no indication of how many subnets to use, just a general design of how the network should be cabled. Mine is the fully redundant model on page 10. My interpretation of "multiple IP networks" is just that: two separate networks, which would require two separate subnets to work properly.

 

I can't find a version of the Clustered Data ONTAP SAN Express Setup Guide that pertains to the 8020 or 8040, just the 32xx series. I wouldn't expect it to be any different, since that document doesn't discuss the Windows side in any way other than installing the DSM.

 

The Clustered Data ONTAP iSCSI Configuration for Windows Express Guide had no real value that I could find. It covers a very basic iSCSI configuration and also makes no mention of how many subnets to use with a fully redundant model.

 

Overall, very general documentation is all I can find.

 

If this configuration should work, what's missing when the DSM and HUK are installed? To me this just seems like an unresolvable issue, given how Windows networking fundamentally works.

 

Thank you

Brando
5,803 Views

Hi

Are you able to share the following with me? I am also doing some testing on this and would like to find the answer. (Most of this can be gathered with the cmdlets sketched after the list.)

  1. The adapters' make, model, driver, and firmware
  2. The adapter properties configuration, including the Advanced and Power Management settings
  3. The MS networking services and features enabled on those adapters
  4. Details of the MS iSCSI initiator configuration, for example this KB, and specifically the initiator-to-target bindings
  5. The details of the initiator's and target's switch port parameters, such as send/receive hardware flow control and priority flow control
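
If it helps, most of items 1-4 can be pulled with the in-box cmdlets, roughly like this (a sketch; "NIC1" is a placeholder adapter name, and firmware usually has to come from the vendor's tool):

# 1-2: adapter make/model, driver details, advanced properties, power management
Get-NetAdapter "NIC1" | Format-List Name, InterfaceDescription, *Driver*
Get-NetAdapterAdvancedProperty -Name "NIC1"
Get-NetAdapterPowerManagement -Name "NIC1"
# 3: services/features (bindings) enabled on the adapter
Get-NetAdapterBinding -Name "NIC1"
# 4: iSCSI initiator portals, sessions, and connections (shows the bindings)
Get-IscsiTargetPortal
Get-IscsiSession
Get-IscsiConnection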

Many thanks

 

M3-Steve
5,196 Views

This situation really caught my eye.
https://support.microsoft.com/en-us/topic/how-multiple-adapters-on-the-same-network-are-expected-to-behave-e21cb201-2ae1-462a-1f47-1f2307a4d47a
In this article, it explains how Microsoft handles multiple adapters on a connected subnet.
I know the OS is outdated in the article, but networking in Windows at that level probably has not changed.
I imagine you can prove this with a "route print" and by checking the metric on each interface's connected-subnet route.
I guess I can see why you would want multiple subnets for iSCSI sessions if a Windows client is in the mix.
I'm not sure how other OSes would handle the same scenario, but given the collision detection mechanism of physical Ethernet and the MS client's decision tree for handling this, I would think the 30 seconds is normal, just not good for iSCSI.
Hmmm, I guess it would not be a good idea to change to multiple subnets, since all clients connected in this manner would have a 50/50 chance of a 30-second timeout upon changing the LIF.
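
Something along these lines would show the metric behaviour (a sketch, using route print plus the in-box NetTCPIP cmdlets):

# Compare the connected-subnet routes and the interface metrics across both
# NICs; the interface with the lowest metric wins for on-link traffic.
route print -4
Get-NetRoute -DestinationPrefix 192.168.110.0/24 | Format-Table InterfaceAlias, DestinationPrefix, RouteMetric
Get-NetIPInterface -AddressFamily IPv4 | Format-Table InterfaceAlias, InterfaceMetric, ConnectionState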

Brando
5,621 Views

Changing LinkDownTime and MaxRequestHoldTime to 5 seconds resulted in 12-15 second failover times. In my testing this was the most stable; anything lower created unnecessary events.

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters

This is with the Host Utilities installed; without them you would also need to look at MPIO path verification and set it to 0:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\mpio\Parameters
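
In PowerShell terms the tweak looks roughly like this (a sketch; the 0003 instance number varies per host, so confirm which instance under that class GUID holds the iSCSI Parameters key, and a reboot is typically needed for the initiator timer changes):

# iSCSI initiator hold/link-down timers, in seconds
$iscsiParams = 'HKLM:\SYSTEM\CurrentControlSet\Control\Class\{4d36e97b-e325-11ce-bfc1-08002be10318}\0003\Parameters'
Set-ItemProperty -Path $iscsiParams -Name LinkDownTime -Value 5
Set-ItemProperty -Path $iscsiParams -Name MaxRequestHoldTime -Value 5
# Without the Host Utilities: disable MPIO path verification
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\mpio\Parameters' -Name PathVerifyEnabled -Value 0
# Equivalent via the MPIO module
Set-MPIOSetting -NewPathVerificationState Disabled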

---------------------------------------------------------------------------

For reference, this is the full list of registry values changed by Host Utilities 7.1:

https://library.netapp.com/ecmdocs/ECMLP2789202/html/GUID-7A8FEDEC-2645-4995-AB18-EE2697CF4D63.html

https://mysupport.netapp.com/site/products/all/details/hostutilities/downloads-tab/download/61343/7.1/downloads

 

https://library.netapp.com/ecm/ecm_download_file/ECMLP2789202

Brando
5,445 Views

NB: Any custom MPIO configuration should also be tested with applications and other failure scenarios before being used in production. Please also review this KB:

What are the parameters that control how MS iSCSI survives lost TCP connections without causing applications harm?
