If I have a vif with one target portal group, and a Win2008 host with two NICs and the software initiator, how do we configure multiple iSCSI sessions to use multiple physical links at both the host and storage (vif) layers?
You have to remember that "multi-session" and "multi-path" are two different things. Multiple sessions can be configured over pretty much any link combination, although you will hit its limits faster on single links. Multi-path requires at least two different subnets (best if isolated logically and physically). Ideally, you would use at least two physical interfaces on different NICs on the filer if you really want redundancy. Using VLANs helps you get around the need for extra physical interfaces, but will not give you the same level of physical redundancy, and probably not the same bandwidth, because "port aggregation" or "EtherChannel" load balancing is deterministic and has no idea of "load" per se: just a hash over MAC/IP connection pairs across up to 8 member links.
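The "deterministic, not load-aware" behavior of that kind of link aggregation can be sketched in a few lines. This is a toy model only: real switches hash on vendor-specific field combinations, and the addresses below are made up.

```python
# Toy model of EtherChannel-style load balancing: each flow is pinned
# to a member link by a hash of its addresses, with no awareness of
# how busy that link actually is.

def pick_link(src_ip: str, dst_ip: str, n_links: int = 8) -> int:
    """XOR the low bytes of source and destination IP, modulo link count."""
    src_low = int(src_ip.rsplit(".", 1)[1])
    dst_low = int(dst_ip.rsplit(".", 1)[1])
    return (src_low ^ dst_low) % n_links

# The same src/dst pair always lands on the same link, no matter the load:
assert pick_link("10.0.0.5", "10.0.0.9") == pick_link("10.0.0.5", "10.0.0.9")
```

The point is that one heavy flow and one idle flow can hash onto the same link while other links sit empty, which is why aggregation is not a substitute for real multi-pathing.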
Setting up multi-session is just a matter of configuring it on the Windows host.
Please could you explain why 2 separate IP networks are required? Is it not feasible to have two separate NICs, each with an IP on the same network, and multipath?
Also, I checked the MS iSCSI guide for info on how to enable multi-session, but there is no info there. Can you give me the high-level steps required to set this up?
Lastly, do I need multi-session to take advantage of more physical links (I do this in ESX environments but am not sure for Windows)?
Multi-pathing is just that: multiple paths. The point here is a sort of emulation of typical FC SAN networks. There is no good way to segregate traffic on the same subnet. Operating systems don't have deterministic connections between multiple IPs on the same subnet. Since all are of equal value from a routing standpoint, the traffic can leave and enter any interface it wants. MPIO requires at least 2 VLANs, and at best 2 (or more) separate physical networks.
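A quick way to sanity-check an MPIO addressing plan is to verify that the two paths really do sit in different subnets. A minimal sketch using the standard library; the addresses are placeholders:

```python
import ipaddress

def same_subnet(ip_a: str, ip_b: str, prefix: int) -> bool:
    """True if both addresses fall inside the same network."""
    net = ipaddress.ip_network(f"{ip_a}/{prefix}", strict=False)
    return ipaddress.ip_address(ip_b) in net

# Two paths on the same /24 -- not a valid MPIO layout:
assert same_subnet("192.168.10.11", "192.168.10.12", 24)

# Paths split across two /24s -- what MPIO expects:
assert not same_subnet("192.168.10.11", "192.168.20.11", 24)
```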
Configuration of MCS for Windows is explained on page 42 of the users guide for the iSCSI software initiator. You can get a copy here: iSCSI Users Guide
Operating systems don't have deterministic connections between multiple IP's on the same subnet. Since all are of equal value from a routing standpoint, the traffic can leave and enter any interface it wants.
Sorry for being pedantic, but this is wrong. IP traffic is pretty much deterministic. Do not start yet another urban legend ...
Sorry to be pedantic as well, but "IP traffic is pretty much deterministic"? How is that a refutation of what I wrote? That is hardly a specific sentence at all.
Traffic between hosts with multiple interfaces in the same subnet has no deterministic source IP/interface for sending traffic, either for initiating or replying. (There is an option on NetApps that lets you sort of "force" the NetApp to reply from the interface where traffic entered.)
This is where so many "newbies" make the mistake of thinking that things will be better if they just add more IP addresses. Because a host with multiple IPs in a single subnet (netmask) can answer from, or initiate traffic from, any IP address within that subnet (the only criterion the OS cares about is routing), it breaks things in "mysterious" ways for firewalls and things like NFS exports, just to name a few. The fact that tons of Linux admins pile on additional source-routing firewall hacks just obfuscates the problem.
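You can see the source-selection behavior with nothing more than a UDP socket: unless the application explicitly binds, the kernel picks the source address by routing, not by any preference you can rely on. A minimal sketch on Linux-style loopback (a UDP connect() fixes only the destination; no packet is sent):

```python
import socket

# connect() on a UDP socket fixes the destination; the kernel then
# chooses a source address via its routing decision.
s1 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s1.connect(("127.0.0.1", 9))
chosen = s1.getsockname()[0]     # whatever source the kernel picked

# Only an explicit bind pins the source address -- essentially what
# MPIO does per path by tying each session to one interface.
s2 = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s2.bind(("127.0.0.2", 0))        # any 127.x address is loopback on Linux
s2.connect(("127.0.0.1", 9))
pinned = s2.getsockname()[0]

print(chosen, pinned)
s1.close(); s2.close()
```

With two real NICs on one subnet, the first socket could legitimately come out of either interface, which is exactly the problem for export rules and firewalls.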
It is not a viable solution for MPIO (and a lot of other things) because you don't really know which interface a host is initiating or replying from, and if one of the interfaces on the server or the NetApp goes down, then you may or may not (again, non-deterministically) get the path failover that you want. Not to mention that I don't think you can even set up MPIO without 2 VLANs/interfaces on the Windows side.
So, no urban legend, just the facts.
Traffic between hosts with multiple interfaces in the same subnet have no deterministic source IP/interface for sending traffic, either for initiating or replying
Again - this is an incorrect statement, to put it mildly. Traffic has a very well defined source address and interface for each connection. It is true that the source address/interface may differ between connections. In a quite deterministic way.
But it becomes off-topic here.
Well, I hate to belabor the point or be this confrontational, but you are wrong, at least about me being wrong, and your refutation never actually addresses what I am talking about.
This is easily demonstrable.
1. Setup a unix/linux host with 2 IP's in the same subnet and one IP on a NetApp in the same subnet. Create an export rule for just one of the IP's on the unix host. Try to mount the export a few times. At random times it will fail.
2. Put a NetApp filer in a different subnet. Setup 2 IP's on the filer in that subnet. Setup firewall rules that allow for snmp requests to one of the interfaces. Run snmpwalks for a while. It might not fail right away, but at some point it will fail. The NetApp will answer from the IP/interface that is not allowed.
3. Just sniff the traffic from example one. Even add another IP on the NetApp side to make it even more interesting.
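The failure mode in example 1 can be mimicked in a few lines: if the client may source from either of two equal-value addresses but the export rule allows only one, some fraction of attempts gets refused. Purely illustrative; the addresses are made up and the random choice stands in for the kernel's selection.

```python
import random

random.seed(7)  # reproducible run of the thought experiment

client_ips = ["10.1.1.21", "10.1.1.22"]   # two NICs, same subnet
export_allows = {"10.1.1.21"}             # export rule lists only one IP

# The OS is free to source from either address; model that as a choice.
attempts = [random.choice(client_ips) for _ in range(100)]
refused = sum(ip not in export_allows for ip in attempts)
print(f"{refused} of 100 mount attempts refused")
```

That intermittent, roughly-half failure rate is exactly the "mysterious" behavior people hit in practice.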
But basically, there is no difference between interfaces on the same subnet. If the OS chooses to initiate traffic from any one of them, it is behaving correctly, and even deterministically to a point (it is following routing rules), but not predictably for a specific interface/IP. The OS just makes the choice.
This will basically work with any 2 interfaces in the same subnet. I guess I wouldn't be sitting here writing all of this if you had ever tried this. Even if I had 2 interfaces on the same subnet on this laptop, I couldn't, without some source route hack, determine which of them would contact this website. The OS would just choose an IP, because they have equal value in reaching the default gateway, because they are in the same subnet.
Basically, you are avoiding a direct refutation. That connections have well-defined source and destination IPs is true, but that is hardly the point here. Qualifying your arguments with "quite" and "pretty much" does seem to detract a bit from your contentions as well.
Anyway, like you said, we are wandering off topic here.
I would certainly be interested in more details of environment in both examples (host OS versions, details of interface configurations, routing tables on hosts/filers etc) as well as network sniff results if you have them available.
I am hearing / seeing conflicting information.
In ESX environments the configuration we have followed is in section 3.8 of the "NetApp and VMware vSphere Storage v2.1" document, which explains multi-session and multi-path with two vmk ports, both of which are in the same IP subnet (169.254.177.0/24). Is this not the same principle for a Windows host? Or am I missing something here?
A reference link to the document and which page you are referring to would be very helpful, but if you are referring to TR-3749, then I don't seem to be able to view that at the moment. I just get an error saying that "the document is damaged and could not be repaired".
Multi-pathing on ESX on FC hasn't been a very good experience for the most part. I haven't had to use it for iSCSI there but I assume the same rules apply. If you really want the advantages of multi-pathing, you should be using some sort of redundant network setup that segregates the network paths physically enough that single hardware failures will only affect one path at a time.
Multi-connection will just add connections up to a limit after the initial connection has been established.
Interesting choices for IP's in that example.