
iSCSI supported and unsupported EtherChannel and MPIO options?

rprudent

Hi,

I'll be working with a FAS2240A, 2 x 2960S for VM traffic, 2 x 2960S for iSCSI IP storage, and 3 servers (2 x NICs for VM networking, 2 x NICs for iSCSI storage) running VMware.

I plan to use one controller for block-based traffic only, and this is based mainly around iSCSI IP storage.

Below are my understanding and questions; I'd appreciate input on my current understanding and on possible options depending on the type of switch functionality available.

Note: the 2960S supports FlexStack (not sure I can get this, so I'm keeping the config options open).

Understanding

  • ONTAP HA failover would automatically allow the takeover partner to assume the failed controller's identity and continue serving traffic from the failed controller's configured NICs.

  • EtherChannel at the switching level should work the same way for both VM networking and IP storage. EtherChannels can use different load-balancing algorithms on the storage, switch, host, and NICs (the same type should be used on all). IP-based load balancing is preferable, since IPs rarely change and it provides true network redundancy. EtherChannels can also be static or dynamic (LACP), but static should be used for vSwitch compatibility (a config sketch follows this list).

  • This approach handles redundancy and load balancing at the network layer, but for iSCSI IP storage on traditional (non-stacked) switches it may be best to use iSCSI MPIO with the NMP at the path level with Round Robin (RR). Frames can arrive out of order with RR.

  • With traditional switches I can use single-mode vifs for active/passive paths, with automatically managed virtual-port load balancing and no EtherChannel on the switches; this gives two active connections to the controller but only one active path per datastore.

  • I believe RR gives good load distribution, but only one interface is used at a time, whereas IP-based balancing would allow simultaneous unique connections; RR should only be used if cross-stack EtherChannel is not supported.
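
For reference, here is roughly what I have in mind on the switch and storage side for the EtherChannel case. The interface names, VLAN, and IPs are just examples, and I have not validated any of this on the 2960S or the FAS2240A yet:

```
! Cisco 2960S - static EtherChannel ("mode on", no LACP) for standard vSwitch compatibility,
! with IP-based load balancing set globally
port-channel load-balance src-dst-ip
!
interface range GigabitEthernet1/0/1 - 2
 switchport mode access
 switchport access vlan 100
 channel-group 10 mode on

# Data ONTAP 7-Mode - multimode vif with IP load balancing, plus an alias for extra iSCSI sessions
vif create multi vif0 -b ip e0a e0b
ifconfig vif0 192.168.100.20 netmask 255.255.255.0 partner vif0
ifconfig vif0 alias 192.168.100.21 netmask 255.255.255.0

# Without EtherChannel on the switches, a single-mode (active/passive) vif instead:
# vif create single vif0 e0a e0b
```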

Questions

  • If using non-cross-stack switches with a single-mode vif, should I use a single VMkernel port with multiple vmnics uplinked to multiple switches, going to a single-IP vif, with vSwitch port-based load balancing on the vSwitch end?

  • With a single-mode vif, what is meant by "Data I/O to a single IP is not aggregated over multiple links without addition of more links"?

  • If using cross-stacked switches with EtherChannel, should I still use iSCSI MPIO RR with multimode vifs on the storage and IP aliasing to properly utilize the number of vmnic/pNIC paths to the storage interfaces, and does this use the same IP load-balancing algorithm on the vSwitch end? Would I have to use more than one target mapping the LUNs, similar to NFS, in this scenario to utilize the paths evenly?

  • If possible, can I use NIC teaming with vSwitch port-based load balancing for VM networking where cross-stack is not supported, and IP-based load balancing where cross-stack is supported, without iSCSI MPIO?

  • For iSCSI IP storage, can I use the EtherChannel approach together with iSCSI MPIO, and does this make sense?

  • What is the overall recommended way to connect vSwitch NICs to iSCSI storage, both with and without cross-stack EtherChannel support?
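
To make the first and last questions more concrete, this is the MPIO port-binding layout I have seen described for the software iSCSI initiator: one VMkernel port pinned to each vmnic, each bound to the iSCSI adapter, with Round Robin as the path policy. This assumes ESXi 5.x syntax, and vmhba33, vmk1/vmk2, vmnic2/vmnic3, the port group names, IPs, and device ID are just example values on my side:

```
# One port group + VMkernel port per physical NIC
esxcfg-vswitch -A iSCSI-1 vSwitch1
esxcfg-vswitch -A iSCSI-2 vSwitch1
esxcfg-vmknic -a -i 192.168.100.11 -n 255.255.255.0 iSCSI-1
esxcfg-vmknic -a -i 192.168.100.12 -n 255.255.255.0 iSCSI-2

# Pin each port group to a single active uplink (no standby)
esxcli network vswitch standard portgroup policy failover set -p iSCSI-1 -a vmnic2
esxcli network vswitch standard portgroup policy failover set -p iSCSI-2 -a vmnic3

# Bind both VMkernel ports to the software iSCSI adapter
esxcli iscsi networkportal add -A vmhba33 -n vmk1
esxcli iscsi networkportal add -A vmhba33 -n vmk2

# Set Round Robin as the path selection policy on the NetApp LUN
esxcli storage nmp device set -d naa.60a98000xxxxxxxxxxxxxxxxxxxxxxxx -P VMW_PSP_RR
```
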
3 REPLIES

crocker

Hi,

Did you try the new search on the NetApp Support Site?  If you do not find anything, we have external and internal subject matter experts in the NetApp Support Community answering questions about Filers.  If you have an active NetApp Single Sign On (SSO) account login, this link enables you to engage them about your question.

rprudent

Hi,

Thanks, I took your advice and posted the same there as well. Hoping for some feedback, as I think this is a good topic.

AWOROCH2012

Did you ever get a good answer or resolve this?  I'm having similar stumbling blocks. 

In the Best Practices guide I see samples for NFS or iSCSI, but not really one that covers both.  I'm looking to run NFS for primary vSphere storage; I may or may not need iSCSI for LUNs, but I will need to pass iSCSI access through from the guests to the filer.  It looks like I can use a single EtherChannel config from host to switch to storage to aggregate my bandwidth, and possibly provide a VM port group to the VMs the same way to give them access to the same iSCSI target on the filer, roughly like the sketch below.
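
This is only what I'm picturing on the host side, assuming static EtherChannel is already configured on the corresponding switch ports; the vSwitch, vmnic, port group names, and the IP are just examples:

```
# Team both uplinks on one vSwitch with IP-hash (matches a static EtherChannel on the switch)
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcli network vswitch standard policy failover set -v vSwitch1 -l iphash

# VMkernel port for the NFS datastores
esxcfg-vswitch -A NFS vSwitch1
esxcfg-vmknic -a -i 192.168.100.31 -n 255.255.255.0 NFS

# VM port group so the guests can reach the same iSCSI target on the filer
esxcfg-vswitch -A GuestStorage vSwitch1
```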
