iSCSI or NFS for Citrix XenServer 5.6? If iSCSI, then VIFs?


TR 3732 recommends block iSCSI or FC storage for Citrix XenServer 5.x on NetApp using the new StorageLink Gateway feature in XenServer. This is in contrast to the general consensus that NFS is the preferred method for virtualization on NetApp. TR 3732 jumps from the discussion of iSCSI and StorageLink Gateway directly to a discussion of VIFs for NFS, and doesn't cover VIFs for iSCSI. The paper then recommends iSCSI with DMP for the XenServer but makes no mention of a preferred iSCSI networking configuration for the NetApp, other than to indicate separate target portal groups.

Does this mean one should set up the NetApp networking, for Citrix XenServer with iSCSI and StorageLink Gateway, with VIFs the same way as one would for NFS?

In my case, with a 2240, the plan is a 2-port VIF for the "front end" toward CIFS users and a 2-port VIF for the "storage network". I guess I would need to set up two VLANs on my 2-port storage VIF to have two targets for DMP, correct? Or maybe just two single 1 Gbit interfaces, one for each DMP target?

How would one do a VIF for NFS while doing separate IPs for iSCSI DMP? Any ideas?



Good topic Russ. I have some thoughts on the subject but I will wait to hear from the experts before chiming in. I suspect the TR may be implying that you don't use VIFs for iSCSI if you are using DMP.

As an aside, if your front-end VIF is accessible from your XenServer SAN interfaces you may need to explicitly disable iSCSI on this VIF to prevent storage network traffic going over it. I don't know for sure about XenServer but I have seen this occur on vSphere.


What you would do is use 802.1q trunked connections to your storage interfaces. Create a VIF like you normally would, but then create VLAN interfaces on top of the VIF rather than assigning it an IP address. You can create multiple VLAN interfaces and assign an IP address to each on their own subnets.
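As a sketch, this is roughly what the 7-Mode CLI commands would look like (the VIF name, member ports, VLAN IDs, and addresses here are made up for illustration; substitute your own):

```
# Create an LACP VIF with IP-based load balancing over two ports
vif create lacp vif-back -b ip e0b e0d

# Create VLAN interfaces on top of the VIF (VLAN IDs 100 and 200 are examples)
vlan create vif-back 100 200

# Assign each VLAN interface an address on its own subnet
ifconfig vif-back-100 192.168.100.10 netmask 255.255.255.0
ifconfig vif-back-200 192.168.200.10 netmask 255.255.255.0
```

The switch ports facing e0b and e0d would need to be an LACP trunk carrying both VLANs tagged.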

I prefer NFS where possible, but the way XenServer does snapshots is by using LVM, which requires underlying block devices. That may not be the whole story, but I think that's where the practice originated.



This is the same conclusion I came to. My alternatives are to do physical separation with no VLANs, or VLANs on VIFs.

Physical separation would look like this:

e0a - CIFS

e0b - NFS

e0c - iSCSI-1

e0d - iSCSI-2

Or, VLANs on VIFs as you suggest, like this:

vif-front = e0a+e0c - CIFS

vif-back = e0b+e0d - vlan-NFS at nnn.fff.sss.xxx, vlan-iSCSI-1 at iii.scs.ii1.001, vlan-iSCSI-2 iii.scs.ii2.001

For VLANs on VIFs, LACP would be configured on the NetApp and on the Ethernet switches with IP load balancing.


Correct: LACP trunks on the switches with src-dst-ip as the load-balancing policy, and VIF type LACP with IP load balancing on the NetApp.
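On the switch side, that would look something like the following Cisco IOS sketch (port numbers, channel-group number, and VLAN IDs are illustrative; adjust to your environment):

```
! Hash EtherChannel traffic on the source/destination IP pair
port-channel load-balance src-dst-ip

! Bundle the two ports facing the filer into an LACP channel
interface range GigabitEthernet0/1 - 2
 channel-group 1 mode active
 switchport mode trunk
 switchport trunk allowed vlan 100,200
```

`mode active` makes the ports negotiate LACP, matching the VIF type on the NetApp side.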

I prefer using the VIF method for high-availability reasons. It also helps balance 1 Gbps traffic. I also recommend using a VLAN interface on your CIFS VIF. The reason is that someday you might want to add another VLAN to the interface. If it's already configured that way, it's easy; otherwise you have to rebuild your network interfaces.

The interface names always have the VLAN number in them, so a VLAN interface on top of a VIF would look like vif-name-vlanid (for example, VLAN 100 on vif-back would show up as vif-back-100).

"IP load balancing" works based on the source and destination, so isn't "src-dst-ip" the same as "IP" LB?


"IP load balancing" and "src-dst-ip" are generally the same thing. NetApp uses the term "ip", Cisco and others use the term "src-dst-ip", and VMware uses the term "IP Hash". They all describe the same approach. The important thing is that the switch and the end host are using the same algorithm.
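As an illustration of how this family of policies behaves, here is a minimal Python sketch of an src-dst-ip style hash. The XOR-and-modulo scheme below is a deliberate simplification (real switches and filers use their own hash functions), but it shows the key property: a given source/destination pair always lands on the same link, so you need multiple IP pairs to spread load across a trunk.

```python
import ipaddress

def pick_link(src_ip: str, dst_ip: str, num_links: int) -> int:
    """Pick an egress link by hashing the source/destination IP pair.

    Simplified XOR-based hash for illustration; actual vendor hash
    functions differ, but the principle is the same.
    """
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % num_links

# The same IP pair always maps to the same link, so a single
# host-to-host flow never exceeds one member's bandwidth...
a = pick_link("192.168.100.10", "192.168.100.50", 2)
b = pick_link("192.168.100.10", "192.168.100.50", 2)
assert a == b

# ...but traffic from many destinations spreads across both links.
links = {pick_link("192.168.100.10", f"192.168.100.{h}", 2) for h in range(50, 60)}
print(sorted(links))
```

This is also why the switch and the filer must agree on the algorithm: if each end hashed differently, return traffic could land on a different member than the forward traffic expects.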