ONTAP Discussions

Multiple SVM or Single SVM for Hyper-V




I have a FAS2554 running clustered Data ONTAP. By default, System Setup created two aggregates:


- "sata_data_1" on my node 1 (9 TB)

- "sata_data_2" on my node 2 (9 TB)


I have two Hyper-V clusters. I'm going to create one volume and one LUN per cluster over iSCSI.


What are the best practices concerning SVMs? Should I create one SVM or two?





One or two SVMs really comes down to design considerations in your environment.  Are the two Hyper-V clusters going to contain different customer operations and potentially be managed by separate hypervisor administrators?  If so, separating them into two SVMs would let you leverage the secure multi-tenancy features that SVMs provide (e.g. separate SVM admin accounts, network segregation, etc.).


If you're not separating the Hyper-V clusters for security reasons and you have basically the same kinds of operations/tenants in both, then separating them doesn't really buy you anything.  You'll end up creating (at least) two iSCSI LIFs for the SVM, one per node.  Hyper-V cluster #1 will connect to the iSCSI LIF on node #1 (where its LUN lives) and Hyper-V cluster #2 will connect to the iSCSI LIF on node #2.
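For reference, creating one iSCSI LIF per node in a single SVM looks roughly like this from the clustered Data ONTAP CLI. This is only a sketch: the SVM, node, port, and IP values are made-up placeholders, not taken from this thread.

```
network interface create -vserver svm_hyperv -lif iscsi_lif_n1 -role data -data-protocol iscsi -home-node fas2554-01 -home-port e0a -address 192.168.10.11 -netmask 255.255.255.0
network interface create -vserver svm_hyperv -lif iscsi_lif_n2 -role data -data-protocol iscsi -home-node fas2554-02 -home-port e0a -address 192.168.10.12 -netmask 255.255.255.0
```

Each Hyper-V cluster would then log in to the LIF on the node that owns its LUN, with the other node's LIF available as a non-optimized path.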


If you haven't already evaluated it, be sure to look into SnapManager for Hyper-V for your point-in-time-restoration backup copies.  We've been using the VMware equivalent and it's a great product stack.  Additionally, since you'll be running these Hyper-V clusters on SATA, be aware of your performance during a boot storm.  Without some sort of flash in front of those spindles, hypervisors are very impactful when it comes to I/O, especially if you reboot lots of your guests simultaneously.


Also, be aware that with SMB 3.0 you can host Hyper-V virtual machines on an SMB share if you didn't want to bother with iSCSI.
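As a hedged sketch of that SMB option (the SVM, volume, and share names below are invented for illustration, and Hyper-V over SMB also assumes a CIFS server is already configured on the SVM), provisioning a continuously available share from the ONTAP CLI would look something like:

```
vserver cifs share create -vserver svm_hyperv -share-name hv_store -path /vol_hv -share-properties oplocks,browsable,continuously-available
```

The `continuously-available` share property is what allows Hyper-V to keep VM handles open across storage failover events.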




I've not directly used this solution (since we're an ESXi shop), but it's something to research as you implement your infrastructure.


Good luck,




Hi Chris,


Thank you for your answer. I think I will create two SVMs, one for each Hyper-V cluster.


- SVM 1 will live on aggregate 1 (hosted by node A).

- SVM 2 will live on aggregate 2 (hosted by node B).


Two last questions:


1) When I create SVM 1, I will see that four LIFs are created:


- iSCSI_lif_1 / current port: e0A of node A

- iSCSI_lif_2 / current port: e0B of node A

- iSCSI_lif_1 / current port: e0A of node B

- iSCSI_lif_2 / current port: e0B of node B


What am I supposed to do when I configure MPIO? Do I have to add all four targets?


2) I found this documentation : https://kb.netapp.com/support/s/article/ka31A00000011fNQAQ/how-to-set-up-iscsi-mpio-on-windows-2008-windows-2008-r2-and-windows-2012-using-microsoft-m...


I think I have to do the same thing, then install the Data ONTAP DSM for MPIO. Am I wrong?


Kind regards.


1) Yes, for this you need MPIO software: either install the Data ONTAP DSM or use the native Microsoft MPIO.

I would also recommend installing SnapDrive to establish the iSCSI sessions and mount the LUNs.

2) No, you don't need to do that; the Data ONTAP DSM and SnapDrive will do this for you.


Thanks a lot.


Do I have to install SnapDrive first, or the Data ONTAP DSM?


What about permissions? My clients will have access to the hosts, and I don't want them managing volumes and LUNs with SnapDrive.


Kind regards.


DSM first.


Just remove SnapDrive after you've done the whole configuration 😃


So I have to:


1 - Create the SVM, then the Volume.


2 - Install MPIO feature on Windows Server.


3 - Install Windows Unified Host Utilities 7.0


4 - Install DSM and configure MPIO (Round Robin I think).


5 - Install SnapDrive, create and map the LUN with it.


6 - Uninstall SnapDrive.


7 - Create the cluster.


What do you think ?


1 - Create the SVM, then the Volume.


2 - Not needed


3 - Not needed


4 - Install DSM and configure MPIO (Round Robin I think). <- I would recommend using the default (Least Queue Depth, I think)


5 - Install SnapDrive, create and map the LUN with it.


6 - Uninstall SnapDrive. <- I would do this after creating the cluster


7 - Create the cluster.
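If you instead go with the native Microsoft MPIO route from the KB article linked earlier (rather than the Data ONTAP DSM), the host-side setup can be sketched in PowerShell as below. These are standard Windows Server cmdlets, but the load-balance policy shown is an illustrative assumption, not a recommendation from this thread.

```powershell
# Add the MPIO feature (a reboot is required afterwards)
Install-WindowsFeature -Name Multipath-IO

# Let the Microsoft DSM automatically claim iSCSI-attached disks
Enable-MSDSMAutomaticClaim -BusType iSCSI

# Example load-balance policy: Least Queue Depth
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LQD
```

With this in place, you would add all four targets (both LIFs on both nodes) in the iSCSI initiator, and MPIO sorts out the active/optimized paths via ALUA.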



Back to the permissions: who needs permissions on the Hyper-V host?



Why are MPIO and the Host Utilities not needed? (https://library.netapp.com/ecm/ecm_get_file/ECMP1656700)


In fact, my clients will administer the Hyper-V hosts (adding/deleting VMs, for example), but I don't want them to be able to extend or create a LUN with SnapDrive.




MPIO should be activated automatically when installing the DSM.

Sorry, the Host Utilities are needed after all. I was assuming a virtual machine, but because of Hyper-V this is a physical machine; the DSM already sets the needed parameters.


I don't know Hyper-V well, but isn't there a possibility for VMs to be set up remotely without accessing the host directly?


I'll install only the Host Utilities.


Yes, but they will need access to the CSV to copy some '.vhd' templates, for example. If that's impossible, I will remove SnapDrive (if it's not dangerous for the cluster ^^).




Careful with multiple SVMs; they don't scale well in your situation:

You can only have up to 32 iSCSI IP addresses (LIFs) on a given port across the HA pair.


Also, you will not be able to easily move volumes across SVMs or clone them (for example, if you are doing an in-place upgrade of Hyper-V, or a restore across clusters).



Remember that Hyper-V clusters come and go every couple of years, whenever Microsoft releases a new Windows version or revision (like R2).

I would also advise considering SMB 3 directly from the filer instead of iSCSI.



Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK