
Initial physical network setup

JKINGPERFECTION

Hello all!  We are new to NetApp. We are going to migrate our current Windows Server 2003 infrastructure to Windows Server 2008 with Hyper-V and NetApp.

We have a FAS2040 with two controllers with 6 disks each.  I have been reading a lot of the best practice guides on how to set up the networking for our project, and I am still quite confused.

We have two Windows Server 2008 host servers with 7 NICs each.  We are also going to set up failover with Microsoft Clustering.

My confusion stems from how to best set up the network.  I've read that the NetApp should be on its own separate network.  We have a Cisco core switch with 3 different VLANs for our LAN infrastructure.  I am not quite sure how to set up the FAS2040 and the host servers.  Thanks for any assistance!


andrc

It's hard to give an exact answer without more detail, but a generic best practice would be as follows:

Assuming you have 4 NICs per controller, create 2 multi-mode VIFs of 2 NICs each, with each multi-mode VIF connected to a different physical switch. Then create a single-mode VIF containing the 2 multi-mode VIFs to give yourself redundancy.
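On the filer console (7-Mode) that works out to roughly the following per controller - the port names, VIF names and IP are just placeholders, the switch ports in each multi-mode VIF need to be set up as an EtherChannel on the Cisco side (or use lacp instead of multi), and you'd also want the same commands in /etc/rc so they survive a reboot:

    vif create multi -b ip vif_sw1 e0a e0b
    vif create multi -b ip vif_sw2 e0c e0d
    vif create single vif_data vif_sw1 vif_sw2
    ifconfig vif_data 192.168.10.10 netmask 255.255.255.0 partner vif_data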

If you're planning to use iSCSI, do not use Microsoft NIC teaming for iSCSI connections; use MCS (Multiple Connections per Session) instead. IP SAN data should be separated from NAS data, preferably using different physical interfaces, but if that's not possible then by separate VLANs.

The 3 VLANs won't be a problem as you can configure the filer interfaces to use VLAN tagging.
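For example (VLAN IDs made up - note that if you tag, the IP addresses go on the VLAN interfaces rather than on the VIF itself):

    vlan create vif_data 10 20 30
    ifconfig vif_data-10 192.168.10.10 netmask 255.255.255.0 partner vif_data-10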

Check out the Network Management Guide for more detail: http://now.netapp.com/NOW/knowledge/docs/ontap/rel7351/html/ontap/nag/frameset.html

JKINGPERFECTION

Andrc,

Thanks for your reply and for the link to the guide. Much appreciated.

paulstringfellow

Hi,

I do lots of Hyper-V and NetApp installs, so this is right up my street! Although of course every install is different.

So, a couple of things: you can use NIC teaming on your Hyper-V hosts should you wish, but it has to be driven by a third-party driver, as Microsoft has no NIC teaming option built into Windows Server. This is not Hyper-V specific; it's a general Windows Server consideration.

In terms of the back-end network, certainly get your SAN traffic onto a separate VLAN or physical switch, as you want this separated out.

A 2040 only has 2 ports per controller unfortunately, so a VIF is an option; however, splitting that VIF across two switches will depend on the switches being able to handle a trunked pair of ports across two separate chassis.

What I tend to go for, to rule that out, is MCS or iSCSI multipathing. If you make sure you download the Data ONTAP DSM, that will give you a whole host of options for how you weight and balance traffic between the multiple 2040 ports, and installing the Windows Host Utilities kit on each Hyper-V server would be a bonus for you.

So basically I'd be going for at least 2 ports per server connected to your iSCSI VLAN, using multipathing to hit more than one NIC per controller, with each of these connections sitting in a separate switch to ensure switch resilience.
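Very roughly, on each host that ends up looking something like this (target portal IPs and IQN are placeholders - in practice I'd use the iSCSI Initiator GUI so you can pin each session to a specific host NIC and filer port):

    iscsicli QAddTargetPortal 192.168.20.10
    iscsicli QAddTargetPortal 192.168.20.11
    iscsicli ListTargets
    iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345678
    mpclaim -s -d

The last command (part of the MPIO feature) just confirms the disks are being claimed by MPIO/the ONTAP DSM with the paths you expect.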

Bear in mind that the LUNs you connect to can only be served by one controller at a time, so don't set up one iSCSI session to one controller and one to the other assuming this will provide resilience. The second controller can only serve the LUNs in the failed controller's volumes after a controller failure, when it takes on the personality of its failed partner and presents all of that controller's IP addresses and volumes.
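You can see (and test) that behaviour from the console if you want to prove it to yourself before going live - obviously not during production hours:

    cf status
    cf takeover
    cf giveback

cf status should report the cluster is enabled and the partner is up; takeover/giveback moves the partner's identity across and back.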

Make sure you use CSVs as well, as this will give you much more flexibility in your Hyper-V deployment, and if you are using System Center to manage the environment (I'd strongly recommend you do!! :-)) then get AppWatch Pro integrated into it as well... it allows you a large amount of management control.
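Once CSV is enabled on the cluster and the NetApp LUNs are added as cluster disks, turning them into CSVs is a one-liner from the 2008 R2 failover clustering PowerShell module (the disk name below is just whatever yours ends up being called):

    Import-Module FailoverClusters
    Add-ClusterSharedVolume -Name "Cluster Disk 1"
    Get-ClusterSharedVolume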

Hope all that helps. Feel free to drop me a line if you want any more detail - happy to help!

JKINGPERFECTION

Paulstringfellow, thanks for your reply!

I've been spending quite a bit of time lately reading MS docs on setting this all up, including Failover Clustering.  Based on what I've been reading, this is how I've been thinking the host server NICs should be set up:

NIC 1 - Hyper-V parent partition management traffic - Public

NIC 2 - VM traffic - Public

NIC 3 - iSCSI SAN traffic - Private

NIC 4 - iSCSI SAN traffic - Private (redundancy for NIC 3)

NIC 5 - MS Cluster heartbeat - Private

NIC 6 - Cluster Shared Volume - Private

NIC 7 - Live Migration - Private

I had no intention of using NIC teaming as we are not a large datacenter.  We have 4 host servers, and each one will run 2 VMs for a total of 8 VMs.  So based on that, would it be possible to have the CSV, Live Migration, and the heartbeat all running off one NIC, or should they be separated out the way I stated above?
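From the docs it also looks like, once the cluster is built, you can check which network it will prefer for CSV traffic (and nudge it via the metric) from PowerShell on 2008 R2 - something like this, with whatever I end up naming the networks:

    Import-Module FailoverClusters
    Get-ClusterNetwork | Format-Table Name, Role, Metric, AutoMetric
    # the cluster uses the lowest-metric cluster-enabled network for CSV traffic
    (Get-ClusterNetwork "CSV").Metric = 900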

My problems are on the NetApp side.  I have no prior experience with SANs or iSCSI, and definitely not NetApp.  Some of the terminology is confusing to me.  From my research and reading, I was going to set up one LUN per controller (for redundancy) and create a large Cluster Shared Volume on each controller and have all the VMs point to the CSVs.  At least this is how I understand the use of CSVs!
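If I'm following the guides correctly, on the NetApp side that works out to something like this on each controller (sizes, names and the LUN/igroup types are just my guesses from the docs):

    vol create vmstore aggr0 500g
    lun create -s 450g -t windows_2008 /vol/vmstore/csv1.lun
    igroup create -i -t windows hyperv_hosts iqn.1991-05.com.microsoft:host1.ourdomain.local
    igroup add hyperv_hosts iqn.1991-05.com.microsoft:host2.ourdomain.local
    lun map /vol/vmstore/csv1.lun hyperv_hosts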

I will also be using the System Center suite of products.  So am I making any sense?

Thanks!
