Microsoft Virtualization Discussions

Hyper-V and physical NIC count

radek_kubka
11,099 Views

Hi all,

I just had a look at "NetApp Storage Best Practices for Microsoft Virtualization" (http://www.netapp.com/us/library/technical-reports/tr-3702.html). The table about physical NIC requirements on page 8 is somewhat confusing to me - I think it relates to a config without NIC redundancy, so ideally twice as many ports are required. The following page talks about NIC teaming as being doable, but I assume I shouldn't team ports across different types of traffic, should I?

If the above is correct, a staggering 14 physical ports appear to be needed for a full-blown Hyper-V implementation...

Any thoughts, comments, ideas?

Regards,

Radek

1 ACCEPTED SOLUTION

chaffie
11,096 Views

Radek,

As the author of TR-3702, maybe I can provide some insight into how these best practices came about and offer supporting evidence, since most of our recommendations mirror those that Microsoft has provided in its technical documentation about Hyper-V. When I reference sections and page numbers in this reply, please assume (unless otherwise noted) that they refer to content in TR-3702 v3.

On page 7, I begin to discuss the different types of network connectivity present in a Hyper-V server. Assume all of the network connectivity represented by the bullets on page 7 is present in the environment – that is, you have deployed more than one Hyper-V server, those Hyper-V servers are clustered together with the VMs (and their VHDs) deployed on Cluster Shared Volumes (CSVs, see page 32) over iSCSI connectivity, and you are making use of Live Migration to move VMs between Hyper-V servers in the cluster. Although I feel I have explicitly discussed the reasons for the types of network connectivity listed on page 7, let me briefly cover them here too.

  • Hyper-V Management – Microsoft also recommends that you dedicate a physical network adapter to Hyper-V for remote management. See the section “Virtual Networking Basics” here; the first paragraph covers this best practice, which NetApp also recommends. It is also discussed here under the section titled “Network Recommendations for using Live Migration”, within the first bullet, titled “Network Adapters”.
  • Virtual Machines – Microsoft also recommends that you dedicate one or more physical network adapters to the virtual machines via the virtual switches. See the section “Virtual Networking Basics” here; the first paragraph covers this best practice, which NetApp also recommends. Obviously we would prefer to provide multiple physical network adapters for redundancy for the VMs; NIC teaming is therefore supported with Hyper-V, and you can present a logical teamed adapter to a virtual switch for VM external connectivity.
  • IP Storage – Microsoft also recommends that you dedicate one or more physical network adapters to iSCSI connectivity if you have deployed iSCSI in your environment. See the section “Virtual Networking Basics” here; the first paragraph covers this best practice, which NetApp also recommends.
  • Windows Failover Cluster Private – Microsoft recommends that a private network be configured for private communication between the cluster nodes. While a separate physical network adapter could be dedicated for this purpose, many customers have chosen to piggyback this connectivity on the physical network adapter used for Hyper-V Management, as discussed above. See the section “Virtual Networking Basics” here; the first paragraph covers this best practice, which NetApp also recommends.
  • Live Migration – If using Live Migration, Microsoft recommends that connectivity for this purpose be given its own dedicated Gigabit (or faster) physical network adapter, as discussed here under the section titled “Network Recommendations for using Live Migration”, within the first bullet, titled “Network Adapters”. If you must provide connectivity for Live Migration on a network adapter already used for another purpose, that adapter must not be shared with the adapters used for private communication between the cluster nodes, for VM communication external to the Hyper-V parent, or for connectivity to iSCSI storage.
  • Cluster Shared Volumes – Again, Microsoft recommends that connectivity for this purpose be given its own dedicated Gigabit (or faster) physical network adapter, as discussed here under the section titled “Network Recommendations for using Cluster Shared Volumes”, within the first bullet, titled “Network Adapters”. If you must provide connectivity for Cluster Shared Volumes on a network adapter already used for another purpose, that adapter must not be shared with the adapters used for remote access to the Hyper-V parent or for VM communication external to the Hyper-V parent.

Based upon Microsoft’s recommendations, which NetApp follows and in some cases improves upon, the minimum number of network adapters for a Hyper-V configuration as outlined in the opening paragraph of this reply is 5 for a non-production deployment where redundancy is not a concern. Where redundancy is a concern, such as in production environments, the minimum number is 7, because we’d want to use MPIO for iSCSI connectivity if present, and multiple network adapters (likely teamed) for VM external connectivity. In the tables on page 8, I take into account the need for multiple physical network adapters for redundancy when using clustered Hyper-V servers with Live Migration, CSVs, and iSCSI – therefore, 7 or 8 network adapters is the minimum for a production Hyper-V environment. This assumes that redundancy is not needed for the following (a rough sketch of this port arithmetic follows the list):

  • Hyper-V Management – if a failure occurs here, there is little option to manage the Hyper-V host; VMs would have to be migrated off remotely, or the server would have to be powered down and the VMs would incur an outage in the meantime. I have customers that use the network adapter for private cluster communication as a backup.
  • Windows Failover Cluster Private – if a failure occurs here, Hyper-V may migrate VMs to another node in the cluster or provide a warning. Some customers use the network adapter configured for Hyper-V management as a backup.
  • Live Migration – This is a bit of a risk, but most customers aren’t using any functionality to dynamically migrate VMs between nodes yet, so the risk is minimized in those configurations. If a failure occurs here, the nodes would use the Windows Failover Cluster Private network as a backup; the only risk is that multiple simultaneous Live Migrations could saturate that link and cause problems for the cluster, as discussed above, or prevent Live Migrations from completing successfully.
  • Cluster Shared Volumes – This is probably the biggest risk of all, but just like the Live Migration connectivity, if a failure occurs here, the nodes would use the Windows Failover Cluster Private network as a backup. For more information on the possible risk, please see page 35 and section 4.3.3.4 on Dynamic I/O Redirection, or “Understanding redirected I/O mode in CSV communication” here.
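
As a quick illustration of that port arithmetic, here is a minimal sketch in Python. It is purely illustrative – the role names and per-role counts are my summary of this reply, not output from TR-3702 or any Microsoft tool:

```python
# Rough sketch of the minimum-port arithmetic for a clustered Hyper-V host
# using iSCSI, CSVs and Live Migration, as described above.

def minimum_ports(production: bool, separate_mgmt_and_cluster: bool = False) -> int:
    # (role, ports without redundancy, ports with redundancy)
    roles = [
        ("Hyper-V Management + Cluster Private", 1, 1),  # typically piggybacked on one NIC
        ("Virtual Machines", 1, 2),                      # teamed pair in production
        ("iSCSI", 1, 2),                                 # MPIO pair in production
        ("Live Migration", 1, 1),                        # can fall back to the cluster-private network
        ("Cluster Shared Volumes", 1, 1),                # can fall back to the cluster-private network
    ]
    total = sum(prod if production else lab for _, lab, prod in roles)
    if separate_mgmt_and_cluster:
        total += 1  # give management and cluster-private their own adapters
    return total

print(minimum_ports(production=False))                                  # 5
print(minimum_ports(production=True))                                   # 7
print(minimum_ports(production=True, separate_mgmt_and_cluster=True))   # 8
```

This reproduces the 5 (non-production) and 7 or 8 (production) adapter counts discussed above.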

As far as NIC teaming goes, the recommendations in the industry are as follows (a short consistency-check sketch follows the list):

  • When configuring NIC teaming, use two ports of the same speed and with the same configuration (speed, duplex, VLAN, jumbo frames, etc.).
  • When configuring NIC teaming, use two ports that span separate network adapters, in separate PCI slots/buses; that way, if you lose one physical adapter, the other adapter – and the teamed port that resides on it – remains online.
  • You must never send iSCSI traffic over a logical teamed network adapter. MPIO is the preferred method for providing redundancy for iSCSI connectivity over multiple adapters. As such, if you configure a teamed network adapter for use with a virtual switch in the Hyper-V parent, you shouldn’t configure a virtual machine with a virtual NIC connected to that virtual switch and then allow iSCSI traffic to traverse it.
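
To make the first bullet concrete, here is a tiny sketch of the teaming prerequisite check. The adapter dictionaries below are made-up illustrative data, not output from any Windows or vendor tool:

```python
# Minimal sketch: two ports may be teamed only if their settings match and
# they sit on separate physical adapters (approximated here by PCI slot).

def can_team(nic_a: dict, nic_b: dict) -> bool:
    same_config = all(nic_a[k] == nic_b[k]
                      for k in ("speed_gbps", "duplex", "vlan", "jumbo_frames"))
    separate_hw = nic_a["pci_slot"] != nic_b["pci_slot"]
    return same_config and separate_hw

nic_a = {"speed_gbps": 1, "duplex": "full", "vlan": 20, "jumbo_frames": False, "pci_slot": 1}
nic_b = {"speed_gbps": 1, "duplex": "full", "vlan": 20, "jumbo_frames": False, "pci_slot": 2}
print(can_team(nic_a, nic_b))  # True: matching settings, separate adapters
```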

Honestly, although our recommendations are spelled out pretty clearly, for some of this connectivity bandwidth is the primary concern behind them, and they were made primarily on the assumption that most customers will be using 1GbE network adapters. In the next version of TR-3702, I plan to add a few pages and distinguish between recommended network configurations for 1GbE and 10GbE connectivity. The issue is that most customers who deploy 10GbE will have a mix of connectivity, because servers come with one or two 1GbE adapters onboard, while customers may deploy a mix of quad-port 1GbE and 10GbE adapters in PCI slots.

Here are a few – off the record and example only – configurations; all assume that you don’t have the minimum number of ports available, or that you are using a mix of 1GbE and 10GbE connectivity, as is likely in a 1U server limited to two PCI slots. (A small sketch checking these mappings against the sharing rules above follows Config #4.)

Config #1

Network adapters available:

  • Two 1GbE NICs onboard – NIC1 and NIC2
  • Four 1GbE NICs in a PCI slot (quad-port adapter) – NIC3, NIC4, NIC5, and NIC6
  • One 10GbE NIC in the 2nd PCI slot – NIC7

As configured:

  • NIC1 – Hyper-V Management + Windows Failover Cluster Private
  • NIC2 – iSCSI #1
  • NIC3 – iSCSI #2
  • NIC4 – VMs
  • NIC5 – VMs
  • NIC6 – Windows Failover Cluster Private + Hyper-V Management
  • NIC7 – Cluster Shared Volumes (CSVs) + Live Migration

Config #2

Network adapters available:

  • Two 1GbE NICs onboard – NIC1 and NIC2
  • Four 1GbE NICs in a PCI slot (quad-port adapter) – NIC3, NIC4, NIC5, and NIC6
  • Two 10GbE NICs in the 2nd PCI slot – NIC7 and NIC8

As configured:

  • NIC1 – Hyper-V Management + Windows Failover Cluster Private
  • NIC2 – iSCSI #1
  • NIC3 – iSCSI #2
  • NIC4 – VMs
  • NIC5 – VMs
  • NIC6 – Windows Failover Cluster Private + Hyper-V Management
  • NIC7 – Cluster Shared Volumes (CSVs)
  • NIC8 – Live Migration

Config #3

Network adapters available:

  • Two 1GbE NICs onboard – NIC1 and NIC2
  • Four 1GbE NICs in a PCI slot (quad-port adapter) – NIC3, NIC4, NIC5, and NIC6

As configured:

  • NIC1 – Hyper-V Management + Windows Failover Cluster Private
  • NIC2 – iSCSI #1
  • NIC3 – iSCSI #2
  • NIC4 – VMs
  • NIC5 – VMs
  • NIC6 – Cluster Shared Volumes (CSVs) + Live Migration

Config #4

Network adapters available:

  • Two 1GbE NICs onboard – NIC1 and NIC2
  • Two 10GbE NICs in a PCI slot (dual-port adapter) – NIC3 and NIC4

As configured:

  • NIC1 – Hyper-V Management + Windows Failover Cluster Private
  • NIC2 – iSCSI
  • NIC3 – VMs
  • NIC4 – Cluster Shared Volumes (CSVs) + Live Migration
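
To tie the example configurations back to the sharing rules above, here is an illustrative sketch (not a NetApp or Microsoft tool) that checks a NIC-to-role mapping against those rules, using Config #4 as input:

```python
# Sharing rules from earlier in this reply: Live Migration must not share a
# port with cluster-private, VM, or iSCSI traffic; CSV must not share a port
# with management or VM traffic; iSCSI must never use a teamed adapter (use MPIO).

FORBIDDEN_PAIRS = {
    ("Live Migration", "Cluster Private"),
    ("Live Migration", "VMs"),
    ("Live Migration", "iSCSI"),
    ("CSV", "Management"),
    ("CSV", "VMs"),
}

def check_config(nic_roles: dict) -> list:
    """Return rule violations for a {nic_name: set_of_roles} mapping."""
    problems = []
    for nic, roles in nic_roles.items():
        for a, b in FORBIDDEN_PAIRS:
            if a in roles and b in roles:
                problems.append(f"{nic}: '{a}' must not share a port with '{b}'")
        if "iSCSI" in roles and "teamed" in roles:
            problems.append(f"{nic}: iSCSI must not use a teamed adapter; use MPIO")
    return problems

# Config #4 above: two onboard 1GbE ports plus a dual-port 10GbE adapter.
config4 = {
    "NIC1": {"Management", "Cluster Private"},
    "NIC2": {"iSCSI"},
    "NIC3": {"VMs"},
    "NIC4": {"CSV", "Live Migration"},
}
print(check_config(config4) or "No sharing rules violated")
```

Sharing CSV and Live Migration on one 10GbE port, as in Config #4, does not break any of the stated rules; the trade-off is bandwidth, as discussed above.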

If you are looking for more information on a recommended configuration based on the number of network ports you do have available, I would be happy to discuss a possible configuration that best suits your environment. This is a conversation I often have with customers who are using blade servers for deploying Hyper-V, or unique solutions such as HP’s Flex-10 – both of which fall outside of my off-the-record examples above. I can be contacted through the communities or via Twitter @virtualizethis.


11 REPLIES

winfield
11,067 Views

Hi Radek,

Some interesting questions here

Firstly, NIC teaming is supported in Windows 2008 R2; however, there is a bug in the cluster validation test where it doesn’t recognise NICs that have been teamed and share the same virtual MAC address.

I believe this is going to be fixed in Windows 2008 SP1.

So just ignore the error during validation and continue with the cluster build.

One of the best resources we have for network connectivity setup and config for Hyper-V is the Hyper-V boot camp presentation Chris and Chaffie put together (slides 24–26).

You can find this on v-portal.netapp.com under Insight 2009, and it is also attached.

Just give me a call if you need anything else.

R

Steve

radek_kubka
11,067 Views

Hi Steve,

Many thanks for posting this. The slides, though, are identical to the tables in TR-3702.

OK, I am reading table 2-4 on page 8 again and, arguably, iSCSI redundancy and VM traffic redundancy are already taken into account.

So, realistically, management, cluster, migration, and CSV could use just one port each, and all in all we need 8 ports as a reasonable minimum (if using iSCSI), correct?

Regards,
Radek

radek_kubka
11,067 Views

Hi Chaffie,

I couldn't think of a more thorough explanation - many thanks for this!

Kindest regards,

Radek

chaffie
11,067 Views

No problem - Glad I could help!

amiller_1
11,067 Views

Wow....that's like a mini-TR all by itself.

bathnetapp
11,067 Views

Hi Chaffie,

We are still getting to grips with our Hyper-V + NetApp setup. We now have 8 x 1GbE network cards available (4-port onboard + 4 x 1-port cards) in each of our servers. Our initial thoughts are as follows:

  • NIC1 - Hyper-V Management
  • NIC2 - iSCSI #1
  • NIC3 - VMs (teamed with NIC7)
  • NIC4 - Cluster Shared Volumes (CSVs)
  • NIC5 - Windows Failover Cluster Private (aka cluster heartbeat)
  • NIC6 - iSCSI #2
  • NIC7 - VMs (teamed with NIC3)
  • NIC8 - Live Migration

However, we're not sure this is optimal. One thought was that we could team NIC4 and NIC5 together to give us additional bandwidth for the Cluster Shared Volumes (CSV) traffic, since we can't afford 10GbE at this stage. However, we're not sure whether we can team cards for the CSV traffic, nor how private the cluster heartbeat traffic needs to be.

In some of your examples you have combined the Hyper-V Management traffic with the Windows Failover Cluster Private traffic on one adapter. Would this be two VLANs tagged to the same port, or just sending the traffic over one VLAN?

Your thoughts would be most welcome!

Richard Whitcher

University of Bath

johnpaulmorrison
6,801 Views

So for Hyper-V (Config #4): two 10GbE NICs, two 1GbE NICs, and no redundancy, unless I'm missing something. Extra ports, cabling, and complexity.

VMware: two 10GbE NICs (teamed) - full redundancy for VMs and storage (NFS, iSCSI, or FCoE).

What gives? Hyper-V: not ready for real data centers?

radek_kubka
6,801 Views

Well, good point.

Not sure whether this removes or rather adds complexity, but technologies like Cisco UCS with the Virtual Interface Card, or HP Flex-10, can split a single physical NIC into multiple "virtual" NICs (seen as discrete physical devices by the hypervisor).

Regards,

Radek

rgraves2572
11,067 Views

We ended up operating on a total of 6 physical NICs due to the limits of our blades.

We used the following:

1 NIC - Cluster Heartbeat

3 NICs - SERVER team with 4 VLANs (MGMT, SERVER, DMZ, VMHB)

2 NICs - iSCSI team

MGMT - Management VLAN for the parent partition

SERVER - VM virtual switch on the SERVER VLAN

DMZ - VM virtual switch on the DMZ VLAN

VMHB - Cluster heartbeat VLAN (for application clusters)

We operate a 5-node Windows 2008 Hyper-V cluster and haven't had any issues with bandwidth constraints for VM or iSCSI traffic, either for the parent or the virtual machines.

Our networks are teamed with the HP network utility.

** The above is a production cluster that has been running for 18 months without issue, powering 55 virtual machines and 100 dedicated LUNs hosting VHDs.

Active Directory

Citrix Farm

Exchange 2007 CCR

MOSS 2007

Two SQL Active / Active Clusters (4 SQL Instances)

Multiple Application Servers

Running on a FAS3140 (43-disk FC aggregate)

400-user workload

-Robert

radek_kubka
11,067 Views

Hi Robert,

Many thanks for posting this - real-life production examples are always priceless!

Regards,

Radek
