We are a VMware shop and use a distributed vSwitch for our networking.
We have 10G switches connected to the hosts with all ports configured to trunk the VLANs required for VMware.
Our portgroups are configured to use all NICs with each portgroup having different primary/secondary NICs for teaming.
We do not use the link-local address space (169.254.0.0/16) that the ONTAP Select Deploy utility uses for the internal network.
We do have other RFC 1918 address spaces available in our network, and I'd like to use those for the internal network IP address space.
Is there a method to configure the internal network IPs prior to or during deployment of a 2-node cluster?
TR-4517 has all of the details.
To summarize: a VLAN for 169.254/16 needs to be created in your network infrastructure, connected to your VMware hosts, and configured in either a vSwitch or dvSwitch.
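On the vSwitch side, that step might look like the sketch below. This is an illustrative esxcli sequence for a standard vSwitch, not a procedure from TR-4517; the VLAN ID 3000, the portgroup name `ontap-internal`, and `vSwitch0` are all placeholder assumptions — substitute whatever your network team allocates (a dvSwitch would be configured through vCenter or PowerCLI instead).

```shell
# Hypothetical values: portgroup "ontap-internal", VLAN 3000, vSwitch0.
# Run on each ESXi host that will carry the ONTAP Select internal network.

# Create the port group on an existing standard vSwitch
esxcli network vswitch standard portgroup add \
    --portgroup-name ontap-internal \
    --vswitch-name vSwitch0

# Tag the port group with the VLAN carved out for 169.254/16 traffic
esxcli network vswitch standard portgroup set \
    --portgroup-name ontap-internal \
    --vlan-id 3000
```

The upstream physical switch ports would also need to trunk that VLAN, per the summary above.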
The MTU 9000 requirement is silly IMHO. We run 10G fine at 1500 and have 1ms or less latency in our VMware environment.
If any NetApp people read this, consider this an RFE for ONTAP Select to allow the owner of the cluster to configure the internal network in a multi-node cluster.
I guess I should clarify the MTU 9000 comment.
MTU is more related to throughput than latency.
In our VMware network infrastructure (which is small: 6 hosts and 100 VMs) our 10G interfaces are nowhere near their theoretical bandwidth/throughput limits.
I realize in a 10G environment it is advantageous to deploy MTU 9000 but we are comfortable with maintaining MTU 1500 for now.
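The throughput-versus-latency point can be put in rough numbers. The sketch below computes the fraction of on-wire bytes that carry application payload at MTU 1500 versus 9000, assuming a plain TCP/IPv4 stream over untagged Ethernet (standard header sizes; real traffic with VLAN tags or TCP options shifts the figures slightly):

```python
# Back-of-the-envelope payload efficiency at MTU 1500 vs. 9000,
# assuming TCP/IPv4 over Ethernet with no VLAN tag and no TCP options.
ETH_OVERHEAD = 8 + 14 + 4 + 12   # preamble + Ethernet header + FCS + inter-frame gap
IP_TCP_HEADERS = 20 + 20         # IPv4 header + TCP header

def efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes that are application payload."""
    payload = mtu - IP_TCP_HEADERS
    wire = mtu + ETH_OVERHEAD
    return payload / wire

print(f"MTU 1500: {efficiency(1500):.1%}")  # ~94.9%
print(f"MTU 9000: {efficiency(9000):.1%}")  # ~99.1%
```

So jumbo frames buy a few percent of usable bandwidth (plus lower per-packet CPU cost) — which matters near link saturation, but not much on lightly loaded 10G links like ours.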
I believe the internal network (non-)config is one of those things that happens when a physical appliance is converted to a virtual appliance.