VMware Solutions Discussions

ESXi and cDOT 8.3 really need non-routable private IP?

I have experience with VMware only on SAN, so this is quite new for me as I have to set up a new NAS environment for hosting VMware on cDOT filers.  I have read many documents on the topic, and NetApp's best practice is to set up a non-routable private IP LIF for VMware.  Is that really critical and necessary?


Re: ESXi and cDOT 8.3 really need non-routable private IP?

Hello @NAMAN,


It's important to remember that best practices are guidelines, not laws.  There are a number of reasons why we recommend a private storage network, though I believe these two are the most significant:


  • NFS traffic is not encrypted, and therefore can be snooped by anyone on the network.  A non-routable network segment helps prevent external access to the VMware datastores.  Remember, they are just NFS exports, and if the export policies are lax, they could potentially be mounted by other hosts.
  • Dedicated/separate infrastructure makes it easier to manage throughput for the storage traffic, ensuring that VM and storage traffic don't compete.
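To illustrate the export-policy point above, here is a rough sketch of locking a datastore volume down to a private storage subnet with the ONTAP CLI.  The SVM name (svm1), policy name, volume name, and subnet are all made up for the example; substitute your own:

```
::> export-policy create -vserver svm1 -policyname vmware_ds

::> export-policy rule create -vserver svm1 -policyname vmware_ds -clientmatch 192.168.100.0/24 -rorule sys -rwrule sys -superuser sys -protocol nfs3

::> volume modify -vserver svm1 -volume vm_datastore01 -policy vmware_ds
```

With a rule scoped to the private subnet only, even a host that can reach the LIF from elsewhere on the network should be refused the mount.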


Generally speaking, most issues that we see with regard to performance are a result of shared networking between the virtual machines and the storage.  I tend to recommend, in order of most to least desirable, one of these solutions:


  • Physically separate switches with a flat network dedicated to the storage traffic.  This is the most expensive option (2x as many switches), but you can think of it as being similar to fibre channel...separate infrastructure where storage will not have to compete for resources with other traffic types.
  • Dedicated links to shared switches with a dedicated VLAN.  
  • Shared links with a dedicated VLAN and Network IO Control guaranteeing some reasonable amount of bandwidth to the storage traffic.
  • Shared links with a dedicated VLAN + network QoS.
  • And finally, simply shared links and shared IP space.  This will be the least secure and offer the least amount of control of throughput and packet path through the datacenter.
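On the ESXi side, the "dedicated VLAN" options above boil down to putting the NFS VMkernel interface on its own port group and VLAN.  A rough esxcli sketch follows; the vSwitch, port group, vmk number, VLAN ID, and addresses are example values, not a prescription:

```
# hypothetical port group for NFS traffic on an existing vSwitch
esxcli network vswitch standard portgroup add --portgroup-name NFS-Storage --vswitch-name vSwitch1
esxcli network vswitch standard portgroup set --portgroup-name NFS-Storage --vlan-id 200

# dedicated VMkernel interface, addressed in the private storage subnet
esxcli network ip interface add --interface-name vmk2 --portgroup-name NFS-Storage
esxcli network ip interface ipv4 set --interface-name vmk2 --type static --ipv4 192.168.100.11 --netmask 255.255.255.0
```

Because vmk2 sits in the same subnet as the storage LIF, the NFS traffic stays on that VLAN with no routing involved.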


Note that I almost always recommend a flat network (no routing).  This prevents bottlenecks at uplinks on the physical switches and routers.  That being said, if you're using a leaf-and-spine network architecture, routing is expected and the uplinks are typically sized accordingly...as opposed to a hierarchical network, where uplinks are typically highly oversubscribed and large amounts of north-south traffic to/from storage devices can cause issues.


Along those lines, with any solution that is using VLANs and spanning tree you should make sure that the link path between the switches for the virtualization hosts and the storage devices is optimal.


There is nothing inherently wrong with any of the configurations I described above.  Just be aware of the implications and ramifications each choice can have for your virtualization environment.


Hope that helps!  Please let me know if I can answer any questions.



If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO.

Re: ESXi and cDOT 8.3 really need non-routable private IP?

Thank you so much for your prompt reply.  That was very informative.  


A few more questions remain, though.  Would it be possible to add a private IP network in the same broadcast domain where routable IPs already reside?  If not, do I have to implement two separate broadcast domains for the same SVM, which will host both CIFS for user home folders and the VMware ESX servers?


thank you 



Re: ESXi and cDOT 8.3 really need non-routable private IP?

Is the broadcast domain you're referring to here the ONTAP construct for configuring LIF movement / failover, or the broader networking term?


For ONTAP, broadcast domains are assigned at the port level, not interface level.  Failover groups are assigned to LIFs, and I don't think there is anything that would prevent subnet overloading.  That being said, I've never tested it from the ONTAP or VMware side.


There's nothing technical I know of which prevents you from overloading the same network interface with more than one subnet...just be aware that it's not layer 2 isolation so there's no real improvement to the security isolation.
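As a sketch of what that subnet overloading might look like in ONTAP, you could create a second, non-routable data LIF on a port that already belongs to the routable broadcast domain.  The SVM, node, port, and addresses below are hypothetical, and as noted above I haven't tested this combination end to end:

```
::> network interface create -vserver svm1 -lif nfs_private01 -role data -data-protocol nfs -home-node node01 -home-port e0d -address 192.168.100.10 -netmask 255.255.255.0
```

No route is defined for the 192.168.100.0/24 subnet, so the LIF is reachable only from hosts with an interface in that subnet...but since both subnets share the same layer 2 segment, this is a reachability convenience rather than real isolation.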




