In a LAN environment without cross-stack LACP (EtherChannel) functionality, can NFS load balancing potentially use two stand-alone ports on each NetApp controller, on two different subnets?
I found this in TR-3749:

[TR-3749 diagram titled "Storage side ...": two stand-alone ports per controller on two different subnets]
*If* we remove the vifs on the NetApp side, so that only two stand-alone ports are used per controller, what happens on the ESX host when, let's say, the "green" port on FAS1 goes down? Is it business as usual, the "blue" path is used, and the datastore doesn't lose connectivity? Or does it??
I do not see anything about failover in this post, sorry. What they are saying is that you may need to explicitly configure a route to the datastore to overcome the single default route (a rough sketch of that is below). As long as you use a single interface per subnet, this configuration is not highly available from the point of view of a single ESXi server. But nothing prevents you from adding more interfaces to each subnet and pooling them.
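For illustration only, here is what such an explicit route might look like on a classic ESX/ESXi host. All addresses are hypothetical: assume the VMkernel port sits on 10.10.1.0/24 and the second storage subnet, 10.10.2.0/24, is reached via a router at 10.10.1.254.

```
# List the current VMkernel routing table
esxcfg-route -l

# Add a static route for the second storage subnet so NFS traffic
# to it does not chase the single default gateway
# (addresses are made up for this example)
esxcfg-route -a 10.10.2.0/24 10.10.1.254
```

Newer ESXi releases can do the same with `esxcli network ip route ipv4 add`.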
- if we ignore the NetApp side for a moment, in the picture above there would be no link redundancy between the ESX host and the switches? (just two NICs on two different subnets)
Well, I can't speak for the TR author, but the picture is titled "Storage side ...", so the ESX side should not be considered an authoritative guideline. Also, different logical subnets do not yet imply different physical broadcast domains (VLANs); see the sketch below.
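To make that distinction concrete, here is a hypothetical 7-Mode sketch where two logical subnets really are in separate broadcast domains, because each sits on its own tagged VLAN interface (port names and addresses are made up):

```
# Create two tagged VLAN interfaces on the same physical port
vlan create e0a 10 20

# Each logical subnet now lives in its own broadcast domain
ifconfig e0a-10 192.168.10.5 netmask 255.255.255.0
ifconfig e0a-20 192.168.20.5 netmask 255.255.255.0
```

Without the VLAN tags, two addresses from different subnets configured on the same physical port would still share one broadcast domain.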
I view it more as a conceptual outline. But I agree that it makes things confusing. You have access to Field Portal, right? Go to the TR and submit a comment ...
If you set up the NFS mount/datastore using hostnames, and each subnet had access to a DNS server that could resolve the hostname to an IP reachable on that subnet, then yes, that should work (something like the sketch below). I would test that setup in a lab environment first, and once you have it working, take a maintenance window to implement it on your production systems.
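As a rough sketch of such a hostname-based mount; the filer name, export path, and datastore label here are all hypothetical:

```
# Mount an NFS export as a datastore by hostname rather than by IP;
# each subnet's DNS would need to resolve filer1 to an address
# reachable on that subnet
esxcfg-nas -a -o filer1.example.com -s /vol/nfs_ds1 nfs_ds1

# Verify the mount
esxcfg-nas -l
```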
It depends on your objective. It is easier to build a redundant, failure-tolerant, and load-balanced connection to storage using SAN than NFS. OTOH, NFS is easier to integrate with NetApp features (you have one layer less).
DNS is evaluated during the mount request only, so it does not help when connectivity to the datastore is lost. Even if ESX could transparently remount, it would still mean that anything running off this datastore had already crashed. Not to mention that the DNS server has no idea of interface connectivity on the ESX side, so it can return the same non-working address.