ONTAP Discussions

Direct NFS for multiple ESXi hosts

AllanHedegaard

I would like the community's advice on the correct way to connect two ESXi servers to an AFF using SFP+ without an L2 switch. The goal is to have the same NFS datastore mounted on both servers.

 

The plan is a direct connection from each server NIC to a port on each controller, to provide redundancy in case of a link or controller failure.

 

Is a single subnet possible across multiple ports, so the same mount IP can be used from different hosts? Or should the ESXi hosts file be used to allow a different mount IP per host behind the same DNS name?
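For reference, the hosts-file approach mentioned above could look roughly like this. This is a hypothetical sketch: the IPs, the `aff-nfs` name, and the export path are placeholders, not values from this thread. Each ESXi host resolves the same name to the LIF it is directly cabled to, so both hosts mount the datastore under one identity.

```shell
# On esxi-01: map the shared name to the LIF this host is cabled to
# (placeholder IP and name, adjust to your environment)
echo "192.168.10.11 aff-nfs" >> /etc/hosts

# On esxi-02: same name, different directly connected LIF
echo "192.168.10.12 aff-nfs" >> /etc/hosts

# Both hosts then mount the same export by name, so the datastore
# appears with the same identity on each host
esxcli storage nfs add --host=aff-nfs --share=/vol_nfs01 --volume-name=ds_nfs01
```

Because ESXi derives the NFS datastore identity from the server name and share path as entered, using the same hostname string on both hosts is what keeps the datastore looking identical, even though the underlying IPs differ.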

 

I really cant be the first to encounter this scenario and would like to know what is the recommended path.

 

Thanks

 

 

1 ACCEPTED SOLUTION

AllanHedegaard

Then assigning a failover adapter in VMware would be the best way to go. It will not provide load balancing, but it does provide failover capability if the active link fails. Such an event would also move the LIF to the secondary port. Correct?
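A minimal sketch of this active/standby uplink setup, assuming a standard vSwitch named `vSwitch1` with `vmnic2` cabled to controller A and `vmnic3` to controller B (all names are placeholders, not from this thread):

```shell
# vmnic2 carries traffic; vmnic3 only takes over if the active link drops
esxcli network vswitch standard policy failover set \
    --vswitch-name=vSwitch1 \
    --active-uplinks=vmnic2 \
    --standby-uplinks=vmnic3

# Verify the resulting failover policy
esxcli network vswitch standard policy failover get --vswitch-name=vSwitch1
```

Note this only reacts to loss of link on the active uplink, which is exactly the limitation discussed later in the thread: a controller that is down but still presenting link-up ports will not trigger this failover.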


27 REPLIES

walterr

"Not optimal" is an understatement. Any time the controller does a takeover, the connection to the datastore is lost and the VMs will most likely crash. You would need to disconnect the physical ports of the failed controller in order to get ESXi NIC failover working. This can probably be done during a planned storage failover, but during an unplanned storage failover every VM will crash.

jcolonfzenpr

I totally agree with your statement! It is an unsupported and highly unpredictable configuration.

Jonathan Colón | Blog | Linkedin

AllanHedegaard

Unpredictable? You have an active and a passive link. The setup ensures continuity in case of an active-link failure. I fully agree that it is not optimal.

 

I tested it, and physically removing the active link makes the setup fail over. Reverting can be a little tricky.

 

If you wanted to go further, you could write a script on the ESXi host.
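Such a script is not shown in the thread; a hypothetical sketch might look like the following, where the LIF IP, vSwitch, and vmnic names are placeholders. The idea is to probe the datastore LIF and swap the active and standby uplinks if it stops responding.

```shell
# Placeholder values, adjust to the actual environment
LIF_IP=192.168.10.11
VSWITCH=vSwitch1

# If the LIF stops answering over the current active uplink,
# promote the standby uplink (manual revert once the fault is fixed)
if ! vmkping -c 3 "$LIF_IP" > /dev/null 2>&1; then
    esxcli network vswitch standard policy failover set \
        --vswitch-name="$VSWITCH" \
        --active-uplinks=vmnic3 \
        --standby-uplinks=vmnic2
fi
```

As the next reply points out, this kind of automation is fragile: a rebooting controller in "waiting for giveback" still presents link-up ports, so link state alone is not a reliable failover signal.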

walterr

This is what I said: you need to physically unplug the active link in the event of a storage failover. Because you never know when a storage failover might occur, you cannot automate this with an ESXi script either. While the failed controller reboots, it soon enters a "waiting for giveback" state with its port links still up, so ESXi believes the port is active even though the LIF is now served by the port on the other controller.

AllanHedegaard

If the primary controller fails in a way that causes the physical link to go down, the failover will actually work. I have never stated that this is bulletproof. Instead, I recommend putting the HA responsibility on the ESXi host.

jcolonfzenpr

It works, but I think it's an unsupported configuration. Beware: NetApp support may not help if you have any issues 😉

Jonathan Colón | Blog | Linkedin

AllanHedegaard

I have had it working fine on 5+ installations for more than 18 months now. We are very happy with the low latency and the cost savings from skipping the 10GbE switch 🙂
