I would like the community's advice on the correct way to connect two ESXi servers to an AFF using SFP+ without an L2 switch. The goal is to have the same NFS datastore mounted on both servers.
Each server NIC would connect directly to a port on each controller, to provide redundancy in case of link or controller failure.
Is one subnet possible for multiple ports, so the same mount IP can be used from different hosts? Or should the ESXi hosts file be used to allow a different mount IP per host behind the same DNS name?
I really can't be the first to encounter this scenario and would like to know the recommended path.
Thanks
NFS high availability is based on failing over the IP address to a different physical port. This requires an L2 switch and won't work with a direct connection.
Thanks for your reply. I am not sure what exactly you mean by high-availability NFS?
I am just talking about exposing a LIF's home and failover port to each server. ESXi supports beacon probing, and I am running a similar setup today without switches.
For smaller setups a 10G switch is not necessarily required; only the number of physical ports sets the limit.
For direct connect, you're best off going with iSCSI.
While I think it would work in the sense that you'd be able to mount the datastores, you'd have issues during failover, which is what aborzenkov was saying.
Though, random question: which version of NFS were you planning to use?
ESXi beacon probing relies on L2 connectivity between all physical ports in the NIC team, which you do not have here. I am not aware of any automatic mechanism to detect a LIF move and redirect traffic to another port in the direct-attach case.
I am not sure the actual NFS version should matter in this case, as multipathing is not supported. So I would go with v3 for simplicity.
Consider one server: one data LIF (home port a1, failover port b1) could be connected directly to the server's NICs. If a1 fails, the LIF will migrate to b1.
I would just use multiple LIF pairs for multiple servers.
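For reference, a LIF pair like that could be sketched with ONTAP CLI commands roughly as below. The SVM, node, port, and address names are hypothetical placeholders, so adjust them to the actual environment:

```shell
# Hypothetical names: SVM svm1, nodes node01/node02, port e0c on each controller.
# Failover group limiting the LIF to the two directly attached ports (the a1/b1 pair):
network interface failover-groups create -vserver svm1 -failover-group fg_esx1 \
    -targets node01:e0c,node02:e0c

# Data LIF homed on node01:e0c; on a link or controller failure it migrates to node02:e0c:
network interface create -vserver svm1 -lif nfs_esx1 -role data -data-protocol nfs \
    -home-node node01 -home-port e0c -address 192.168.10.11 -netmask 255.255.255.0 \
    -failover-group fg_esx1 -failover-policy broadcast-domain-wide
```

A second server would get its own failover group and LIF on another port pair.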
Just had a thought: why not create a LACP port channel between the two pairs of ports? Would that be possible?
I know VMware supports port aggregation. Can I span such a 'port channel' to both NetApp controllers? I am not looking for active/active, just failover capability.
@AllanHedegaard wrote:
Can I span such a 'port-channel' to both Netapp controllers?
No. An ifgrp can include only ports on the same controller.
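This is visible in the ifgrp commands themselves: the port channel is created per node, so it cannot span both controllers. A sketch, with hypothetical node and port names:

```shell
# An ifgrp is always created on a single node (-node takes exactly one controller):
network port ifgrp create -node node01 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node01 -ifgrp a0a -port e0c
network port ifgrp add-port -node node01 -ifgrp a0a -port e0d
```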
Then assigning a failover adapter in VMware would be the best way to go. It will not provide load balancing, but it does provide failover capability if the actual link fails. Such an event would also move the LIF to the secondary port. Correct?
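On the ESXi side, an active/standby uplink policy like that can be set with esxcli; the vSwitch and vmnic names below are hypothetical:

```shell
# Hypothetical names: vSwitch1 carries the NFS VMkernel traffic,
# vmnic2/vmnic3 are the two direct links to the controllers.
# vmnic2 is active; vmnic3 takes over only if the active link goes down (no load balancing):
esxcli network vswitch standard policy failover set \
    --vswitch-name vSwitch1 --active-uplinks vmnic2 --standby-uplinks vmnic3
```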
I can confirm that this works. A lot of money saved on Nexus switches, and no drawbacks from my point of view.