VMware Solutions Discussions
When booting from SAN via FC, how does the host maintain its connection to the boot LUN after a failover event? Using an Emulex CNA (FCoE), the BIOS lets you specify multiple boot LUNs (target WWPNs) which are tried sequentially during boot, and the second adapter port can attempt to boot if everything fails on the primary, I think? Based on this, all is well during boot even if a cable, switch, or controller fails. However, if a failover event occurs during operation, how would the host see its boot volume? Single image presents the same WWNN, but after a failover event the WWPNs do NOT fail over...?
Or is this facilitated by the host’s (ESXi) multipathing once the OS is loaded?
Lastly, does Emulex Multi-Port Failover Boot work with NetApp controller failover? I read in some documentation that it only works with LUN failover as opposed to controller failover...
After the OS is booted, failover is implemented using multipathing support at the OS level. You have to configure the OS to use a redundant pseudo-device and not the physical paths.
If your question is about ESX, VMs are usually not even aware of multiple paths. They just see a VMDK presented as connected to a single controller. Here failover is handled completely by the hypervisor.
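For ESXi specifically, you can see this from the CLI. A minimal sketch, assuming ESXi 5.x esxcli syntax (on ESX/ESXi 4.x the equivalents are “esxcli nmp device list” and “esxcli corestorage path list”):

    # List each device with the multipathing plugin (NMP) and
    # the path selection policy in use
    esxcli storage nmp device list

    # List every physical path the host sees: adapter -> target WWPN -> LUN
    esxcli storage core path list

Every LUN, the boot LUN included, shows up here with all of its paths; the hypervisor fails over between them without any per-VM configuration.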
Could you provide a link to a description of Emulex Multi-Port Failover Boot?
Yes, this is ESXi which will be booting from SAN; nothing to do with the VMs.
Could you expand on “You have to configure the OS to use a redundant pseudo-device and not physical paths”, please? Since this is the OS boot LUN, my understanding is that it will NOT be added as storage in the vSphere client because it does not host VMs; that being the case, you cannot amend the MPIO settings? Unless I am wrong?
The IBM BladeCenter config guide for boot from SAN with Emulex discusses multi-port failover boot. It is based on IBM DS, which does LUN failover as opposed to controller failover. Look up “Enabling Emulex Boot from SAN on IBM BladeCenter” (can’t find how to attach the PDF)...
As far as I understand, ESXi automatically uses multipathing for all LUNs, including the boot LUN, so in this case there is nothing to configure.
As for the statement in the document you mention: they simply mean that to ensure boot failover, both HBAs must be able to access the boot LUN. VMware guides basically state the same thing in different words: “Multipathing to a boot LUN on active-passive arrays is not supported because the BIOS does not support multipathing and is unable to activate a standby path”. In the case of the IBM DS3000 and DS4000 this translates to AVT, but that is IBM specific. In the case of NetApp with single_image cfmode you can access the LUN from either controller, so it is not an issue.
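If you want to inspect (or change) how the boot LUN’s paths are handled once ESXi is up, something like the following should work. A sketch only: the naa.* device ID below is a placeholder for your boot LUN’s identifier, and VMW_PSP_RR is just one possible path selection policy (check NetApp’s recommendation for your ONTAP version before changing it):

    # Show which SATP and path selection policy the boot device uses
    # (replace the naa.* ID with your boot LUN's identifier)
    esxcli storage nmp device list -d naa.60a98000xxxxxxxx

    # Optionally switch the device to round-robin path selection
    esxcli storage nmp device set -d naa.60a98000xxxxxxxx --psp VMW_PSP_RR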
Thanks for the comments, very helpful.
So my strategy is to use multi-port boot with port1 configured to boot from LUN0 via wwpn01 on controller1, then wwpn02 on controller1, then wwpn01 on controller2, and finally wwpn02 on controller2.
I will then do the same on port2 and set the boot sequence to use port1, then port2.
Once the host is loaded, ESXi multipathing will take over.
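For reference, the storage-side piece of that plan might look like this on the filer. A hypothetical Data ONTAP 7-Mode sketch (the igroup name, WWPNs, and volume path are made up): one igroup containing both CNA ports, with the boot LUN mapped at LUN ID 0 so the Emulex BIOS can find it on every target port:

    # Create an igroup holding both initiator ports of the host's CNA
    igroup create -f -t vmware esx01_boot 10:00:00:00:c9:aa:bb:01 10:00:00:00:c9:aa:bb:02

    # Map the boot LUN at ID 0; with single_image cfmode it is reachable
    # through the target WWPNs on both controllers
    lun map /vol/esx_boot/esx01 esx01_boot 0

After boot, “esxcli storage core path list” (see above) should report a path through each of the four target WWPNs in the boot sequence.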