2011-04-07 03:21 PM
I had success today creating a Windows 2008 SP2 x64 MSCS two-node environment on top of VMware ESXi 4.1 using SnapDrive 6.3P2, but with some caveats.
Let me give you some hints to get started.
First, I found this doc from VMware (see attachment). Take a look at pages 19-24.
On each 2008 node I had to create an outbound/inbound firewall rule for the SnapDrive service executable (KB 2013396)
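The rules I added looked roughly like the following. This is a sketch, not the exact commands from the KB article: the service executable name (SWSvc.exe) and install path are assumptions, so verify them against your own SnapDrive installation.

```
netsh advfirewall firewall add rule name="SnapDrive inbound" dir=in action=allow program="C:\Program Files\NetApp\SnapDrive\SWSvc.exe" enable=yes
netsh advfirewall firewall add rule name="SnapDrive outbound" dir=out action=allow program="C:\Program Files\NetApp\SnapDrive\SWSvc.exe" enable=yes
```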
On each Windows VM, I had to create a second SCSI controller (controller 1), set its type to LSI Logic SAS, and enable physical SCSI bus sharing
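For reference, that second-controller setup corresponds to .vmx entries along these lines (a sketch only; per the attached VMware guide, shared disks on a physically shared bus also need to be raw device mappings):

```
scsi1.present = "TRUE"
scsi1.virtualDev = "lsisas1068"   # LSI Logic SAS controller type
scsi1.sharedBus = "physical"      # physical bus sharing for cross-host MSCS
```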
I had to pre-create an AD account for the cluster group name and mark it disabled
Make sure all nodes and cluster name are in DNS
Ensure that both esx hosts have two network adapters each
I made sure my existing igroup had ALUA disabled
Now create the cluster using Failover Cluster Management
Next I ran "cluster group" to see which node owned the Cluster Group.
Log onto the node that owns the Cluster Group and run SnapDrive. For the quorum I chose a 2048 MB LUN size and added 10% for WAFL overhead to the volume size. Script snapdrive.txt attached.

Now for the igroup. This was a tough one for me to figure out after many retries. I already had all 12 of my VMware ESXi hosts in a single igroup; however, it appears SnapDrive follows the VMware best practice of putting your MSCS VM nodes in their own igroup.
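The quorum sizing arithmetic above can be sketched as follows (the 10% WAFL overhead figure is the rule of thumb I used, not an exact requirement):

```shell
# Sketch of the quorum sizing: 2048 MB LUN, plus ~10% headroom
# in the containing volume for WAFL overhead.
LUN_MB=2048
VOL_MB=$(( LUN_MB + LUN_MB / 10 ))   # 2048 + 204 = 2252
echo "quorum LUN: ${LUN_MB} MB, volume: ${VOL_MB} MB"
```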
This creates a problem in that you are now pinned to those specific ESX hosts, so VMware vMotion is out the door. I confirmed this in the SnapDrive 6.3 Admin Guide, page 53.
Even adding additional HBAs would not help, since I would still be pinned to the same ESXi hosts, just on different HBAs. So I resigned myself to that fact and used the existing HBAs already in the servers.
I then came across KB article 3012721, which provided additional confirmation that SnapDrive does not support igroups containing initiators from ESX hosts not involved in the MSCS cluster.
Now I was ready to create the shared quorum drive using SnapDrive. During the SnapDrive create-disk process, I chose manual igroup mode and, lo and behold, my existing igroup containing the 12 ESXi hosts was not shown. I then went back through the SnapDrive disk creation process and let SnapDrive automatically create the igroup. It turns out that SnapDrive will concatenate both VM node names into a single igroup. For example:
viaRPC.SQL001SQL002(FCP) (ostype: vmware)
and map it to the quorum LUN. From there I was able to follow the procedures outlined in the SnapDrive Admin Guide (page 113) to create the failover cluster witness disk.
My Windows 2008 SP2 / ESXi 4.1 two-node MSCS VM cluster is now online!
2011-04-29 12:03 PM
A couple of questions, as I am attempting a similar configuration.
(1) Is this supported via iSCSI? I am having issues with it. I saw in the attached PDF on page 11:
The following environments and functions are not supported for MSCS setups with this release of vSphere:
-Clustering on iSCSI, FCoE, and NFS disks.
(2) How did you add a second SCSI controller in VMware? I think you need to have a disk to attach it to...
If iSCSI is not supported, I will need to switch to FCP, but wanted your insight. Thanks in advance for the help!
2011-05-03 01:25 PM
I am not sure about iSCSI being supported, especially after reading page 11. I wrote up my steps using a pure Fibre Channel environment.
I found this document that talks about in-guest iSCSI being supported. I was not familiar with the term, so it might be worth researching that option for your environment.
As for creating a second SCSI controller inside the vm, I used the following link:
I assigned the C:\ drive for the VM to controller 0 and a D:\ drive to controller 1 for my NetApp executables such as SnapDrive and SMSQL.
2012-03-20 11:28 AM
Thanks for posting your experience; this is really useful. I am not sure what this reference is for: (KB 2013396). Could you please post a link to the reference article?
2012-03-20 11:35 AM
I imagine this is what I was referencing:
Matthew L. Gauch
N Series Technical Team Lead
Systems and Technology Group
NetApp Certified Data Management Administrator
Phone: (919) 543-0143