Data Backup and Recovery

Adding servers to an existing Windows Failover Cluster with SnapDrive...


I am working with a two-node Windows Failover Cluster running Windows Server 2008 R2 with multiple instances of MS SQL Server.  There are approximately 20 LUNs attached to the cluster.  DSM/MPIO 3.5, Host Utilities 6.0, and SnapDrive 6.3.1R1 are installed on each server, and all LUNs were created and are managed using SnapDrive.

I would like to add a couple of new servers to this cluster configuration, but I have not found any instructions on how to add a new server to an existing configuration when SnapDrive is in use.

Which step comes FIRST:

-  Adding the new servers to the Cluster in Windows Failover Cluster Manager


- Performing "Connect Disk" in SnapDrive on the new server... connecting to the storage system, selecting the LUN, mapping initiators, etc.

It seems intuitive to me that the new servers would need to be in the cluster first, so that SnapDrive on the new servers could see the LUNs as part of a cluster resource group.  However, someone I consider an expert suggested that the SnapDrive part has to be done before adding the nodes to the existing cluster.  My concern is that the new systems would need to recognize the LUNs as shared disks, which wouldn't make sense unless Failover Clustering on the new systems was already configured.

Can anyone clarify this for me, please?

Thanks in advance



Install SnapDrive on the new node first, and make sure you add firewall exceptions if a firewall is in use.  If using FCP, zone accordingly; if using iSCSI, establish a session first.  Then join the node to the cluster using WSFC Manager, install the appropriate application(s) [SQL\FileServices\etc], and configure the cluster and application failover options.  Finally, follow MS best practice and run the Cluster Validation Wizard.
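For reference, the join-and-validate portion of those steps can be sketched with the tooling that ships with 2008 R2 (iscsicli for the iSCSI session, and the FailoverClusters PowerShell module for the join and validation).  This is only a hedged sketch: the cluster name, node name, portal address, and target IQN below are all placeholders for your own environment, and the SnapDrive/zoning steps are not shown.

```powershell
# --- placeholders: SQLCLUS01, NEWNODE01, 10.0.0.50, and the IQN are examples ---

# (iSCSI only) establish a session from the new node to the storage system first
iscsicli QAddTargetPortal 10.0.0.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345    # example target IQN

# Join the new server to the existing cluster
Import-Module FailoverClusters
Add-ClusterNode -Cluster SQLCLUS01 -Name NEWNODE01

# Run cluster validation afterwards, per Microsoft best practice
Test-Cluster -Cluster SQLCLUS01
```

The same join and validation can of course be done interactively through Failover Cluster Manager, as described above.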


The server has to be added to the cluster first.  Remember that LUNs cannot be attached to more than one server unless those servers are clustered together.  Otherwise you risk data corruption, since each server would think it had sole ownership of the disk and could unknowingly overwrite the same blocks.


That's what I was thinking.  I had a VAR tell me, though, that the SnapDrive side of things should be done first.  That didn't add up for me, for the reasons you stated.

I had a chance to test this in my lab, and had no problems joining the new server to the WFC first and then using SnapDrive's Connect Disk to actually connect to the LUN, create the igroup, etc.

I thought that adding the server to the WFC first would either cause Cluster Validation to fail, or simply block the new server from being added because it would not yet have access to the quorum disk.  That was not the case.

Thanks for responding!