ONTAP Discussions

Windows 2008 R2 fail-over cluster for SQL


Greetings folks,

Need to set up a Windows Server 2008 R2 failover cluster running MS SQL Server 2008 R2 using FC. Please point me in a direction to find some best practices and startup guides. Pieces of the puzzle are:

  • Data ONTAP 8.0.2
  • SnapDrive 6.1
Step 1 will be to confirm with the NetApp Interoperability Matrix Tool (http://support.netapp.com/matrix/) that the configuration above is supported.  You will also need to ensure that the host-side storage components (FC/FCoE HBA, iSCSI HBA, iSCSI software initiator, etc.) are supported as well.  I suggest using the IMT to first confirm that the OS, DSM/MPIO, and Data ONTAP versions are compatible.  This will tell you which version(s) of the Host Utilities Kit (HUK) and other components are certified to interoperate correctly.

If you are using an HBA, it is *essential* that the firmware and driver of the HBA match the versions on the IMT *exactly.*  I once used a newer firmware/driver combination than recommended and had unexpected problems.  When I opened a support ticket with NetApp, I found out that the newer firmware/driver combo was not approved on the IMT because NetApp's internal testing had found bugs with that combination.

After you have identified the correct supported configuration for the Windows OS, I suggest using the IMT to determine the supported versions of SnapDrive and SMSQL that match the OS support information.  Sometimes this necessitates using a slightly older version of some components.

Once you have confirmed the supported versions of the components, the next step will be following the best practices guide for SnapManager for SQL (SMSQL).  You should also consult the best practices guides for SnapDrive and for MS-SQL on NetApp storage.  For example, here is a best practices guide for MS-SQL on NetApp storage from 2010 -- http://www.netapp.com/us/media/tr-3821.pdf.  And here is a best practices guide for SMSQL with SnapDrive 7.0 on Clustered Data ONTAP -- http://www.netapp.com/us/media/tr-4225.pdf.

In a nutshell, your volumes should be arranged based on their workload and their disaster recovery value.  For example, MS-SQL uses its TempDB to store temporary information while the database calculates a response to a query.  If MS-SQL is restarted or moved between nodes of the cluster, MS-SQL will discard the TempDB data.  As a result, the TempDB data has *absolutely no disaster recovery value.*  However, the TempDB can generate a lot of storage I/O and may require a lot of storage capacity.  For these reasons, the TempDB LUN should be in a volume by itself.  That way, when a snapshot is taken of the database, the TempDB LUN will not be included in the snapshot.

My recommendations for deploying MS-SQL failover clusters on a NetApp filer would be:

  • One and only one LUN per volume
  • A separate LUN for MS-SQL database data files (*.MDF, *.NDF)
  • A separate LUN for MS-SQL database log files (*.LDF)
  • A separate LUN for MS-SQL TempDB files
  • A separate LUN for the SMSQL SnapInfo directory
  • Older versions of MS-SQL installed MS-SQL binaries and the MS-SQL master, model, and system databases to a cluster shared drive.  With MS-SQL 2008 R2 and later, the SQL binaries install to the local storage of each Windows server.  In that scenario, I recommend putting the master, model, and system databases on the same volume as the MS-SQL database data files.
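
Putting the bullets above together, a hypothetical layout might look like the sketch below (the volume and LUN names and drive letters are just illustrations, not NetApp conventions):

```
/vol/sql_data      -> lun_sql_data      -> M:  (*.MDF, *.NDF, plus master/model/msdb)
/vol/sql_log       -> lun_sql_log       -> L:  (*.LDF)
/vol/sql_tempdb    -> lun_sql_tempdb    -> T:  (TempDB data and log files)
/vol/sql_snapinfo  -> lun_sql_snapinfo  -> S:  (SMSQL SnapInfo directory)
```

With this arrangement, an SMSQL backup snapshots the data and log volumes but never touches the TempDB volume.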

If you will be creating several database instances within the cluster, you may want to consider using NTFS mount points to conserve drive letters.  In one instance, I configured a 4-node Windows cluster with 3 SQL instances.  We limited our LUNs to 2 TB for replication purposes (if replication breaks and must be reinitialized, very large LUNs can become very difficult to replicate).  The result was that we nearly ran out of drive letters.
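
As a sketch of the mount-point approach on Windows (the folder path and volume GUID below are placeholders), a volume can be grafted into an empty NTFS folder on an existing drive instead of consuming a letter:

```
REM Create an empty folder on an existing clustered drive to act as the mount point
mkdir D:\SQLData01

REM Mount the new volume at that folder instead of assigning it a drive letter
mountvol D:\SQLData01 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

REM Running mountvol with no arguments lists volume GUIDs and their mount points
mountvol
```

SnapDrive can also create a LUN directly onto a mount path rather than a drive letter, which achieves the same result.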

I hope this has got you started in the right direction.  If you have additional questions, please let me know.  I have been building MS-SQL clusters on NetApp storage for about 7 or 8 years.

-- Bill



How did you get on with this?  You probably discovered, as I did, that there is precious little understanding at NetApp of how to build any kind of Windows cluster (physical in particular) on their storage.  I am talking about a multi-site cluster based on SnapMirror-replicated volumes.


  • UCS blade infrastructure (physical Windows OS)
  • Data ONTAP 8.3
  • Replicated volumes using synchronous SnapMirror
  • Replicating data and log volumes only

I eventually discovered OnCommand Site Recovery Manager (not to be confused with the VMware product!), which is the cluster group plug-in for each node that manages the filer interaction and SnapMirror.  It is similar to the EMC Cluster Enabler, HP EVA Cluster Extension, etc.

It was reasonably simple in the end.  Build a typical Windows multi-site failover cluster -- in this case 2 nodes, one at each datacentre.  Install the SRM plug-in on each node for filer and SnapMirror management, and then install the clustered instance of SQL on each node.

SRM looks after reversing the replication and bringing up the remote volumes prior to SQL coming online.

SQL fails over from site A to site B within 10 seconds or so.
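
For anyone testing a similar setup, the failover can be driven by hand with cluster.exe (the group and node names here are just examples, not from the actual configuration):

```
REM Move the SQL Server resource group to the node at the other site
cluster group "SQL Server (INST1)" /moveto:NODE-SITEB

REM Check which node now owns each group
cluster group
```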

I have a case open at NetApp for some minor issues, but I am struggling to find someone who understands non-virtualised environments.



Hello Bill,


I am setting up a three-node Microsoft SQL cluster, Windows Server 2012 R2, SQL 2014, on VMs.

NetApp storage: FAS8040 running Data ONTAP 8.2.1


Separate iSCSI volumes for each individual drive.

I use SnapDrive 7.1 to provision the LUNs for each of the drives on each node.


Problem encountered:

SnapDrive created the first LUN with a drive letter, say D:, for the SysDB LUN.

Subsequently, when I use SnapDrive to provision the LUN for the UserDB drive for the Microsoft cluster, I use a mount point instead of a drive letter, say D:\UserDB.

I can't mount at D:\UserDB to create the LUN on the UserDB volume for the SQL Server.


Error message: "can't mount a non-shared volume to a mount point"


What am I missing when configuring the volume on the NetApp filer?


Your help is appreciated.