Microsoft Virtualization Discussions

SnapDrive failed to connect LUN on Windows Cluster


Also posted at but no response so also posting here....

When trying to create or add a LUN as a shared drive on a Win 2008 two-node cluster I receive the following error which I've never seen before:

Event ID 1116

Failed to connect to LUN.

Error Code: Failed to create disk in virtual machine, Failed to Map virtual disk: Error Adding Raw Device Mapping -GetLunInformation returned false-P3h8c4dxiUSv11

I can add the LUN directly via VMware but not using SnapDrive. I've also been able to add five other LUNs as clustered resources using SnapDrive but no more after that. If I fail services and drives over to the second node and try to add there, the message is different.

Event ID 1116

Failed to connect to LUN. Failure in connecting non-owner nodes to shared physical disk cluster resource.

Error code: Failed to connect LUN on some or all non-owner nodes. Please check non-owner node's application event log for details.

Unfortunately, there is nothing in the non-owner node's application log for that time!

Has anyone seen anything similar to this?

SnapDrive 6.3P2 64bit on Windows 2008.





Make sure all of the nodes have an iSCSI session to the filer. I ran into a similar error message:

Failed to connect to LUN. Failure in connecting non-owner nodes to shared physical disk cluster resource.

Turned out that the cluster partner had no iSCSI session to the filer.
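To check this from each node, here's a quick sketch using the built-in iscsicli tool (available on Windows 2008; the target IQN below is just a placeholder, substitute your filer's):

```powershell
# Run on each cluster node. An empty session list means the node has no
# iSCSI session to the filer, and SnapDrive will fail to connect shared
# LUNs on that node.
iscsicli SessionList

# List the targets the node can see, then log in to the filer's target
# if the session is missing (replace the IQN with your filer's).
iscsicli ListTargets
iscsicli QLoginTarget iqn.1992-08.com.netapp:sn.12345678
```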

Hope this helps.




Thanks Hans!

I also had the same issue.  The iSCSI Initiator on the partner node was in the disconnected state. 



Having very similar issues: SnapDrive in a Windows 2008 Hyper-V cluster and a separate physical SQL cluster. I have tried upgrading to v6.4 without much success. I am using iSCSI, and enumeration of LUNs either takes a very long time or fails entirely.


Same issue here! Any news?




I couldn't get the GUI to connect the disk, but I was able to get them connected via the command line.

"c:\program files\NetApp\SnapDrive\sdcli.exe" disk connect -p :/vol/TestVolume/TestLun.lun -I myServerNameA myServerNameB -d O -dtype shared -c myClusterName -n "SnapInfo" "SQL"

I believe you use -IG with the initiator group name instead of -I, which lists the individual initiators

The -d is for the drive letter

-dtype shared is so you can use it in a cluster

-c is your cluster name

-n is the service name that is shared  (this creates a service for you)
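After the connect, you can confirm both nodes see the disk. A sketch, assuming the standard sdcli disk list syntax (the machine names are placeholders):

```powershell
# List the disks SnapDrive manages on each node; the new LUN should
# show up as a shared disk on both.
& "c:\program files\NetApp\SnapDrive\sdcli.exe" disk list -m myServerNameA
& "c:\program files\NetApp\SnapDrive\sdcli.exe" disk list -m myServerNameB
```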


This may be the issue.

Each virtual machine has a limit of 15 SCSI devices per bus (NICs, disks, etc.). If you already have 15 devices, you will need to add a new SCSI controller to the guest before you can add more disks; SnapDrive does not create SCSI controllers for you.

What I typically do is create a small (2GB thin) VMDK on a new SCSI controller so that the VM keeps the controller between reboots; VMware seems to delete extra SCSI controllers if nothing is using them. That way, when you add disks with SnapDrive, or when SnapManager runs verifications (which add disks for verification and then delete them), you won't run into any issues.
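If the controller limit is the problem, the placeholder-disk workaround can be scripted. A minimal sketch with VMware PowerCLI (the vCenter address and VM name are assumptions; adjust for your environment):

```powershell
# Connect to vCenter (placeholder address)
Connect-VIServer vcenter.example.com

# Create a small 2 GB thin-provisioned VMDK on the guest
$disk = New-HardDisk -VM (Get-VM "MyClusterNode1") -CapacityGB 2 -StorageFormat Thin

# Move the new disk onto a brand-new SCSI controller; as long as the disk
# stays attached, VMware won't remove the controller on reboot
New-ScsiController -HardDisk $disk -Type VirtualLsiLogicSAS
```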
