2011-12-16 07:01 AM
I have an issue where RDMs disconnected from the Guest OS (in this case, Windows 2008R2 64bit) using SnapDrive 6.3.1 are sometimes not fully removed from the virtual machine. The disk is not visible in the OS or in SnapDrive, so it appears to have been removed, but Edit Settings on the VM still shows the RDM as connected, and the LUN still exists and appears mapped on the filer. I've also noticed a number of 'dead' paths when viewing the storage paths on the host, which I assume are left over from old RDM connections.
This happens on both ESXi4 and ESXi5 hosts.
This can cause problems if a LUN is then removed from the filer: it appears to be no longer connected to the Guest OS, but the VM still thinks it is connected.
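One way to spot the leftover paths described above is to look at the host's path list. As a rough illustration, here is a small Python sketch that counts paths reported as dead in the output of `esxcli storage core path list` (the `State:` line format is my assumption based on ESXi 5 output; verify against your host):

```python
import re

def count_dead_paths(path_list_output):
    """Count paths reported as 'dead' in `esxcli storage core path list` output.

    Assumes each path block contains a line like '   State: active' or
    '   State: dead' (format as seen on ESXi 5; treat this as an assumption).
    """
    states = re.findall(r"^\s*State:\s*(\S+)", path_list_output, re.MULTILINE)
    return sum(1 for state in states if state.lower() == "dead")
```

You could run the esxcli command on the host (or over SSH), capture its output, and feed it to this function to track how many dead paths are accumulating between rescans.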
Also posted at VMware community.
Solved!
2011-12-22 10:08 AM
OK, tell me if that temporarily resolves the problem. I will take a look in the internal bug database to see if there is an explanation and a fix.
2011-12-22 09:24 AM
I will try authenticating with ESX host rather than vCentre.
Restarting the vCentre service isn't really a solution, as this happens quite regularly.
2011-12-23 01:30 AM
Rescanning the datastores will remove the dead paths; however, we have an automated process that maps/unmaps LUNs via SnapDrive every hour to update some databases. Whenever a LUN is removed there's a chance of a dead path remaining. Eventually these accumulate and seem to cause performance issues.
Of course I could manually rescan the datastores every week or so, but a root cause would be nice!
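As an interim workaround, the manual rescan mentioned above could be scripted. The sketch below builds an SSH invocation of `esxcli storage core adapter rescan --all` (the ESXi 5 command; on ESX(i) 4 the equivalent is `esxcfg-rescan`). The host name and root-over-SSH access are assumptions for illustration only:

```python
import subprocess

def build_rescan_command(host, dry_run=True):
    """Build (and optionally run) the SSH command that rescans all HBAs
    on an ESXi 5 host, which should clear out accumulated dead paths.

    `host` is a placeholder; SSH access as root is assumed here.
    """
    cmd = ["ssh", "root@" + host,
           "esxcli", "storage", "core", "adapter", "rescan", "--all"]
    if not dry_run:
        # Actually trigger the rescan on the host.
        subprocess.run(cmd, check=True)
    return cmd
```

Scheduled (e.g. via cron or Task Scheduler) against each host, this would keep the dead-path count down until the root cause is found, though it doesn't address why SnapDrive leaves the RDM mapping behind in the first place.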
2011-12-27 01:13 PM
This exact thing is also happening to us. The rescan drops the dead LUNs, but manually rescanning isn't a good enough solution. We have also had hosts disconnect because SnapDrive 6.3.1 was mounting the LUNs to the ESX host's local datastore. I've opened many tickets with NetApp and VMware but still have no concrete solution.
At first we thought it was the version of SnapDrive we were using; things clear up and then it happens again out of the blue. I have now been able to recreate the host disconnection issue: it happens during a SnapManager for SQL backup. As soon as VMware scans the HBAs, the host disconnects. I haven't tried connecting SnapDrive to the host, because of HA; you wouldn't want to do that if the machine running SnapDrive migrated off that host, right?