VMware Solutions Discussions

Error attaching an RDM disk to a VM in vSphere 5

ajuntamentprat
4,927 Views

Hi everyone,

We are currently having a problem attaching an RDM disk to a VM in vSphere 5.

Our environment is this:

     - FAS 2040 with Data ONTAP 8.0.1P1 (7-Mode), connected to the hosts via FC.

     - 6 VMFS5 datastores with 40 VMs running on 4 vSphere 5 hosts, managed by vCenter Server 5.

     - All VMs are hardware version 8 with VMware Tools up to date, and running nicely.

Now, for the first time, we're trying to add an RDM hard disk to a VM (not using datastores), but we're not able to.

     - First, using NetApp OnCommand System Manager, we create the volume and the LUN. Good.

     - The LUN type is set to Windows 2008, which is the OS of the VM. OK.

     - Then we map the LUN to the same initiator group that's already working with the datastores (a cluster igroup with the 8 HBAs of all 4 hosts), which has type "VMware". Great.

     - Next, we go to vCenter and Rescan All adapters for every host, and we can see the LUN we've just created. Wonderful.

     - So we go to Edit Settings for the VM and try to add the RDM hard disk. vCenter is able to "see" the RDM disk, and the Raw Device Mapping option is not greyed out. Nice.

At that point we can actually see the disk, with its name, LUN, capacity; everything looks OK.
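For reference, the rescan step above can also be done from the ESXi 5 shell instead of the vCenter GUI, which is handy for checking whether each host really sees the new LUN. A minimal sketch; the commands are only printed here (not executed), so it's safe anywhere, and you would run the printed strings on each host:

```shell
#!/bin/sh
# ESXi 5 shell equivalents of "Rescan All" in vCenter.
# Printed rather than executed so nothing touches the hosts from this sketch.

RESCAN_CMD="esxcli storage core adapter rescan --all"
LIST_CMD="esxcli storage core device list"   # the new LUN should appear here with its naa.* ID

echo "$RESCAN_CMD"
echo "$LIST_CMD"
```

If a host doesn't show the LUN in the device list after the rescan, the problem is on the presentation side (igroup/zoning) rather than in the VM's settings.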

But after we click through the next steps (store with VM, physical, SCSI 0:1), it goes back to the first Edit Settings window and says "New Hard Disk (adding)". Then it hangs there for an awfully long time (hours, if we don't cancel it first).

If we force the dialog closed by clicking "Yes" before the hard disk has really been added, the VM is able to see the disk but freezes when trying to format it. It hasn't been properly added.
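One way to rule out the vSphere client itself is to create the RDM pointer file by hand from the ESXi shell with vmkfstools and then attach it to the VM as an existing disk. A minimal sketch, where the NAA ID and datastore/VM folder are hypothetical placeholders (it only prints the command rather than running it):

```shell
#!/bin/sh
# Hypothetical values -- substitute your LUN's naa.* ID
# (from: esxcli storage core device list) and your own datastore/VM folder.
NAA_ID="naa.60a98000486e2f34615a2f34"
RDM_PTR="/vmfs/volumes/datastore1/myvm/myvm_rdm.vmdk"

# -z = physical compatibility mode RDM (the "physical" choice in the wizard);
# use -r instead for virtual compatibility mode.
CMD="vmkfstools -z /vmfs/devices/disks/${NAA_ID} ${RDM_PTR}"
echo "$CMD"
# Afterwards, attach myvm_rdm.vmdk to the VM as an existing hard disk.
```

If the manual pointer file attaches and formats cleanly, the hang is in the client-side wizard rather than in the storage path.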

 

All in all, it's quite a simple operation, but it's not working. We've tried to stick to the vSphere best practices (TR-3749) and we can't see what we're doing wrong...

 

Thanks for your attention,

2 Replies

GARDINEC_EBRD

Hi,

Are there any relevant messages in the vCenter logs, or in /etc/messages on the filer?

We have a very similar environment, except we use NFS for our datastores, with RDMs via FC as well. I've not seen this issue here, but all I can add is that we use SnapDrive in the guest to create and map the RDMs, which works well. You may have a different problem, in which case SnapDrive may hit similar issues. However, in case it helps, the process we use is:

1. Create the volumes/qtrees, but leave them empty.

2. Install SnapDrive on the guest; make sure you add the vCenter Server and SMVI hostnames. Use a service account that has access to the filer.

3. Start up the SnapDrive GUI. You should see the VMDKs you already have for C:\, etc. under the disks list (or via sdcli disk list).

4. Use the 'Create Disk' wizard to create the LUN(s). When you come to select the igroup, choose 'manual' and select your existing igroup for the ESX HBAs.

All being well, SnapDrive will create the LUN, make sure the volume options are set correctly, present it to the guest, format it, etc.

Hope this helps!

ajuntamentprat

Thanks!!! That workaround worked!!!

Best regards,
