VMware Solutions Discussions

FC RDM LUNs in ESX 4.1 virtual machines

tuppertrut

Unsupported initiator: after migrating a virtual machine to an ESX 4.1 server and upgrading the VMware Tools, the FC LUN is still visible in SnapDrive. After a reboot, SnapDrive no longer sees the FC LUN, although it is still visible and usable as a drive. The event log shows the following:

Event Type: Warning
Event Source: SnapDrive
Event Category: Generic event
Event ID: 317
Date: 27-9-2010
Time: 15:52:21
User: N/A
Computer: SNAPDRIVE
Description:
Failed to enumerate LUN.
Device path: '\\?\scsi#disk&ven_netapprev_7340#40&000100#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
Storage path: '/vol/testvol/q_test/test.lun'
SCSI address: (2,0,1,0)
Error code: 0xc00402fa
Error description: A LUN with device path \\?\scsi#diskprod_lun&rev_7340#40&000100#{53f56307-b6bf-11d0-94f2-00a0c91efb8b} and SCSI address (2, 0, 1, 0) is exposed through an unsupported initiator.

Event Type: Information
Event Source: SnapDrive
Event Category: Service execution status
Event ID: 100
Date: 27-9-2010
Time: 15:52:28
User: N/A
Computer: SNAPDRIVE
Description:
SnapDrive service (version 6.3.0.4601 (6.3), ESX Server Version:4.1.0 (1028)) started.

Event Type: Warning
Event Source: SnapDrive
Event Category: Generic event
Event ID: 317
Date: 27-9-2010
Time: 15:52:28
User: N/A
Computer: SNAPDRIVE
Description:
Failed to enumerate LUN.
Device path: '\\?\scsi#disk&ven_netapprev_7340#40&000100#{53f56307-b6bf-11d0-94f2-00a0c91efb8b}'
Storage path: '/vol/testvol/q_test/test.lun'
SCSI address: (2,0,1,0)
Error code: 0xc00402fa
Error description: A LUN with device path \\?\scsi#diskprod_lun&rev_7340#40&000100#{53f56307-b6bf-11d0-94f2-00a0c91efb8b} and SCSI address (2, 0, 1, 0) is exposed through an unsupported initiator.

After uninstalling the VMware Tools, migrating the virtual machine back to an ESX 4.0 host, and reinstalling the VMware Tools, everything works fine and the LUNs can be managed with SnapDrive.


mitchells

Are both hosts using the same VirtualCenter server?

watan

This has recently been spotted internally, and the dev team is working on a resolution (Burt #447078).

There seems to be a problem with the 4.0-to-4.1 upgrade and FCP initiators. Not much else has been established, but the dev team is investigating and hopes to have a fix out soon.

tuppertrut

When can we expect a fix?

Regards

John

watan

We have a public report that will be posted soon.  http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=447078

The following is the content that will be posted in the public report.

==============================

You may notice that after upgrading from ESX 4.0.x to 4.1, SnapDrive for Windows doesn't enumerate the FC initiators within Windows 2003 virtual machines; as a consequence, you are not able to connect an RDM (Raw Device Mapping) LUN or create a new RDM LUN.

%%% WORKAROUND:
Workaround #1:
Go to Device Manager and disable the IDE channel.

Workaround #2:
From VirtualCenter, add a new CD-ROM drive on the free IDE channel for that VM.

Workaround #3:
Go to the registry key HKEY_LOCAL_MACHINE\HARDWARE\DEVICEMAP\Scsi.
You will see entries like:
SCSI Port 0
SCSI Port 1
SCSI Port 2

These should be expandable and contain sub-keys, for example "Scsi Bus 0" -> "Initiator Id" and "Target Id".
If "Scsi Port 0" does not contain a SCSI bus, initiator ID, and target ID, delete that "Scsi Port 0" registry entry.
(You may want to export it to your hard disk before deleting.)
The same goes for "Scsi Port 1" and "Scsi Port 2": if they do not contain sub-keys, delete them.

Once you have done the steps above, restart the SnapDrive service and check whether SnapDrive sees the FCP initiators.
This workaround does not persist across VM reboots, as the keys will be regenerated.
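The key-selection rule in Workaround #3 can be sketched in a few lines. This is only an illustrative sketch, not part of the report: the function name and the sample data are hypothetical, and on a real VM you would inspect the sub-keys under HKEY_LOCAL_MACHINE\HARDWARE\DEVICEMAP\Scsi with regedit (or Python's winreg module) rather than a dictionary.

```python
def ports_to_delete(scsi_ports):
    """Return the 'Scsi Port N' key names the workaround says to remove.

    scsi_ports maps each port key name to the set of sub-key names under
    it. A port is kept only if it still has a 'Scsi Bus' sub-key (i.e. a
    real initiator/target tree); the stale entries have no sub-keys.
    """
    return [name for name, subkeys in scsi_ports.items()
            if not any(s.lower().startswith("scsi bus") for s in subkeys)]

# Hypothetical VM: ports 0 and 1 are leftover atapi entries with no
# sub-keys; port 2 is the real adapter with a SCSI bus underneath it.
registry = {
    "Scsi Port 0": set(),
    "Scsi Port 1": set(),
    "Scsi Port 2": {"Scsi Bus 0"},
}
print(ports_to_delete(registry))  # ['Scsi Port 0', 'Scsi Port 1']
```

Only the empty ports are flagged for deletion; anything with a "Scsi Bus" tree is left alone, matching the report's warning not to touch keys that have sub-keys.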

Darkstar

This is a workaround for that bug that we received today from NetApp for one of our customers' cases. I didn't check whether it works yet, but you might want to try it (remember to back up the relevant registry keys first!)

- Go to the registry in the VM: HKEY_LOCAL_MACHINE\HARDWARE\DEVICEMAP\Scsi,

- and you will see:

Scsi Port 0

Scsi Port 1

Scsi Port 2, etc.

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\HARDWARE\DEVICEMAP\Scsi\Scsi Port 0]

"DMAEnabled"=dword:00000000

"Driver"="atapi"

[HKEY_LOCAL_MACHINE\HARDWARE\DEVICEMAP\Scsi\Scsi Port 1]

"DMAEnabled"=dword:00000000

"Driver"="atapi"

- You should delete the "Scsi Port <n>" keys which do NOT contain the sub-keys Scsi Bus, Initiator Id, and Target Id. First do an export to your hard drive before deleting the keys. Don't do anything to the "Scsi Port" keys that do contain sub-keys.

- You will need to restart the SDW (SnapDrive for Windows) service on the VM.

- You should then be able to see all initiators (iSCSI and FC) in SnapDrive (6.2 and 6.3) on the VM on ESX 4.1.

- Note that you will have to re-delete these keys each time the VM is restarted... this is therefore just a workaround, and a fix is being worked on.

tuppertrut

I tested it and can confirm that it works. Thanks for the input; hopefully a fix is available soon.

Regards

John

m_schuren

Hi, thanks for the valuable info.

It works perfectly.

A more permanent (reboot-consistent) workaround would be to:

- eliminate these registry keys (they typically are from unattached virtual atapi devices)

- remove the CD/DVD drive from the VM (this will make mounting a CD to the VM impossible; and of course affects automated VMware tools updates)

- Disable the atapi driver in the 2003 guest OS (disable the "Intel 82371AB/EB PCI Bus Master IDE Controller" in Device Manager)

For me this is a suitable permanent hack to get SnapDrive 6.3 to work with both FCP and iSCSI RDMs in vSphere 4.1, at least until burt 447078 is fixed.

Thanks again for the hint.

Mark

jerimiah303

It works on ESXi 4.1 as well.

ishadmin1

I was having a similar issue where my disks were not enumerating within SnapDrive. My ESX environment was already 4.1 and it was working fine, but I did run the latest updates and thought the updates were the issue. It turns out that my issue had to do with domain authentication against the filer. The filer couldn't authenticate against any DCs, which prevented SnapDrive from enumerating the disks. I'm not sure if this is what you are experiencing, but it's worth checking.

On the filer from which your disks are not enumerating, run the command: cifs domaininfo

See if that returns any type of connectivity to a site or DC.

If it fails, you are not authenticating to a proper DC. You will need to add the preferred DCs that are local to, or affiliated with, the site that the filer's subnet is associated with.

To add a preferred DC, type: cifs prefdc add DOMAINNAME x.x.x.x (where x.x.x.x is the domain controller's IP address).

Hope this works for you!!!

tuppertrut

SnapDrive 6.3P1 resolved the LUN enumeration problem, but only if you use VirtualCenter. When using RDMs on a standalone ESX host, the problem still exists.

jwhelan31

FYI, this bug is not limited to FC LUNs. We upgraded from vSphere 4.0 to 4.1, and our Windows 2003 guest running SDW 6.3 and using the ESX iSCSI software initiator could no longer enumerate LUNs. Upon finding this forum, I removed SDW 6.3 and installed SDW 6.3P1, and our problems went away. Another telling sign was that during the Create LUN wizard in SDW, the FC initiators on our CNA were no longer present after the upgrade from 4.0 to 4.1. Once we upgraded to SDW 6.3P1, that symptom also went away. Thanks.

tuppertrut

Any news on the problem when connecting LUNs to a single ESX host (without VC)? After a reboot of a VM, the LUNs are visible but SnapDrive doesn't see them.

Regards John

support_2

I just wanted to post that we also experienced the same problem.

We had this happen on both 2003 and 2008 servers.

The SnapDrive 6.3P1 update did not fix the problem; we are attaching FC RDM LUNs to VMs via vCenter.

The RDMs are attached to the VMs so we can use the SnapManager for SQL products.

I also tried the workarounds with varying degrees of success; to be honest, it seemed to work and then stop, or not work at all on other servers, and on the 2008 box we could see the SnapInfo LUN through SnapDrive but not the RDM that held the database.

One suggestion: if the registry fix works for you, create a .reg file to remove those keys and a batch file that stops the NetApp services, deletes the keys, and starts the NetApp services back up. I ran the batch file at startup, so that if someone rebooted the server I knew those keys would be deleted.
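For anyone scripting the same cleanup, a .reg file can delete keys using regedit's deletion syntax (a leading "-" inside the brackets removes the key when the file is imported). The snippet below is just a sketch that generates such a file's contents; the service name "SWSvc" for SnapDrive and the file path in the batch wrapper are assumptions — verify the actual service name in services.msc on your own VM.

```python
def deletion_reg(ports):
    """Build the text of a .reg file that deletes the given stale keys.

    '[-KEY]' is regedit's syntax for deleting KEY when the file is
    imported (regedit /s or 'reg import').
    """
    base = r"HKEY_LOCAL_MACHINE\HARDWARE\DEVICEMAP\Scsi"
    lines = ["Windows Registry Editor Version 5.00", ""]
    for port in ports:
        lines += ["[-%s\\%s]" % (base, port), ""]
    return "\n".join(lines)

# Startup batch wrapper (service name "SWSvc" is an assumption --
# check services.msc): stop SnapDrive, purge the keys, restart it.
BATCH = r"""@echo off
net stop SWSvc
regedit /s C:\scripts\purge_scsi_ports.reg
net start SWSvc
"""

print(deletion_reg(["Scsi Port 0", "Scsi Port 1"]))
```

Running the batch file at startup, as described above, re-applies the deletion after every reboot, since the keys are regenerated.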

Again, it was hit or miss; on one server deleting the registry keys would work and then, later that week, stopped working (even when done manually).

In the end I downgraded ESX to 4.0 Update 2, reinstalled VMware Tools (uninstalling VMware Tools can cause the machine to lose its IP), and downgraded to SnapDrive 6.2.1. After doing this, all of the RDM LUNs show up in SnapDrive again across reboots and are working great.

I also tried ESX 4.0 Update 2 with 6.3P1 and the problem still exists.

We are also still using vCenter 4.1.

Hope this helps someone.

mitchells

Do you have multiple virtual SCSI adapters presented to the VM?

support_2

Yep, we have two: an LSI Logic SAS controller for the boot disk, and all other VMDKs and RDM LUNs are connected to a paravirtual SCSI controller.

mital_shah

I experienced this error on 3-4 VMs (2003 and 2008 R2) on ESX 4.1, on FC. I used the stated workarounds of disabling the Primary IDE Channel within Windows Device Manager, etc., plus the registry changes. The link below also mentions adding Virtual Disk Service as a dependency for the SnapDrive service, and this definitely helped too: http://communities.vmware.com/message/1396349;jsessionid=4C61511F1938DE23C26EBF4F2956381E

Additionally, I found VMware Tools had become corrupted on one of the VMs, and reinstalling it resolved the problem on that server. On one or two of the VMs, I also found the ESX/VCenter authentication credentials option in SnapDrive had strangely lost its setting (become unchecked). After some of these steps, restart the SnapDrive service and boom... your disks will appear!

markopella

My RDMs on a production VM SQL Server disappeared from SnapDrive over the weekend. SQL Server didn't care and kept on running; SnapManager for SQL failed, though. Reading through this thread, I found that the checkbox for the ESX/VCenter authentication credentials was unchecked. I turned it back on, clicked OK, and all the RDMs came back. I don't know how you found this really obscure setting, but it got the backups running again without rebooting.

Thanks for the post.

parisi

Just an FYI, if you have more than one SCSI controller in a VM, SnapDrive currently cannot handle it. You will see "exposed through an unsupported initiator" errors (bug 473592).

This will be fixed in a patch release of SDW, due out soon.

support_2

Thanks. Well, I guess that is our problem. I will say it isn't as cut and dried as it simply not working; it works intermittently. It does, however, work on 4.0 Update 2 with 6.2.1 and the previous 6.x versions before that. I already downgraded my environment and probably won't ever move to an FCS release again, as this was a nightmare for me as a fairly new customer, although I guess on the plus side I learned a lot about what to verify when updating these products.

I worked with NetApp support for over a month on this back in September. (I got everything from them; my favorite was that SnapDrive is not supported on 4.1 and that I should have checked the matrix before updating.) Which brings me to my next question:

Is there a way to look for these issues, or a way for someone from NetApp who happens to be reading this to tie them back to the product downloads page where it lists the initial requirements etc.? Maybe just a list of links to the bug reports.

When I look at the bug report for the number you posted, there isn't anything but the number and a relation back to the bug reported earlier in this thread. I'm just not sure how to check for this ahead of time when the problems are already known.

parisi

Yes, this was only broken in ESX 4.1, as VMware made a change that was not accounted for or reported to us. There have been some residual issues, such as the issue that 6.3P1 fixed (no SCSI controllers found).

A lot of bugs do not have public reports unless they are requested. And on the P-release download pages, we list the bugs that are fixed in that release, so it usually tells you what you're getting.

Generally speaking, using the IMT (Interoperability Matrix Tool) is good practice. However, I think ESX 4.1 is listed as supported with SDW 6.3; these issues were simply bugs that needed to be fixed.

If you run into issues with support or want to be sure you're getting the proper help, leverage your sales team to get that help.
