VMware Solutions Discussions
Is SnapDrive for Windows v6.1 supported with ESX 4 (vSphere)?
Thanks
Yep, it is - with the usual VMware caveats: FC is 'fully' supported (i.e. RDMs can go via the ESX stack), while iSCSI requires the MS software iSCSI initiator inside the guest OS (the latter will change with the 6.2 release).
And BTW - I found the new compatibility matrix (the dynamic one) very awkward to use when trying to find proof of this!
Regards,
Radek
While it's taken some time to get used to, I find I like the new Compatibility Matrix tool a good bit... it could be snappier, but on balance it's an improvement over the previous monster Excel spreadsheet (I've heard of admins who opened that spreadsheet and never came back out).
Yeah, OK.
Here is the actual reason I am moaning - try this:
- Pick & add Host OS / ESX 4.0 from Component Explorer
- Change Storage Solution to SnapDrive for Windows (SDW)
- Configurations Found = big fat 0
You may argue there is a logical reason for that (SDW technically cannot run on ESX itself), yet getting to the bottom of the story is not that obvious...
And BTW - I just tried changing Storage Solution to SMVI: guess how many supported configs I got? (yes, zero)
Regards,
Radek
Ah....so I won't dispute that the data at times is....well....off (I've had some disputes over various items myself) but I'll call that separate from the interface....
Hi. I'm trying to create a Windows 2003 cluster within virtual machines across 2 physical vSphere 4 (update 1) hosts. My setup:
Data ONTAP 7.3.2
Windows 2003 Enterprise R2 SP2
Snapdrive 6.2
FC connectivity between the ESX hosts and filer
I am able to create the shared quorum disk resource via Snapdrive on the first Windows node. The trouble starts when I follow the Snapdrive 6.2 documentation to "connect" the shared disk resource (quorum) to the second Windows node - before I can add this second node to the Windows cluster.
I step through the SnapDrive wizard for connecting to the shared LUN, and it kicks off the process. The first odd thing is that although I have several igroups already created on the filer, SnapDrive does not pick up any of them; I'm forced to use "Automatic" igroup mapping (i.e. SnapDrive creates the mapping). After about 30 seconds I get an error that states:
Failed to create disk in virtual machine, Failed to Map virtual disk: File [PROD5_YYC] SYYC1CMVRMX101/SYYC1CMVRMX101_SD_xxxxxx-xx.ca.corp.xxxx.com_P3KgWJW9X5Bm_0.vmdk is larger than the maximum size supported by datastore 'PROD5_YYC'.
Needless to say, the second node cannot be added to the cluster until this is resolved. The odd thing is that I can see (via FilerView > LUNs > Manage) that the ESX server hosting the second node has its FC initiator mapped to the shared LUN. Another oddity: if I do a rescan of the storage adapters on the ESX server that hosts the second node, the LUN shows up but is listed as 0 bytes.
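For anyone checking the same thing without FilerView: the mapping state can also be verified from the Data ONTAP CLI. A rough sketch, assuming 7-mode syntax - the volume and LUN names below are placeholders, not taken from the setup above:

```shell
# Show which igroups each LUN is mapped to, and with which LUN IDs
lun show -m

# List igroups and their member initiators - the second ESX host's
# FC WWPNs should appear in the igroup mapped to the shared LUN
igroup show

# Show size and online/offline state of a specific LUN
lun show -v /vol/vol_name/lun_name
```

If the LUN shows up as 0 bytes on the host side, comparing the size reported by `lun show -v` against what ESX sees is a quick way to tell whether the problem is on the filer or the host.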
Any help is appreciated.
Hi there,
1) Go with SnapDrive v6.2P1; it provides plenty of nice fixes:
http://now.netapp.com/NOW/download/software/snapdrive_win/6.2P1/
2) The igroup problem is fixed in ONTAP 7.3.3, which is not GD yet, though it works like a charm; the VMware error should be gone then as well.
3) Try configuring the cluster in advanced mode (meaning it won't check for shared storage). This lets you create the cluster, and the quorum should be available since both virtual machines should be able to see it after cluster creation.
4) good luck!!
Kind regards,
Thomas
Thanks for the info Thomas! I'll give those a try.
Hi Thomas,
I upgraded to ONTAP 7.3.3 and SnapDrive to 6.2P1. When I attempt to create a LUN and am presented with the screen to 'specify the igroups to be used for mapping this LUN', it searches and returns: 'Igroups not found on the Storage System for all the selected initiators. Would you like to create igroups'.
I have created the igroup on the filer already, so not sure why I keep getting this.
Any ideas?
Make sure all ESX servers listed in that igroup are online.
Workaround:
Create a new igroup with just that one ESX host, create the LUN mapped to it, then manually remap the LUN to the other igroup (the one with all the ESX servers). Don't forget to rescan all hosts.
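On the filer side, that workaround looks roughly like this - a sketch in 7-mode CLI syntax, where the igroup names, WWPN, volume/LUN path and LUN ID are all placeholders:

```shell
# Create a temporary FCP igroup containing only the one ESX host's WWPN
igroup create -f -t vmware ig_esx1_only 50:0a:09:81:xx:xx:xx:xx

# Create the disk via SnapDrive against that igroup, then move the
# mapping over to the full igroup afterwards:
lun unmap /vol/vol_name/lun_name ig_esx1_only
lun map /vol/vol_name/lun_name ig_all_esx 10

# Finally rescan the storage adapters on every ESX host
# (vSphere Client: Configuration > Storage Adapters > Rescan)
```

The rescan at the end matters: until each ESX host rescans, hosts that were added to the mapping after the fact won't see the LUN, which is consistent with the 0-byte symptom described earlier in the thread.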