2011-09-30 07:46 AM
Firstly, I know this is also a VMware question, but because it arises from a (rather sensible) NetApp best practice, I thought I would ask here in case anyone else has experienced the same scenario.
I have a small VMware ESXi installation: three hosts running in an HA cluster, accessing NFS datastores on a FAS2040. All running great, no problems at all.
As per NetApp suggested best practice, I am running the VMs with the Windows system disk in one snapmirrored datastore, and a second datastore that contains each server's second disk, which is used for the Windows page file and other temporary files. This second datastore is not snapmirrored, to avoid replicating transient data across the WAN to our DR site. It does, however, mean that in the event of a failover I have to manually edit each .vmx file and remove the reference to the second disk before I can add the VM to the inventory, because the referenced datastore does not exist at the DR site.
There is also a third datastore for VM swap files but this does not present a problem.
A couple of weeks ago, some undesirables stole the external copper piping from the air conditioner condensers for our server room, requiring me to perform a full DR failover for all services.
It all worked great, and the only hold-up was having to edit each VM before powering it up. I guess there are ways to script the changes, but I am not great with scripting, and we don't have SRM so I can't use that, although having read the documentation I believe it would take care of this for me.
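For what it's worth, the edit could in principle be scripted. Here is a minimal sketch (my own, not an official tool; the datastore mount path and the disk ID are assumptions for illustration) that strips every .vmx line configuring the missing second disk before the VMs are registered:

```python
# Hedged sketch: remove all .vmx lines that configure the non-replicated
# second disk, so the VMs can be registered at the DR site without manual edits.
# DATASTORE and DISK_ID are assumed values -- adjust for your environment.
import glob

DATASTORE = "/vmfs/volumes/dr_datastore"  # assumed DR datastore mount point
DISK_ID = "scsi0:1"                       # assumed ID of the pagefile disk

def strip_disk(vmx_text, disk_id=DISK_ID):
    """Drop every .vmx line that configures the given virtual disk."""
    kept = [line for line in vmx_text.splitlines()
            if not line.startswith(disk_id + ".")]
    return "\n".join(kept) + "\n"

if __name__ == "__main__":
    for path in glob.glob(DATASTORE + "/*/*.vmx"):
        with open(path) as f:
            text = f.read()
        with open(path, "w") as f:
            f.write(strip_disk(text))
```

Run once against the DR datastore after the snapmirror break, before adding the VMs to the inventory; the disk can be re-added from the vSphere client afterwards.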
We started with around 20 VMs but the number has grown since, so it is becoming more onerous to edit each VM manually, and I was wondering if there was any clever way round this.
Even if I replicate the temp datastore across to the DR site, when mounted it has a different UUID, so it doesn't help: the .vmx file is looking for a disk at a specific path that does not exist at the DR site. I have read here http://communities.netapp.com/community/netapp-blogs/getvirtical/blog/2011/09/28/nfs-datastore-uuids-how-they-work-and-what-changed-in-vsphere-5 that the UUID is based on the IP, hostname or FQDN of the export plus the export path, and that it is possible to trick ESXi by using DNS or host entries to keep the export names identical and therefore generate the same UUID.
This sounds like it would do the trick apart from the fact that vCenter would override this in some scenarios.
Is there any way I can set things up so that I can simply power on the VMs at the DR site without having to do any editing or stumping up for SRM? The only way I can think of so far is to keep all of a VM's disks in the same datastore, since disks that live alongside the .vmx are referenced with a relative path rather than an absolute path containing the datastore UUID, i.e.
scsi0:1.fileName = "serverdisk.vmdk"
rather than
scsi0:1.fileName = "/vmfs/volumes/dd7bff7b-3412562b/serverdisk.vmdk"
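To illustrate, here is a small Python helper (the function name is mine, not from any VMware tool) that rewrites an absolute, UUID-based fileName entry into the relative form; this is only valid when the .vmdk actually sits in the same directory as the .vmx that references it:

```python
import os

def to_relative(vmx_line):
    """Rewrite a fileName entry from an absolute UUID path to a bare
    file name. Only valid when the .vmdk lives in the same datastore
    directory as the .vmx that references it."""
    key, _, value = vmx_line.partition("=")
    path = value.strip().strip('"')
    return '%s = "%s"' % (key.strip(), os.path.basename(path))
```

For example, `to_relative('scsi0:1.fileName = "/vmfs/volumes/dd7bff7b-3412562b/serverdisk.vmdk"')` returns `'scsi0:1.fileName = "serverdisk.vmdk"'`.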
2012-11-08 10:18 PM
Why not just replicate these on a relaxed schedule, say once a week, just so the files at least exist on the other side? It doesn't matter if they are old; the vmdk's just need to exist at DR so they are there when you power on. That way the files are on the DR side, and when you do your vFiler DR it's all the same and you don't have to make changes to the .vmx files.
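If you go that route on 7-Mode, a single entry in /etc/snapmirror.conf on the destination would cover it; the trailing schedule fields are minute, hour, day-of-month and day-of-week. The system and volume names below are made up for illustration:

```
# assumed names: fas2040 = source filer, drfiler = DR filer
# replicate the temp volume once a week, Sundays at 02:00
fas2040:vm_temp  drfiler:vm_temp  -  0 2 * 0
```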