Just wondering who else is doing this? Because our swap sits on separate NFS volumes that are not snapshot protected and not mirrored, things get interesting when testing VMs via clones at the DR site: the VM complains about its swap initially not being there. We know how to fix this manually, but without SRM, does anyone have any neat ideas for automating the resurrection of a VM from a clone?
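Without SRM, one option is to script the manual fix against the ESX service-console tools. The sketch below only builds the command strings (vmkfstools to recreate the missing swap VMDK, vmware-cmd to register the clone); the helper name, paths, and size are all hypothetical, so treat it as a starting point rather than a tested recovery procedure.

```python
def swap_fixup_commands(vmx_path, swap_vmdk, swap_size_gb):
    """Return, in order, the shell commands that recreate a missing
    swap disk and register a cloned VM with the host.

    All paths and the size are placeholders for illustration."""
    return [
        # recreate the swap VMDK the clone is missing
        f"vmkfstools -c {swap_size_gb}G {swap_vmdk}",
        # register the cloned VM so it can be powered on
        f"vmware-cmd -s register {vmx_path}",
    ]

cmds = swap_fixup_commands(
    "/vmfs/volumes/dr_nfs/vm01/vm01.vmx",       # assumed clone location
    "/vmfs/volumes/dr_swap/vm01/vm01-swap.vmdk", # assumed swap volume
    4,
)
for c in cmds:
    print(c)
```

From there it is just a loop over the VMs on the cloned volume, but you would want to add error handling before trusting it in a real DR test.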
Actually this is an issue even with SRM. It might be helpful for you to read the SRM manual; towards the end, in the appendix, there is a section on how to deal with VMs with swap drives. The issue is that the disk signature changes even if you have a swap drive ready and waiting at the DR site, so to the best of my recollection NetApp recommends creating a gold master image of the swap drive and using that same image at the DR site, so that when the VM comes up the disk signature matches and it doesn't complain.
On a tangent, in a separate post I talked about swap drives and how I really question their usefulness. Supposedly you create them for bandwidth savings. However, the small amount of testing I have done has shown no bandwidth savings from creating a separate swap file for VMs that have their memory properly configured. That said, my test only included 3 VMs. Are you running mostly Windows VMs? On one volume we have 40 VMs and the daily change rate is between 10-15 GB, and I'm not separating swap or pagefiles. If your ratio is significantly less than that, I would be tempted to separate swap.
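For comparison against your own numbers, those figures work out to a per-VM daily change rate like this (straight arithmetic, nothing vendor-specific):

```python
vms = 40
daily_change_gb = (10, 15)  # range quoted above, swap and pagefiles included

# changed data per VM per day, in MB
per_vm_mb = [round(gb * 1024 / vms) for gb in daily_change_gb]
print(per_vm_mb)  # -> [256, 384]
```

So roughly 250-400 MB of changed data per VM per day with swap left in the volume.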
Re: Recovery of NFS-based VMs from a remote site using FlexClone and separate swap stores