@MJBROWN_COM_AU wrote:
Hopefully they post the session from Insight. I would be interested to see how they approach different environments.
Regarding the subclient-per-volume idea and not adding VMs to the subclient: what about VMs with VMDKs across different filers (SSD, SAS, SATA) or even different datastores? If you use the volume approach then you may not have consistent restores, and you'll also be taking multiple VM snapshots for the different subclient schedules.
I am at the point of considering this configuration, but I also need to ensure I'm not adding any complexity or extra load to the VMs.
Also, when dumping to tape, I wonder how it will go trying to register the VM when the .vmx is on a different datastore. (Part of the restore or dump to tape registers the VM with _GX appended to the VM name; I wonder how this would work when you attempt to restore one of the additional disks not stored with the .vmx.)
Thanks,
Mike
Insight session slides are available for attendees and partners at https://www.brainshark.com/go/netapp-sell/insight-library.html?cf=6729&c=5 -- search for SnapProtect Best Practices. They'll be available to customers sometime in mid-December.
If you use a Datastore as the backup target (contents) for the Subclient - rather than individual VMs - and the VM has VMDKs in multiple datastores, then the backup job will identify all of the volumes on which the VM's VMDKs reside, and create ONTAP snapshots on all of them. The VM backup will be consistent because of the vSphere-level snapshots (i.e. at the VMDK level). When (if?) you subsequently run SnapVault and SnapMirror copies in your Storage Policy, you'll get a new set of SM/SV target volumes on your secondary storage system for each of the subclients.
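To make that volume discovery concrete, here's a small illustrative sketch (all names are hypothetical; this is not the SnapProtect or ONTAP API): for a VM whose VMDKs are spread across datastores, the set of volumes that need an ONTAP snapshot is just the union of the volumes backing those datastores.

```python
# Hypothetical mapping of datastores to their backing ONTAP volumes.
# Names are illustrative only -- not real SnapProtect/ONTAP objects.
datastore_to_volume = {
    "ds_ssd": "vol_ssd01",
    "ds_sas": "vol_sas01",
    "ds_sata": "vol_sata01",
}

# A VM with VMDKs spread across three different datastores.
vm_vmdk_datastores = ["ds_ssd", "ds_sas", "ds_sata"]

# The backup job snapshots every volume that backs any of the VM's VMDKs;
# consistency comes from the vSphere-level (VMDK) snapshot, not from here.
volumes_to_snapshot = {datastore_to_volume[ds] for ds in vm_vmdk_datastores}
print(sorted(volumes_to_snapshot))
```

So one backup job on that VM touches three volumes and takes one ONTAP snapshot on each.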
Because each new subclient results in a new set of mirrors for the volumes it references, the recommendation is to have one subclient per volume if at all possible. If your VMs span different datastores, it will still work, but you'll get multiple mirror volumes: one new volume mirror for each subclient that contains a VM with storage on that volume.
You can appreciate that if you have a lot of VMs using multiple shared datastores, and many different subclients that end up referencing the same volumes, you can end up with a lot of mirror volumes.
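A rough back-of-the-envelope sketch of how the mirrors multiply (hypothetical subclient and volume names; the point is only the arithmetic): each subclient gets its own mirror target for every volume it touches, so a shared volume is mirrored once per subclient that references it.

```python
# Hypothetical subclient -> volumes-referenced mapping (illustrative names).
subclients = {
    "subclient_A": {"vol1", "vol2"},
    "subclient_B": {"vol2", "vol3"},
    "subclient_C": {"vol1", "vol3"},
}

# Each subclient spawns one mirror target per volume it references,
# so shared volumes get mirrored once per subclient that uses them.
mirror_volumes = sum(len(vols) for vols in subclients.values())
print(mirror_volumes)  # 6 mirror volumes

# With one subclient per volume, you'd need only one mirror per
# distinct source volume -- 3 here instead of 6.
distinct_volumes = set().union(*subclients.values())
print(len(distinct_volumes))  # 3
```

Three source volumes, six mirrors: that's the overhead the one-subclient-per-volume recommendation avoids.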
For the restores and dumps to tape, I think it's intelligent enough to be able to find all of the required volumes/snapshots for the VM, and mount those into ESX for the duration of the restore or dump.