In a VMware environment with iSCSI-based datastores, we are using VM tags for backups in SnapCenter. Some datastores sit on volumes in a consistency group (CG), and some are on volumes without the need for SM-BC. The backup only works if all VMs are either on volumes in the CG or all on volumes outside it. Is there a limitation in the SnapCenter Plug-in for VMware when a resource group mixes LUNs in CGs with LUNs outside them? Do I have to maintain dedicated tags for the VMs in the CG? All VMs and datastores are mounted to the same VMware cluster, and in SnapCenter we have one resource group with the VM tag as selector.
Hello, we updated the plugin to v5.0P1 last week and now we see extremely long backup times, e.g. 4 h instead of 10 min for a datastore backup. An even bigger problem is that sometimes, after a successful backup, the VMware snapshots that were created are not deleted. Is anyone else facing the same problems?
Hi Team, I am trying to recover a file from a snapshot on the SnapMirror destination volume. I am running the following command, with my primary volume as the restore destination:

snapmirror restore -destination-path SVM:primary_vol -source-path backup_SVM:backup_vol_gtbsl -source-snapshot daily.2024-04-07_2000 -file-list /largefile.txt,@/largefile_restored.txt

Error: command failed: Original request was to restore the entire contents of a Snapshot copy. Attempting to restart request to restore a list of files or LUNs of a Snapshot copy.

I could not find anything helpful on the net. Any advice will be much appreciated.
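For anyone reproducing this, it may help to first confirm that the snapshot and the relationship are visible from the CLI (a sketch using standard ONTAP commands; the vserver/volume/snapshot names are the ones from the command above):

```
volume snapshot show -vserver backup_SVM -volume backup_vol_gtbsl -snapshot daily.2024-04-07_2000
snapmirror show -destination-path backup_SVM:backup_vol_gtbsl -fields state,status
```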
We are testing the vserver (SVM) SnapMirror failover and failback process. We have multiple CIFS volumes in the vserver, each with multiple millions of files, and we found that the failback took a few hours to complete. According to NetApp support, the failback process has to check data integrity, which is why it takes so long. This means our application will need additional hours of downtime. Is there any way to improve the process? For example, skipping the data integrity check if we know the source-side data is good, or kicking off the data integrity check before starting the failover. Please share your experience if you have been in the same or a similar situation. Thanks.
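For context, the SVM DR failover/failback sequence we are following is roughly the one from the ONTAP data protection documentation (a sketch; the SVM names src_svm and dst_svm are placeholders, and the trailing colon denotes an SVM-level path):

```
Failover (run on the DR cluster):
    snapmirror quiesce -destination-path dst_svm:
    snapmirror break -destination-path dst_svm:
    vserver start -vserver dst_svm

Failback (resync changes back to the original source before reversing again):
    snapmirror resync -source-path dst_svm: -destination-path src_svm:
```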
Hello, I have an old FAS3250 running ONTAP 9.1P20; both the hardware and the ONTAP release are long past any support. It was used for secondary backups via SnapMirror (XDP, async) from the primary system running newer ONTAP (it started on 9.8 and was updated over the years to 9.13.1P7).

A few days ago I updated the primary system to 9.14.1P2, and since then SnapMirror does not work: I cannot update the existing relationships and cannot initialize new ones. The error is reported on the 9.1 system:

smc.snapmir.update.fail: Snapmirror update from source volume 'xxx' to destination volume 'yyy' failed with error 'Failed to create Snapshot copy snapmirror.zzz.2024-04-21_090500 on volume xxx. (Failed to start operation on source.)'. Relationship UUID 'zzz'.

There is no error on the 9.14 system: nothing in the event log, and I did not find anything in the other logs (though maybe I am not looking in the right place). The weird thing is that "snapmirror list-destinations" on the 9.14 system does not return anything, even though the old SnapMirror snapshots still exist in the volumes. The cluster and vserver peer relationships seem OK (available, healthy). I can replicate from the 9.14 system to another system running ONTAP 9.13.1P7 (tried just as a test).

I found a KB article and a documentation page stating that this combination is no longer supported (and has not been for a long time), but according to that information it should not have worked even before:
https://kb.netapp.com/onprem/ontap/dp/SnapMirror/How_to_check_SnapMirror_compatibility_between_ONTAP_versions
https://docs.netapp.com/us-en/ontap/data-protection/compatible-ontap-versions-snapmirror-concept.html

The funny thing is that all the relationships (even between 9.14 and 9.13) state: Relationship Capability: 8.2 and above.

Does anybody know what changed in 9.14 and whether there is any workaround? (Reverting the system to 9.13 would be really painful.)

Thanks, Harry
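For completeness, these are the checks I ran on both sides (a sketch with placeholder SVM/volume names; all are standard ONTAP commands):

```
On the 9.14 source cluster:
    snapmirror list-destinations -source-path <svm>:<vol>
    vserver peer show
    cluster peer show
    event log show -severity ERROR

On the 9.1 destination cluster:
    snapmirror show -destination-path <svm>:<vol> -fields state,status,healthy
```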