2012-04-05 05:14 AM - edited 2015-12-18 03:00 AM
My customer has many large databases on the primary NetApp system, which are backed up by SnapVault. Before the SnapVault backup starts, a script runs an SMO backup on the primary system and then initiates a SnapVault update from the secondary using this consistent primary snapshot. The database size is about 3.5 TB, and this worked. I use the snapshot autodelete option, but I forgot to set the defer_delete option to preserve the SnapVault baseline snapshot.
The SnapVault relationship failed, and I deleted all the snapshots on the secondary volume.
1st Q: What data remains there that still uses 3 TB?
When I resynced with snapvault start -S, the secondary volume grew to over 6.5 TB.
Then I deleted the volume, recreated it, and started SnapVault again.
It used 3.5 TB of space again.
2nd Q: How can I clean all the old data off the secondary volume while keeping the relevant data and snapshots?
3rd Q: These relationship updates are managed by a script, but I have OnCommand Server 5.0 handling another 60 OSSV relationships. I've heard about DFM cleanup. Would that work for me?
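For reference, a minimal sketch of the 7-Mode snapshot autodelete settings that should have preserved the SnapVault base snapshot; the volume name dbvol and the snapshot prefix sv are hypothetical examples:

```shell
# Hypothetical primary volume: dbvol
# With "commitment try", autodelete backs off instead of deleting
# snapshots locked by SnapVault/SnapMirror (the baseline snapshot):
snap autodelete dbvol commitment try

# Optionally, defer deletion of snapshots matching a prefix so the
# SnapVault-created snapshots are deleted last:
snap autodelete dbvol defer_delete prefix
snap autodelete dbvol prefix sv

# Enable autodelete on the volume:
snap autodelete dbvol on
```

This is a sketch under the assumption of Data ONTAP 7-Mode; check the settings against your version with `snap autodelete dbvol show`.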
2012-04-10 12:55 PM
For Q1... the data from the original baseline remained in the qtree on the secondary volume after you deleted all the snapshots. This is expected. You then ran another baseline (because the base snapshot was deleted), which vaulted into a second qtree, increasing the data in the volume to 6.5TB.
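For Q2, the stale baseline qtree can be released on the secondary to free the old data. A sketch, assuming 7-Mode; the system names (primary, secondary) and paths are hypothetical:

```shell
# On the secondary: stop the relationship for the stale destination
# qtree; this deletes the qtree and frees the old baseline data:
snapvault stop /vol/sv_dbvol/db_qtree_old

# On the primary: release the relationship state for that destination:
snapvault release /vol/dbvol/db_qtree secondary:/vol/sv_dbvol/db_qtree_old

# Re-baseline (or resync) into the qtree you intend to keep:
snapvault start -S primary:/vol/dbvol/db_qtree /vol/sv_dbvol/db_qtree
```

Only the qtree you `snapvault stop` is removed; other relationships in the same secondary volume are untouched.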