Hi folks,
My customer has many large databases on the primary NetApp system, which are backed up with SnapVault.
Before the SnapVault backup starts, a script runs an SMO backup on the primary and then triggers a SnapVault update from the secondary using this consistent primary snapshot.
The database is about 3.5 TB, and this setup worked. I use the snapshot autodelete options, but I forgot to set the defer_delete option to preserve the SnapVault baseline snapshot.
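For context, the script does roughly this (the profile, filer and path names below are placeholders, not the real ones):

    # application-consistent backup with SMO; this creates the Snapshot copy on the primary
    smo backup create -profile PRODDB -full -online
    # then trigger the SnapVault transfer from the secondary, pulling that Snapshot copy
    ssh secondary-filer snapvault update -s <smo_snapshot_name> /vol/sv_dbvol/oradata

And the autodelete policy on the primary volume is set roughly like this (values and names here are examples, not the exact ones from the system); this is where the defer_delete part was missing:

    snap autodelete dbvol on
    snap autodelete dbvol trigger volume
    snap autodelete dbvol target_free_space 20
    snap autodelete dbvol delete_order oldest_first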
The SnapVault relationship failed, and I deleted all the snapshots on the secondary volume.
1st Q: What data remains there that still uses 3 TB?
When I resynced with snapvault start -s, the secondary volume grew to over 6.5 TB.
Then I deleted the volume, recreated it, and started SnapVault again.
It used 3.5 TB of space again.
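For clarity, the delete/recreate on the secondary was roughly the following (volume, aggregate and path names are examples):

    snapvault stop /vol/sv_dbvol/oradata
    vol offline sv_dbvol
    vol destroy sv_dbvol
    vol create sv_dbvol aggr1 4t
    snapvault start -S primary-filer:/vol/dbvol/oradata /vol/sv_dbvol/oradata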
2nd Q: How can I clean all the old data off the secondary volume so that only the relevant data and snapshots are kept?
3rd Q: These relationship updates are managed by the script, but I also have OnCommand Server 5.0 handling another 60 OSSV relationships. I've heard about DFM cleanup. Would that work for me?
Thanks
Zed