Snapvault secondary space usage

zbrenner_1

Hi folks,

My customer has many large databases on the primary NetApp system, which are backed up by SnapVault.

Before the SnapVault backup starts, a script on the primary system runs an SMO backup and then initiates a SnapVault update from the secondary using this consistent primary snapshot. The database is about 3.5 TB, and this worked fine.

I use the snapshot autodelete option, but I forgot to set the defer_delete option to preserve the SnapVault baseline snapshot.
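For anyone hitting the same problem, this is roughly what I should have set on the primary volume beforehand (the volume name dbvol and the sv_ prefix are only examples; check your own snapshot names with snap list first):

    snap autodelete dbvol show                 (check the current settings)
    snap autodelete dbvol commitment try       (don't delete snapshots locked by SnapVault/SnapMirror)
    snap autodelete dbvol defer_delete prefix  (delete snapshots matching the prefix last)
    snap autodelete dbvol prefix sv_           (prefix of the SnapVault base snapshots)
    snap autodelete dbvol on

If I understand the options right, commitment try alone should already refuse to delete the soft-locked baseline, and defer_delete prefix is an extra safety net on top.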

The SnapVault relationship broke. I then deleted all the snapshots on the secondary volume.

1st Q: What data remains there that still uses 3 TB?

When I resynced with snapvault start -S, the secondary volume grew above 6.5 TB.
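The command I ran looked something like this (system names and paths are examples):

    snapvault start -S primary:/vol/dbvol/db_qtree /vol/sv_dbvol/db_qtree

Since the base snapshot was already gone, this was a full new baseline transfer rather than an incremental resync.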

Then I deleted the volume, recreated it, and started SnapVault again.

It used 3.5 TB of space again.
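The full cleanup sequence on the secondary was roughly this (names are again examples, and vol destroy asks for confirmation):

    snapvault stop /vol/sv_dbvol/db_qtree    (tear down the old relationship)
    vol offline sv_dbvol
    vol destroy sv_dbvol
    vol create sv_dbvol aggr1 4t             (recreate the destination volume)
    snapvault start -S primary:/vol/dbvol/db_qtree /vol/sv_dbvol/db_qtree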

2nd Q: How can I clean all the old data off the secondary volume so that only the relevant data and snapshots are kept?

3rd Q: These relationship updates are managed by a script, but I also have OnCommand Server 5.0 handling another 60 OSSV relationships. I've heard about the DFM cleanup. Would that work for me?

Thanks

Zed

3 REPLIES

crocker

Is this one that your team handles, or should the question be asked in the NetApp Support Community?

zbrenner_1

I didn't know until now that there was another community forum for support. :) So are you suggesting I ask there?

bwood

For Q1... the data from the original baseline remained in the qtree on the secondary volume after you deleted all the snapshots. This is expected. You then ran another baseline (because the base snapshot was deleted), which vaulted into another qtree, thus increasing the data in the volume to 6.5 TB.
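For Q2, you can check which destination qtrees exist and drop the stale one without destroying the whole volume (paths are examples). As far as I recall, snapvault stop also deletes the qtree's data on the secondary, so double-check the path first:

    snapvault status                         (list relationships and their qtree paths)
    snapvault stop /vol/sv_dbvol/old_qtree   (remove the relationship and its destination qtree)

Any snapshots on the secondary that still reference the old qtree will keep holding its blocks until they are deleted or age out.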
