Keeping the DFPM_base lean and fit

emanuel

Hello everyone.

I am revisiting a previous post, but on a single topic: controlling the size of the DFPM_base backup (SnapVault) snapshot to the best of our ability.

Storage admins cannot control the user activity on a volume/qtree, so we want to do what we can to keep the DFPM_base snapshot as lean as possible.  This is probably going to be hard to do in most cases, but it is worth a discussion.

In my previous post, two suggestions were mentioned: 1) execute a manual backup, or 2) run a scheduled backup. Either action will create a new DFPM_base, but as was said before, other snapshots could have some blocks locked up, so this may not free as much space as hoped.

With this in mind come these thoughts:

1.     The manual snapshot creation in Protection Manager is the "Protect Now" function?  This creates a snapshot that is outside the schedule and is not part of the recycling rotation, so you would have to manually remove this snapshot.  Does this manual snapshot appear on the destination?  The interesting thing about Protect Now is that you can select a type (hourly, weekly, etc., or manual).  In the description field you can enter a name for this backup, but when the backup is created, will it have a DFPM_snapshotname and will PM know that it is part of the rotation?

2.     Running a scheduled backup: can I force a scheduled backup to occur ahead of schedule?

3.     Is there a benefit to running many backups during the day and only retaining a few, to keep the deltas smaller, and would this help keep the DFPM_base size in check? (There is a sketch below of how to check the space these snapshots are holding.)
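
As a point of reference, and assuming a 7-Mode source volume (the volume name "srcvol" is just a placeholder), the space being held by the dfpm_base and the other snapshots can be eyeballed from the controller with something like:

  snap list srcvol
  snap delta srcvol
  snap reclaimable srcvol <name-of-base-snapshot>

snap list shows the space used per snapshot, snap delta shows the changed blocks between successive snapshots, and snap reclaimable estimates how much space would actually come back if the named snapshot were deleted; this is exactly where the "blocks locked up by other snapshots" effect shows up. Use the actual base snapshot name as reported by snap list in place of <name-of-base-snapshot>.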


smoot

Hey Emanuel --

1.  That's not quite correct. Yes, using the "Protect Now" button is how you create an ad-hoc backup, but no, you don't need to manually delete the resulting snapshot copy. When you create the backup, you specify a retention class (hourly, daily, weekly, monthly or unlimited). The resulting backup will be retained and deleted according to its class. Depending on how you've configured your protection policy, the snapshot copies may hang around for a long time.  If one is hanging around too long, there is a way to delete backups from the secondary space management wizard (and a quick way to see what's being retained is sketched after this list).

2.  Using the "Protect Now" function is almost the exact same thing as running a scheduled backup.

3.  If you run more frequent backups, each individual backup will be smaller, but if you add them all up, the total size will be larger.
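
If it helps, a quick way to see which backups Protection Manager is currently holding for a dataset, along with the retention class of each, is the backup list command (the dataset name "my-dataset" is a placeholder, and the exact output columns vary a bit between DFM versions):

  dfpm backup list my-dataset

That makes it easy to spot an ad-hoc "Protect Now" backup that was created with a longer retention class than intended.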

-- Pete

emanuel

I will discuss their policies with them. Part of the problem is that their storage has grown a little out of their control; hopefully, with some education, we can use manual updates less often, because we really want the application to manage the backups for them.

emanuel

Hello Pete, all

We are good on the destination side as far as using Protect Now to force the cycle and choosing the right "class" of snapshot, and those backups are kept for the duration of the policy. However, the other effect is that when Protect Now is used, a LOCAL snapshot is created on the source, and it does not expire.

smoot

Hey Emanuel --

Right, the "Protect Now" button creates a new backup on all dataset nodes. If you use "dfpm backup create", you can specify which node you want to update, which lets you skip the snapshot creation on the primary node.
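
For example, something along these lines (a rough sketch from memory, so please double-check the exact argument order and options against the dfpm backup create usage on your own DFM server; "my-dataset" and the secondary node name "Backup" are placeholders for your dataset and policy node names):

  dfpm backup create my-dataset Backup
  dfpm backup list my-dataset

The second command is just to confirm the new backup registered; you can also run snap list on the source volume to confirm no new local snapshot was created.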

Anyway, if you created a backup on the primary that you don't really want, try setting the retention duration and count for that retention class to zero.  For example, if you created a weekly backup, try this:

  dfpm policy node set my-policy-name 'Primary Data' weeklyRetentionCount=0 weeklyRetentionDuration=0

That should cause any weekly backups to get purged. You can also manually delete the backups using the secondary space management wizard (although that process is admittedly cumbersome).
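
To double-check that the change took, you can try reading the node settings back and listing the backups (this assumes the policy node "get" form behaves like "set" on your DFM build; if it doesn't, the same retention settings are visible in the policy editor in the NetApp Management Console; "my-dataset" is a placeholder):

  dfpm policy node get my-policy-name 'Primary Data'
  dfpm backup list my-dataset

The first command should show the weeklyRetentionCount and weeklyRetentionDuration values you just set, and the second lets you confirm the unwanted weekly backup ages out; the purge may not be instant, since retention is enforced periodically.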

-- Pete
