ONTAP Discussions

Default Snapshot reserve


Hi folks,

I have no idea whether this question is in the right location, but if not please relocate it or tell me where it should be.

I have always been told to set the default Snapshot reserve at 20% unless a specific environment requires a different setting.

Also, it looks like NetApp's Synergy program views 20% as being the standard.

Can anybody tell me what the reasoning is for this 20% default value?

I am not questioning its validity; I am just interested in how it came to be this value.






I set my aggregate snap reserve to 0% (down from the default), as it is just wasted disk, unless you can think of a time where you would want to roll back EVERY volume in the aggregate.
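On a 7-Mode system, that change would look something like the following. The aggregate name "aggr0" is just a placeholder; verify the command syntax against your ONTAP release before running it:

```shell
# Show the current aggregate snap reserve percentage
snap reserve -A aggr0

# Set the aggregate snap reserve to 0%
snap reserve -A aggr0 0
```

The `-A` flag targets the aggregate-level reserve rather than a volume's reserve.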

If you want to know about volume snap reserve, you have many options, i.e. it depends (sorry!).

Have a look at this and the links off it.


Hope it helps



I've now read this same practice from a few different "trusted" sources, so I've done the same on our systems. The extra 5% of space is nice, thanks for sharing this tip Bren.

I held back on doing this previously because I remembered reading in the system admin guide that if there are technical problems, NetApp support may want an agg snap to go back to. Considering how incredibly stable our filers have been, I now consider it more of a precaution to be used only when making a major system-wide change. Besides, with a 5% reserve, the snaps end up getting deleted within a day or so.

From the system admin guide:

You use aggregate Snapshot copies in the following situations: 
  • If you are using MetroCluster or RAID SyncMirror and you need to break the mirror, an aggregate Snapshot copy is created automatically before breaking the mirror to decrease the time it takes to resync the mirror later.
  • If you are making a global change to your storage system, and you want to be able to restore the entire system state if the change produces unexpected results, you take an aggregate Snapshot copy before making the change.
  • If the aggregate file system becomes inconsistent, aggregate Snapshot copies can be used by technical support to restore the file system to a consistent state.


All good and well, but why then does NetApp's new Synergy config program still indicate 20% snapshot reserve space as the default?




I'm not familiar with that program, but, in general, 20% snap reserve is the default setting when you create a new volume. I think it is because new volumes created use the root volume as a template. If you left your root volume at its original settings, you'd have a 20% snap reserve.

Newer NetApp documentation is veering away from this practice, especially if you want to thin provision. With snap autodelete and vol autosize, it's possible to have no snap reserve, thin provision, take snaps, and just have the volume either autogrow and/or delete older snaps when space gets tight.
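As a sketch of what that combination looks like in 7-Mode (the volume name "myvol" and the sizes are placeholders; the exact options vary by ONTAP release):

```shell
# Let the volume grow automatically, up to 300 GB in 10 GB steps
vol autosize myvol -m 300g -i 10g on

# Automatically delete the oldest Snapshot copies when space gets tight
snap autodelete myvol on

# With autosize and autodelete in place, no snap reserve is needed
snap reserve myvol 0
```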

TR-3483 "Thin Provisioning in a NetApp SAN or IP SAN Enterprise Environment" has more information about this.



Hi Leif,

Thanks for your response.

However, documentation might be "veering away from this practice", but nowhere does it state this officially.

So the question still is: how did we ever get this value of 20%? It is not just a random number; there must be a thought process behind it.




I found the TR on thin provisioning I mentioned before very helpful regarding this. It explains the traditional setup with lun reservations (or file reservations), snap reserve, and volume guarantee. It then moves into the purpose of fractional reserve. Finally, it looks into thin provisioning. With true thin provisioning, all of that is out the window, lun reservations are turned off, volume guarantee is set to none, and there is no snap reserve set.

I'd also recommend looking at the Data Protection Online Backup and Recovery Guide, if you haven't before. Specifically, I'd recommend looking at "What the Snapshot reserve is":


My take on it is that the snap reserve was created to isolate a space that only snaps could fill. No other files in the active file system can use that space. Also, it makes it very handy for finding snaps if you are using a NFS or CIFS client. Just look in the .snapshot folder for NFS or in the ~snapshot folder for CIFS.

Of course, snaps can overfill the snap reserve and take up regular volume space. This is why lun reservations were created: to guarantee the space the lun was granted.
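For illustration (the mount path and volume name here are made up), this is how snaps show up to clients and on the filer:

```shell
# On an NFS client, Snapshot copies appear under the hidden .snapshot directory
ls /mnt/myvol/.snapshot

# On the filer, list the Snapshot copies and check the current reserve
snap list myvol
snap reserve myvol
```

On a CIFS client, the equivalent folder is ~snapshot at the root of the share.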


Recently came across this thread again and wanted to update it.

1. I've adjusted my view on the aggregate snap space and now set it to 3%. It's based on this blog post from Aaron Delp; look in the comments section, where I ask Aaron about this.


2. In my previous post regarding thin provisioning, I stated that I turn lun reservations off, set the volume guarantee to none, and turn the snap reserve off. I made a mistake in those settings: based on TR-3483, I usually turn lun reservations off, set the volume guarantee to volume, set the snap reserve to 0, and set the volume to autogrow to whatever maximum size I want it to reach. This is by no means the only way to thin provision, but it happens to be the method I currently like for our setup.

In such a scenario, I may present a 300GB lun with no space reservation, but if it is currently using 80GB of data, I may build it on a 100GB volume and allow it to auto-grow to 300+GB, usually a fair amount beyond the lun size, as the lun can be resized later on. I would also set the increment size appropriately, based on the estimated growth rate, so that the volume isn't autosizing too often.
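Sketching that setup in 7-Mode commands (the volume, aggregate, and lun names and sizes are the hypothetical ones from the example above; check the syntax for your release):

```shell
# 100 GB volume with the default "volume" guarantee, and no snap reserve
vol create myvol aggr1 100g
snap reserve myvol 0

# 300 GB lun created without a space reservation
lun create -s 300g -t windows -o noreserve /vol/myvol/lun0

# Allow the volume to autogrow well past the lun size, in sensible increments
vol autosize myvol -m 400g -i 10g on
```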

Just some more thoughts...



I think it's worthwhile to emphasize that we essentially have two threads here: the aggregate has a Snapshot reserve and the volume *also* has a Snapshot reserve, so there are *two* snapshot reserve areas. The initiator of this thread only asked about the volume Snapshot reserve (VSR). The aggregate snapshot reserve defaults to 5%; in many cases it may be reduced, but it's not as easy a gain as it looks at first glance. Please take a look at http://esatea.wordpress.com/2011/01/06/aggregate-snap-reserve-netapp/.

As for the volume, its Snapshot reserve defaults to 20%. I am not sure whether any in-depth investigation went into determining this value; perhaps it's a rule of thumb... I have no idea. VSR generally depends on the data held in the volume and how that data is used (changed, deleted). It also depends on your requirements and how long a history of snapshots you intend to keep. Generally speaking, it's an adjustable value that should be determined on a case-by-case basis.


Hi Mark,

I guess it's hard for us to answer on behalf of someone else why they chose 20% and not 15%. I think the value is there for legacy reasons, maybe?

Currently in our environment, we turn the aggregate snap reserve to 0%, as we have a shared service model where there is no way we'd restore an entire aggregate on top of itself. As for volumes, most of the time we turn the snap reserve to 0, as we thin provision and push free blocks back to the aggregate to facilitate space monitoring.

That works for us; it won't work for all, and it certainly does not answer your query, I'm afraid. I guess the point I was trying to make is that 20% is from back in the olden days, and there is no need to stick to those settings at all.