My question is if snapshots are just going to overflow into the data volume, what's the point of increasing the snapshot reserve? Is it simply to alert you? We have NFS with default snapshot policies so they recycle every two weeks or something (no autodelete) but it's spilling over 40GB into the data volume. So I can increase it from 5 to 6% to rid ourselves of the alerts but my question is, what's the point of a reserve if it's not going to reserve anything and just spill over? Perhaps this is just an NFS thing or specific settings I missed?
The Snap Reserve is the amount that is masked away from the data protocols (NFS/CIFS).
If I have a FlexVol of 100 GB and I set a reserve of 20%, the clients will only "see" 80 GB.
If I have a FlexVol of 100 GB and I set a reserve of 10%, the clients will only "see" 90 GB.
If I have a FlexVol of 100 GB and I set a reserve of 5%, the clients will only "see" 95 GB.
In any case, if the changed blocks held by snapshots fill up the reserved space, they will "steal" space from the visible portion.
If I have a FlexVol of 100 GB and I set a reserve of 20%, the clients will only "see" 80 GB. But if the snapshots take up 40 GB, then from ONTAP you would see snapshot usage at 200% (100% = 20 GB) and the clients would see 60 GB.
If I have a FlexVol of 100 GB and I set a reserve of 10%, the clients will only "see" 90 GB. But if the snapshots take up 20 GB, then from ONTAP you would see snapshot usage at 200% (100% = 10 GB) and the clients would see 80 GB.
If I have a FlexVol of 100 GB and I set a reserve of 5%, the clients will only "see" 95 GB. But if the snapshots take up 10 GB, then from ONTAP you would see snapshot usage at 200% (100% = 5 GB) and the clients would see 90 GB.
By increasing the reserve, you hide more of the space from the clients and provide more room for the changed blocks held in snapshots.
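The worked examples above reduce to simple arithmetic. Here is a sketch of the first case (100 GB volume, 20% reserve, 40 GB of snapshots; all the numbers are the hypothetical values from the examples, not real volume data):

```shell
# Spillover math for a 100 GB FlexVol with a 20% snap reserve
# and 40 GB of snapshot data (the first example above).
vol_gb=100
reserve_pct=20
snap_gb=40

reserve_gb=$(( vol_gb * reserve_pct / 100 ))                     # 20 GB masked away
spill_gb=$(( snap_gb > reserve_gb ? snap_gb - reserve_gb : 0 ))  # 20 GB "stolen" from data
visible_gb=$(( vol_gb - reserve_gb - spill_gb ))                 # what clients see
snap_used_pct=$(( snap_gb * 100 / reserve_gb ))                  # ONTAP's snapshot usage %

echo "reserve=${reserve_gb}GB spill=${spill_gb}GB visible=${visible_gb}GB snap_used=${snap_used_pct}%"
# prints: reserve=20GB spill=20GB visible=60GB snap_used=200%
```

Change `reserve_pct` and `snap_gb` to reproduce the other two examples.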
I understand everything you just said, and thanks. Since I can't actually reserve space and prevent spillover, is my only option to take fewer snapshots by modifying the snapshot policy, or to turn on auto-delete, if I don't want spillover to consume the rest of my volume?
And is this behavior specific to NFS, since clients see volumes and not LUNs? I don't remember any of my iSCSI volumes being eaten by snapshot spillover.
The point of the reserve is to prevent application data from stepping on space for snapshots and causing them to fail. The point of spillover/overflow is to prevent snapshots from failing when the change rate is greater than the allotted space.
The 5% setting is just a default value, not a magic number for all workloads. If your volume is used as an archive, i.e. low change rate, its ideal reserve may be 1%; or your volume may have a high change rate, i.e. files and directories constantly being added and deleted, and would need a higher reserve of 20% or more to prevent spillover/alerts.
I have volumes with 30% reserves and volumes with 0% reserves (some use snapshots and some don't have snapshots, so why have a reserve). As with all configuration, reserves should be based on observations, requirements, and risk.
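For reference, the reserve is set per volume from the ONTAP CLI. A sketch (`vs1` and `vol1` are placeholder vserver/volume names; adjust the percentage to whatever your observed change rate calls for):

```shell
# Check the current snap reserve on the volume
volume show -vserver vs1 -volume vol1 -fields percent-snapshot-space

# Raise the reserve, e.g. from the default 5% to 20%
volume modify -vserver vs1 -volume vol1 -percent-snapshot-space 20
```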
You can use '-trigger snap_reserve' to override the spillover and have the system delete snapshots until the snap reserve trigger threshold is reached. Using the option '-trigger volume' will allow spillover until application usage pushes the volume to the autodelete trigger threshold.
Scheduled snapshots will be taken if there is space available. If there is a snapshot that needs to be rolled off (say, for example, the schedule keeps 3 hourlies only), it will look to roll off/delete the 4th/oldest hourly. Now, if the 2nd and 3rd hourly snapshots were deleted early by autodelete, the scheduled snapshot process does not care; there is nothing to roll off, as there is no 4th hourly snapshot.
There are many options for choosing when and what to autodelete; you should have a look and customize them to your requirements.
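As a starting point, the autodelete behavior described above is configured per volume with `volume snapshot autodelete modify`. A sketch (`vs1`/`vol1` are placeholder names, and the trigger/order values shown are just one reasonable combination, not a recommendation for every workload):

```shell
# Delete oldest snapshots once the snap reserve fills,
# instead of letting them spill into the data portion
volume snapshot autodelete modify -vserver vs1 -volume vol1 \
    -enabled true -trigger snap_reserve -delete-order oldest_first

# Review the resulting autodelete settings
volume snapshot autodelete show -vserver vs1 -volume vol1
```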