I've come across an interesting error on my filer, not quite sure which limit is nearly reached.
It is a FAS3240 running ONTAP 7-Mode 8.1.4P1. The aggregates are 64-bit and quite large (62TB). So far I have only been able to find the deduplication guide TR-3505, which refers to older ONTAP versions, so those limits aren't really applicable. Hardware Universe doesn't really go into specifics for the dedupe limits. TR-3505 did mention that if more data is put into the volume than can be deduplicated, the remainder of the data will simply remain undeduplicated.
My main questions are:
- Which limit is being approached (i.e. is it the limit on the total logical data that can be deduplicated)? 13% (6TB) seems quite a low space saving if we are already hitting a limit. This is the largest of the volumes on this system, so perhaps it is simply highly utilised.
- What are the implications of reaching this limit (i.e. will the rest of the data simply no longer be deduplicated)?
Tue Jul 29 14:33:47 EST [TOASTER:sis.logical.limit.near:notice]: Deduplication engine's logical data limit is nearly reached on volume abc_02_02.
TOASTER> df -S abc_02_02
Filesystem used total-saved %total-saved deduplicated %deduplicated compressed %compressed
See TR-3958 for a more up-to-date version. The deduplicated data limit is the maximum volume size, which is 50TB for 8.1.4 on a FAS3240. It has nothing to do with the deduplication rate - if you have 50TB of data that cannot be deduplicated, that is still 50TB of data the engine must process.
When the deduplication limit is reached, additional data won't be deduplicated (or compressed) until the logical amount of data falls below the limit.
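To make the arithmetic concrete, here is a minimal sketch of the check described above: the logical data the engine tracks is the physical used space plus the total savings (the "used" and "total-saved" columns of `df -S`), compared against the max-volume-size limit. The figures in the example are illustrative assumptions, not values from the volume in question, and the helper names are hypothetical.

```python
# Sketch: estimate how close a volume is to the dedupe engine's
# logical-data limit, which equals the max volume size (50TB for
# ONTAP 8.1.x on a FAS3240).

TB = 1024 ** 4  # binary terabytes

def logical_data(used_bytes, total_saved_bytes):
    """Logical (pre-savings) data the dedupe engine must track:
    physical used space plus the space the savings reclaimed."""
    return used_bytes + total_saved_bytes

def pct_of_limit(used_bytes, total_saved_bytes, limit_bytes=50 * TB):
    """Percentage of the logical-data limit currently consumed."""
    return 100.0 * logical_data(used_bytes, total_saved_bytes) / limit_bytes

# Illustrative numbers: 40TB used with 6TB saved is 46TB of logical
# data, i.e. 92% of the 50TB limit - close enough to warn, even though
# the savings rate itself is a modest ~13%.
print(f"{pct_of_limit(40 * TB, 6 * TB):.0f}% of logical-data limit")
```

This shows why a low savings percentage and a near-limit warning are not contradictory: the limit counts all logical data processed, deduplicated or not.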