
Deduplication's Logical Data Limit nearly reached

parityerror

Hi,

I've come across an interesting error on my filer, and I'm not quite sure which limit is nearly reached.

It is a FAS3240 running ONTAP 7-Mode 8.1.4P1. The aggregates are 64-bit and quite large (62TB). So far I have only been able to find TR-3505, which refers to older ONTAP versions, so those limits aren't really applicable, and Hardware Universe doesn't go into specifics for the dedupe limits. TR-3505 did mention that if more data is put into the volume than can be deduped, the remainder of the data will simply remain undeduplicated.

My main questions are

- Which limit is being approached (i.e. is it the limit on the total logical data that can be deduped)? 13% (6TB) seems quite a low space saving if we are already hitting a limit. This is the largest of the volumes on this system, so perhaps it is simply a case of high utilisation.

- What are the implications of this limit being reached (i.e. will the rest of the data simply no longer be deduplicated)?

Tue Jul 29 14:33:47 EST [TOASTER:sis.logical.limit.near:notice]: Deduplication engine's logical data limit is nearly reached on volume abc_02_02.

TOASTER> df -S abc_02_02

Filesystem                used       total-saved    %total-saved    deduplicated    %deduplicated    compressed    %compressed

/vol/abc_02_02/     46378500288        6825984656             13%      6825984656              13%             0             0%

TOASTER> df -Vg abc_02_02

Filesystem               total       used      avail capacity  Mounted on

/vol/abc_02_02/        46079GB    44353GB     1726GB      96%  /vol/abc_02_02/

snap reserve               0GB        0GB        0GB     ---%  /vol/abc_02_02/..

TOASTER> aggr show_space t02_aggr02

Aggregate 't02_aggr02'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG           A-SIS          Smtape

  72784624640KB    7278462440KB             0KB   65506162200KB             0KB     569484220KB             0KB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee

abc_02_02                   47259792528KB   47096419876KB            none

Aggregate                       Allocated            Used           Avail

Total space                 47259792528KB   47096419876KB   17674614816KB

Snap reserve                          0KB             0KB             0KB

WAFL reserve                 7278462440KB     804435756KB    6474026684KB

TOASTER> df -S

Filesystem                used       total-saved    %total-saved    deduplicated    %deduplicated    compressed    %compressed

/vol/vol0/             9164856                 0              0%               0               0%             0             0%

/vol/abc_vfiler02_root/      46964                 4              0%               4               0%             0             0%

/vol/abc_02_04/     41661782260        1530436996              4%      1530436996               4%             0             0%

/vol/abc_02_06/     46772132572         831851744              2%       831851744               2%             0             0%

/vol/abc_02_02/     46528623924        6825665872             13%      6825665872              13%             0             0%

/vol/abc_02_08/     5337547956                 0              0%               0               0%             0             0%

Many thanks

1 ACCEPTED SOLUTION

aborzenkov

See TR-3958 for a more up-to-date version. The deduplication logical data limit is the maximum volume size, which is 50TB for 8.1.4 on a FAS3240. It has nothing to do with the deduplication rate: if you have 50TB of data that cannot be deduplicated, you still have 50TB of data that must be processed by the engine.
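
As a rough sanity check (my arithmetic, based on the df -S output you posted): the logical data in the volume is approximately the physical used space plus the dedupe savings, i.e. 46,528,623,924 KB + 6,825,665,872 KB ≈ 53,354,289,796 KB, or roughly 49.7TB. That is just under the 50TB maximum volume size, which is consistent with the sis.logical.limit.near warning.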

When the deduplication limit is reached, extra data won't be deduplicated (or compressed) until the logical amount of data falls below the limit.
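
If you want to monitor this directly, sis status -l should include a "Logical Data" line that reports the current logical size against the limit (a sketch; exact fields vary by release):

TOASTER> sis status -l /vol/abc_02_02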


parityerror

Thanks for the updated TR and the clarification on what occurs when this limit is reached.

Good to know we aren't at risk of filesystem writes being blocked; rather, we will lose dedupe benefits beyond the 50TB mark.
