It all depends on the version of Data ONTAP you are running and the model of filer you own. For instance, we run Data ONTAP 7.2.6 on a FAS3020 and the dedupe limit is a 1TB volume; if we were to upgrade to 7.3.1 (which isn't in General Deployment yet), we could dedupe a 2TB volume.
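If it helps, here is a rough Python sketch of that kind of model/version lookup. The table and the function name are mine, and the only entries are the figures quoted in this thread, so treat it as illustrative rather than an official list.

```python
# Illustrative only: maximum flexible-volume size (in TB) on which
# deduplication can be enabled, keyed by (filer model, Data ONTAP release).
# The entries are just the figures quoted in this thread -- the real table
# in NetApp's documentation covers many more models and releases.
MAX_DEDUPE_VOL_TB = {
    ("FAS3020", "7.2.6"): 1,
    ("FAS3020", "7.3.1"): 2,
}

def can_dedupe(model: str, ontap: str, vol_size_tb: float) -> bool:
    """Return True if a volume of this size may have dedupe enabled."""
    limit = MAX_DEDUPE_VOL_TB.get((model, ontap))
    if limit is None:
        raise KeyError(f"no limit recorded here for {model} on {ontap}")
    return vol_size_tb <= limit

print(can_dedupe("FAS3020", "7.2.6", 1.0))   # True
print(can_dedupe("FAS3020", "7.2.6", 1.5))   # False: over the 1 TB cap
print(can_dedupe("FAS3020", "7.3.1", 1.5))   # True after upgrading to 7.3.1
```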
NetApp deduplication is the #1 implementation of deduplication on primary storage, meaning it is being used on production systems to deduplicate active data. With well over 30,000 licenses installed, it is a proven technology.
A key factor in the success of NetApp deduplication on primary storage is that the system resources of each storage system model, such as system memory, are taken into account so that they are not oversubscribed. NetApp deduplication for FAS therefore uses different maximum volume sizes for different models to help ensure resource availability, so that the performance of the primary storage system is maintained.
It is worth noting that this maximum volume size is a limit on the physical size of the volume only. That is to say, even though a volume may be limited to 3TB in size, it can still store more than 3TB of deduplicated data. For example, you might see 5TB of data being stored, but it would only be using 2TB of storage thanks to deduplication.
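To make that logical-versus-physical distinction concrete, here is a tiny Python sketch using the 5TB / 2TB example above; the function is purely illustrative.

```python
def dedupe_summary(logical_tb: float, physical_tb: float) -> str:
    """Summarise how much space deduplication is saving on a volume."""
    saved_tb = logical_tb - physical_tb
    ratio = logical_tb / physical_tb
    pct = 100 * saved_tb / logical_tb
    return (f"{logical_tb} TB stored in {physical_tb} TB on disk: "
            f"{saved_tb:.1f} TB saved ({pct:.0f}%), {ratio:.1f}:1 ratio")

# The example from above: 5 TB of data occupying only 2 TB of physical storage.
print(dedupe_summary(5, 2))
# -> "5 TB stored in 2 TB on disk: 3.0 TB saved (60%), 2.5:1 ratio"
```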
Below are the maximum volume sizes by model and version of Data ONTAP.
Data ONTAP 7.2.X (Starting with 18.104.22.168) and Data ONTAP 7.3.0
Where do these limits come from and why do they vary from system to system? For example, why is the limit 4TB on the FAS 3140 and 16TB on the FAS 3170?
Is there any workaround for this? For example, if I have a bunch of VMware boot images that would dedupe down to 5TB of real disk space, is there any way to do that (on a FAS 3140)? If the data that's common between the images is about 1TB, it would be a shame to have to duplicate that data several times over because of this limit.
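To put rough numbers on why that duplication would hurt, here is a hypothetical Python sketch; the function and the assumption that the ~1TB of common blocks would have to be stored once in every volume the images get split across are mine, not anything NetApp states.

```python
def split_footprint(deduped_total_tb: float, common_tb: float, n_volumes: int) -> float:
    """Rough physical footprint if data that would dedupe to `deduped_total_tb`
    on a single volume is instead split across `n_volumes`, with the common
    blocks (`common_tb`) stored once per volume rather than once overall."""
    unique_tb = deduped_total_tb - common_tb
    return unique_tb + common_tb * n_volumes

# 5 TB of deduped data with ~1 TB of common blocks, split over 1..4 volumes:
for n in (1, 2, 3, 4):
    print(f"{n} volume(s): {split_footprint(5, 1, n)} TB")
# 1 volume: 5 TB ... 4 volumes: 8 TB -- the common data ends up stored 4 times.
```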
NetApp deduplication for FAS uses different max volume sizes for different models to help ensure resource availability so that the performance of the primary storage system is maintained.
Does this relate to system resources during the actual deduplication run, or outside of that process? The former may or may not be a problem: in a non-24/7 environment, hammering the system for, say, 8 hours just to dedupe the data could be perfectly feasible. The latter is the subject of discussion in two separate threads here, and I have yet to hear a firm answer on it.
For example, you might see 5 TB of data being stored, but it would only be using 2TB of storage thanks to deduplication.
Well, nice. The problem is that A-SIS is post-process deduplication, so if the above scenario happens on one of the smaller filers, it may mean repeatedly adding un-deduplicated data to the volume, running A-SIS against it, adding more data, and so on. A bit tedious, and not every admin will have the time or patience to actually do this.
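Just to illustrate how many passes that staged approach could take, here is a purely conceptual Python sketch. The 4:1 dedupe ratio and the other numbers are made up, and the real A-SIS runs would of course be kicked off on the filer itself, not from a script like this.

```python
def passes_needed(logical_tb: float, volume_cap_tb: float,
                  dedupe_ratio: float, max_passes: int = 100) -> int:
    """Estimate how many copy-then-dedupe passes it takes to land
    `logical_tb` of data on a volume capped at `volume_cap_tb`, assuming
    each post-process run shrinks the newly copied data by `dedupe_ratio`."""
    used_tb = 0.0           # physical space consumed after the last A-SIS run
    remaining = logical_tb  # logical data still waiting to be copied in
    passes = 0
    while remaining > 0:
        if passes >= max_passes:
            raise RuntimeError("data will not fit even after deduplication")
        chunk = min(remaining, volume_cap_tb - used_tb)  # copy what fits now
        used_tb += chunk / dedupe_ratio                  # run A-SIS against it
        remaining -= chunk
        passes += 1
    return passes

# Made-up numbers: 5 TB of data, a 2 TB volume cap, an assumed 4:1 dedupe ratio.
print(passes_needed(5, 2, 4))  # -> 4 separate copy/dedupe iterations, not one
```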
Do not get me wrong - I love A-SIS, but what I am saying is that capping volume sizes can make people's lives harder, so the question is whether there is a good reason behind it.
Why, then, can't you run A-SIS on a volume that was ONCE over the limit? We migrated from a 3020 with a main volume of c. 4.5TB to a 3140 with new shelves, and in the process I split the data across two different volumes using SnapMirror so as to be able to use A-SIS. Unfortunately, as you can't retrofit qtrees, we had to SnapMirror the entire volume and then delete the unwanted parts. All of the resulting volumes are well below 3TB, but A-SIS will not work on the volume that was briefly over 4TB, or indeed on a completely fresh copy of it!
Unfortunately I have no good answer to your question (hopefully someone else does). The only thing that comes to mind would be to go back to square one and use QSM instead of VSM, which will definitely leave behind all the legacy characteristics of the original volume.
What particular error message are you getting when trying to run A-SIS on the new volume?