De-Dupe volume sizes
2009-06-19 09:36 AM
My understanding is that there are limits for the size of volumes that can be de-duped. Is that right? What are those limits and where do they come from?
Thanks!
Dave
28 Replies
2009-06-19 09:46 AM
It all depends on which version of Data ONTAP you are running and the model of filer you own. For instance, we run DOT 7.2.6 on a FAS3020 and the dedupe limit is a 1TB volume; however, if we were to upgrade to 7.3.1 (which isn't in General Deployment yet), we could dedupe a 2TB volume.
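If you're not sure what you're running, both are quick to check from the filer console. A minimal sketch (the prompt and output format will vary by system):

```
filer> version     # prints the Data ONTAP release, e.g. NetApp Release 7.2.6
filer> sysconfig   # the first lines include the system model, e.g. FAS3020
```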
2009-06-19 09:56 AM
Information on volume sizes for all supported platforms with various Data ONTAP versions is available in the Dedupe DIG (TR-3505): http://media.netapp.com/documents/tr-3505.pdf.
---
NetApp deduplication is the #1 implementation of deduplication on primary storage, meaning it is being used on production systems to deduplicate active data. With well over 30,000 licenses installed, it is a proven technology.
A key factor in the success of NetApp deduplication for primary storage is that the system resources of each storage system model, such as system memory, are taken into account so that they are not oversubscribed. NetApp deduplication for FAS uses different max volume sizes for different models to help ensure resource availability, so that the performance of the primary storage system is maintained.
It is worth noting that this max volume size is a limit on the physical size of the volume only. That is to say, even though a volume may be limited to 3TB in size, it is still capable of storing more than 3TB of logical data. For example, you might see 5TB of data being stored while using only 2TB of physical storage, thanks to deduplication.
Below are the max volume sizes by model and version of Data ONTAP.

Data ONTAP 7.2.x (starting with 7.2.5.1) and Data ONTAP 7.3.0:

| Models | Max volume size |
| --- | --- |
| FAS2020 | 0.5TB |
| FAS3020, N5200, FAS2050 | 1TB |
| FAS3050, N5500 | 2TB |
| FAS3040, FAS3140, N5300 | 3TB |
| R200 | 4TB |
| FAS3070, N5600, FAS3160 | 6TB |
| FAS6030, FAS6040, N7600, FAS3170 | 10TB |
| FAS6070, FAS6080, N7800 | 16TB |

Data ONTAP 7.3.1 or higher:

| Models | Max volume size |
| --- | --- |
| FAS2020 | 1TB |
| FAS3020, N5200, FAS2050 | 2TB |
| FAS3050, N5500 | 3TB |
| FAS3040, FAS3140, N5300 | 4TB |
| R200 | 4TB |
| FAS3070, N5600, FAS3160 | 16TB |
| FAS6030, FAS6040, N7600, FAS3170 | 16TB |
| FAS6070, FAS6080, N7800 | 16TB |
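For completeness, enabling and running dedupe on a volume that is within the limit looks roughly like this from the console (a sketch; the volume name is an example, and the sis commands assume the A-SIS license is installed):

```
filer> sis on /vol/vmware_vol        # enable dedupe on the volume
filer> sis start -s /vol/vmware_vol  # -s also scans the data already in the volume
filer> sis status /vol/vmware_vol    # watch progress until the run returns to Idle
filer> df -s /vol/vmware_vol         # show used vs. saved space and the %saved
```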
---
Thanks, Carlos!
A couple of follow-up questions:
Where do these limits come from, and why do they vary from system to system? For example, why is the limit 4TB on the FAS3140 and 16TB on the FAS3170?
Is there any workaround for this? For example, if I have a bunch of VMware boot images that would dedupe down to 5TB of real disk space, is there any way to do that (on a FAS3140)? If the data that's common between images is about 1TB per image, it would be a shame to have to duplicate that data a bunch of times because of this limit.
---
> Where do these limits come from and why do they vary from system to system? For example, why is the limit 4TB on the FAS 3140 and 16TB on the FAS 3170?
The limits come from the available resources on the systems (CPU, memory, etc.).
> Is there any work-around for this?
The limits are volume-based, so one can break things into multiple volumes. That obviously has some trade-offs, in terms of deduplication as well as other areas, but is certainly a possibility.
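As a sketch of what that might look like on a FAS3140 (4TB limit on 7.3.1), with volume, aggregate, and size values as examples only:

```
filer> vol create vmboot1 aggr1 3t   # each volume stays under the platform's dedupe limit
filer> vol create vmboot2 aggr1 3t
filer> sis on /vol/vmboot1           # dedupe is per volume, so blocks are only shared
filer> sis on /vol/vmboot2           # within a volume, never across volumes
```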
---
Hi Carlos,
Two things:
> NetApp deduplication for FAS uses different max volume sizes for different models to help ensure resource availability so that the performance of the primary storage system is maintained.
Does this relate to system resources during the actual deduplication run, or outside of that process? The former may or may not be a problem: in a non-24/7 environment, hammering the system for, say, 8 hours just to dedupe the data could be 100% feasible. The latter is the subject of discussion in two separate threads here, and I have yet to hear a firm answer on it.
> For example, you might see 5 TB of data being stored, but it would only be using 2TB of storage thanks to deduplication.
Well, nice. The problem is that A-SIS is post-process deduplication, so if the above scenario happens on one of the smaller filers, it may mean repeatedly adding un-deduped data to the volume, running A-SIS against it, adding more data, and so on. A bit tedious, and not every admin will have the time or patience to actually do this.
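Roughly, the cycle I mean (a sketch; the volume name is an example):

```
# repeat until all the data has been migrated in:
filer> sis start -s /vol/bigvol   # dedupe the batch of data copied in so far
filer> sis status /vol/bigvol     # wait for the run to go back to Idle
# ...then copy in the next batch and run sis start again...
```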
Don't get me wrong - I love A-SIS, but what I am saying is that capping volume sizes can make people's lives harder, so the question is whether there is a good reason behind it.
Regards,
Radek
---
The volume size limits help ensure that the actual process of deduplication does not oversubscribe the system resources.
Working with post-process deduplication means that you must take the initial size of the un-deduped data into consideration.
You will need to consider best practices for each specific scenario, as described in TR-3505, mentioned in a previous reply.
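For example, the dedupe run can be scheduled for off-peak hours so that the resource hit lands outside the business day. A sketch (the schedule and volume name are examples):

```
filer> sis config -s sun-sat@23 /vol/vmware_vol  # run dedupe every night at 23:00
filer> sis config /vol/vmware_vol                # verify the configured schedule
```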
---
Why then can't you run A-SIS on a volume that was ONCE over the limit? We migrated from a 3020 with a main volume of c. 4.5TB to a 3140 with new shelves, and in the process I split the data across two different volumes using SnapMirror so as to be able to use A-SIS. Unfortunately, as you can't retrofit qtrees, we had to SnapMirror an entire volume and then delete the unwanted parts. All the resulting volumes are well below 3TB, but A-SIS will not work on the volume that was briefly over 4TB, or indeed on a completely fresh copy of it!
---
Interesting stuff - we are entering a wild area!
Unfortunately I have no good answer to your question (hopefully someone else does). The only thing that comes to mind would be to go back to square one and use QSM instead of VSM, which will definitely leave behind all the legacy characteristics of the original volume.
What particular error message are you getting when you try to run A-SIS on the new volume?
Regards,
Radek
---
Hi Nigel,
Did you consider using QSM to mirror the qtrees into new volumes? That might solve this. I know I don't have enough information to make a qualified statement, and it's probably too late as well, but for the future maybe QSM would be a good fit for you?
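Something like this, run from the destination filer (a sketch; filer, volume, and qtree names are examples, and the destination qtree must not already exist):

```
dstfiler> snapmirror initialize -S srcfiler:/vol/bigvol/qtree1 dstfiler:/vol/newvol/qtree1
```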
Cheers,
Eric
---
Erm... what version of Data ONTAP are you running? This restriction was specifically lifted in 7.3.1+ (related to moving the fingerprint hashes from the volume to the aggregate, I believe).
---
Perfect table... I've found this information in multiple places, but that's the nicest representation so far.
And with 7.3.1+, it's ever so much less painful, since you can shrink a volume back down under the limit to turn on dedupe.
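In other words, roughly (a sketch; the volume name and size are examples):

```
filer> vol size bigvol 3t        # shrink the volume back under the platform's dedupe limit
filer> sis on /vol/bigvol        # the enable now succeeds
filer> sis start -s /vol/bigvol  # dedupe the existing data
```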
---
Hi:
What is the max flexvol size for deduplication for a GF960 system?
I do not see any guidelines published for this platform.
Thanks,
Chris
---
I don't believe A-SIS is supported on the 900 series.
---
chriskranz wrote:
> I don't believe A-SIS is supported on the 900 series.
Correct, but the R200 is supported, and it is basically 900-series hardware. Could it perhaps be that the ONTAP software can't handle the NearStore personality license, which is required for the A-SIS license, on the 900 series?
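You can see what is installed from the console; if I remember right, the relevant license names are a_sis and, where the platform requires it, nearstore_option (a sketch, worth verifying for your platform):

```
filer> license   # lists installed licenses; look for a_sis and nearstore_option
```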
---
A couple of things. Dedupe is not SUPPORTED on this hardware in the 7.2 code line, but it will run; I have a 940 in my lab and it works fine. As of the 7.3 code line the functionality was disabled.
As to why you can't dedupe a volume once it goes over the limit: you can't, and the reason it doesn't make any sense is that it is a bug. I think the bug was fixed in 7.3, but I'm not sure; I know it existed in 7.2.
Regards,
Aaron
---
Hmm... I think I might be missing something here, but dedupe is definitely supported in the 7.2.x code line (7.2.5.1 minimum, with 7.2.6.1 greatly preferred... maybe even just one of the P releases).
As to not being able to dedupe a volume if it goes over the limit in 7.2, my understanding is that this wasn't so much a bug as an architectural limitation around how the dedupe metadata was stored in 7.2 (in the volume). Once the metadata was moved to the aggregate in 7.3, that was no longer an issue (so you could shrink the volume back down under the limit and enable dedupe).