ONTAP Discussions

Volumes too large

lmarincic

So, I am fairly new to NetApp and just found out that the per-volume limit on my 6080 cluster is 16TB. Does that mean a 16TB volume can be deduped, or does it mean that there can only be 16TB of deduped data per volume?

If it's the latter, will there be any negative effect if I go into each volume and shrink it to a more ideal size?

Should I re-create the volumes from scratch?  It's all VMware and I can migrate the virtual machines to another volume so there is no worry about data loss.

Thanks

10 REPLIES

radek_kubka

Hi & welcome to the forums!

What version of ONTAP are you running - 7.3.2? If that's the case, your numbers are:

Maximum size of volume with deduplication (TB) = 16

Total data size of volume with deduplication (TB) = 32

(http://now.netapp.com/NOW/knowledge/docs/ontap/rel732/html/ontap/onlinebk/GUID-2FD76C49-08CE-47BA-972D-E0F578DF6575.html)

Hope it helps!

Regards,
Radek

bkoopmans

I may be confused here, but isn't the absolute maximum size for aggregates 16TB in 7.x? Doesn't this mean that the maximum volume size is also 16TB? So what's the deal with that 32TB limit?

radek_kubka

Doesn't this mean that the maximum volume size is also 16TB? So what's the deal with that 32TB limit?

I thought that would be clear by now

Imagine a 16TB volume with two identical files, 1TB each. Say you dedupe them and all blocks are truly identical, so you are consuming just 1TB of disk space in the volume while holding 2TB of 'front-side' (un-deduped) data. If you take it further, you can add another and another identical file, and after each dedupe scan you'll still be using just 1TB of disk space. After adding the 32nd file you'll hit this 32TB limit, yet there will still be 15TB of free space available in the volume.

I know this is an extreme (and rather unrealistic) example, but hopefully it describes the idea!
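
To put rough numbers on the same arithmetic, here is a small Python sketch (purely illustrative figures from the example above, nothing ONTAP-specific):

```python
# Illustrative sketch of the example above: identical 1TB files added to a
# 16TB deduplicated volume. Logical ("front-side") data grows with every file,
# but the physical footprint stays at the single shared 1TB copy.

FILE_SIZE_TB = 1
VOLUME_SIZE_TB = 16
LOGICAL_LIMIT_TB = 32   # total data size allowed in a deduplicated volume

files = 0
logical_tb = 0
physical_tb = FILE_SIZE_TB  # one copy of the shared blocks on disk

while logical_tb + FILE_SIZE_TB <= LOGICAL_LIMIT_TB:
    files += 1
    logical_tb += FILE_SIZE_TB

print(f"{files} files = {logical_tb}TB logical data, {physical_tb}TB on disk, "
      f"{VOLUME_SIZE_TB - physical_tb}TB still free in the volume")
# -> 32 files = 32TB logical data, 1TB on disk, 15TB still free in the volume
```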

Regards,
Radek

anthonyfeigl

If you are NOT running 7.3.2, keep this in mind.

Anthony

http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=308043

lmarincic

Ah, thanks for the links.

I'm currently running 7.3.1

I guess the next question is: can I shrink the volumes, or should I re-create them from scratch? I know I can shrink them generally speaking, but I'm not sure whether the fact that dedupe was already turned on will have any impact.

radek_kubka

The bug in question is related to the amount of un-deduped data exceeding 13TB, so shrinking volumes will make no difference whatsoever.

Having said that, it's a tad unclear to me whether this bug remains unfixed for 7.3.1 specifically:

  • Data ONTAP 7.3P1 (First Fixed) - Fixed
  • Data ONTAP 7.3.2 (GD) - Fixed
  • Data ONTAP 8.0RC3 (RC) - Fixed

Anyone?

Regards,
Radek

lmarincic

There is currently barely anything in any of the volumes I've created. The problem is that the three volumes are between 6TB and 7TB each. I will definitely be able to hit the dedupe limit if it's saving me 75% of space.

How do you suggest I proceed?

I'm also still unclear as to what the 32TB actually means.  Does it mean that it can hold 16TB of RAW data + 16TB of deduped data and neither can be exceeded?

radek_kubka

First of all, according to this document - http://media.netapp.com/documents/tr-3505.pdf - the ONTAP 7.3.1 caps for the 6080 are identical to those for 7.3.2.

The maximum total data limit of 32TB in a deduplicated volume means you can store that much un-deduped data. Using simple maths, if your saving is 75%, then you will need an 8TB volume to store that data after deduplication.
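
As a quick sanity check of that figure (just restating the maths above with assumed numbers):

```python
# Back-of-the-envelope check of the figure above (assumed values from this thread).
logical_limit_tb = 32          # max un-deduped data in a deduplicated volume
dedupe_savings = 0.75          # assumed 75% space saving

physical_tb_needed = logical_limit_tb * (1 - dedupe_savings)
print(physical_tb_needed)      # 8.0 -> an 8TB volume holds the full 32TB of logical data
```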

And BTW - you are perfectly OK to shrink or grow volumes as you see fit.

pascalduk

Having said that, it's a tad unclear to me whether this bug remains unfixed for 7.3.1 specifically?

On the bug page Anthony refers to, the "fixed in version" section has a link to a page showing all releases where this bug is fixed. Direct link: http://now.netapp.com/NOW/cgi-bin/bugrellist?bugno=308043

anthonyfeigl

You should be able to confirm the FIX using the Release comparison tool.

I expect that would be highly accurate.

I did a ton of research on dedupe for my company, and the 13TB bug was not fixed under 7.3 or 7.3.1 at the time (7.3.2 was not available then).

I am sure NetApp has created a P fix for 7.3.1, or you just have to go with 7.3.2.


Anthony
