ONTAP Discussions

Volume's usable space

osp

Hi all,

 

I am an end-user of NetApp storage.

 

At my company, my area purchased 432 TB of NetApp storage.

 

Our storage team has taken the disks and deployed/installed them, and now, of the 432 TB of storage, 37 TB has been set aside for spare disks and 70 TB has been reserved for 'NetApp health'. They say it is an industry standard for array health and performance to carve out ~20% of capacity for background and OS processes.

 

This question is coming from me -- an end-user.  I am not a storage expert. I did try to understand this:

 

https://kb.netapp.com/support/s/article/ka21A0000000gB5QAI/faq-how-is-space-utilization-managed-in-a-data-ontap-san-environment?language=en_US

 

My question, please:

 

20% of space reserved for NetApp health seems to me like a sizeable chunk that unfortunately my area cannot take advantage of as usable space. What is NetApp doing behind the scenes to require this amount of disk? I mean, is this space being used as scratch/temp space during NetApp compression runs? Or perhaps for NetApp deduplication? I am simply curious as to the details here. Again, I am not a storage expert, so I hope you can break it down for me!! 🙂  Thanks all!!

5 REPLIES

Jeff_Yao

NetApp doesn't take that much space. I think your 20% is a general rule of thumb to keep the filer performing well, since WAFL is a write-"any"where file system and it needs free space to do that. But of course, you are still able to use that space too. And now that there are so many features like compression etc., the 20% doesn't quite apply anymore.

sgrant

Hi, as already mentioned, leaving 20% free in the aggregate is a best practice to maintain optimal performance and ensure availability. You can, however, use this space; it is not reserved. Although, just like any filesystem that gets close to full, additional work may be required to restore performance once free space becomes available again.

 

However, if you are referring to the raw capacity, i.e. the capacity of the physical disks you purchased vs. the actual usable capacity available to you, then yes, the filesystem (WAFL) does reserve some space. It is mentioned in the link you provided:

 

How is the space in an aggregate allocated?

  • WAFL reserve: WAFL reserves 10% of the total disk space for aggregate-level metadata and performance. The space used for maintaining the volumes in the aggregate comes out of the WAFL reserve, and it cannot be changed.
  • Aggregate Snapshot reserve: the amount of space reserved for aggregate Snapshot copies.

 

By default, there should no longer be any space in the aggregate reserved for snapshots, except for the dedicated root aggregate on each node, so from the diagram you should only "lose" 10% to WAFL.

 

And to confirm, the WAFL 10% is automatically reserved, meaning the usable space displayed in System Manager (or any other output) already excludes this figure. You can therefore use all of the aggregate space if necessary - however, please see the first paragraph.
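
If it is useful, here is a quick sketch of how to check that yourself from the cluster shell (clustered ONTAP assumed; "aggr1" is just a placeholder aggregate name, and exact field names can vary a little between releases):

    cluster1::> storage aggregate show -aggregate aggr1 -fields size,usedsize,availsize
    cluster1::> storage aggregate show-space

The size and available figures reported there already have the 10% WAFL reserve taken off, and show-space breaks the used portion down into things like volume footprints, aggregate metadata and the aggregate Snapshot reserve.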

 

Hopefully that sheds a little more light for you; otherwise, please ask.

 

Thanks,

Grant.

osp

Thanks Grant for the info.

 

 

So, as mentioned, we have 432 TB of storage; 37 TB has been set aside for spare disks, and 70 TB has been set aside for aggregate-level metadata and performance.

 

The 70 TB represents 17.7% of our disk space (432 - 37 = 395; 70 / 395 = 17.7%).

 

Is it reasonable for us to lower this from 17.7% all the way down to 10%? Are there any pitfalls or risks? We are desperate to add more user disk space, but at the same time we want to adhere to best practices (within reason!! =). Thank you kindly!

 

 

sgrant

I'm not 100% sure where your figure of 70 TB for 'NetApp health' comes from. In my previous post I tried to explain how much space in the aggregate you can use.

 

When you create an aggregate, you define the number of disks and how many RAID groups (i.e. how many disks will be used for parity rather than data). The 10% WAFL reserve is taken behind the scenes and is not negotiable. You may, however, use the entire rest of the aggregate space for data if you so desire, although that is strongly not recommended (see my previous post). There is no 'NetApp health' element beyond the best practice of not exceeding 80-90% full, for availability and performance reasons.
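
As a rough illustration (clustered ONTAP assumed, and purely a sketch rather than anything you need to run), the following command lists each aggregate's RAID groups and which disks within them are data or parity, which is what actually determines the usable capacity:

    cluster1::> storage aggregate show-status

The aggregate sizes you see elsewhere are what is left after RAID parity and the WAFL reserve have been taken off.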

 

The output of df -A -t will give you the amount of space (in TB) that is available to you for creating volumes and making available for data use. You cannot change this amount; it is dictated by the number of disks you have available and your RAID group layout.

 

Have you considered thin provisioning, where you can over-allocate the aggregate by creating volumes larger than the actual capacity available? This comes with its own warning: the aggregate capacity must be constantly monitored, with plans in place to address any lack of space, either by purchasing new disks or by freeing up space by moving volumes to different aggregates or deleting old/unused volumes.
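
To illustrate, here is a minimal sketch of creating a thin-provisioned volume (the vserver, volume and aggregate names and the 10TB size are made-up placeholders):

    cluster1::> volume create -vserver vs1 -volume vol_users -aggregate aggr1 -size 10TB -space-guarantee none

With -space-guarantee none the volume takes no space from the aggregate up front; space is only consumed as data is written, which is what allows the over-allocation described above and also why the aggregate then needs careful monitoring.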

 

Also, storage efficiencies will allow you to store more data in the same space. Depending on your controller model and ONTAP version, you can implement deduplication, compression and compaction. These can offer great savings, especially for virtual environments and CIFS data.
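
As a rough sketch, enabling deduplication and compression on an existing volume looks something like this (the vserver and volume names are placeholders, and the exact options available depend on your platform and ONTAP release, so please check the documents below first):

    cluster1::> volume efficiency on -vserver vs1 -volume vol_users
    cluster1::> volume efficiency modify -vserver vs1 -volume vol_users -compression true -inline-compression true

Inline data compaction is handled separately (and I believe it is on by default on AFF systems), so again check the guides linked below for your release.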

 

For both thin provisioning and storage efficiencies, please see the Logical Storage Management Guide for the version of ONTAP in use; the ONTAP 9 link is here: https://library.netapp.com/ecm/ecm_download_file/ECMLP2492715

 

Also, TR-3966: Data Compression and Deduplication Deployment and Implementation Guide for Clustered Data ONTAP: https://www.netapp.com/us/media/tr-3966.pdf

 

Hopefully this helps you maximise the space you have.

 

Thanks,

Grant.

osp

So we have 395 TB of total disk. My IT department reserved a full 70 TB for NetApp health. This represents 17.7% of the total.

 

I am more of an end-user with some technical knowledge, but not an expert.  I know for a fact we use dedup and compression features.

 

There is a severe strain on disk space, and I desperately need more usable disk space. To me as an end-user, 70 TB is a huge amount to not be able to use.

 

This is why I was asking the question. Thank you so much for your replies.
