ONTAP Hardware

WAFL volume usage > 90% ok?

ralfgross

Hello,

We store large amounts of video data on a FAS3140 filer (ONTAP 8.0.1). For that I create large aggregates and volumes of up to 8 TB. There is one 37 TB aggregate with 4x 8 TB volumes (no space reservation, 24 of 37 TB used at the moment); the usage of the volumes is 60-90%. Now I need to create an additional volume. I know that I could create the volume with 8 TB and overcommit, but the growth is a bit hard to predict, and most of the time a volume fills up really fast. So I fear that the aggregate may fill up and I won't be able to expand or shrink the volumes if needed.

Is a volume usage of 90-100% a problem with a WAFL filesystem? I know other filesystems where usage should not exceed 80-85%. The data comes in batches which cannot be split, so at some point the user tells me that volume xyz will not grow further. If it's OK to fill a volume to more than 90%, I will later shrink the other volumes that still have some space left and grow the new volume.
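
For what it's worth, my rough plan in 7-mode syntax (the volume and aggregate names are just placeholders):

    vol create newvol -s none bigaggr 8t    # thin-provisioned volume, no space guarantee
    vol size vol2 -1t                       # later: shrink a volume that has space left
    vol size newvol +1t                     # ...and grow the new one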

7 REPLIES

shaunjurr

Hi,

From personal experience and benchmarking in this area, I would not recommend running aggregates over 90% full where you need optimal performance. I run a lot of CIFS shares on filesystems that are a little too full, and you will see a significant performance degradation once you cross the 90% barrier. For something like video streaming, that could be very disruptive.

If the older data is not very active and premium performance isn't a requirement, you might get some savings out of deduplication, but I'm not sure how much, if any, can be achieved with video formats...

Hope this helps...

ralfgross

shaunjurr wrote:


From personal experience and benchmarking in this area, I would not recommend running aggregates over 90% full where you need optimal performance. I run a lot of CIFS shares on filesystems that are a little too full, and you will see a significant performance degradation once you cross the 90% barrier. For something like video streaming, that could be very disruptive.

If the older data is not very active and premium performance isn't a requirement, you might get some savings out of deduplication, but I'm not sure how much, if any, can be achieved with video formats...

I hope that the aggregates will not exceed the 90% mark. But what about the volumes?

shaunjurr

Hi,

If you "thin-provision" then you basically just have one place to watch: the aggregate filling.  Basically, the rule has been to either keep all of the volumes in an aggregate under 90% or the aggregate itself under 90% full.  NetApp will often recommend 80%, but at 90% you start to see problems.

jb2cool01

I believe that volume usage can go above 90% without causing any issues, unless it fills up completely, at which point the volume will go offline. The aggregate should not be allowed to fill up that much, though. I believe ~80% is a line that shouldn't be crossed when it comes to the aggregate.

Do you thin provision? That might free up some space back to the aggregate.
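
If the volumes were created with a space guarantee, you should be able to switch them over in place (7-mode syntax, the volume name is just an example):

    vol options vol1 guarantee none    # turn the volume into a thin-provisioned one
    vol options vol1                   # verify the guarantee setting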

ralfgross

Yes, I use thin provisioning. But this filer does not store typical end-user data, and dedup only saves 1-3%. I'll have a look at the aggregate space usage, but I guess we will fill up 90% of its space.
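
(I checked the dedup savings per volume with df -s:)

    df -s    # lists used and saved space, plus the %saved, for each volume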

Darkstar

From my experience, the problem is worse with >90% full volumes than with >90% full aggregates. If the volume is too full, the background reallocation task can't do its job correctly, and manual defragmentation (with "reallocate start") will also be impossible. This is the only reason why I always suggest keeping the volumes below 85-90%: reallocation runs within each volume and needs free space to work with.
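
For example (7-mode, the path is a placeholder):

    reallocate measure -o /vol/vol1    # one-off check of the volume's optimization level
    reallocate start -f /vol/vol1      # full reallocation scan; needs free space in the volume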

-Michael

jasonp

I have found the 90% recommendation for aggregate usage in TR-3647:

http://www.netapp.com/us/media/tr-3647.pdf

Page 3

"The Data ONTAP data layout engine, WAFL®, optimizes writes to disk to improve system performance and disk bandwidth utilization. WAFL optimization uses a small amount of free or reserve space within the aggregate. For write-intensive, high-performance workloads we recommend leaving available approximately 10% of the usable space for this optimization process. This space not only ensures high-performance writes but also functions as a buffer against unexpected demands of free space for applications that burst writes to disk. "

The FlexVols within the aggregate can safely be run at 100% (beware of Bug 156577).
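
If you do run them that full, autosize can act as a safety valve; a sketch in 7-mode syntax (the name and sizes are just examples):

    vol autosize vol1 -m 9t -i 512g on      # grow in 512 GB steps, up to 9 TB
    vol options vol1 try_first volume_grow  # try growing the volume before deleting snapshots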

~Jason

Message was edited by: Jason Palmer

Regarding the dedupe part of the discussion: https://library.netapp.com/ecm/ecm_download_file/ECMM1277794 (Page 246)

The deduplication metadata can occupy up to 6 percent of the total logical data of the volume, as follows:
• Up to 2 percent of the total logical data of the volume is placed inside the volume.
• Up to 4 percent of the total logical data of the volume is placed in the aggregate.
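
As a rough worked example for one of the 8 TB volumes mentioned above (upper bounds, assuming the volume is completely full of logical data):

    in-volume metadata:     2% of 8 TB = ~160 GB
    in-aggregate metadata:  4% of 8 TB = ~320 GB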
