ONTAP Discussions

DS4243 (2TB SAS x 24)...only yielding 21TB?!?

bobby_gillette

Created aggregate with default raid group = 16, showing 22 data disks, 4 parity... yet the aggregate is only showing 21TB. I'm aware of WAFL overhead, size averaging, etc all reducing space, but somehow I figured I'd be able to present more than 21TB. One thing I'm curious about... I'm showing 8 spares on a vol status -s...

Any ideas?

1 ACCEPTED SOLUTION

fjohn

Let's step through it.  One thing not mentioned is the Data ONTAP version, and whether this is a 32-bit or 64-bit aggregate.

Start with the drive.  "2TB" is the base-10 figure used by disk drive suppliers.  The first step is to convert from base 10 to base 2; this is the same across all storage vendors.  2,000,000,000,000 bytes = 1.819 TB.  We've lost nearly 10% off the top.
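As a quick sanity check, the base-10 to base-2 conversion above works out in a couple of lines (Python, purely illustrative):

```python
# A marketed "2 TB" drive: 2 trillion bytes, converted to binary terabytes.
marketed_bytes = 2_000_000_000_000
binary_tb = marketed_bytes / 2**40   # 2**40 bytes per binary TB (TiB)
print(round(binary_tb, 3))           # prints 1.819
```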

Next comes parity overhead.  Since you chose the default RAID group size of 16, you get one group of 16 drives (the other 8 do not make up a whole RAID group, although you can add drives to the aggregate later to fill a partial group).  Given 24 drives, I would personally have gone with a number like 22.  The max RAID group size for SAS on RAID-DP, without overriding it, is 28.  With a RAID group size of 16, you have 2 parity spindles and 14 data spindles.  With 22, you have 2 parity spindles and 20 data spindles.  So it's either 25.466 TB or 36.38 TB, with either 8 or 2 spares.  Since both of these are above 16TB, I'll assume large aggregates in ONTAP 8.0 or 8.0.1 7-Mode.
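The parity arithmetic above can be sketched like this (the group sizes are the ones discussed in the post; RAID-DP always costs 2 parity drives per group):

```python
PER_DRIVE_TB = 1.819     # per-drive capacity after the base-10 -> base-2 conversion
PARITY_PER_GROUP = 2     # RAID-DP: two parity spindles per RAID group

for group_size in (16, 22):
    data_drives = group_size - PARITY_PER_GROUP
    capacity_tb = data_drives * PER_DRIVE_TB
    print(group_size, data_drives, round(capacity_tb, 3))
    # group of 16 -> 14 data drives, 25.466 TB
    # group of 22 -> 20 data drives, 36.38 TB
```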

Since these are SAS drives, they are formatted with 520 bytes per sector.  The extra 8 bytes in each sector are used to store checksum data.  If these were SATA, the sector size would be 512 bytes and the checksums would take additional blocks.  They're not SATA, they're SAS, so no loss here.

Another thing that happens is that drives are sourced from more than one vendor.  Due to slight differences in geometry, and hence in the number of sectors, drives are typically "right-sized" so that they are interchangeable across the vendors from which they are sourced.  This typically consumes about 2% of the space, and that's true across the storage industry.  25.466 becomes ~24.95, and 36.38 becomes ~35.65.

After that, we reserve 10% of the space for WAFL to do its thing.  You pay 10% of space to optimize write performance.  How much does that buy you?  Check out http://blogs.netapp.com/efficiency/2011/02/flash-cache-doesnt-cache-writes-why.html where I present the results of 100% random write workload tests over time.  That leaves you with 22.45 TB or 32.085 TB.

Last but not least, from the usable space there is a default 5% aggregate reserve.  If you are not using MetroCluster or synchronous SnapMirror, then you can remove the reserve to recoup that 5% (see the link).  With the aggregate reserve, 22.45 TB becomes 21.325 TB, which is what you obtained, and 32.085 TB becomes ~30.48 TB.
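Putting the whole chain together (a sketch using the percentages assumed above; the post rounds at each intermediate step, so the last digits differ slightly):

```python
# End-to-end usable-space estimate: base-2 conversion, right-sizing (~2%),
# WAFL reserve (10%), and the default 5% aggregate reserve.
PER_DRIVE_TB = 2_000_000_000_000 / 2**40   # ~1.819 TB per "2 TB" drive

def usable_tb(data_drives: int) -> float:
    capacity = data_drives * PER_DRIVE_TB  # raw data capacity
    capacity *= 0.98                       # right-sizing (~2%)
    capacity *= 0.90                       # WAFL reserve (10%)
    capacity *= 0.95                       # aggregate reserve (5%)
    return capacity

print(round(usable_tb(14), 2))  # RAID group of 16 -> 21.34, the ~21 TB observed
print(round(usable_tb(20), 2))  # RAID group of 22 -> 30.48
```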

In light of this, I'd recommend using a RAID group size of 22 and removing the aggregate reserve (unless you are using MetroCluster or synchronous SnapMirror).  This would give you 32.085 TB in an aggregate of 22 spindles, plus two hot spares, for a total of 24 drives.

I hope that helps explain where the space goes.

JohnFul

