I have a 2-node FAS8200 cluster that already has a couple of stacks of DS212c and DS224c shelves. I've been approved by management to acquire a DS460c shelf chassis with 20x 16TB FSAS drives.
I'd like some help to figure out what the RAID layout should be for the initial disks, with the expectation of adding about 30 more FSAS disks over the next year or two.
We currently have a bunch of shelves and aggregates that are nearly maxed out and were created without any spares(!), so I'm hoping to migrate volumes into the new aggregate(s) on the DS460c and then clean up the existing aggregates, likely by redefining them so that they have spares.
TL;DR: Getting 20x new 16TB FSAS drives, will be adding more - need thoughts on RAID-TEC group sizing. Thanks!
As I understand it, NetApp engineers/partners do RAID sizing via internal tools these days rather than manually, so I'd suggest engaging them.
At the moment I don't have access to HWU.netapp.com, but I believe the recommended number of disks per raid group, and the spare count, are listed there for each disk model and RAID type (see the TR below, which explains a bit what "recommended" means and by how much you can deviate).
In general, my take on RAID-TEC (I can't back it up with a doc at the moment, just the podcast below) is that it's mainly good for very large raid groups: high-capacity, low-performance workloads.
That sounds like your use case, judging by the disk capacity you ordered. But I don't know your actual workload, so I can't really give concrete advice (and up-to-date NetApp documentation on these drive types is thin).
@TMAC_CTG Since he has an existing array with other disks, he may not really need to spread the new disks across the two controllers. Could he do 15 data + 3 parity + 2 spares, for a total ~240TB aggregate? (Again, I don't have HWU access at the moment.)
Thanks all - this is where I'm really not great with sizing. If I was able to press management for 30 drives instead of 20, would that allow for more efficient RAID-TEC raid groups or aggregates? The goal is low-IOPS, high-capacity, archival-type storage. Generally we see data being written once and then pretty well forgotten, or read back infrequently.
If you were to get 30 drives and you wish to equally spread the capacity between two nodes, you would end up with:
10 Data + 3 Parity + 2 spare per controller = 128.31TB (256.62 total usable)
If you wanted to totally maximize capacity: 1 Aggregate on 1 controller with:
25 Data + 3 Parity + 2 Spares (that's all 30 drives) = 320.77TB
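A quick back-of-the-envelope sketch of the two layouts above. The ~12.83TB usable-per-drive figure is an assumption implied by the numbers quoted (128.31TB / 10 data drives); check HWU for the exact right-sized capacity on your ONTAP version:

```python
# Rough usable-capacity math for the two RAID-TEC layouts above.
# ASSUMPTION: ~12.831 TB right-sized usable per 16TB FSAS drive,
# back-calculated from the figures quoted; verify against HWU.
USABLE_PER_16TB = 12.831  # TB, right-sized, before aggregate/WAFL reserves

def aggr_usable(data_drives, usable_per_drive=USABLE_PER_16TB):
    """Usable aggregate capacity: only data drives count; parity and spares don't."""
    return data_drives * usable_per_drive

# Option 1: split across both controllers (10 data + 3 parity + 2 spares each)
per_node = aggr_usable(10)
print(f"Per controller: {per_node:.2f} TB, total: {2 * per_node:.2f} TB")

# Option 2: one aggregate on one controller (25 data + 3 parity + 2 spares)
print(f"Single aggregate: {aggr_usable(25):.2f} TB")
```

Note that parity and spare drives contribute nothing to usable space, which is why concentrating everything in one big raid group yields more capacity than splitting.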
Personally, I am a fan of using *both* controllers to utilize all CPUs and SAS controllers whenever possible.
Additionally, if your data is unstructured (like home directories and NOT like VMware datastores or databases) you could also use FlexGroups to help spread the load across the controllers and the disks.
Don't be afraid to ask for a quote for twice as many 8TB drives (instead of 16TB). In fact, 60 drives @ 8TB gets you:
25 Data + 3 Parity + 2 Spares (per controller) = 160.64TB per controller (321.28TB across both).
What I am saying is: instead of focusing on a number of large spindles, think about how much capacity you need/want and see where the price breaks are on 8TB, 10TB and 16TB drives. You may end up with more capacity for less money (usually due to parity overhead).
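One way to frame that comparison: for a fixed usable-capacity target, count how many drives each size requires once parity and spares are included. A sketch with assumed right-sized capacities (the per-drive usable figures below are placeholders; plug in HWU values and your actual quoted prices):

```python
# Drive count needed to hit a usable-capacity target under RAID-TEC.
# ASSUMPTION: per-drive usable capacities below are rough placeholders,
# not official HWU right-sized figures.
import math

PARITY_PER_GROUP = 3   # RAID-TEC
MAX_GROUP = 29         # RAID-TEC maximum raid-group size

def drives_needed(target_usable_tb, usable_per_drive, spares=2):
    """Smallest drive count meeting the target, accounting for parity + spares."""
    data = math.ceil(target_usable_tb / usable_per_drive)
    groups = math.ceil(data / (MAX_GROUP - PARITY_PER_GROUP))
    return data + groups * PARITY_PER_GROUP + spares

# Target ~300TB usable; compare assumed usable TB per 8/10/16TB FSAS drive
for size, usable in [("8TB", 6.43), ("10TB", 8.04), ("16TB", 12.83)]:
    print(f"{size}: {drives_needed(300, usable)} drives total")
```

Multiply each drive count by the quoted unit price to see where the cost actually breaks, keeping in mind that more spindles also means more shelf slots, power, and rebuild domains.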
RAID-TEC does not really slow anything down. It was developed due to the significantly longer rebuild times of larger drives. ONTAP requires that any drive 8TB or larger utilize RAID-TEC. Anything smaller, then RAID-TEC is optional.
I have used it on smaller drives on occasion. One of the benefits of RAID-TEC is that it supports raid groups of up to 29 disks. Instead of two 14-disk RAID-DP groups (4 parity drives total), I can create a single 28-disk RAID-TEC group with three parity drives, squeaking out one more data drive.
Thanks very much for the input, folks - we pulled quotes for the 10TB and 16TB FSAS drives and the cost per GB is fairly comparable, and you've pointed out a lot of upside to going with more, smaller disks at the cost of lower overall chassis capacity.