In our environment we have a 6280 SAN, and we use Commvault to back up data from the 6280 to our V6210. We recently purchased a lot of 2 TB disks that are not assigned to any aggrs yet, and I am anxious to get my mitts on those disks and start re-engineering our current setup, so I was curious what RAID group size I should use. I think (not 100% sure) that Commvault requires at least 7 disks per RAID group.
My assumption is that more spindles are better, since more spindles help read I/O.
Anyhow, I have 132 spare 2 TB disks on controller A and 156 spare 2 TB disks on controller B. These disks are right-sized to ~1.62 TB,
and these SANs are on a 10 GbE network. So what are your thoughts on the best performance and use of these disks for nicely sized aggrs? I would create 5 TB volumes with qtrees and CIFS shares for Commvault to write the data to.
Please refer to TR-3838, the Storage Subsystem Configuration Guide, for the general limitations regarding RAID group size, max aggregate size for your ONTAP version, etc. Commvault usually does not have a direct requirement on how the underlying RAID layout should look.
As I imagine Commvault puts a largely sequential load on your disks, you can go for bigger RAID group sizes.
For SATA, the default is 14 and the max is 20. You might want to stay between 16 and 18. On a NetApp machine it's usually best to create the biggest aggregate possible or, if not all disks fit into one aggregate, create two evenly laid-out aggregates and create two volumes for Commvault to back up into.
You're talking V-Series on the one hand and "disks" on the other.
Are you trying to create aggregates from
a) physical NetApp disks attached to a V-Series controller, or
b) LUNs from a 3rd-party array that happen to be 2TB in size?
Generally go with the defaults.
For a) it's 14 disks per RAID-DP RAID group.
This can be increased to 16, so as an example you could go with the following:
- for 132 disks that would result in 8 Raid Groups with 16 disks and 4 spares (~181 TB usable)
- for 156 disks that's 10 RGs with 15 disks and 6 spares (~210 TB usable)
For b) the default RG size is 8, but it has no special meaning since there is no RAID calculation for 3rd-party LUNs, just RAID-0 striping. The RG size doesn't have any effect on this, and spares are not required (well, at least not of that size).
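FWIW, the arithmetic for case a) is easy to sanity-check with a few lines of Python. This is just a throwaway sketch, assuming RAID-DP's 2 parity disks per group and the ~1.62 TB right-sized capacity mentioned above:

```python
# Sketch: reproduce the usable-capacity math for the two layouts above.
# Assumptions: RAID-DP (2 parity disks per RAID group) and 2 TB SATA
# disks right-sized to ~1.62 TB, as stated in the thread.

RIGHT_SIZED_TB = 1.62

def layout(total_disks, rg_size, num_rgs):
    """Return (spares, usable_tb) for num_rgs groups of rg_size disks."""
    used = num_rgs * rg_size
    spares = total_disks - used
    data_disks = num_rgs * (rg_size - 2)   # RAID-DP: 2 parity per group
    return spares, data_disks * RIGHT_SIZED_TB

print(layout(132, 16, 8))    # 4 spares, ~181 TB usable
print(layout(156, 15, 10))   # 6 spares, ~210 TB usable
```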
Thanks for your reply. After I sent my post off to the forum I went into the V-Series and balanced out the disks, so they are now even on both controllers. My question has to do with the recently purchased NetApp disks on the filer. There used to be only one disk shelf, but now there are many 2 TB drive shelves. The current aggrs are from an IBM SAN and are presented via LUNs.
Since I balanced the disks, I have 144 available on each controller and can create a big aggr, which would leave me 8 spares in total. Always good to have spares. I just want to make sure that, from a read/write performance standpoint, this setup would be optimal. Also, this would be a 64-bit aggr with RAID-DP.
Since I balanced the number of disks on each controller of the V6210, I have 144 on each at my disposal.
To maintain a balanced RAID group design, these seem to be my options, unless someone can correct me. Bear in mind this SAN is used for D2D purposes and the disks are 2 TB SATA NetApp disks, right-sized to ~1.62 TB.
I like option B, but it uses all the disks, leaving me with no hot spares. So that leaves me with option A or C. Option C provides more usable space, but what about the r/w performance? Perhaps I am splitting hairs in this case. Option A or C leaves me with a total of 8 hot spares when combining both controllers. I just need to go with one of the two and move on… Thoughts ☺
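Since the option A/B/C table didn't make it into my post, here is roughly how I enumerated the candidates. A quick sketch, assuming RAID-DP, SATA RG sizes up to the max of 20, ~1.62 TB right-sized disks, and a minimum of 2 hot spares per head:

```python
# Sketch: enumerate even RAID-DP layouts for 144 disks per head,
# keeping RAID group sizes within the SATA range and requiring
# at least 2 hot spares per head.

RIGHT_SIZED_TB = 1.62
DISKS_PER_HEAD = 144

for rg_size in range(12, 21):
    num_rgs = DISKS_PER_HEAD // rg_size
    spares = DISKS_PER_HEAD - num_rgs * rg_size
    if spares < 2:
        continue  # e.g. 9 x 16 or 8 x 18 consume all 144 disks
    usable = num_rgs * (rg_size - 2) * RIGHT_SIZED_TB
    print(f"{num_rgs} x {rg_size}-disk RGs, {spares} spares, ~{usable:.0f} TB usable")
```

Interestingly, 7 x 20-disk RGs with 4 spares per head (8 spares across both controllers) comes out with the most usable space of the layouts that keep spares.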
Since you mentioned I could do 112 spindles, that still didn't work, so I figured perhaps the total size had to be under 100 TB. Here is another try. I wonder what the heck I have going on that is wrong.
A big RAID group is absolutely fine performance-wise when serving data. However, a RAID rebuild will take longer than for a smaller group. On the other hand, with a minimum of 2 hot spares per head you have so-called Drive Maintenance, which aims to replace a drive before it fails (by monitoring the number of errors); in that case there is no RAID rebuild, just a copy operation.
At the end of the day, an aggregate of 2x 20-disk RAID groups is what NetApp recommends as the best practice in this particular scenario.
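Taking that recommendation at face value, the per-aggregate capacity works out as below. Again just a sketch, using the same RAID-DP and ~1.62 TB right-sizing assumptions from earlier in the thread:

```python
# Sketch: usable capacity of one aggregate built from 2 x 20-disk
# RAID-DP groups (2 parity disks per group, ~1.62 TB right-sized disks).

RIGHT_SIZED_TB = 1.62
RG_SIZE = 20
NUM_RGS = 2

data_disks = NUM_RGS * (RG_SIZE - 2)      # 36 data disks out of 40
usable_tb = data_disks * RIGHT_SIZED_TB
print(f"{NUM_RGS * RG_SIZE} disks -> {data_disks} data, ~{usable_tb:.1f} TB usable")
```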