We have a FAS2020 HA system (all SAS) in place already. We want to add a SATA shelf to move less-critical, less speed-demanding data onto it. What are the requirements regarding disks for the shelf? We'd want the shelf to be dual-controller too, so I guess the minimum number of disks per aggregate (so per controller) is still relevant. I can't remember the minimum per aggregate, but my brain is saying 3?
Also, what are the requirements for adding disks to a shelf after the minimum? Say we started with 6 disks in a DS4246 shelf and wanted to add more capacity in 3 years' time: can we add them 1-by-1, or is there a recommended number? On this subject, how does this work with thin provisioning a LUN, i.e.:
Day 1: Install DS4246 with 6 disks, 2 aggregates each with 3 disks. Total usable space in each aggregate: 100GB. Create a 1TB thin-provisioned LUN.
Day 1096: Capacity reaching 98GB, so add another disk to make the aggregate 200GB, allowing the thin-provisioned LUN to carry on working.
Day xxxx: Capacity reaching 198GB, so add another disk to make the aggregate 300GB, allowing the thin-provisioned LUN to carry on working.
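The growth pattern in the scenario above can be sketched in a few lines of Python (all numbers are the hypothetical ones from the scenario; the ~100GB-usable-per-disk figure is an assumption, not a real drive spec). The point is that the LUN's advertised size never changes; only the aggregate's real capacity has to keep ahead of actual usage:

```python
# Sketch of the thin-provisioning growth scenario above (hypothetical numbers).
# The thin-provisioned LUN stays advertised at 1TB; only real capacity grows.

DISK_USABLE_GB = 100  # assumption: each added disk contributes ~100GB usable

def disks_needed(actual_used_gb, current_usable_gb):
    """How many extra disks keep real usage below real aggregate capacity."""
    extra = 0
    while actual_used_gb >= current_usable_gb + extra * DISK_USABLE_GB:
        extra += 1
    return extra

print(disks_needed(98, 100))   # Day 1096: 98GB used of 100GB -> 0, but barely
print(disks_needed(100, 100))  # usage hits capacity -> 1 more disk needed
print(disks_needed(205, 100))  # -> 2 more disks needed
```

So the administrative task is simply to watch real usage and add disks (and grow the volume) before the aggregate fills.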
Since you currently have SAS disks and you're adding SATA disks, you will need at least one spare SATA disk per controller (it looks like you're going to split the disks evenly between the two controllers). So I would say you need a minimum of 8 disks, not 6 (if you're using RAID-DP).
BTW, on 1TB SATA drives you will see approx. 840GB usable per drive.
Hello. Please allow me to clarify a few things, and I do apologize for the brevity and lack of clarity in my original response.
First, there is no rule or best practice that says you have to keep the same number of disks per controller. Your workload, business requirements, and business practices will dictate that. I stated in my original response that "it looks like you're going to split the disks evenly..." because that was implied in your original post.

At the risk of muddying the waters further: if it were me, and I were only allowed to purchase 6 to 8 SATA disks (where I only had SAS before), I would create just one large aggregate on only one of the controllers, as that is a more efficient use of a limited number of disks and gives better disk performance and more usable space. To give an example (using RAID-DP): with 8 disks, I would create one 7-disk aggregate on one controller, made up of 5 data disks and 2 parity disks, with the 8th disk kept as a spare. If instead I created two aggregates (one per controller), I would end up with two 3-disk aggregates and only 2 data disks in total (2 disks used as spares, 1 per controller, and 4 disks used for parity). But as I said before, your business needs, workload, etc. will dictate what you can do.
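To put numbers on that one-aggregate-vs-two comparison, here is a quick Python sketch (RAID-DP parity and per-controller SATA spare counts as described above; the 8-disk total is the hypothetical from this thread):

```python
# Compare data-disk yield: one aggregate on one controller vs. one per controller.
# RAID-DP uses 2 parity disks per RAID group; because the new SATA disks are a
# different type from the existing SAS, each controller owning SATA needs a spare.

RAID_DP_PARITY = 2

def data_disks(total_disks, controllers):
    """Data disks remaining after 1 spare per controller and RAID-DP parity."""
    spares = controllers                        # one SATA spare per owning controller
    per_aggr = (total_disks - spares) // controllers
    return controllers * (per_aggr - RAID_DP_PARITY)

print(data_disks(8, 1))  # one 7-disk aggregate + 1 spare -> 5 data disks
print(data_disks(8, 2))  # two 3-disk aggregates + 2 spares -> 2 data disks total
```

Five data disks versus two, from the same eight drives, is why the single-aggregate layout is the more efficient choice here.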
Second, to clarify the parity issue: using RAID-DP (which is NetApp best practice), you need 2 parity disks per RAID group, so the minimum number of disks for one aggregate is 3. Using RAID 4 (one parity disk), you can create an aggregate with a minimum of 2 disks (one data, one parity). The reason I said you would need 8 disks (4 per controller) instead of the 6 you mentioned is that you implied RAID-DP, and your existing drives are SAS while the new drives are SATA. That means you will need 1 spare SATA disk per controller (or ONTAP will get very upset).
In terms of adding disks in the future, you can add them one at a time (subject to available hardware slots and room in the RAID group size), although I must confess I know of no one who buys/adds one disk at a time. When you grow an existing aggregate by one disk, be aware that you may run into performance issues and may have to do a reallocate to rebalance the data across the aggregate. NetApp System Manager used to recommend adding disks to aggregates a minimum of three at a time; I don't know if the latest version still does.
Lastly, I am unclear about your question on how this all affects thin-provisioning a lun so I will defer giving any opinions there. Please feel free to ask for clarification if I succeeded in confusing you more.
What I meant about thin provisioning (TP) is this: imagine I start with 8 disks of 1TB each, and let's say I put them all on one controller as per your advice. After parity (RAID-DP) and a hot spare, that leaves me with 5TB (forget about formatted capacities), and I create a 15TB TP LUN and give that LUN to a virtual Windows file server. Once our real usage reaches 5.1TB, that's more than the underlying hardware provides, so we'll obviously have a problem.
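The overcommit in that example can be made explicit with a few lines of Python (raw TB as above, ignoring formatted-capacity overhead; all figures are the hypothetical ones from this post):

```python
# 8 x 1TB disks on one controller, RAID-DP, one hot spare (raw TB, unformatted).
disks, disk_tb = 8, 1
spare, parity = 1, 2
usable_tb = (disks - spare - parity) * disk_tb   # real capacity behind the LUN

lun_tb = 15                                      # thin-provisioned LUN size
overcommit = lun_tb / usable_tb

print(usable_tb)    # 5
print(overcommit)   # 3.0 -- the LUN promises 3x what physically exists

# The moment real data written crosses usable_tb, writes fail unless
# more disks have been added to the aggregate (and the volume grown) first.
```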
The LUN is created on a volume, and the volume is created on an aggregate, as a fixed size (no TP on volumes), so initially the aggregate and volume will be the 5TB that's physically available. So my question is: can additional disks be added to an existing aggregate and volume, to back the space already allocated to the LUN? Or do extra disks need to go into their own aggregate with their own volumes, and in turn their own LUNs?
I think it would make more sense for you to thin provision the volume instead of the LUN. Over time, the LUN (from WAFL's point of view) will fill up to 100% even though your host might not be using all the space, and you'd have to do space reclamation on the LUN using SnapDrive. I strongly advise thin provisioning the volume instead of the LUN.
Additional disks can be added to existing aggregates, and as I mentioned previously, you can add one at a time (subject to restrictions/best practices such as matching speed, type, etc.). NetApp recommends adding a minimum of three. After you add disks to the aggregate, you can resize your volume to grow or shrink it, and then grow your LUN (dynamically, depending on the host). Hope that helps.
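That grow sequence is really just a containment rule: the LUN lives in the volume, the volume lives in the aggregate, so growth must proceed outermost-first. A minimal Python sketch of the ordering (class name and sizes are hypothetical, chosen to match the scenario earlier in the thread):

```python
# Model the containment rule: LUN <= volume <= aggregate at every step,
# so growth goes: 1) add disks to aggregate, 2) resize volume, 3) grow LUN.

class StorageStack:
    def __init__(self, aggr_gb, vol_gb, lun_gb):
        assert lun_gb <= vol_gb <= aggr_gb
        self.aggr_gb, self.vol_gb, self.lun_gb = aggr_gb, vol_gb, lun_gb

    def add_disks(self, n, usable_per_disk_gb=100):
        self.aggr_gb += n * usable_per_disk_gb   # step 1: grow the aggregate

    def resize_volume(self, new_gb):
        assert new_gb <= self.aggr_gb            # step 2: volume must fit aggregate
        self.vol_gb = new_gb

    def grow_lun(self, new_gb):
        assert new_gb <= self.vol_gb             # step 3: LUN must fit volume
        self.lun_gb = new_gb

s = StorageStack(aggr_gb=100, vol_gb=100, lun_gb=100)
s.add_disks(1)           # aggregate now 200GB
s.resize_volume(200)
s.grow_lun(200)
print(s.aggr_gb, s.vol_gb, s.lun_gb)  # 200 200 200
```

Reversing the order (growing the LUN before the volume, say) trips the containment check, which is exactly the constraint the real system enforces.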
I thought about this some more and for what I think is your situation, whether you thin provision the volume or the lun or both is not really that important. I think what's more important for you is the lun space reclamation that you'll probably want to do in the future because your lun will tend to grow towards 100% even if there's more free space on the host side. The only way I know how to fix this is with snapdrive. If you need more info on this you can search the community or the NetApp support site. There are lots of posts, etc. about this.