ONTAP Hardware

newbie questions re FAS2040

dclark_uk

Hello, we have just had a FAS2040 installed and I have a few questions. I'm completely new to NetApp devices, so please bear with me!

Our FAS2040 has 2 filers and 2 shelves, one for SATA and one for SAS. We have 24 x 600GB SAS and 12 x 1TB SATA.

1) Is the FAS2040 truly active/active, in that both controllers can read/write to a LUN concurrently, or is it active/active in the sense that each controller can manage I/O to a LUN but only one controller can "own" a LUN at any time?

2) How can I best allocate the disks?

The 2040 came pre-configured from NetApp with root on aggr0, taking 3 SATA disks per filer, so that leaves us 6 SATA disks. I guess out of those I need to leave 2 as spares, so we have 4 remaining. With RAID-DP I'm looking at a really small usable size - any ideas how I could configure this better?

Perhaps I could move the root to SAS disks?

Thanks

1 REPLY

brendanheading

First off, welcome to the forums - plenty of experts here. I only got into NetApp about a year ago myself, but I feel duty-bound to pass on what I've learned. E&OE ..

Firstly, when you say "two filers", do you mean two physical 2040 boxes, or one 2040 with two controllers inside? I think you mean the latter, but it helps to be clear.

When you have two controllers, a given disk can only be assigned to one controller at a time. Storage repositories (RAID groups, which are combined to build either aggregates or traditional volumes - you'll probably be using aggregates; the term "LUN" doesn't apply at this layer) are built out of disks and, accordingly, are only accessible to one controller. One important upshot of this is that each controller must have a root volume, and therefore an aggregate to hold it - though it is not necessary for that aggregate to be dedicated to root. Spare disks are also owned by a specific controller, so you must have enough spares for each controller; you can't pool spares across controllers. There should be at least one spare of each disk type per controller - ideally two.
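
You can check all of that from the CLI. Something like this in 7-Mode (the "controller1" hostname is just a placeholder for whatever yours is called):

    controller1> disk show -v        # lists every disk with its current owner
    controller1> aggr status -s      # shows which of this controller's disks are spares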

When a cluster failover happens for whatever reason, the remaining node takes on the personality of the failed node. So the disks remain assigned to the failed node, but the active node can access them "pretending" to be the failed node, if that makes sense.
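
You can see (and exercise) this with the standard 7-Mode cf commands - do check the man pages on your version before trying a takeover, as this is from memory:

    controller1> cf status      # is the pair healthy and failover enabled?
    controller1> cf takeover    # take over the partner's identity and disks
    controller1> cf giveback    # hand them back once the partner is healthy again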

The "best practice" is to create a dedicated root aggregate for each controller, which means assigning one disk, two parity disks and at least one hot spare. That's why you're burning up disks fast. Having a dedicated root aggregate wastes disks so it is common on smaller configurations to have There may be a better way to do things but that depends on your storage requirements.

Without knowing your exact requirements, to maximise space I think I would assign the 12 SATA disks to one controller and the 24 SAS disks to the other, creating two aggregates. The controller with the SAS disks will be dramatically faster than the one with SATA disks (faster drives, and more spindles as well), so you will want to assign your workloads accordingly.
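
As a rough sketch of that reassignment in 7-Mode (the disk name here is made up - use the names disk show reports, and note that moving a disk that already has an owner needs -f, and on some versions you may have to mark it unowned first):

    controller1> disk show -v                          # find the current owner of each disk
    controller1> disk assign 0b.17 -o controller2 -f   # force reassignment to the partner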

On the SATA side you will have 12 disks; 2 parity and 2 spare leaves 8 data drives, which comes out somewhere around 6TB of space. On the SAS side, you could put all the disks in a single RAID group which, with two hot spares and two RAID-DP parity disks, would yield around 10TB of space (or thereabouts). However, there are questions around the optimal RAID group size, which is supposedly 16. It may therefore be more appropriate to create two 11-disk RAID-DP RAID groups, which costs an extra two parity disks. You'd then have two spare disks and four parity disks, leaving 9 data disks per RAID group (18 in total), or around 9TB.
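
For what it's worth, creating those aggregates might look something like this (names and disk counts are illustrative, and usable sizes will come out lower after right-sizing):

    controller1> aggr create aggr_sata -t raid_dp -r 10 10   # one 10-disk group: 8 data + 2 parity
    controller2> aggr create aggr_sas -t raid_dp -r 11 22    # two 11-disk groups: 9 data + 2 parity each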

It's best to leave two spare disks per controller so that Maintenance Center works. This allows drives showing spurious faults to be stress tested nondisruptively.
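
You can confirm it's switched on with something like this (hedged - the option name is as I remember it from the 7-Mode docs):

    controller1> options disk.maint_center.enable   # "on" means suspect drives get pulled aside for testing
    controller1> aggr status -s                     # confirm two spares remain per controller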

Brendan
