We've just powered up our FAS2020 box and run through the initial CLI config. Now we've moved on to System Manager, and we're having trouble configuring the storage.
We have a dual controller model with 12x 450GB disks on RAID-DP with HS. If my understanding is correct, we lose 2 disks per controller for RAID-DP and 1 disk per controller for the spares. That should leave us with ~2.7TB of storage (6x 450GB). However, the default aggregate is over 250GB in size and it's telling us we can't create more as we don't have any disks available.
So what has happened to the other 2.5TB or so of storage?
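For reference, here's the back-of-the-envelope arithmetic we're working from (marketing capacities, not the right-sized figures ONTAP actually reports, which are lower):

```python
# Expected usable capacity on a dual-controller FAS2020 with 12x 450GB disks,
# RAID-DP plus one hot spare per controller. Illustrative numbers only;
# ONTAP right-sizes 450GB drives to ~418GB and WAFL reserves more on top.
DISKS_TOTAL = 12
CONTROLLERS = 2
DISK_GB = 450

disks_per_controller = DISKS_TOTAL // CONTROLLERS          # 6
parity_per_controller = 2                                  # RAID-DP: parity + double-parity
spares_per_controller = 1
data_per_controller = (disks_per_controller
                       - parity_per_controller
                       - spares_per_controller)            # 3 data disks

usable_gb = data_per_controller * CONTROLLERS * DISK_GB
print(usable_gb)  # 2700 GB, i.e. ~2.7TB before right-sizing overhead
```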
Also, what is the recommended controller IP config for an active-active implementation? All on one subnet or all on separate subnets? We are putting each controller port on dedicated switches.
Quite likely some of your disks are still unowned. You can check this and do the assignment via the CLI disk command. The next step after that will be to add the disks to your aggregates.
Bear in mind, though, that these days the GUI tools enforce two hot spares per controller (e.g. to enable 'silent' replacement of drives before they fail), so if you want to have just one hot spare per head, you need to use the CLI again.
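A minimal sketch of that workflow on the 7-mode CLI (the prompt, disk IDs and aggregate name below are made up; substitute your own):

```
fas2020a> disk show -n                   # list disks not owned by either controller
fas2020a> disk assign 0c.00.8           # claim a specific unowned disk for this head
fas2020a> aggr status -s                 # confirm it now shows up as a spare
fas2020a> aggr add aggr0 -d 0c.00.8     # grow the aggregate with that disk
```

Run the same steps on the partner for the disks you want it to own.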
It should not happen that way. It would be helpful if you could paste the actual error message thrown by System Manager.
One thing you can validate is that the disks have been assigned correctly to the controllers. Go to the Disks page for the controller you're trying to create the aggregate on and check that at least 3 disks (for RAID-DP) show up in the "spare" state for that controller; the rest may be in the "partner" state.
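The same check is quick from the CLI, if you prefer (prompt name is illustrative):

```
fas2020a> aggr status -s     # lists the hot spare disks owned by this controller
fas2020a> disk show -v       # ownership of every disk: local, partner or unowned
```

If fewer than 3 spares appear for the head you're on, that's why the aggregate creation fails.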
On the point of controller IP config: for failover to work, each network interface on a filer must have an equivalent interface on its partner, which means it must be physically connected to the same subnet and be of the same technology. So each primary network address of a filer must have a matching partner address on an equivalent interface on the other head.
In the case of MultiStore only, I believe there may be an exception where special IP addresses are necessary, but I'm not sure of the exact provision in that case.
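In 7-mode this pairing is declared with the partner keyword on ifconfig in /etc/rc; the interface names and addresses below are made up for illustration:

```
# /etc/rc on filer A
ifconfig e0a 192.168.10.11 netmask 255.255.255.0 partner e0a

# /etc/rc on filer B -- same subnet, equivalent interface
ifconfig e0a 192.168.10.12 netmask 255.255.255.0 partner e0a
```

On takeover, the surviving head brings up the partner's address on the named interface, which is why both sides must sit on the same subnet.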
Cool. It means that only 8 of your disks are assigned, 4 to each controller.
If you want to keep an even number of disks per controller, you should assign two of the four remaining disks to each controller (type "disk assign ?" in the CLI for more detailed instructions).
And in case it is still confusing: whenever you access one of the controllers, you can manipulate only the disks owned by it, so you have to repeat these tasks on both controllers.
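Concretely, something like this (disk IDs and prompts are examples, not your actual names):

```
fas2020a> disk show -n                     # should list the four unowned disks
fas2020a> disk assign 0c.00.8 0c.00.9     # run on controller A: take two of them
fas2020b> disk assign 0c.00.10 0c.00.11   # run on controller B: take the other two
```

After that, disk show -n should come back empty and each head should report the new disks as spares ready to be added to an aggregate.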