2010-05-18 02:54 AM
We've just powered up our FAS2020 box and run through the initial CLI config. We've now moved to System Manager and are having trouble configuring the storage.
We have a dual controller model with 12x 450GB disks on RAID-DP with HS. If my understanding is correct, we lose 2 disks per controller for RAID-DP and 1 disk per controller for the spares. That should leave us with ~2.7TB of storage (6x 450GB). However, the default aggregate is over 250GB in size and it's telling us we can't create more as we don't have any disks available.
So what has happened to the other 2.5TB or so of storage?
Also, what is the recommended controller IP config for an active-active implementation? All on one subnet or all on separate subnets? We are putting each controller port on dedicated switches.
Solved!
2010-05-18 03:59 AM
Hi and welcome to the Communities!
Quite likely some of your disks are still unowned. You can check this and assign ownership via the disk command in the CLI. The next step will then be to add disks to your aggregates.
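For reference, the commands are roughly these (the prompt name is just an example, and disk 0c.00.8 is a placeholder, substitute your own):

```
fas2020a> disk show -n                # list disks not yet owned by either head
fas2020a> disk assign 0c.00.8        # take ownership of a specific unowned disk
fas2020a> aggr add aggr0 -d 0c.00.8  # then grow the aggregate with the new disk
```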
Bear in mind, though, that these days the GUI tools enforce two hot spares per controller (e.g. to enable 'silent' replacement of drives before they fail), so if you want just one hot spare per head, you need to use the CLI again.
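If memory serves, that enforcement is tied to a RAID option; treat the exact option name as an assumption and verify it on your ONTAP release:

```
fas2020a> options raid.min_spare_count 1   # permit a single hot spare per controller
```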
2010-05-18 04:06 AM
It should not happen that way. It will be helpful if you can paste the actual error message thrown by System Manager.
One thing you can validate is that the disks have been assigned correctly to the controllers. Go to the Disks page for the controller you're trying to create another aggregate on and check that at least 3 disks (for RAID-DP) show up in the state "spare" for that controller; the rest may be in the state "partner".
Regarding the controller IP config: for failover to succeed, each network interface on a filer must have an equivalent interface on its partner, which means it must be physically connected to the same subnet and be of the same technology. So each primary network address on a filer must have an identical secondary address on an equivalent interface on its partner.
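To illustrate, the matching /etc/rc entries on the two heads might look like this (the interface name and addresses are invented for the example):

```
# /etc/rc on controller A
ifconfig e0a 10.10.10.11 netmask 255.255.255.0 partner e0a

# /etc/rc on controller B
ifconfig e0a 10.10.10.12 netmask 255.255.255.0 partner e0a
```

The "partner" keyword tells each head which local interface to take over for its partner during failover.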
MultiStore configurations may be the one exception, where special IP addresses may be necessary, but I'm doubtful about the exact provision in that case.
2010-05-18 04:10 AM
I saw Radek's update only after posting.
It could well be the case, as Radek said: if some disks are still unowned, not all 12 disks will be listed on the Disks page in System Manager, and you'll have to go to the CLI to assign them.
2010-05-18 04:25 AM
I'm looking at System Manager now and this is what it's reporting about the disks:
Disk     State    RPM    Size      Aggregate
0c.00.0  present  15000  410.73GB  aggr0
0c.00.1  partner  15000  410.73GB
0c.00.2  present  15000  410.73GB  aggr0
0c.00.3  partner  15000  410.73GB
0c.00.4  present  15000  410.73GB  aggr0
0c.00.5  partner  15000  410.73GB
0c.00.6  spare    15000  410.73GB
0c.00.7  partner  15000  410.73GB
I can't do anything with these disks, i.e. choose ownership etc.
2010-05-18 04:32 AM
Cool. It means that only 8 of your disks are assigned, four to each controller.
If you want an even number of disks per controller, you should assign two of the four remaining disks to each controller (type "disk assign ?" in the CLI for more detailed instructions).
And in case it's still confusing: whenever you access either controller, you can only manipulate the disks it owns, so you have to repeat all the tasks on both controllers.
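As a sketch of what that looks like (the prompt names are examples, and "disk assign ?" will confirm the exact flags on your release):

```
fas2020a> disk assign -n 2   # claim two of the unowned disks for this head
fas2020b> disk assign -n 2   # then repeat on the partner for the other two
```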
2010-05-18 06:45 AM
OK, I've just been playing with the CLI and assigned all the unowned disks to one controller to see how it works.
I now want to assign 2 of those disks to the other controller, but it says it can't as they're owned by the partner.
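For the record, the command I ran was along these lines (disk names approximate):

```
fas2020b> disk assign 0c.00.8 0c.00.9   # refused - these are owned by the other head
```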
2010-05-18 08:01 AM
Oh, I have also put all the disks into aggr0 on controller 1 to see how the system works in general. I cannot see a way of taking these disks out so they can be put into aggr0 on the 2nd controller.
Is there a way to reset the unit back to factory defaults? We have no data on there, as this is entirely a getting-to-grips exercise at the moment!