We have a used FAS2220 with 12x600GB disks and an NFS license.
It is installed with 7-Mode.
We'd like to use it for NFS storage, using ESX for tests, and to maximize capacity.
We erased it using boot menu option 4.
We used to use "System Setup", but here we have a problem: after completing the first screen with the network setup, a weird message says "unable to reboot NAME-OF-CONTROLLER" on controller 1 (but in fact it does reboot!). Then System Setup won't go any further.
I wanted to go further so I could configure the filer in active/passive mode, where all disks are in one aggregate (I have never done it, but I remember this choice exists in System Setup).
By the way, if you have no idea what the System Setup problem is, can you tell me how I can achieve this using the CLI?
And how can I maximize capacity?
Thanks a lot
Re: FAS2220 - Erase and reuse
2020-01-20 08:35 AM
I did some more reading; please correct me if I'm wrong:
- there is no way to use both controllers without assigning drives to each (at minimum, I'll have to assign 3 disks to C2)
- so 6 disks per controller is a good solution for max space (in two chunks; with RAID 4 + a spare I get 2x4 data disks)
- the only way to maximize capacity: do not use C2 (remove it from the chassis?), assign all 12 disks to C1 with RAID 4 + a spare, and I get 10 data disks
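The options above can be compared with some rough arithmetic. This is only a sketch: the ~560 GB right-sized capacity per 600 GB disk and the ~10% WAFL reserve are my own assumptions, not figures from this thread; real numbers depend on the exact disk model and ONTAP version.

```python
# Rough usable-capacity comparison for the layouts discussed above.
# Assumed (not from the thread): a 600 GB SAS disk right-sizes to about
# 560 GB, and an aggregate loses roughly 10% to the WAFL reserve.
RIGHT_SIZED_GB = 560
WAFL_RESERVE = 0.10

def usable_gb(data_disks):
    """Approximate usable space for an aggregate with the given data disk count."""
    return data_disks * RIGHT_SIZED_GB * (1 - WAFL_RESERVE)

layouts = {
    "6+6 split, RAID 4 + spare on each (2x4 data disks)": usable_gb(4) * 2,
    "HA, 9 disks on C1 (RAID 4 + spare, 7 data) + 3 on C2": usable_gb(7),
    "single node, all 12 on C1 (RAID 4 + spare, 10 data)": usable_gb(10),
}
for name, gb in layouts.items():
    print(f"{name}: ~{gb:.0f} GB")
```

So the single-node layout buys roughly a quarter more usable space than the 6+6 split, before volume-level reserves.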
This is not a production environment. Any more advice?
2020-01-20 12:08 PM
Yes, it's possible to turn an HA pair into a single-node controller.
It's your choice; if this is for personal use, you probably don't care about HA or a single point of failure.
If you need HA functionality with maximum storage capacity:
1) Controller1 - Assign all disks here
2) Controller2 - Assign just 3 disks (for the root aggregate)
But you lose 3 disks to controller2's root aggregate.
Non-HA single-node controller:
Controller1: assign all disk shelves in a single loop. Like you mentioned, you can use RAID 4 and have a single aggregate with 11 disks (10 data + 1 parity) and 1 spare.
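If System Setup keeps failing, the single-node layout can also be built from the 7-Mode console. A hedged sketch, assuming the disks are unowned after the option-4 wipe; the aggregate/volume names and the ESX host name are examples only:

```shell
# On the node console (7-Mode), with all 12 disks unowned:
disk show -n                       # list unowned disks
disk assign all                    # claim all 12 disks for this controller

# One RAID 4 aggregate: 11 disks (10 data + 1 parity), leaving 1 spare.
# -r 11 raises the RAID group size so all 11 disks fit in one group.
aggr create aggr1 -t raid4 -r 11 11

# Example flexible volume and NFS export for the ESX hosts:
vol create vol_nfs aggr1 2t
exportfs -p rw=esx-host,root=esx-host /vol/vol_nfs
```

Run `aggr status -r` afterwards to confirm the RAID layout and the remaining spare.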
All the steps for the hardware reconfiguration for single-node operation are covered here:
Reconfiguring nodes using disk shelves for stand-alone operation: