FAS2020/2040 disk issues

Having implemented many small storage projects, I often find a client who has purchased a FAS2020 or FAS2040 with all twelve disks and dual controllers. So I always come back to the age-old question of how to divide the disks. Assume the client has purchased 12 x 500GB = 6,000GB raw.

The issue comes with having to share the disks between the two controllers.

Assuming RAID-DP with one spare: if I split the disks in half, each controller gets 6 x 500 = 3,000GB raw. Take away two disks for RAID-DP parity and one for the spare, and that leaves (6-3) x 500 = 1,500GB usable per controller.

Assuming the client had initially expected to get at least 4TB out of the original 6TB raw, quite a few GB are lost.
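To make the arithmetic concrete, here is a minimal back-of-envelope sketch (the helper function and names are my own, not a NetApp tool, and it uses the nominal 500GB size, not the lower right-sized capacity):

```python
# Hypothetical capacity helper for rough planning; assumes nominal disk sizes.
DISK_GB = 500  # nominal raw size per disk (right-sized usable space is lower)

def usable_gb(total_disks, parity_disks, spare_disks=0):
    """Usable data capacity: disks left after parity and spares, times size."""
    data_disks = total_disks - parity_disks - spare_disks
    return data_disks * DISK_GB

# 6/6 split, RAID-DP (two parity disks) plus one spare on each controller:
per_controller = usable_gb(6, parity_disks=2, spare_disks=1)
print(per_controller)      # 1500 GB per controller
print(per_controller * 2)  # 3000 GB total, well short of the hoped-for 4 TB
```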

So the question becomes:

Do I convert from RAID-DP to RAID 4 and lose the advantages of RAID-DP?

Do I give eight disks to one controller and four to the other? (Not that it changes the configuration much with regard to space.)

Do I give nine disks to controller 1 on RAID-DP and three to controller 2 on RAID 4?

Do I just leave out one controller and run single-controller?

Has anyone faced a similar deliberation, and how did you resolve it?

Re: FAS2020/2040 disk issues

Hi James,

I recognize this dilemma, albeit on a somewhat different scale. It's hard to make recommendations, because it all depends on the business needs and SLAs. For example: what is most important to them, capacity, redundancy, or performance?

If capacity is the most critical factor, you may end up using only one controller, which I think is more a waste of money than buying an additional disk shelf. Using RAID 4 could be an alternative, especially for the root aggregate. In that scenario you could assign three disks to controller B, which is just enough to create a RAID 4 aggregate to hold that controller's root volume. You can then assign the remaining nine disks to controller A and create a RAID-DP or RAID 4 aggregate there. That way you can at least save one disk without compromising redundancy.
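Rough numbers for that layout (my own sketch, assuming controller B's three RAID 4 disks leave nine for controller A, and 500GB nominal disk sizes):

```python
# Back-of-envelope capacity math; nominal sizes, not right-sized capacity.
DISK_GB = 500

def usable_gb(total_disks, parity_disks, spare_disks=0):
    return (total_disks - parity_disks - spare_disks) * DISK_GB

b = usable_gb(3, parity_disks=1)                 # RAID 4 root aggregate: 1000 GB
a = usable_gb(9, parity_disks=2, spare_disks=1)  # RAID-DP plus one spare: 3000 GB
print(a + b)                                     # 4000 GB usable in total
```

Under these assumptions the 3 + 9 split reaches the 4TB mark, versus 3TB for a symmetric 6/6 RAID-DP split with a spare on each head.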

Still, as I said, it's all a matter of business needs in my opinion.

Hope it helps,


Re: FAS2020/2040 disk issues


Is there any document on which disks to assign to each controller? I believe the 2020 came preconfigured with disks 0, 2, 4, 6 on controller A and 1, 3, 5, 7 on controller B. Is there any reason to alternate odd and even disks between the controllers? Or can I assign 0, 1, 2, 3, 4, 5 to controller A and 6, 7, 8, 9, 10, 11 to controller B?

Re: FAS2020/2040 disk issues


On a FAS2020, you usually need both heads to have enough processing power, so I'd split it like this:

1 raid4 Parity

1 Spare

4 Data

for each head. In very storage-sensitive cases, you might even use the spare disk on one head for data and dynamically assign a spare only if needed, though that makes it less flexible.

On a FAS2040, which has a whole lot more horsepower, you can do a sort of active/passive setup:


Passive head:

1 RAID 4 parity

1 data

no spare

Active head:

2 RAID-DP parity

1 spare

7 data

(There is no use for RAID 4 here, as it has an 8-disk raid group limit, so we prefer a more secure RAID-DP active head over a passive head with a spare disk and RAID 4 on the active head.)
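For comparison, here is rough capacity math for both layouts above (my own sketch, using nominal 500GB disk sizes rather than right-sized capacity):

```python
# Back-of-envelope comparison of the two proposed 12-disk layouts.
DISK_GB = 500

def usable_gb(total_disks, parity_disks, spare_disks=0):
    return (total_disks - parity_disks - spare_disks) * DISK_GB

# FAS2020: two symmetric heads, each RAID 4 (1 parity) + 1 spare + 4 data.
fas2020 = 2 * usable_gb(6, parity_disks=1, spare_disks=1)

# FAS2040: passive head RAID 4, 2 disks, no spare;
#          active head RAID-DP, 10 disks, 1 spare.
fas2040 = usable_gb(2, parity_disks=1) + usable_gb(10, parity_disks=2, spare_disks=1)

print(fas2020, fas2040)  # 4000 4000 -- same usable total, different redundancy
```

Both layouts land at roughly 4TB usable; the trade-off is between symmetric RAID 4 heads with spares and a RAID-DP-protected active head.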



Re: FAS2020/2040 disk issues

I almost always recommend an active/passive configuration with nine disks on one head and three on the other, all RAID-DP, but with no spare on the passive head and one spare on the active head. With only six data disks on the active controller, even on a 2020 you will most likely hit a disk I/O limit before a CPU/memory limit. When it's time to expand and add a shelf, you can move to an active/active configuration: I usually move the original three disks over to head 1 and use the entire new shelf for head 2.
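Rough capacity for this 9/3 all-RAID-DP layout (my own sketch, assuming nominal 500GB disk sizes):

```python
# Back-of-envelope math for the 9/3 active/passive, all-RAID-DP layout.
DISK_GB = 500

def usable_gb(total_disks, parity_disks, spare_disks=0):
    return (total_disks - parity_disks - spare_disks) * DISK_GB

active  = usable_gb(9, parity_disks=2, spare_disks=1)  # 6 data disks -> 3000 GB
passive = usable_gb(3, parity_disks=2)                 # 1 data disk  ->  500 GB
print(active + passive)                                # 3500 GB usable
```

That is 500GB less than the RAID 4 variants, traded for double-parity protection on every aggregate.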

Re: FAS2020/2040 disk issues


Thanks for your post. It seems to validate my thinking about "clustering" the controllers for failover while retaining the maximum amount of disk space and keeping redundancy. In the setup you describe, would it be fair to say that we will lose performance? Why do you think disk I/O would be the first ceiling we bump up against, as opposed to CPU/memory?

The one question I have is this: if, in this scenario, the active controller fails, can the passive controller pick up the data drives attached to the failed active controller? It would have to, it seems, to make this a worthwhile config. Otherwise, there is no sense in having the second controller.