2012-04-23 04:01 AM
Currently I have a FAS 3170 in an active/active configuration with 4 trays of disks. Each tray has 14 disks that make up a RAID-DP group, so there are 4 RAID groups in total. Two trays of disks have been assigned to each controller, and each controller's disks have been combined into an aggregate, making a total of 2 aggregates in the array of 28 disks each. Each aggregate contains 3 volumes, for a total of 6 volumes in the array.
What I would like to achieve here is to combine all 56 disks (the 4 RAID groups/trays) into a single aggregate. The aim is to improve the read/write performance of the array by essentially doubling the number of spindles available for I/O operations.
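To put rough numbers on the spindle argument, here is a back-of-the-envelope sketch. The RAID group size of 14 and the RAID-DP parity overhead come from the thread; treating every group as full is my simplifying assumption:

```python
# Sketch of the spindle arithmetic behind the proposal.
# Assumption: RAID group size stays at 14 (one group per tray),
# and each RAID-DP group carries 2 parity disks.

RG_SIZE = 14
PARITY_PER_RG = 2  # RAID-DP: one parity + one diagonal-parity disk

def data_spindles(total_disks, rg_size=RG_SIZE):
    """Data disks available for striping, assuming full RAID groups."""
    groups = total_disks // rg_size
    return groups * (rg_size - PARITY_PER_RG)

# Current layout: two 28-disk aggregates, each striping independently.
per_aggr_now = data_spindles(28)   # 24 data spindles per aggregate

# Proposed layout: one 56-disk aggregate, 4 RAID groups.
combined = data_spindles(56)       # 48 data spindles in one stripe set

print(per_aggr_now, combined)
```

Note the parity overhead is identical either way (8 parity disks total); what changes is that every volume would stripe across 48 data spindles instead of 24.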
The array is soon to be taken out of production and will be available for a teardown/rebuild, so it would be good to know whether this setup is:
a) worthwhile doing
b) actually technically possible and if so, the best way to do this
c) a supported and/or recommended practice by NetApp (this isn't a big thing but good to know regardless)
Some other information that may be of use:
- The array is used only for VMs in a VMware environment.
- Each controller has 2 physical NICs.
- Current disk I/O is very low and usually 60-70% read, 70-80% random.
Any feedback on the above would be great, thanks!
2012-04-23 07:13 AM
Thanks for the reply. I am aware that two controllers cannot share an aggregate, and there is no additional fifth shelf.
I believe that in the current active/active setup, if a controller fails, the other controller can take over its aggregate/volumes so that everything remains accessible. What I'd like to do is keep that protection, but assign all the disks to one controller to form a single large aggregate; if that controller fails, the other controller can still take over the aggregate and ensure storage continuity. So technically the active/active setup remains, except one of the controllers just won't have much to do during normal operation. Is this possible?
2012-04-23 07:16 AM
Understood. Yes, this is possible. You need a root aggregate on the mostly-passive node, so 3-5 drives there: 3 for the root aggregate plus 2 spares (you could possibly go with fewer). Then reassign all the other disks to the other node. This would require zeroing the disks taken from that node, but you can assign disks asymmetrically like this. We have some customers who don't require the controller performance and do this purely for failover. I like leveraging both controllers, but if spindles are the bottleneck, I can see doing this.
2012-04-23 07:16 AM
No, that's not possible. The bare minimum is 2 disks for the root aggregate/volume, and that goes against all best practices (RAID4 without spares).
Why do you want to throw away half of the computing power you have?
2012-04-23 08:20 AM
Thanks for the responses guys, always happy to get constructive feedback. I would assign 3 disks to one controller as a RAID4 root aggregate to comply with NetApp best practices.
I look at this two ways:
1. I make use of both controllers, with 28 disks assigned to each. I get 15TB of capacity and decent disk I/O capability. 28 disks is not a lot for one of these controllers to manage, so both controllers sit around with minimal CPU activity or any sign of stress.
2. I assign 53 disks to one controller and 3 disks to the other. I render one controller virtually unused and keep it purely for HA purposes (still active/active, though). I still get (close to) 15TB of capacity but much greater disk I/O capability. A single controller is capable of managing 420 disks, so 53 should not push it to the point of becoming a bottleneck.
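The I/O side of the trade-off can be sketched with some rough arithmetic. Every numeric input here is an illustrative assumption of mine (the per-spindle IOPS figure, the spare count, and the RAID group split for the 53-disk pool), not data from this array:

```python
# Rough ceiling on the random IOPS a single volume could draw on
# under each option. All figures below are illustrative assumptions.

IOPS_PER_SPINDLE = 175          # assumed 15k FC drive, random I/O

# Option 1: two 28-disk aggregates, each 2 RAID-DP groups of 14
# -> 24 data spindles behind any one volume.
option1_data = 28 - 2 * 2

# Option 2: ~52 disks in one aggregate (56 minus a 3-disk root
# aggregate and, say, 1 spare), as 4 RAID-DP groups of 13
# -> 44 data spindles behind every volume.
option2_data = 52 - 4 * 2

print(option1_data * IOPS_PER_SPINDLE)  # per-volume ceiling, option 1
print(option2_data * IOPS_PER_SPINDLE)  # per-volume ceiling, option 2
```

The total spindle count in the array barely changes; the gain in option 2 is that any single volume's workload can stripe across the whole pool rather than half of it.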
If there were in excess of 10 shelves and the controllers were being moderately pushed by storage activity, then of course I would stay with the current setup (option 1 above). But I don't want to actively use two controllers just "because they are there" if there are no real gains to be had.
I do wonder if I'm missing some key piece of information as to why this setup wouldn't be feasible but at the moment I'm leaning toward option 2.