2010-11-03 05:33 AM
Just out of interest: on the FAS2020, the common issue that crops up is the available "usable storage capacity".
I have a number of recommendations for the FAS2020, depending on what the customer's requirements are.
Typically, if the customer has just the FAS2020 with 12 internal disks and no additional disk shelves, they want to maximize the capacity available from the controller(s).
Provided one controller can handle the workload (given the customer's performance requirements across the Fibre/Ethernet ports and controller CPU), I would recommend the following setup:
An Active/Passive configuration.
This maximizes the usable storage capacity and spindle performance from the internal disks.
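As a rough illustration, the active/passive layout above could look like this in Data ONTAP 7-mode (disk names, counts and hostnames are illustrative, not from a real system):

```shell
# Controller 1 owns 10 of the 12 internal disks: a RAID-DP
# aggregate plus one hot spare.
disk assign 0c.00.0 0c.00.1 0c.00.2 0c.00.3 0c.00.4 \
            0c.00.5 0c.00.6 0c.00.7 0c.00.8 0c.00.9 -o ctrl1
aggr create aggr0 -t raid_dp 9        # 9 disks in the aggregate, 1 left as spare

# Controller 2 keeps only a minimal 2-disk RAID-4 root aggregate,
# so as little capacity as possible sits idle on the passive head.
disk assign 0c.00.10 0c.00.11 -o ctrl2
aggr create aggr0 -t raid4 2          # run on controller 2
```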
Once an additional disk shelf has been added, I would recommend creating a new aggregate consisting of all the disks in the new shelf.
I would then migrate controller 2's root volume to the new aggregate.
Finally, destroy the RAID-4 aggregate to reclaim the internal disks for the internal aggregate on controller 1.
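A sketch of those three steps in 7-mode (aggregate/volume names, disk counts and sizes are illustrative; check the exact root-volume move procedure against the docs for your ONTAP release):

```shell
# On controller 2, after the new shelf is cabled and its disks assigned:
aggr create aggr1 -t raid_dp 14        # new aggregate from all the shelf disks
vol create vol0_new aggr1 160g         # new volume for the root (size illustrative)
ndmpcopy /vol/vol0 /vol/vol0_new       # copy the current root volume contents
vol options vol0_new root              # mark the new volume as root, then reboot

# After controller 2 has rebooted from the new root:
aggr offline aggr0
aggr destroy aggr0                     # frees the two internal RAID-4 disks
# Finally, reassign the freed disks to controller 1 and add them to its aggregate.
```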
I have posted the suggestion above in the hope it's useful to others. Any recommendations to improve it would be appreciated; otherwise, I hope it helps others get the best out of their storage device.
Thanks in advance,
2010-11-03 05:37 AM
This is the same advice I give our customers when they are "worried" about the capacity of the 2000 series. I also feel you need to stress the active/passive bit with them, but capacity over performance is the usual driving factor at this end of the market.
I believe there is also an option you can turn off on controller 2 so it stops raising error messages about having no hot spares.
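If I remember right, the option in question on 7-mode is raid.min_spare_count (worth double-checking for your ONTAP version):

```shell
# On controller 2 (the passive head with no spare disks):
options raid.min_spare_count 0    # stop warnings about having no hot spares
```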
2011-01-10 11:29 AM
Great information, guys. I'm fairly new to the storage arena and could use some recommendations. I'll be migrating data from an older EMC box to a FAS2020 HA with 12x 600GB 15K drives. The eventual load is roughly 1TB of VM storage and 2TB of CIFS shares (before any dedup, etc.). Your post seems to be the right way to go, leaving the second controller for failover. However, the VM and CIFS networks are on different VLANs and I'd like to have redundancy on the physical links. So I guess the question is:
A) Active/Passive and setup a vif with VLAN tagging for both VLANS (Cisco 3560 switch)?
B) Active/Active and VM's on one controller and CIFS on the other (obviously losing some disk space)?
Thanks in advance.
2011-01-10 01:00 PM
Both setups would be suitable; it may come down to the capacity requirement. Option A would provide higher usable capacity.
If capacity isn't too critical but bandwidth is, you may want to opt for option B to achieve higher throughput, i.e. roughly 2Gb/s total from controller 1 and another 2Gb/s from controller 2.
Overall it is a trade-off depending on your customer's requirements. If you intend to add disk shelves in the future and your performance needs don't exceed 2Gb/s of throughput, then I would personally go with option A, so you can add a shelf at a later date and bring the storage up in an Active/Active configuration with higher capacity across the board, utilising all the Ethernet ports.
Hope this helps
2011-01-10 01:01 PM
P.S. Don't forget you can still apply VIFs to the ports on both controllers to maintain physical port redundancy.
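For option A, the VIF-plus-tagged-VLANs setup might look roughly like this in 7-mode (VIF name, VLAN IDs and addresses are all illustrative):

```shell
vif create lacp vif0 -b ip e0a e0b                    # LACP vif over both onboard ports
vlan create vif0 10 20                                # tagged VLANs for VM and CIFS traffic
ifconfig vif0-10 192.168.10.5 netmask 255.255.255.0   # VM network
ifconfig vif0-20 192.168.20.5 netmask 255.255.255.0   # CIFS network
```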
2011-01-22 01:16 PM
Your help has been much appreciated, and I hope I can ask for a little more advice.
I've setup an Active/Passive setup with most of the disks on Controller 1.
e0a and e0b are vif lacp and I've created vif-a, vif-b VLANs on top.
I've setup Etherchannel on the Cisco side (3560G) and all is well. If I unplug one interface, there's no downtime. Awesome (and there is a similar setup on Controller 2 so it fails over).
However, it appears the links are not being aggregated: on the Cisco side, while I'm transferring a large amount of data, only the first port passes traffic; the second passes none.
Is this to be expected? Does the 2nd adapter only kick in when the 1st is maxed?
2011-01-23 12:46 AM
Yes, this is expected. The 3560 supports packet distribution by MAC or IP; in both cases, all traffic between the same pair of systems will always use the same physical interface (unless an interface fails).
On the NetApp side you could additionally set the distribution to round-robin (which is not recommended) or to port, which takes the TCP/UDP port into account. The latter may offer better distribution across aggregate members if the traffic is multithreaded.
Do not forget that load distribution for incoming traffic (from NetApp's point of view) is configured on the switch - there is nothing NetApp can do about it; and load distribution for outgoing traffic is configured on the NetApp - again, the switch does nothing here.
In general, load distribution is effective only with a large number of clients; any single client will normally run over a single physical interface.
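To illustrate, the hashing policy is set independently on each side; something like the following (the 3560 command is standard IOS; the -b port option depends on your ONTAP release, so verify before relying on it):

```shell
# On the Cisco 3560: hash on source+destination IP rather than MAC
#   Switch(config)# port-channel load-balance src-dst-ip

# On the NetApp (7-mode): recreate the vif with port-based distribution
vif create lacp vif0 -b port e0a e0b
```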
2011-01-27 09:43 AM
On top of the previous recommendations you can also apply IP aliasing: typically one IP alias per additional physical interface in the VIF, i.e. for a FAS2020 that would be one VIF IP plus one additional IP alias. This applies when the VIF load balancing is IP-based rather than MAC-based, of course :-D
This lets the system load-balance a bit better when serving data, because requests coming in from multiple hosts hash across the different IP pairings.
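For example, on a 2-port VIF you might add one alias alongside the primary address (address illustrative):

```shell
ifconfig vif0-10 alias 192.168.10.6 netmask 255.255.255.0   # second IP so the IP hash can spread load
```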
Does this help answer your question :-)
I hope it helps,