Network and Storage Protocols

Best configuration for a FAS2020A single enclosure with 12 disks

yoyong222

Hi,

I would like to ask about the best possible configuration for a FAS2020A that comes with 12 x 600GB SAS disks. The appliance will be used mainly for SQL Server databases and file serving. The intention is to run iSCSI for SQL, and NFS and CIFS for file serving to Linux and Windows clients.

I have a few questions on the best possible setup for a scenario such as this.

1) The NetApp active/active configuration is not a traditional dual-controller setup serving a single array, so I believe I need to split my disks into a minimum of two aggregates, one to host the root volume for each controller. How do I best split them? Is it better to give the first controller 9 disks (8 in RAID-DP + 1 spare) and have the second controller's aggregate use the remaining 3 disks? I have heard I can go as low as 2 disks if I use RAID4. Another option is to split the disks 6 per controller, but I am concerned that I would not maximize performance that way. My original intention, with a 6/6 split, is to serve iSCSI volumes from the first controller and NFS/CIFS from the second. I am also fine with hosting the iSCSI LUNs, CIFS and NFS all on the first controller, if that is a viable option to increase performance. Capacity-wise, we are aiming for 1.5TB to 2TB of total usable space.
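
For illustration, the lopsided layout I have in mind would look roughly like this in the 7-mode CLI; the aggregate names and raidsize are just placeholders on my part, and this assumes disk ownership is already assigned to each head:

    # Controller A: 8-disk RAID-DP aggregate (6 data + 2 parity), 1 disk left as hot spare
    ctrlA> aggr create aggr1 -t raid_dp -r 8 8
    # Controller B: minimal 2-disk RAID4 aggregate (1 data + 1 parity) for its root volume
    ctrlB> aggr create aggr1 -t raid4 2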

2) Also, what is the best way to configure the network interfaces for redundancy in this scenario? I can create a vif from both ports on each controller, which gives me one vif per controller. Will this mean that CIFS, NFS and iSCSI traffic will all use that one vif? If I go with a FAS2040A, which has four Ethernet ports per controller, what is the best way to split the NICs if I want to separate iSCSI from NFS and CIFS traffic?
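
For the FAS2020, I assume the single vif would be created roughly like this (port names e0a/e0b; the address is a placeholder):

    # One single-mode (active/standby) vif from the controller's two ports
    ctrl> vif create single vif0 e0a e0b
    ctrl> ifconfig vif0 10.0.0.11 netmask 255.255.255.0
    # All protocols (CIFS, NFS, iSCSI) would then share vif0's address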

Also, how does a CIFS/NFS failover occur with active/active controllers? If my CIFS share is on a qtree on controller 1, I can access it via \\controller1\share or via its IP address. When controller 1 fails, will controller 2 take over the IP (identity) of the failed controller, and does that include the hostname, if I am using controller 1's hostname to access the CIFS share?

The same question goes for NFS.
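
From what I have read so far, takeover seems to hinge on a partner declaration in each head's /etc/rc, something like the below, but please correct me if I have this wrong (addresses are placeholders):

    # /etc/rc on controller1
    ifconfig e0a 10.0.0.11 netmask 255.255.255.0 partner e0a
    # /etc/rc on controller2
    ifconfig e0a 10.0.0.12 netmask 255.255.255.0 partner e0a
    # On takeover the survivor also answers on the failed head's address
    # (and, for CIFS, registers its NetBIOS name)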

Sorry for all the questions, but I am used to a traditional dual-controller SAN serving a single array, such as the IBM DS series. The concept of NetApp's active/active controllers is a bit confusing at times.

Thanks,

Ron


radek_kubka

Hi Ron,

Re disk layout:

First, have a look at this neat summary from Andrew:

http://communities.netapp.com/message/6776#6776

Then you may do a more thorough investigation by reading this thread:

http://communities.netapp.com/message/20805#20805

Regards,

Radek

nigelg1965

Hi

Personally, I think it's best to balance things.

i.e. split it six disks each; with one spare and two parity disks per head, you should get around 1.25TB usable per head (the root vol can be shrunk to something like 40GB at most).
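
Shrinking it is a one-liner, something like the below; the safe minimum size depends on your ONTAP version, so check before you squeeze it:

    # Shrink the root flexvol to free disk space for data
    ctrl> vol size vol0 40g
    ctrl> df -h vol0    # confirm the new size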

Depending on what you're doing with each application, you may be best having one head do the CIFS/NFS and the other do the iSCSI.

Network config is going to depend on the capability/configuration of your switches, but for failover to work you'll probably need the network interfaces to be in the same subnet. We are putting FAS2040 pairs into our medium-sized sites, configured with two LACP vifs of two ports each out of the four.
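
On each 2040 head that looks roughly like this (port names will vary with your cabling):

    # Two 2-port LACP vifs per head, IP-based load balancing
    ctrl> vif create lacp vif_nas -b ip e0a e0b
    ctrl> vif create lacp vif_san -b ip e0c e0d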

Hope this helps

yoyong222

Hi Guys,

Thanks for the responses. Based on them, I am inclined to load one of the controllers, e.g. RAID-DP with 9 disks + 1 spare, and give the other controller a 2-disk RAID4 aggregate with no hot spare. I know I can manually reassign the spare disk between the two controllers if needed (roughly sketched below). For the first controller (RAID-DP, 9 disks + 1 spare), would you recommend 8 disks + 2 spares instead, to take advantage of Maintenance Center?
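
I assume the manual spare shuffle would be something like this (the disk name is a placeholder):

    # Release the spare from controller A and assign it to controller B
    ctrlA> options disk.auto_assign off
    ctrlA> disk assign 0a.11 -s unowned -f
    ctrlB> disk assign 0a.11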

Also, in the event of scaling, when additional shelves are added to the setup, can the root volumes be moved to a proper dedicated aggregate? Say, can the root volume of controller 2, currently on the 2-disk RAID4 aggregate, be moved to a different aggregate with a different RAID type, such as RAID-DP? How complicated is that procedure, and how much downtime does it incur?
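
From what I have found so far, the move would be something like the sequence below (volume and aggregate names are placeholders); does this match reality, and is the reboot the only downtime?

    # Create a new volume on the new aggregate, copy, flag as root, reboot
    ctrl> vol create vol0new aggr_new 160g
    ctrl> ndmpcopy /vol/vol0 /vol/vol0new    # ndmpd must be enabled
    ctrl> vol options vol0new root
    ctrl> reboot    # old vol0 can be destroyed afterwards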

Also, regarding the network setup: let's say my switches are LACP-capable; what is the best way to set up a FAS2020A dual controller with 4 ports in total? Shall I create one vif per controller? If I have two switches, can I connect controller A's etherchannel to switch 1 and controller B's etherchannel to switch 2, and still achieve failover if I put them on the same subnet? This configuration does mean, though, that I will be running NFS/CIFS/iSCSI all on a single vif per controller.
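
Something like this is what I have in mind for the 2020 pair (addresses are placeholders):

    # /etc/rc on controller A (LACP vif to switch 1)
    vif create lacp vif0 -b ip e0a e0b
    ifconfig vif0 10.0.0.11 netmask 255.255.255.0 partner vif0
    # /etc/rc on controller B (LACP vif to switch 2, same subnet)
    vif create lacp vif0 -b ip e0a e0b
    ifconfig vif0 10.0.0.12 netmask 255.255.255.0 partner vif0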

Same question, but let's say I use a FAS2040A. I can have one vif bundling all four ports per controller, or two vifs of two ports each per controller. Any recommendation on the cabling if I want to connect to two switches for redundancy and segregate CIFS/NFS from iSCSI? I am quite open to suggestions on this.
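
For the 2040 variant I am picturing something like this per head, though I am unsure how the two-switch cabling works out (names and addresses are placeholders):

    # Two 2-port LACP vifs per head: one for NAS, one for iSCSI
    vif create lacp vif_nas -b ip e0a e0b    # CIFS/NFS
    vif create lacp vif_san -b ip e0c e0d    # iSCSI
    ifconfig vif_nas 10.0.1.11 netmask 255.255.255.0 partner vif_nas
    ifconfig vif_san 10.0.2.11 netmask 255.255.255.0 partner vif_san
    # With plain (non-stacked) switches each LACP vif has to terminate on one
    # switch; spanning both switches needs a stack/MLAG-capable pair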

I am also interested in how the different protocols behave during failover. I read that for iSCSI the standby controller assumes the identity of the failed filer, so if my iSCSI target is controller A's IP and controller A fails, controller B assumes controller A's IP to serve the iSCSI LUN?
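
If that is right, I suppose I could verify it with a manual takeover during a maintenance window, something like:

    # Check HA state, then exercise a takeover and giveback
    ctrlB> cf status
    ctrlB> cf takeover    # controller B serves both identities
    ctrlB> cf giveback    # hand controller A's identity back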

Is there any difference for NFS and CIFS? If I am connected to controller A via \\controllera\share for CIFS and controller A fails, how does controller B emulate and serve the CIFS share on that path (does it emulate controller A's hostname?), or do I need to connect using IP addresses so that the standby controller can take over the failed controller's IP?

Thanks
