ONTAP Hardware

Initial configuration FAS2040

_NETWORX_

Hello,

I am new to NetApp products and recently purchased a FAS2040 with a DS4243 shelf. I have two controllers. The FAS2040 has twelve 1TB SATA drives and the DS4243 has twenty-four 450GB SAS drives. I have placed the units in my server rack and I am trying to learn NetApp. I have a few questions that I am hoping someone can help with so I can learn how to do this. My first question is how to cable the units. Reading through the manuals, I came up with the following:

|FAS2040|

(Connect one SAS cable to port 0d on Controller A)

(Connect one SAS cable to port 0d on Controller B)

|DS4243 with 4 PSUs|

(Connect the SAS cable from Controller A to the "square" SAS port on the top module of the DS4243)

(Connect the SAS cable from Controller B to the "square" SAS port on the bottom module of the DS4243)

Power up the two units and connect with the console cable to start software configuration.

Does that sound correct? Also, where do I connect the cables for ACP? I think the units came with four cables that look like Ethernet cables, but the plugs are metal.

Once I have the unit cabled and powered on, I will connect to it using the console cable. Is there a default aggregate, or do I need to create one to start? I would like to carve the SAS drives up into 1TB volumes, but at first just create two of those and add the rest later as needed. This will be for our vSphere 5 VM environment. As for the SATA drives, these will be used by Windows servers to store data, so I would like to carve them up into volumes of 500GB-1TB as needed. From what I have read, it appears that NetApp creates the RAID group for you once you define the volume; am I correct? How can I accomplish this correctly while keeping within best practices?

Thanks


6 REPLIES

jodey

Download the Universal SAS and ACP cabling guide from the NOW site:

https://now.netapp.com/NOW/knowledge/docs/hardware/hardware_index.shtml#Disk_Shelves
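The metal-plug Ethernet-style cables mentioned above are the ACP cables; the guide shows which shelf module ports they connect to. Once cabled, ACP can be enabled and checked from the console. A minimal sketch in 7-Mode syntax (shown as a rough example; verify the steps against the cabling guide for your platform):

```
netapp> options acp.enabled on    # enable the Alternate Control Path
netapp> storage show acp          # verify ACP connectivity to each shelf module
```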

Regarding the configuration: you will need an instance of Data ONTAP for each controller. Data ONTAP should come preinstalled at the current shipping version, and all you will have to do is tip into the console and run the setup command. Depending on how you want to run the config, you have a couple of options:

  1. Use the 2040 in an active/active configuration where both controllers are actively serving data and you balance the load across controllers.
  2. Build an active/passive type config where one controller owns the majority of the disks and the secondary controller is simply standing by in case the primary controller fails.

Keep in mind that each NetApp controller has ownership of the disks you assign to it (except in a failover scenario). Also, even though you might only use the second controller as a standby, it is still an active controller and is capable of serving data; you are only limited by the number of drives you assign to it.
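Disk ownership is visible and changeable from the console. A minimal sketch (7-Mode commands; a disk name like 0d.01.0 is just an example and will differ on your system):

```
netapp> disk show -n          # list disks that are not yet owned
netapp> disk assign 0d.01.0   # assign one specific disk to this controller
netapp> disk assign all       # or claim every unowned disk at once
```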

You have two RAID options: RAID-DP (recommended) and RAID4. If you are going to go with the active/passive type config, you may want to consider the RAID4 option for the rootvol (Data ONTAP OS) on the secondary controller. This will allow you to assign more drives to the single active side should you choose this configuration method.

If you plan to balance the load across controllers (ex. SAS drives on one controller and SATA on the other, etc...) then use RAID-DP for the rootvol on both controllers.
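For the active/passive layout, the standby head's root aggregate can be switched to RAID4 from the console to free up one parity disk. A sketch, assuming the factory root aggregate is named aggr0 (7-Mode syntax):

```
passive> aggr options aggr0 raidtype raid4   # drop from RAID-DP to RAID4, freeing one disk
passive> aggr status -r aggr0                # confirm the new RAID layout
```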

NetApp pools disks together into RAID groups, and these RAID groups make up aggregates. You create your volumes (for LUNs or file-level access) on top of the aggregates, and the volumes are what get presented to the host.
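That disks-to-RAID-groups-to-aggregates-to-volumes hierarchy maps directly onto a few console commands. A hedged sketch (7-Mode; the names aggr1 and vol_vm and the sizes are just examples, not a recommendation for this system):

```
netapp> aggr create aggr1 -t raid_dp -r 16 16           # 16 disks -> one 16-disk RAID-DP group
netapp> vol create vol_vm aggr1 1t                      # 1TB flexible volume on the aggregate
netapp> lun create -s 500g -t vmware /vol/vol_vm/lun0   # block LUN carved from the volume
```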

Hope this helps,

J

_NETWORX_

So if you had 12 1TB SATA drives in the 2040 and 24 450GB drives in the DS4243 how would you assign those? What is the best practice?

jodey

Personally this is what I would do since this is a 2040 (small config):

1. Controller 1 with all of the SAS drives assigned to it

2. Controller 2 with all the SATA drives assigned to it

Set both sides up the same way with RAID-DP and a single aggregate made up of 16-disk RAID groups, and let the data volumes and the root volume reside in the same aggregate since this is a small config.
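In console terms, that layout could look something like the following hedged sketch (7-Mode syntax; it assumes each head already has a small factory root aggregate named aggr0, and the exact disk counts depend on how many hot spares you keep):

```
# Controller 1 -- all 24 SAS disks, root vol stays in the single aggregate
c1> aggr options aggr0 raidsize 16   # grow in 16-disk RAID-DP groups
c1> aggr add aggr0 20                # add the remaining disks, keeping a spare

# Controller 2 -- all 12 SATA disks
c2> aggr options aggr0 raidsize 16
c2> aggr add aggr0 8
```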

_NETWORX_

Hi Jodey,

Thanks for the reply. Here is where I am:

I now have both the FAS2040 and the DS4243 powered up. I created a vif on the FAS2040 for e0a and e0b and created an EtherChannel port on my Cisco switch. I can access the FAS2040 through the web interface and System Manager. I then connected to the console of the other controller and went through the setup.

In System Manager I can only see the one controller, the first one, which has the SATA drives. I cannot see the 24 SAS drives on the DS4243. There is a root aggregate on the 2040 which has 3 drives and one spare; however, I have 12 SATA drives in the 2040.

How do I go about creating an aggregate and utilizing my SATA drives in the best way possible? Should the root aggregate be separate, and what size should it be? I want to use RAID-DP. Would 11 drives and 1 spare be a good configuration for the SATA drives? How do I get the SAS drives configured so that I can see them in System Manager and configure the shelf? Any help would be great, thanks again.

dimitrik

Hi Robert,

I'd keep the SATA drives in one aggregate on one head, and the 24x SAS in the other head.

The RG size depends on your future plans... read here:

Maybe something like:

Head 1:

1 RAID group (2 parity + 9 data), 1 spare

Head 2:

2 RAID groups:

RG of 12 (10 data + 2 parity)

RG of 11 (9 data + 2 parity)

1 spare
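A hedged sketch of console commands that would produce roughly that layout (7-Mode syntax; the aggregate names are examples, and ONTAP splits the disks into RAID groups automatically based on the raidsize you set):

```
# Head 1: 12x 1TB SATA -> one 11-disk RAID-DP group (2 parity + 9 data), 1 spare
head1> aggr create aggr_sata -t raid_dp -r 11 11

# Head 2: 24x 450GB SAS -> raidsize 12 yields RGs of 12 and 11, 1 spare
head2> aggr create aggr_sas -t raid_dp -r 12 23
```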

Thx

D

_NETWORX_

Thanks for the reply D. Please see below
