ONTAP Discussions

FAS2240-4 Best Practices

CGONZALES78

I have a FAS2240-4 system with two controllers in the head, 12x2TB drives in the head unit, and one shelf of 24x600GB drives. My question is how I should set up the RAID groups to achieve HA while keeping the most usable space. Does each controller need to control a RAID group on each shelf to maintain HA? Can the controllers be set up in an active/passive configuration and still maintain HA? If each controller needs to have a RAID group on each shelf, will I lose three drives of usable space per shelf: one for a spare and two for parity?

9 REPLIES

paulstringfellow

Hi Chris,

So this is a bit of a trade-off between capacity and performance…

In “smaller” systems this is normally the case. So some things to consider…

Your 2240 HA box doesn't do active/passive: both controllers are active, and because of this both will need disks assigned to them to allow them to boot. So it comes down to how much you want to assign to each and how you see them working…

So as long as each has at least one disk to boot from you're OK. In reality this will be at least two disks (RAID 4 as a minimum) and probably three (either RAID-DP, or RAID 4 with a spare). In terms of RAID groups, that's not the deciding factor; the number of RAID groups is normally based on whether you want performance or resilience.
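If you do need to move disks between the heads, ownership can be changed from the CLI. A minimal 7-Mode sketch, assuming software disk ownership; the disk ID and controller name below are hypothetical:

    # list disks that do not yet have an owner
    disk show -n

    # assign a disk to a controller (example names only)
    disk assign 0a.00.0 -o filerA

    # confirm ownership afterwards
    disk show -v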

There are some limits: 16 disks is the maximum RG size for SATA, and 24 I think it is for SAS. Normally you can go with the default RG size, but bear in mind with SAS that if you assign all the disks to one controller with an RG size smaller than 24, it will create multiple RGs, which will, as you say, use extra parity drives. So if you set an RG size of 12 and assigned all the disks to one controller, all in one aggregate, you'd end up with 4 parity drives in that single aggregate…
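You can check and raise the RG size per aggregate before adding disks. A hedged example; the aggregate name here is made up:

    # show the aggregate's options, including the current raidsize
    aggr status aggr_sas -v

    # raise the RG size to 24 so a full SAS shelf fits in one RAID group
    aggr options aggr_sas raidsize 24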

So, as I say, it comes back to what you're doing. If you want to lay out how you think you're going to use it, I'll happily give you some ideas of at least how I'd do it.

Regards

Paul.

CGONZALES78

I'm going to use this system as shared storage to run my ESXi VMs from. I understand that each controller has to have drives assigned to it so the controllers can boot. Do the controllers need to have drives assigned (a RAID set) on each disk shelf? The 600GB drives are SAS and the 2TB are SATA, so can I create two or three RAID sets configured as follows:

Head unit with 12x2TB drives: one controller assigned one RAID group that includes all 12 SATA drives. In that config I would lose three 2TB drives: two for parity and one hot spare.

Shelf with 24x600GB SAS drives: one controller assigned one RAID group that includes all 24 SAS drives. In that config I would lose three 600GB drives: two for parity and one hot spare.

OR shelf with 24x600GB SAS drives: one controller assigned two RAID groups of 12 SAS drives each. In that config I would lose six 600GB drives: four for parity and two hot spares.

paulstringfellow

Hi Chris,

So you are spot on…

The only question is whether you want to balance the disk use across the controllers, and that will depend on the size and requirements of your ESX environment.

But the idea that one controller does SAS and one does SATA is fine and pretty common in my experience. As I say, the only question is whether you'd balance some disks onto each controller; if you do, you'll lose the disks you suggested.

Just be aware that if you are going to run all the disks on one controller in one aggregate, be wary of the RG size: if the RG size is only 12, for example, you'll end up with two RGs in the one aggregate, giving you more parity drives than you expect.

So just think about aggregate layouts as well. For me, a single aggregate would be the option (one for SAS and one for SATA). Again, be aware of limits: 16TB on 32-bit aggregates, so 9 usable 2TB drives should just about fit in the one aggregate; otherwise look at 64-bit aggregates, for SATA particularly. Just be aware of the performance implications of that…
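For reference, the format is chosen when the aggregate is created. A hedged 7-Mode sketch; the name and disk counts are illustrative only:

    # create a 64-bit RAID-DP aggregate from 11 of the 2TB SATA drives,
    # leaving the twelfth as a hot spare (-B 64 selects the 64-bit format)
    aggr create aggr_sata -B 64 -t raid_dp -r 11 11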

But your ideas seem spot on. It's just about making sure it's the right fit for what you want.

CGONZALES78

So if I set the system up this way, will I lose HA? Is there any performance hit from running 64-bit aggregates, and what are the space limits for 64-bit? We plan to run all of the VMs from the 600GB shelf, so if 32-bit is better for performance, can I use a 32-bit aggregate for the 24x600GB drives and 64-bit for the 12x2TB drives?

Also, can I set up the HA pairs in System Manager or does that have to be done from the console? I have the controllers set up and can access them separately from System Manager.

Also, since the controllers are active/active, will I have to create the NFS shares on each controller? Basically, will they show up as two separate NFS servers? So when I configure the ESXi servers, to access the 600GB drives I will go to one controller, and for the 2TB drives I will go to the other controller?

paulstringfellow

Hi Chris,

Mathieu gave some good advice there, but just to confirm your points below…

You certainly don't lose HA. HA is enabled through System Manager, and all it really means is that should one controller fail, the other one takes over its personality and workload: if controller 1 fails, controller 2 presents itself as controller 1 and presents all of its LUNs and shares. You can toggle this feature on and off on the fly, as it only has an effect in the event of a controller failure.

So, as I say, it can be done in System Manager and from the CLI as well…
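From the console, the relevant 7-Mode commands look like this (a sketch; run on either node of the pair):

    cf status     # show whether controller failover is enabled
    cf enable     # turn failover (HA) on
    cf disable    # turn it off again
    cf takeover   # manually take over the partner's workload
    cf giveback   # hand the partner's resources back afterwards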

So the 64-bit aggregate question is a good one. 24 x 600GB drives is only about 14TB raw anyway, so you can easily put them all in one aggregate. The only thing to think about is the performance impact, with more memory consumed and more processing required for 64-bit aggregates. I suppose the rule I've always used is slightly different to Mathieu's: I only use 64-bit if I'm going to need it, i.e. big aggregates and big volumes; if not, I stick with 32-bit for performance. However, bigger aggregates do mean more spindles.

The other thing to think about is whether you are going to SnapMirror: 32-bit aggregates currently only mirror to 32-bit aggregates. So if your plan is lots of SATA in 64-bit aggregates at DR, and you are going to use that, then you will need 64-bit at both ends. There may be plans to allow mixed mirroring in a future version of ONTAP, if it's not there already…
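For what a volume SnapMirror relationship looks like in 7-Mode, a hedged sketch; the filer and volume names are made up:

    # on the destination: create and restrict the target volume
    vol create vm_sas_mirror aggr_dr 2t
    vol restrict vm_sas_mirror

    # baseline transfer, run from the destination filer
    snapmirror initialize -S prodfiler:vm_sas drfiler:vm_sas_mirror

    # /etc/snapmirror.conf on the destination: update nightly at 22:00
    # (fields: source, destination, arguments, then minute hour dom dow)
    prodfiler:vm_sas drfiler:vm_sas_mirror - 0 22 * *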

Finally, yes, in an HA environment the two controllers really are independent; they only interact in the event of a controller failure. NetApp Cluster-Mode (which is a whole different thing) would be where controllers scale out and pool together as one system. So in your case, they will be different NFS servers.
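So each head exports its own datastores. A rough sketch, with hypothetical ESXi host and volume names:

    # on the SAS controller: export the VM volume read-write to the
    # ESXi hosts, with root access so vSphere can manage the files
    exportfs -p rw=esx1:esx2,root=esx1:esx2 /vol/vm_sas

Repeat on the SATA controller for its own volumes, then mount each export in vSphere as a separate NFS datastore, one per controller IP.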

Hope all that helps…

mathieu_dewavrin

Hello Chris,

Like Paul said, I would put all the SATA on one head and the SAS on the other head. Concerning the RAID configuration, it depends whether you plan to add more disks in the future. If so, I would keep two RAID groups, as it will be easier to balance the RAID groups if, for example, you add a half-full SAS shelf later on.

You won't lose HA, and there is a performance hit with 64-bit aggregates (if you have lots of small files or a random workload such as OLTP). But especially with SATA you will have more disks in the aggregate, so you could also gain performance. With a 64-bit aggregate you also get the compression feature, which is an additional storage efficiency gain. I would go 64-bit all the way, especially on the FAS2240, which has more memory than the previous 2xxx platforms.

The maximum 64-bit aggregate size for the 2240 is 60 TiB (that's 126 x 600GB SAS drives, or up to 43 x 2TB drives).

Normally the cluster is set up at the factory. You can check it through the HA configuration in System Manager or from the console ( cf status ).

You will have to configure NFS datastores on each head as they are two different NFS servers.

There is a free tool from NetApp called the Virtual Storage Console that (among other things) integrates with vCenter to check ESXi configs, apply best practices on the host and create and manage the datastores directly from the vCenter Client.

http://support.netapp.com/NOW/download/software/vsc_win/4.0/

CGONZALES78

I think I understand the 64-bit vs 32-bit performance issues, but I have a few more questions.

What is the difference between RAID groups and aggregates? Do aggregates have any bearing on parity drives, or is that only RAID groups?

Does the aggregate bit level affect the number of drives you can have in a RAID group?

Can I have more drives in a single RAID group if I go with 64-bit?

Can I have multiple RAID groups in an aggregate that spans disk shelves? So if I add another shelf of 600GB SAS drives and create another RAID set, can my aggregate expand to include those drives?

We are only using these devices as shared storage to run VMware VMs. Taking that into account, which bit level would you recommend?

DR questions.

Thanks for mentioning DR. The NetApp we have been talking about is going to be our DR colo NetApp, but also production. We have another NetApp at our main office that we will run VMware VMs from. We want to be able to replicate the VMs back and forth between our main office and our colo site for DR. We will also be bringing up additional NetApp devices that we will want to DR to the colo.

Taking that into consideration, our initial thought was to have the SAS drives run the VMs and use the SATA drives for snapshots, then replicate the snapshots to the colo.

Would we be able to boot the VMs at the colo if we design it this way? Or do we have to replicate the live data on the SAS drives?

So taking that into consideration, would you go with 32-bit or 64-bit aggregates?

Thanks

Chris

paulstringfellow

Hi Chris,

OK, I'll try to answer these one at a time…!

Aggregates are your storage pool! Basically, you put disks together into an aggregate and then create your volumes in that aggregate, so every volume you create is spread across the whole aggregate, which helps with performance: even the smallest volume is spread across many spindles. Aggregates are built up of RGs, so in many cases an aggregate is made of a single RAID group; however, there are rules around how big RAID groups can be. In your case, a single aggregate made up of a single 24-disk SAS RG, plus a single SATA aggregate, should be fine. The decision on RG sizes is the balance of performance versus resilience…
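To make that concrete, a hedged 7-Mode sketch of building the pool and carving a volume out of it; the names and sizes are illustrative only:

    # one 64-bit RAID-DP aggregate from 23 SAS drives (one kept as a
    # spare), with raidsize 23 so it stays a single RAID group
    aggr create aggr_sas -B 64 -t raid_dp -r 23 23

    # a 2TB flexible volume inside it, to be used as an NFS datastore
    vol create vm_datastore1 aggr_sas 2t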

The last thing to consider is that every time you build an aggregate you decide whether it's RAID 4 or RAID-DP, and this dictates how many parity drives you create. Every aggregate created will have parity drives in it, so if you created three aggregates from your 24 disks, at least six of them would be parity drives, and that's before you take RG size into account…

So a 24-disk aggregate with a 24-disk RAID group size would give you one aggregate with 22 data disks and 2 parity drives, whereas a 24-disk aggregate with a 12-disk RG size would have 20 data drives and 4 parity drives. Hope that makes some sense…
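In command form the two layouts differ only in the -r (raidsize) flag; a sketch with a made-up aggregate name:

    # one 24-disk RAID group: 22 data + 2 parity
    aggr create aggr_sas -t raid_dp -r 24 24

    # two 12-disk RAID groups: 20 data + 4 parity
    aggr create aggr_sas -t raid_dp -r 12 24

    # verify the resulting RAID layout
    aggr status aggr_sas -r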

Does the aggregate bit level affect the number of disks in an RG? No, not that I'm aware of, and it doesn't affect the RG size either. RG size limits are based on disk type rather than aggregate type: SATA disks are a max of 16, I think, while SAS is 24…

So yes, an aggregate can have multiple RGs in it. If you had the current shelf as one RG, you could add another shelf as another RG and add it to the aggregate; you'd just have to think about the size of the aggregate (16TB on 32-bit, 60TB on 64-bit on a 2240). But yes, in theory and in practice this is fine. This is also why you try to match RG sizes to shelf sizes as best you can: it stops RGs spreading oddly across disk shelves, which can have a performance hit…
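Growing the aggregate with a new shelf would look roughly like this; a hedged one-liner, where -g new forces the added disks into a fresh RAID group:

    # add 24 new disks to the existing aggregate as their own RAID group
    aggr add aggr_sas -g new 24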

Again, 64-bit is more performance intensive, but you can balance this with the number of drives in an aggregate: more disk spindles can override the impact of the heavier memory requirement at the controller level. It all needs sizing properly if you are really concerned about it… not a job for the forum, really!

OK, as for DR…

The thing to realise is that the snapshots you take are volume-level, so if the volume lives on SAS, its snapshots reside within the same volume, hence on the same disks. SAS disk snapshots remain on the same SAS disks; you can't snapshot them to a different aggregate, SATA or otherwise…

If you want to do that, then SnapVault is your answer. It's the same underlying technology, but it operates slightly differently, so think through your requirement…
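For reference, a SnapVault relationship from a SAS qtree to a SATA volume would be along these lines. This is only a sketch; the names and the schedule syntax are illustrative:

    # on the secondary (SATA) side: baseline the backup relationship
    snapvault start -S prodfiler:/vol/vm_sas/qt_vms /vol/sv_backup/qt_vms

    # keep 7 daily archive snapshots on the secondary, taken at midnight
    snapvault snap sched -x sv_backup sv_daily 7@0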

You can of course replicate your data from production to the DR site, no problem at all, using SnapMirror (it works the same way with either snapshots or vaults), and yes, you will be able to mount the volumes at the DR site should you have a problem. However, there is work around your ESX design to make this happen, and consideration of things like Site Recovery Manager for managing the VM DR process.

The DR build is more complex, as things like recovery time and recovery point, as well as a whole host of other things, need to be considered, and it's difficult to advise on in the forum…

If you want DR rather than backup, then consider SnapMirror of the SAS volumes to SAS or SATA at DR, as SnapMirror is really the NetApp DR technology. If you want to back data up from production SAS to production SATA, then consider SnapVault. You can of course do a hybrid of the two, as well as consider technology such as Syncsort to bolt onto your filer environment to give you a whole bunch of different options…

I'd be tempted to speak to your NetApp partner or rep a bit here; in the forums it's difficult to cover all of the design considerations you have, as you may need to share more detail than you'd like to put on here…

Regards

Paul.

BBALLENGER

Chris,

Here are some things to keep in mind. Anyone else, please correct me if I am mistaken.

1) The easiest way to describe an aggregate is: a group of disks grouped together for performance AND capacity. I find this is the easiest way to explain it to my peers.

2) Each of your controllers will require a "hot spare", so that will more than likely bring the number of disks you can allocate to an aggregate to an odd number.

3) Best practice for RAID-DP pools is for them to have an even number of disks (4, 6, 8, etc.), so simply assign two hot spares from the get-go. You can confirm the spares with the command below.
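A minimal check, run on each head:

    # list this controller's spare disks
    aggr status -s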

Thanks,

Brett
