ONTAP Hardware

Initial FAS2040 Setup

DOYONKELLY

Hello,

Just received our FAS2040HA with a DS4243 in the mail, and I'm in the process of the initial configuration.  I have the filer racked, powered, and cabled, and am beginning to configure aggregates, networking, and volumes.  However, I have a few questions about how best to configure this, and about what hardware/software failures the 2040 can withstand, that I'm hoping someone can help with:

  • Disk configuration - My goal is to maximize storage capacity.  Each controller has 4 disks pre-assigned for aggr0 and vol0.  There are 16 unassigned disks.  My plan is to assign 8 disks to each controller and add 6 of those to aggr0, so each controller would have a 10-disk RAID-DP aggregate plus 2 spares (rough command sketch after this list).  Is there any taboo against having vol0 in your primary production aggregate?  Any other thoughts?
  • HA Capabilities - I noticed that the controllers only have one SAS port each, and I read that the cluster interconnect actually runs through internal hardware in the chassis.  So I'm assuming that means I can survive a software failure on one controller, but I couldn't physically pull a controller out of the chassis and still have a connection to the disk shelf.  Is that correct?  So, in a nutshell, I have failover if one controller loses network connectivity or has a software failure, but what else?  What happens if I lose a disk-shelf controller?
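Here's roughly what I'm planning to run for the disk piece, in case it helps to see it spelled out (7-Mode commands; "filerA"/"filerB" are placeholders for my two heads, and I'm assuming the 16 disks show up as unowned so that a count-based "disk assign" run locally on each head is the right approach - please correct me if the syntax is off):

    filerA> disk show -n         (list the 16 unowned disks)
    filerA> disk assign -n 8     (assign 8 of them to this head)
    filerB> disk assign -n 8     (assign the remaining 8 to the other head)
    filerA> aggr add aggr0 6     (add 6 of the new disks to aggr0 for a 10-disk aggregate)
    filerA> aggr status -s       (confirm the 2 hot spares are left over)

and then the same "aggr add" / "aggr status" on the second head.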

Any help here would be greatly appreciated!

Kelly

5 REPLIES

ajeffrey

Hi Kelly,

IMO there is a taboo against having a large aggr0/vol0 root volume that contains user data.  Essentially you want to avoid this, especially with very large aggregates, because if there is ever a problem with that aggregate/volume you will not be able to get your controller up quickly, resulting in a longer-than-necessary outage for your users.  With a small root aggr/vol you keep that part simple and can get the controller back up, so that any WAFL checking (wafliron, etc.) runs only against the affected objects while the rest of your data stays available.  In a small environment with a relatively limited expectation for growth you might be able to get away with it, but understand the downside.  The upside, of course, is fewer disks allocated to parity and similar overhead.  You ultimately have to decide based on your needs and environment.  If others feel I am off base, please chime in.
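If you want to sanity-check how small the root volume really is on your heads, something like this will show it (7-Mode commands; vol0 is the default root volume name, adjust if yours differs):

    filerA> vol status vol0      (shows the volume's state and its containing aggregate)
    filerA> df -h /vol/vol0      (shows how much space vol0 actually uses)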

Thx

Jeff

columbus_admin

Hi Kelly,

   - Disk Configuration:

     Having managed over 600 filers for a large enterprise, we have never created an aggregate solely for vol0.  Losing at least 3 disks per head (and that is assuming no spare) to contain a volume which, even when logging is turned way up, rarely hits 100G is wasteful.  And you lose the spindle benefits if you reduce the data aggregate by all those disks.  If you didn't build it the way you state (i.e., you created a separate root aggregate on each head instead), you would lose 4 disks to parity on each head alone: two RAID-DP aggregates per head means two RAID groups at 2 parity disks each.
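If you want to see that parity overhead for yourself, "sysconfig -r" lists every disk by role per aggregate - with one RAID-DP aggregate per head you should see exactly one parity and one dparity disk per head; a second aggregate per head would double that (7-Mode command, works the same on either head):

    filerA> sysconfig -r     (shows each aggregate's RAID groups: data, parity, dparity, and spares)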

  - HA capabilities

     This is rough, but yes, the limitation of one port is a problem.  Now, you could create an active/passive configuration by setting up both heads but only running storage from one at a time.  In this case, you would create one large aggregate (or two similarly sized aggregates) on head A with all the disks, except those needed for the root vol on head B.  Configure the cluster setup for both heads, BUT cable the primary IOM connections from one head and the secondary from the other.  The drawback here is that all processing and load runs off one head, but you have greater HA.
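A rough sketch of what the disk-ownership side of that would look like (7-Mode commands; "filerA" is a placeholder, head B keeps only its 4 pre-assigned root disks, and I'm assuming your drive type and Data ONTAP version allow a RAID group that size - the cabling itself is physical and not shown):

    filerA> disk assign -n 16              (head A takes all 16 unowned disks)
    filerA> aggr options aggr0 raidsize 18 (let the existing RAID group grow large enough)
    filerA> aggr add aggr0 14              (one large aggregate on head A, keeping 2 spares)
    filerA> cf status                      (confirm the failover pair is enabled before relying on it)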

For your HA needs, you are stuck between two not so great options on the 2040.  If your workload on the 2040 is low, then the pseudo active/passive config will work.  But if you need the IOPS from both heads, you are stuck with a reduced HA setup.

- Scott

DOYONKELLY

Thank you both for the answers.

Scott - I'm a little confused about the cabling options for the HA that you're referring to.  I don't understand how HA is better with the Active/Passive configuration than with Active/Active, I guess because I don't entirely understand where the points of failure are with the 2040HA.  Each head controller can see all the disks in the shelf, even though each head is only connected to one disk-shelf controller.  As I understand it, disk access between the controllers is provided by the filer itself, so if I physically pull one of the heads from the filer, the remaining head will not be able to see the other head's disks.  The only way this wouldn't be true is if the shelf controllers were truly redundant (they may be; I couldn't find information about this).  So my understanding is that if my Active/Active configuration has a software failure, the cluster failover will initiate a takeover and all will continue working.  But any other hardware failure will result in downtime regardless of whether I have an Active/Active or Active/Passive configuration.

For the active/passive setup: each of my controllers has one SAS port, and my disk shelf has two controllers, each with two SAS ports.  How would I configure a primary channel and a secondary channel?  Do I link the two disk controllers together?

Actually, since this is a non-prod system right now, I might just get the networking configured and start pulling hardware to see what happens.

Thanks,

Kelly

columbus_admin

Hi Kelly,

     Thanks for the diagram.  The way you had described it, it sounded like you only had a single IOM module in each shelf.  Assumptions are a bad thing!  You are set up in the best HA configuration you can have without more ports.  With what you have posted, the only way you would lose access to any data would be if one controller failed and then the other head also lost its SAS port.  This would be highly unlikely to happen.

Your setup is truly redundant because the IOM controllers in the shelf can pass data between modules.  As you state, disks are owned by each head, but during a failover the entire personality of the failed head is passed over to the functioning one.  Controller A actually becomes controller A/controller B.  If you were to remove a head, the partner can still take over its disks and run with them, but that is a whole different discussion!

You don't need to pull hardware to test: simply set up your test volumes on both controllers, with your shares/exports.  Then from one controller type "cf takeover", watch the process, and ensure the takeover finishes.  Then check each of your shares/exports to make sure they stay online.  Watch the console on the other head (because you won't have any other connectivity) and ensure it boots up to "Waiting for giveback...".  Once you know the state of everything, then on the partner that is still running, type "cf giveback".  This resets the cluster back to normal operations.  I would do the same from the other controller now to ensure it works both ways.
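Putting that into commands, it looks like this ("filerA"/"filerB" are placeholders for your two heads; run these from the console sessions):

    filerA> cf status               (should report the cluster is enabled and the partner is up)
    filerA> cf takeover             (filerA takes over filerB's identity and serves its data)
    filerA(takeover)> cf status     (confirm the takeover completed)
    filerA(takeover)> cf giveback   (once filerB's console shows "Waiting for giveback", hand it back)

Then repeat the same sequence from filerB.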

At this point you can start pulling cables and hardware to test.

-  Scott

DOYONKELLY

Scott,

Thank you for the clarification.  I feel much better about the setup now.  I'll perform testing as you specified and let you know what I find out.

Kelly
