ONTAP Hardware

FAS2600 config: active/active vs. active/passive pros and cons?

nhwanderer
13,124 Views

I'm getting a VAR to help me set up our new FAS2620, but I want to leverage the knowledge of this forum to get a head start and sanity-check the config. We'll be running ONTAP 9.2 on this system. We're transitioning from EMC, and the last time I used a NetApp was in 2011, so my knowledge is quite out of date.

 

Our FAS2620 is set up with 4x 960GB SSDs and 20x 4TB NL-SAS drives. Our needs are mostly capacity (file servers) rather than transactional (databases), and at this stage the system will almost exclusively share out to our VMware hosts via NFS.

 

I intend to set up the SSDs as a RAID 4 Flash Pool with 1 spare, giving about 2TB of usable flash capacity to cache the slow NL-SAS disks, which will be RAID-DP.
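
Roughly, what I have in mind from the clustershell is something like the sketch below. The names are placeholders, and I'll verify the exact parameters (particularly for the SSD cache tier) with the VAR:

# data aggregate on the NL-SAS drives (which ONTAP reports as FSAS), RAID-DP
::> storage aggregate create -aggregate aggr_data -node cluster1-01 -diskcount 18 -disktype FSAS -raidtype raid_dp
# convert it to a Flash Pool and add the SSDs as a RAID 4 cache tier
::> storage aggregate modify -aggregate aggr_data -hybrid-enabled true
::> storage aggregate add-disks -aggregate aggr_data -disktype SSD -diskcount 3 -raidtype raid4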

 

The question I'm struggling with is active/active vs. active/passive for the controllers. With ADP, if I put all the NL-SAS disks together in one aggregate in active/passive, I can get some more IOPS plus more usable space. However, I'd like to understand what I'm giving up by doing that.

 

In active/passive, from what I've read, the passive controller sits there and does basically nothing until the active one fails, or until we fail over for an upgrade.

 

My primary question: if I go active/active, even if one controller has the absolute minimum-size aggregate assigned to it, can I somehow leverage the compute power of that minimally provisioned controller in cluster mode to access data on the much larger aggregate owned by the other controller? If so, how? By putting an SVM on the minimally provisioned controller and having it access a volume on the big aggregate owned by the other controller? Or am I completely off base here?

 

Thank you!

1 ACCEPTED SOLUTION

AlexDawson
13,083 Views

Hi there!

 

Your last paragraph picks up on the challenge with active/passive, especially on the smaller systems we have, but there are a couple of ways of looking at it.

 

With active/active, you shouldn't let the CPU usage of either controller exceed 50%, or else a failover will result in degraded performance. With active/passive, you can push the active one as high as you want. So effectively, by going active/active, you get the "same" amount of CPU capacity across the HA pair, but double the number of processor cores.

 

The downside, especially on systems with small disk counts, is that each controller needs its own aggregate, and if all you have is 4TB SATA drives, the drives you need for spares and RAID-DP parity represent a significant amount (and percentage) of your raw capacity. Splitting into two aggregates also means forgoing backend IOPS.

 

So, in general, for your environment, I would suggest active/passive, to get the greatest capacity efficiency and backend IOPS.

 

Hope this helps!

13 REPLIES

nhwanderer
12,947 Views

Thank you! That's very helpful guidance. I only expect to support a maximum of about 150 users with this system, and with the much higher processor count and memory of the FAS2600 systems compared with the FAS2500 series, I'm hopeful active/passive will do the job. We have enough Flash Pool (2TB) to hold the entirety of our database data (only a few hundred gigs there), so once it warms up, IOPS for that shouldn't be a problem.

 

I'm excited to get this rig set up. I'll be using Veeam 9.5 along with the FAS, which should work really nicely together.

 

I've heard a rumor that in 9.2 Flash Cache works in conjunction with Flash Pool, rather than being replaced by it. Is that the case?

nhwanderer
12,930 Views

To answer my own question: this podcast confirms that Flash Cache and Flash Pool work together in ONTAP 9, which is just tremendous news.

AlexDawson
12,839 Views

Good stuff! Glad you found the reference.

ruffwise
11,288 Views

Hello, 

Sorry to barge in on this post, but can someone please explain to me how to configure active/passive?

We just got a FAS2620 with 4x 960GB SSDs and 8x 2TB drives, along with a DS212C disk shelf populated with 12x 2TB drives.

I want to get more usable space by going active/passive. I would appreciate it if you could tell me the commands I would use to configure active/passive, i.e. make one controller active while the other is passive.

 

Thank you

Tas
11,275 Views

Active/Active versus Active/Passive is a logical configuration rather than a physical one.

 

With disk ownership, you assign all of your available disks to one controller. The other controller still sees them and has access.

You then create your aggregate on the disk-owning node. The only aggregate on the non-owning node is its root aggregate (mroot/aggr0).
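
In command terms, it is just ownership plus a single data aggregate. A minimal sketch with hypothetical node and aggregate names (with ADP you would be assigning data partitions rather than whole disks):

# assign all unowned disks to the node that will be "active"
::> storage disk assign -all true -node cluster1-01
# build one large RAID-DP aggregate on that node
::> storage aggregate create -aggregate aggr1 -node cluster1-01 -diskcount 18 -raidtype raid_dp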

 

I think this is useful in a light-write environment with a limited number of disks. It allows you to create a single larger aggregate; all of the disks are dedicated to one node. Of course, you need at minimum three disks per node for aggr0. If you partition your root disks, three will be assigned to one node and three to the other, which kind of defeats the purpose of having a larger aggregate.

 

Unless you are dealing with virtual disk images for a virtual environment, I would recommend you look at FlexGroups. A FlexGroup creates constituent volumes, in other words multiple volume buckets across both nodes, and presents them via NFS or CIFS as one single volume. This way, you simply split your disks in half, and although you will have two aggregates, one per node, your FlexGroup will be spread across both aggregates and both nodes. Better utilization and possibly faster performance.
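
A FlexGroup is provisioned with the regular volume create command. A minimal sketch with hypothetical names, assuming one aggregate per node:

# 8 constituent volumes total, 4 per aggregate, presented as one volume at /fg_data
::> volume create -vserver svm1 -volume fg_data -aggr-list aggr1_node1,aggr1_node2 -aggr-list-multiplier 4 -size 20TB -junction-path /fg_data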

ruffwise
11,238 Views

Thank you for the reply. Now I understand it better.

 

Regards,

 

aborzenkov
11,227 Views

@Tas wrote:

Of course, you need at minimum three disks per node for aggr0.


FAS26xx uses ADP by default; there are no dedicated root disks.
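
You can confirm this from the clustershell; root-data partitioned drives report a container type of "shared":

::> storage disk show -fields container-type
# partitioned drives show "shared" rather than "spare" or "aggregate"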

ruffwise
11,183 Views

Yes, ADP is enabled by default, and there are no dedicated root disks.

I have another question, please. Four disks are in use by ADP, and there is a total of 6.42TB of free capacity on the disks used by ADP on each node. My question is: how can I make use of this free space?

I cannot create an aggregate from the free space of the disks used by ADP. When I try, I get the error "There are not enough spare disks of the selected disk type to create an aggregate for RAID-DP. Minimum required: 5".


@aborzenkov wrote:

@Tas wrote:

Of course, you need at minimum three disks per node for aggr0.


FAS26xx uses ADP by default; there are no dedicated root disks.


 

Tas
10,093 Views

That is an easy thing to do, but also one of the most obscure.

 

First, create your aggregate and set the raid_type and raid_group size to what you want them to be.

 

Add the partitions assigned to that node to the aggregate, and finish creation.

Once done, use Add Disks on the aggregate, select the non-partitioned disks, and select Create a New RAID Group.

 

So your first raid_group will consist of the slightly smaller data partitions, and your second raid_group of the whole disks. But because they are separate raid_groups, ONTAP will handle it just fine.

What you don't want to do is add a mix of the short partitions and the full disks in the same raid_group, because then ONTAP will only use the short size on all drives, and you will lose space.
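
As a rough clustershell sketch of the same procedure (aggregate name and disk counts are hypothetical; match them to your own layout):

# raid group 0: the data partitions on the ADP drives owned by this node
::> storage aggregate create -aggregate aggr_data -node cluster1-01 -diskcount 10 -raidtype raid_dp
# raid group 1: the whole, non-partitioned shelf disks, forced into their own raid group
::> storage aggregate add-disks -aggregate aggr_data -diskcount 10 -raidgroup new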

ruffwise
10,078 Views

Thanks for the reply.

 

Like I said earlier, I'm doing active/passive. All the disks have been assigned to Node-1 and I've created my aggregate. I can add the 4 disks used for ADP (owned by Node-1) to the aggregate, but I'm still unable to reclaim the free space on the disks used for ADP owned by Node-2. How can I claim or add this space to the aggregate I created?

 

 

aborzenkov
10,075 Views
I failed to parse your message, sorry. With ADP, physical disk ownership is irrelevant. You are using partitions, not disks. Show your disk configuration and the exact commands you used to “reclaim space”.

ruffwise
10,040 Views

@aborzenkov wrote:
I failed to parse your message, sorry. With ADP, physical disk ownership is irrelevant. You are using partitions, not disks. Show your disk configuration and the exact commands you used to “reclaim space”.

Thank you, I've sorted it out. Your information that "with ADP, physical disk ownership is irrelevant" helped me figure it out. Hence I used the "storage disk assign -disk disk_name -owner owner_name -data true" command to reassign the data partitions of the ADP disks owned by Node-2 to Node-1. My active/passive setup is complete now.
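
For anyone who finds this thread later, the sequence looks roughly like this (the disk name is a placeholder; check partition ownership before and after):

# see which node owns each root and data partition
::> storage disk show -partition-ownership
# move a data partition from Node-2 to Node-1 (repeat per disk)
::> storage disk assign -disk 1.0.11 -owner Node-1 -data true -force true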

Thank you all for your contribution.

 
