
Disk/aggregate configuration on new FAS2020A

esquared1

I have a FAS2020A bundle with 12 disks. Some pre-configuration was done by a consultant, but it hasn't been implemented yet. I'm looking over the system and having trouble recalling how disks are handled; my last hands-on NetApp work was about 4 years ago.

The unit is configured as active/active and only 8 of the 12 disks were assigned (so you won't see all 12 listed below). These 8 disks are split 50/50 across the 2 heads per 'disk show'.
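
For reference, this is roughly how I've been checking ownership from each controller's console (a sketch from memory; exact options and output vary by ONTAP release):

    filer1> disk show -v          # all disks, with current owner
    filer1> disk show -n          # only the disks that are still unassigned
    filer1> disk show -o filer1   # only the disks owned by this head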

When I look at the disks in System Manager, it shows the following:

Name     State    RPM    Size       Container
0c.00.0  present  15000  410.31 GB  aggr0
0c.00.1  partner  15000  410.31 GB
0c.00.2  present  15000  410.31 GB  aggr0
0c.00.3  partner  15000  410.31 GB
0c.00.4  present  15000  410.31 GB  aggr0
0c.00.5  partner  15000  410.31 GB
0c.00.6  spare    15000  410.31 GB
0c.00.7  partner  15000  410.31 GB

aggr0 is configured as RAID-DP with the following disks:

0c.00.0  dparity  rg0
0c.00.2  parity   rg0
0c.00.4  data     rg0
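
For completeness, the CLI shows roughly the same layout (again from memory, so the exact columns may differ by release):

    filer2> aggr status -r aggr0   # RAID layout of aggr0: dparity, parity and data disks per raid group
    filer2> sysconfig -r           # RAID layout plus spare and failed disks on this head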

Questions:

1) Why do some of the disks show as partner? These disks are not available to assign to a new aggregate or to expand the existing aggregate; only disk 0c.00.6 is available for that.

2) Would changing this system to active/passive give me different disk availability? If so, what is the trade-off compared to active/active?

3) System Manager shows this storage info under filer2, but filer1 shows as unconfigured. Why is this?

4 Replies

esquared1

Ok, I answered questions 1 and 3 for myself. System Manager was not logging into filer1 properly, so it wasn't letting me access that storage system to see the disks owned by that filer.

So that leaves question #2. What happens if I disable the active/active configuration? Do the heads then operate completely independently and that's it, or does one head become passive and go unused unless the active one fails?

I've seen recommendations to do a "quasi active/active" setup with 9 disks (6D+2P+1S) on one filer and 3 disks (1D+2P or 1D+1P+1S) on the other. What's the benefit of this config vs. a 50/50 split? And if I did this kind of split, where should I put the root vol0?

sanderbreur

Hi,

If you disable the cluster (cf disable), both heads operate independently, but you'll constantly get cluster warnings/errors in your messages log. One of the disadvantages of a FAS2020 cluster is that you "lose" half of your 12 disks to parity and spares, if you choose RAID-DP of course, but in production environments I would always go for that. I would also always use both heads for production (active/active) rather than active/standby, which in my opinion is just a waste of resources (not to forget the licenses you paid for).
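
For reference, the relevant commands are roughly these (a sketch; check the man pages for your ONTAP version):

    filer1> cf status    # show the current controller failover state
    filer1> cf disable   # turn failover off; both heads then run independently
    filer1> cf enable    # turn failover back on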

esquared1

How about the aggregate config? In researching other posts, I've seen recommendations for 2 different types of setup:

50/50
Filer1: 3D + 2P + 1S, RAID-DP
Filer2: 3D + 2P + 1S, RAID-DP

Uneven ("active/passive")
Filer1: 6D + 2P + 1S, RAID-DP
Filer2: 3D, RAID4 or RAID-DP

- The recommendation here is to use Filer2 strictly as a "backup" filer, so to speak, hence a quasi active/passive config.
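
If I went the uneven route, I assume the disk reassignment would look roughly like this (the disk names for the currently-unassigned spindles are my guesses since they aren't listed above, and remove_ownership is an advanced-privilege command, so I'd verify all of this against the docs first):

    filer1> disk assign 0c.00.8 -o filer1      # claim one of the currently-unowned disks (hypothetical name)
    filer2> priv set advanced
    filer2*> disk remove_ownership 0c.00.7     # release a disk filer2 owns so filer1 can claim it
    filer2*> priv set
    filer1> disk assign 0c.00.7 -o filer1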

If I look at it from a sizing standpoint, and if I've calculated correctly, the 50/50 split yields 1047 GB per filer and the uneven split yields 2049 GB on Filer1 and 349 GB on Filer2. So there is some extra capacity gained in the uneven split, but I end up with a maximum usable volume on Filer2 of around 340 GB (after root vol0 is considered). But the recommendation was to not really use that for anything production, since there is no hot spare. I could throw some test VMs on it or something to gain some of the extra space benefit.
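
(Sanity-checking my own numbers, roughly, and assuming the 7-mode defaults of a 10% WAFL reserve and a 5% aggregate snapshot reserve, which are assumptions on my part:

    410.31 GB right-sized disk
    - ~10% WAFL reserve            ≈ 369 GB
    - ~5% aggr snapshot reserve    ≈ 351 GB usable per data disk
    3 data disks ≈ 1050 GB, 6 data disks ≈ 2100 GB

which is in the same ballpark as the figures above.)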


The uneven split means twice as many spindles for the production storage, so I see that as a benefit. But what's the trade-off? Is it really just filer CPU? Is that going to be a huge concern on a system this small?

sanderbreur

If you want to go for an active/standby config, be aware of the following:

  • be careful with your snapshot schedules (both the local ones and those initiated by SnapManager/SnapMirror/SnapVault)
  • whether you need more than 3 data disks in the aggregate really depends on the kind of applications that will use the LUNs

All snapshot create and delete activity consumes CPU, and unless you know what you're doing it can be a showstopper.
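
A rough sketch of where to check the local schedules (the volume name is just an example; verify the syntax against your release):

    filer1> snap sched                        # show the snapshot schedule for every volume
    filer1> snap sched vol1 0 2 6@8,12,16,20  # example: 0 weekly, 2 nightly, 6 hourly snapshots at 8, 12, 16 and 20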

To sum up: which config is the best solution for you depends on what you're going to connect and which protocols you'll use.
