ONTAP Hardware

FAS2220 disk allocations

ANDREC4601

Hi there,

I just picked up a FAS2220, and I don't have any disk shelves.  From the output below, it looks like it has two different aggr0 aggregates.  Do I need both?  I am trying to maximize usable disk space.  Does it make sense to remove one of the aggr0 aggregates and add all of its disks to the aggr0 on the other controller?  I would appreciate some input.  Thank you.

Andre

titan0> aggr status -r

Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal, block checksums)

      RAID Disk          Device            HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      ---------          ------            ------------- ---- ---- ---- ----- --------------    --------------
      dparity           0a.00.1           0a    0   1   SA:B   -  BSAS  7200 2538546/5198943744 2543634/5209362816
      parity            0a.00.3           0a    0   3   SA:B   -  BSAS  7200 2538546/5198943744 2543634/5209362816
      data              0a.00.5           0a    0   5   SA:B   -  BSAS  7200 2538546/5198943744 2543634/5209362816

Spare disks
RAID Disk          Device            HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------          ------            ------------- ---- ---- ---- ----- --------------    --------------
Spare disks for block checksum
spare             0a.00.7           0a    0   7   SA:B   -  BSAS  7200 2538546/5198943744 2543634/5209362816
spare             0a.00.9           0a    0   9   SA:B   -  BSAS  7200 2538546/5198943744 2543634/5209362816
spare             0a.00.11          0a    0   11  SA:B   -  BSAS  7200 2538546/5198943744 2543634/5209362816

Partner disks
RAID Disk          Device            HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------          ------            ------------- ---- ---- ---- ----- --------------    --------------
partner           0b.00.10          0b    0   10  SA:A   -  BSAS  7200 0/0               2543634/5209362816
partner           0b.00.0           0b    0   0   SA:A   -  BSAS  7200 0/0               2543634/5209362816
partner           0b.00.4           0b    0   4   SA:A   -  BSAS  7200 0/0               2543634/5209362816
partner           0b.00.2           0b    0   2   SA:A   -  BSAS  7200 0/0               2543634/5209362816
partner           0b.00.8           0b    0   8   SA:A   -  BSAS  7200 0/0               2543634/5209362816
partner           0b.00.6           0b    0   6   SA:A   -  BSAS  7200 0/0               2543634/5209362816

titan1> aggr status -r

Aggregate aggr0 (online, raid_dp) (block checksums)
  Plex /aggr0/plex0 (online, normal, active)
    RAID group /aggr0/plex0/rg0 (normal, block checksums)

      RAID Disk          Device            HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      ---------          ------            ------------- ---- ---- ---- ----- --------------    --------------
      dparity           0a.00.0           0a    0   0   SA:A   -  BSAS  7200 2538546/5198943744 2543634/5209362816
      parity            0a.00.2           0a    0   2   SA:A   -  BSAS  7200 2538546/5198943744 2543634/5209362816
      data              0a.00.4           0a    0   4   SA:A   -  BSAS  7200 2538546/5198943744 2543634/5209362816
      data              0a.00.6           0a    0   6   SA:A   -  BSAS  7200 2538546/5198943744 2543634/5209362816
      data              0a.00.8           0a    0   8   SA:A   -  BSAS  7200 2538546/5198943744 2543634/5209362816

Spare disks
RAID Disk          Device            HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------          ------            ------------- ---- ---- ---- ----- --------------    --------------
Spare disks for block checksum
spare             0a.00.10          0a    0   10  SA:A   -  BSAS  7200 2538546/5198943744 2543634/5209362816

Partner disks
RAID Disk          Device            HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
---------          ------            ------------- ---- ---- ---- ----- --------------    --------------
partner           0b.00.9           0b    0   9   SA:B   -  BSAS  7200 0/0               2543634/5209362816
partner           0b.00.11          0b    0   11  SA:B   -  BSAS  7200 0/0               2543634/5209362816
partner           0b.00.3           0b    0   3   SA:B   -  BSAS  7200 0/0               2543634/5209362816
partner           0b.00.5           0b    0   5   SA:B   -  BSAS  7200 0/0               2543634/5209362816
partner           0b.00.1           0b    0   1   SA:B   -  BSAS  7200 0/0               2543634/5209362816
partner           0b.00.7           0b    0   7   SA:B   -  BSAS  7200 0/0               2543634/5209362816

1 ACCEPTED SOLUTION

resqme914

The reason you have two aggr0s is that there are two controllers in an HA configuration.  If you really need to maximize the disk space of one aggregate (and still leave both controllers running, albeit one will be pretty much idle), you can leave the "secondary" controller with three disks (one RAID4 aggr0 composed of two disks, plus a spare disk) and move the rest of the disks to your "primary" controller.  That gives you 9 disks on that controller, 8 of which you can use for your aggr0; one disk has to be left as a spare.

Personally, I always find myself running out of CPU cycles on my controllers, so I prefer to distribute my workload between the two controllers, which means I need to have disks on both controllers.  So I tend to sacrifice some disks to get more CPU power.
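To see what that trade-off costs in raw space, here is a back-of-the-envelope sketch in Python.  It uses the right-sized per-disk capacity (2538546 MB) from the aggr status -r output above; the disk counts per layout are my reading of the suggestion, and aggregate/WAFL overhead is ignored, so treat the numbers as relative rather than absolute.

```python
# Rough usable-capacity comparison for a 12-disk FAS2220 HA pair.
# DISK_MB is the right-sized capacity shown by "aggr status -r";
# WAFL/aggregate overhead is ignored, so real usable space is lower.

DISK_MB = 2538546

def usable_mb(data_disks):
    """Usable space is roughly data disks x right-sized capacity."""
    return data_disks * DISK_MB

# Balanced layout: 6 disks per controller, RAID-DP (2 parity disks)
# plus 1 spare each, leaving 3 data disks per controller.
balanced = usable_mb(3) + usable_mb(3)

# Skewed layout: secondary keeps a 2-disk RAID4 aggr0 (1 data +
# 1 parity) plus 1 spare; primary gets 9 disks, 8 in a RAID-DP
# aggr0 (6 data + 2 parity) plus 1 spare.
skewed = usable_mb(6) + usable_mb(1)

print(f"balanced: {balanced / 1024 / 1024:.1f} TB of raw data-disk space")
print(f"skewed:   {skewed / 1024 / 1024:.1f} TB of raw data-disk space")
```

In other words, the skewed layout buys you exactly one extra data disk's worth of space in this 12-disk configuration.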




ANDREC4601

Thanks for the input, resqme914.  It is painful to distribute the workload evenly between the two controllers: my 12 disks would end up providing only 6 data disks across 2 aggregates.  How about the following design?

controller #1 - aggr0 - rg0 (RAID-DP)

9 x data disks + 1 x parity disk + 1 x dparity disk + 1 x spare

controller #2

0 x data disks + 0 x parity disk + 0 x dparity disk + 0 x spare

This is an extreme case where I leave no disks with controller #2.  So, what happens when a controller doesn't have any disks?  When controller #1 experiences problems, can controller #2 take over?  Could I get some input on this design?
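For scale, a quick calculation of what the 9-data-disk layout above would yield, using the right-sized per-disk capacity from the aggr status -r output earlier in the thread (aggregate and WAFL overhead ignored, so the real figure will be lower):

```python
# Capacity estimate for the proposed 9-data-disk RAID-DP layout,
# using the right-sized disk size reported by "aggr status -r".
DISK_MB = 2538546  # per-disk usable size as reported by ONTAP
data_disks = 9

total_mb = data_disks * DISK_MB
print(f"{total_mb} MB, roughly {total_mb / 1024 / 1024:.1f} TB of raw data-disk space")
```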

Thank you so much.

Andre

WILLIAM_LORENZO

You need to keep three disks on your second head for the root volume (the OS) to live on.  If you don't expect high workloads, migrating all disks to one head, apart from those root-volume disks, will work fine; but once CPU, I/O, CP, and other metrics start to overload, you will be in a bad situation.
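For anyone reading later, the disk moves being discussed look roughly like the following in Data ONTAP 7-Mode.  This is a sketch, not a tested procedure: the commands shown (aggr options ... raidtype, disk remove_ownership, disk assign, aggr add) exist in 7-Mode, but verify the exact syntax against your ONTAP release before running anything, and note that shrinking titan1 all the way down to a two-disk aggr0 would additionally require rebuilding its root aggregate, which this sketch does not cover.

```shell
# Sketch only -- check your Data ONTAP 7-Mode docs before running.
# Goal: free disks on titan1 and hand them to titan0's aggr0.

# 1. On titan1: convert aggr0 from RAID-DP to RAID4; the dparity
#    disk (0a.00.0) is released and becomes a spare.
titan1> aggr options aggr0 raidtype raid4

# 2. On titan1: release ownership of the spares you want to move
#    (advanced privilege mode; disk names as seen from titan1).
titan1> priv set advanced
titan1*> disk remove_ownership 0a.00.0 0a.00.10
titan1*> priv set

# 3. On titan0: claim the now-unowned disks.  From titan0's side
#    these same disks appear on the partner path as 0b.00.x.
titan0> disk assign 0b.00.0 0b.00.10

# 4. On titan0: grow aggr0 with the reassigned disks (the existing
#    spares 0a.00.7, 0a.00.9 and 0a.00.11 remain as hot spares).
titan0> aggr add aggr0 -d 0b.00.0 0b.00.10
```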

ANDREC4601

Thanks for the input from both William and resqme914.  I have decided to go with the balanced-workload model to save myself some grief.  Thank you again.

Andre
