
Building an aggr on FAS2240-2

TPARKERNA

I'm really new to the NetApp way of doing things.  We have a 2240-2 shelf with 24 10K 600GB SAS drives.  I upgraded the ONTAP version to 8.2.

Since we have 2 controllers, it appears we lose 3 drives to each, so we are left with 18 drives.

I want to expand the existing aggr0 instead of creating a new aggr and losing 2 more drives to RAID-DP parity.

When I go into Controller 1 and try to expand it, it tells me that I only have 8 spare disks I can assign.

Controller 2 has another 8 disks that I can assign.

This leaves me with 2 questions:

1. Why do my spares add up to 16 when I should have 18?

2. Most importantly: why can I not add all 16 (18) disks to aggr0 on the 1st controller and leave the 2nd controller alone?  I'd like to have a larger pool of space than 4.xxTB on each controller. The 1st controller would have 9.xxTB and the 2nd controller would have its 3 disks.  Is this bad practice?

I know I'm missing something, but the research and documentation I've read online don't seem to explain why this can or can't be done, or whether it's a good idea.

Thanks in advance and sorry for the newb question.


7 REPLIES

nigelg1965

1. Because each controller wants to keep one disk back as a hot spare in case of a drive failure, so 8 of the 9 spares on each head show as assignable.

2. Unlike some other vendors' solutions, the second controller isn't just a hot spare sat there doing sweet FA until things go wrong. In a typical NetApp HA pair you have two independent storage devices (heads), each capable of seeing the other's disks, and in a failover situation one can take over all the functions of the other, including its IP, DNS name, LUNs, etc. You can allocate more disks to one head than the other, but you need a minimum of 4 disks per head (1 data, 2 parity for RAID-DP, and a spare). It's not really bad practice as such; it depends on your needs. If the goal really is to have as large a single volume as possible, then a 20 / 4 split could be done.
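If you want to see exactly where the disks currently sit before deciding on a split, it's worth running a few commands from each head's CLI (7-Mode syntax; the exact output varies a little between ONTAP releases):

disk show -n

aggr status -s

sysconfig -r

disk show -n lists any disks that are still unowned, aggr status -s shows the spares that head owns, and sysconfig -r shows how the owned disks are laid out into RAID groups, parity and spares. That should also show where the "missing" spares from your first question have gone.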

TPARKERNA

Thanks Nigelg1965,

I understand what you're saying.  I cannot see how the 20 / 4 split could be done (?)  As soon as the heads are started up they are shipped with 6 disks already gone for Ontap. 3 / head.  Also, when I try to grow Aggr0 on the 1st controller it can only see 8 disks, When I try to put in 16 disks it tells me there are only 8 disks available.  how can I get it to see all disks available?

Thanks for help

nigelg1965

Hi

By default, what happens is the disks are split across the heads, with each head owning half.

To change the owner of a disk you're going to need to get down and dirty at the command-line.

Connect via SSH/Telnet/serial/SP and type

disk show

You should get something that looks a bit like this.

  DISK       OWNER                      POOL   SERIAL NUMBER         HOME
------------ -------------              -----  -------------         -------------
0a.00.14     myfiler01-c1(1234567414)    Pool0  Z1N2L1JM              myfiler01-c1(1234567414)
0b.00.15     myfiler02-c1(1234567663)    Pool0  Z1N2L1ZF              myfiler02-c1(1234567663)
0b.00.3      myfiler02-c1(1234567663)    Pool0  Z1N2LA4X              myfiler02-c1(1234567663)
0b.00.23     myfiler02-c1(1234567663)    Pool0  Z1N2L23X              myfiler02-c1(1234567663)
0a.00.4      myfiler01-c1(1234567414)    Pool0  Z1N2LGYN              myfiler01-c1(1234567414)
0b.00.19     myfiler02-c1(1234567663)    Pool0  Z1N2L2FE              myfiler02-c1(1234567663)
0b.00.13     myfiler02-c1(1234567663)    Pool0  Z1N2L1XJ              myfiler02-c1(1234567663)
0b.00.1      myfiler02-c1(1234567663)    Pool0  Z1N2LH25              myfiler02-c1(1234567663)
0b.00.17     myfiler02-c1(1234567663)    Pool0  Z1N2LBG8              myfiler02-c1(1234567663)
0b.00.21     myfiler02-c1(1234567663)    Pool0  Z1N2L9DJ              myfiler02-c1(1234567663)
0a.00.22     myfiler01-c1(1234567414)    Pool0  Z1N2L3W2              myfiler01-c1(1234567414)
0a.00.0      myfiler01-c1(1234567414)    Pool0  Z1N2L1Y9              myfiler01-c1(1234567414)
0a.00.12     myfiler01-c1(1234567414)    Pool0  Z1N2LGRM              myfiler01-c1(1234567414)
0a.00.10     myfiler01-c1(1234567414)    Pool0  Z1N2L2TV              myfiler01-c1(1234567414)
0a.00.18     myfiler01-c1(1234567414)    Pool0  Z1N2LBJS              myfiler01-c1(1234567414)
0a.00.20     myfiler01-c1(1234567414)    Pool0  Z1N2L9H9              myfiler01-c1(1234567414)
0a.00.16     myfiler01-c1(1234567414)    Pool0  Z1N2LHKW              myfiler01-c1(1234567414)
0b.00.9      myfiler02-c1(1234567663)    Pool0  Z1N2LGQM              myfiler02-c1(1234567663)
0a.00.6      myfiler01-c1(1234567414)    Pool0  Z1N2L91R              myfiler01-c1(1234567414)
0a.00.2      myfiler01-c1(1234567414)    Pool0  Z1N2L1DV              myfiler01-c1(1234567414)
0b.00.5      myfiler02-c1(1234567663)    Pool0  Z1N2LGE3              myfiler02-c1(1234567663)
0b.00.11     myfiler02-c1(1234567663)    Pool0  Z1N2L8VH              myfiler02-c1(1234567663)
0b.00.7      myfiler02-c1(1234567663)    Pool0  Z1N2L92W              myfiler02-c1(1234567663)
0a.00.8      myfiler01-c1(1234567414)    Pool0  Z1N2L94C              myfiler01-c1(1234567414)

You then need to identify the spare disks on one head that you want to move to the other, and type:

disk assign diskid -s unowned -f     (run on the head that currently owns the disk, to release it)

disk assign 0c.00.16 -o newfiler     (then assign it to the other head; -o takes the owner name, -s takes the system ID)

Remember, once a disk has been added to an aggregate you won't be able to move it to the other filer.

Shame your reseller / NetApp weren't more help at purchase.
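Once the reassigned disks show up as spares on the first head, growing the aggregate is done with aggr add. A rough sketch only, with illustrative numbers rather than anything from your system, and worth checking against the Storage Management Guide for your 8.2 release first:

aggr options aggr0 raidsize 20

aggr add aggr0 16

aggr status -s

The raidsize change is optional, but it's worth deciding on before adding disks because it controls how many of the new disks end up as parity (a second RAID group costs another two disks with RAID-DP). The final aggr status -s is just to confirm you still have at least one spare left on that head.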

TPARKERNA

Thanks so much nigelg1965.  This is what I was looking for.  Is this against best practice?  Is there a performance or storage hit for doing it this way?  Will there not be full HA?

Thanks

HAMMERTECHIE

Hi Tony,

It all depends on how you want to set it up. I appreciate that you want to get the maximum capacity from your existing drives, but balancing the load across both controllers gives you proper utilization of your CPU and memory resources, rather than putting all of the load on one controller. The setup you describe is generally called an active-passive configuration; it is still HA.

According to the Storage Subsystem Guide, there is a minimum root volume size for each controller, which varies by controller model (the root volume contains the controller's configuration information).

So in most installs we add data drives to the root aggregate and end up with a configuration like this:

FAS01 RG11 9D+2P

1 Hot spare

FAS02 RG11 9D+2P

1 Hot spare

The total usable capacity would be about 8.65 TiB, and obviously you would also need to set aside capacity for the root volumes.
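In case it helps, a minimal sketch of that layout on one controller, assuming you grow the existing 3-disk root aggregate in place rather than creating a new aggregate (7-Mode commands, numbers matching the 9D+2P plus one hot spare layout above):

aggr options aggr0 raidsize 11

aggr add aggr0 8

aggr status -s

The root aggregate already holds 1 data + 2 parity disks, so adding 8 more data disks gives the single 9D+2P RAID group, and the 12th disk on that controller stays behind as the hot spare.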

Best practices are guidelines to work from to get the best out of your environment, both now and in the future.

I would suggest a quick read through the Storage Subsystem Guide, which covers these practices.

Kind regards,

Bino

TPARKERNA

This is the reasoning I was looking for.  I don't want a bottleneck on one controller's CPU or RAM.  I will be dividing these disks up across both controllers, as I don't want to find that bottleneck 6 weeks from now in production.

Thanks everyone for all the help and information.

nigelg1965

Happy to help.

I agree with Bino: in general you'd split the disks across the two heads evenly, especially with "only" 24 disks.

We have several 2240-4s (same brains, different disks) and they barely touch 5% CPU serving CIFS to sites of around 80 users from one head and NFS for VMware from the other.
