ONTAP Hardware

New install FAS2020

l_augeard

Hello, I'm French, sorry for my bad English!

I received a FAS2020 with 12 x 450GB SAS disks and 2 controllers.

I have some questions:

- controller 1 has 4 disks assigned (1 spare and 3 for the root aggr0)

- controller 2 has 8 disks assigned (5 spares and 3 for the root aggr0)

This is a problem for me!

So there are 6 disks used just for root?

If I want to use more space, how do I do this?

1 ACCEPTED SOLUTION

mechatronic

Spares allocation

RAID4: minimum 1 spare disk

RAID-DP: minimum 2 spare disks

So if you configure your environment as per the previous recommendations, you require 3 spare disks minimum in total: 1 for node 1 (RAID4) and 2 for node 2 (RAID-DP).
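A quick way to verify the spare situation on each node is from the Data ONTAP console (7-Mode commands; output varies slightly by release):

   node1> aggr status -s     # lists this node's hot spare disks
   node2> aggr status -s     # each node needs its own spares; they are not shared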



scottgelb

It is a best practice to keep root on a separate aggregate (wafl_check, wafliron), however on smaller systems like this we often use one aggregate to contain all volumes. Just make sure you have a volume guarantee on all volumes (the default is volume guarantee) so the root volume never runs out of space. So you could do something like below to get about 2TB usable across both nodes (6x 450GB data drives split across the 2 controllers):

controller1

   1x  spare

   3x  aggr0  (rg0: 1D+2P)

controller2

   1x spare

   7x aggr0   (rg0: 5D+2P)

Or you could make a more even layout to use both nodes symmetrically (not required) to get ~1TB usable per node.

controller1

   1x  spare

   5x  aggr0  (rg0: 3D+2P)

controller2

   1x spare

   5x aggr0   (rg0: 3D+2P)
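As a rough command sketch of the symmetric layout above (7-Mode console; it assumes disk ownership is already split 6/6 between the controllers and each node starts from its default 3-disk aggr0):

   controller1> aggr add aggr0 2                    # grow aggr0 from 1D+2P to 3D+2P, leaving 1 spare
   controller1> aggr status -r                      # verify the RAID group layout
   controller1> vol options vol0 guarantee volume   # volume guarantee (the default) so root never runs out of space
   controller2> aggr add aggr0 2                    # same again on the second node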

l_augeard

Yes, so if it is:

controller1

   1x  spare

   3x  aggr0  (rg0: 1D+2P)

controller2

   1x spare

   7x aggr0   (rg0: 5D+2P)

- On controller 1, is it possible to remove the spare and assign it to controller 2? If controller 1 holds only the root, why does it need one?

- Is it not possible to delete the root on controller 1, and have controller 1 take over if controller 2 breaks?

I think that with 2 roots I lose a lot of capacity!

scottgelb

Spares are not global, so each controller needs a dedicated spare.  Disk ownership is used to assign disks between controllers.  The layout below is for aggregates, not for root.  The root volume will be a flexible volume and can share aggr0 with other volumes.  For a larger system we don't prefer doing this, but for smaller systems like this it is common.  The root volume in aggr0 can be as small as 10GB for a 2020, but I wouldn't go that small... maybe 50 or 100GB, or whatever is enough depending on whether you are running CIFS auditing or other logging.  Out of the 2TB, you will subtract the root volume size and then have the rest of the aggr0 aggregate free for other flexible volumes.
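For example, resizing the root volume and checking what is left in the aggregate (a 7-Mode sketch using the sizes mentioned above):

   controller2> vol size vol0          # show the current root volume size
   controller2> vol size vol0 100g     # resize root to ~100GB
   controller2> df -Ag aggr0           # free space remaining in the aggregate for other flexible volumes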

l_augeard

OK, so:

On controller 1:

3 disks + 1 spare for root,

vol0: root with the whole capacity (only about 350GB, which I don't really use)

On controller 2:

7 disks + 1 spare

vol0 in aggr0: root reduced to 50GB

vol1 in aggr0: for ESX, with the rest of the capacity?

Is that OK?

- If I buy additional disks later, could I move the root (on controller 2) to a future aggr1?

- As I asked, is it not possible to delete the root on controller 1 and use controller 1 just as a spare/standby, not in a cluster? (active/passive)

radek_kubka

You always need a root volume on a controller - even if there is nothing else on it.

Basically you can move the root volume around, so potentially you may squeeze out a few extra gigabytes:

- create a new aggregate containing 2 disks in RAID-4 (not DP)

- move the root volume from the 3-disk aggregate to the 2-disk aggregate

- destroy the old 3-disk aggregate & assign its disks to the other controller

- stick to one hot-spare per controller (a command sketch follows below)
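Roughly, on the 7-Mode console the move could look like this (a sketch only; disk and volume names are placeholders, and ndmpcopy needs ndmpd enabled):

   controller1> aggr create aggr1 -t raid4 -d 0a.17 0a.18   # new 2-disk RAID-4 aggregate
   controller1> vol create newroot aggr1 20g
   controller1> ndmpcopy /vol/vol0 /vol/newroot              # copy the root volume contents
   controller1> vol options newroot root                     # becomes the root volume at the next reboot
   controller1> reboot
   controller1> vol offline vol0
   controller1> vol destroy vol0
   controller1> aggr offline aggr0
   controller1> aggr destroy aggr0                           # frees the old disks for reassignment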

Regards,
Radek

l_augeard

Yes, I see!

- For RAID 4, only 2 disks? Not 3? No spare?

- What is the point of active/active controllers, since each one is independent?

danielpr

Hi,

Andrew's response in http://communities.netapp.com/message/6776#6776 should help you a bit with the design.

Thanks

Daniel

l_augeard

On controller 1 I changed aggr0 to RAID 4!

2 disks + 1 spare. Is the spare useful?

On controller 2 I think I'll do the same, OK?

Is RAID 4 with 2 disks sufficient for root?

radek_kubka

You don't need a dedicated root aggregate - it may be recommended for bigger systems, but makes no sense with just 12 drives.

So you can assign 3 drives to controller 1 - assuming it will be in fact passive (containing only root volume), and all remaining drives to controller 2 - keeping there its root volume plus all your actual data (in separate volumes, but within the same aggregate)
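On the console that split is just disk ownership, for example (disks already owned by controller 1 would first need their ownership removed, as discussed further down the thread):

   controller2> disk show -n       # list disks not owned by either controller
   controller2> disk assign all    # claim every unowned disk for controller 2
   controller2> disk show -v       # confirm the final ownership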

Regards,
Radek

l_augeard

- Are you sure about not separating the root? Even with ESX?

- RAID 4 for controller 1 with the root? 2 disks, no spare needed? (it is just there in case controller 2 crashes)

- And on controller 2, aggr0 with 10 disks (root and ESX)?

radek_kubka

The exact disk layout I'd use if there is a single production workload (a command sketch follows the layout):

controller 1 - hosting aggr0, RAID-4, with vol0 (root) only:

1. data

2. parity

3. hot-spare

controller 2 - hosting aggr0, RAID-DP, with vol0 (root) & vol1 (production):

4.-9. data

10. parity

11. dual-parity

12. hot-spare
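Starting from the default 3-disk RAID-DP aggr0 on each node, that layout could be reached roughly like this (a 7-Mode sketch, assuming controller 1 owns 3 disks and controller 2 owns 9):

   controller1> aggr options aggr0 raidtype raid4   # the dparity disk is released and becomes the hot spare
   controller1> aggr status -s                      # confirm 1 spare on controller 1
   controller2> aggr add aggr0 5                    # grow aggr0 from 1D+2P to 6D+2P, leaving 1 spare
   controller2> aggr status -r                      # verify the data/parity/dparity layout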

l_augeard

Thanks!

On controller 1, is it possible to take back the hot spare and assign it to controller 2?

If the root disk on controller 1 breaks, can I replace it with the hot spare from controller 2?

As long as 2 disks don't break at the same time!

radek_kubka

Scott explained that above already:

Spares are not global, so each controller needs a dedicated spare.

l_augeard

Yes, I agree, but if I force the spare over to controller 2, what is the problem?

Actually, as a test, controller 1 currently has no spare and it works.

shail_usi

The filer will even function without a spare disk. However, in case of a disk failure in an aggregate, the filer will run in degraded mode and automatically shut down after 24 hours.
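That 24-hour window is controlled by the raid.timeout option (hours of degraded operation before the automatic shutdown), for example:

   controller1> options raid.timeout        # show the current value (default is 24)
   controller1> options raid.timeout 48     # extend the window, at your own risk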

l_augeard

Yes, OK!

So is that a problem for the backup controller?

Is it the same in RAID 4 (2 disks)?

mechatronic

Hi Laugeard,

What I would do is the following:

Node 1 RAID4 (only requires 1 spare)

Node 2 RAID-DP (requires 2 spares, otherwise the system complains about low spares all the time)

Node1 (1Data+1Parity+1Spare)

Node2 (5Data+2Parity+2Spare)

Node 1 would hold data that has lower performance requirements.

Node 2 would hold all other data.

Regarding spare disks: since you will have AutoSupport enabled, if a disk fails on Node 1 you can always remove disk ownership from a current spare on Node 2 and assign it to Node 1, then assign the replacement disk to Node 2. This way you will always be sure of the ability to rebuild a failed disk automatically.

Also, you can change the system not to shut down after 24 hours using the raid.timeout option (this is not recommended by NetApp, but the option is available if needed).
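Moving a spare between nodes as described above is just a disk ownership change, for example (placeholder disk name; disk remove_ownership is an advanced-privilege command):

   node2> priv set advanced
   node2*> disk remove_ownership 0b.25     # release a spare currently owned by node 2
   node2*> priv set
   node1> disk show -n                     # the disk now shows up as unowned
   node1> disk assign 0b.25                # claim it on node 1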

l_augeard

Yes, but node 1 is just a backup for node 2.

If I understand correctly, if node 2 crashes, node 1 takes over?

baselinept

Hi Laugeard,

Yes, if either of the nodes crashes, the remaining one will assume the identity of the failed node. FAS is a dual-head system, so each controller is actually a separate brain, if you will.

Example:

Node 2 fails --> Node 1 assumes Node 2's identity as well as its own. So Node 1 would control the disks from Node 2 also.

Node 1 fails --> Node 2 does exactly the same as explained above but this time it also assumes the disks from Node 1.

You can read more in-depth info about this on the "Data ONTAP 7.3 Active/Active Configuration Guide" available on NOW.
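The takeover and giveback behaviour can also be watched from the console, for example:

   controller1> cf status                  # confirm the active/active pair is enabled
   controller1> cf takeover                # controller 1 serves its partner's disks and identity
   controller1(takeover)> cf giveback      # hand everything back once the partner is healthy again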

Cheers,

l_augeard

Thanks,

So, for a little FAS2020, is one spare sufficient for both nodes? No?
