ONTAP Discussions

Quick Suggestion on an aggregate layout

Prawa

Recently I got a setup to configure (a 2-node cDOT cluster) with a total of 48 disks.

 

Disk layout ->

 

Shelf 0

 

1.0.0,1.0.2,1.0.4,1.0.6,1.0.8,1.0.10,1.0.12,1.0.14,1.0.16,1.0.18,1.0.20,1.0.22 (12 disks) - shared - node 1 (owner)

 

1.0.1,1.0.3,1.0.5,1.0.7,1.0.9,1.0.11,1.0.13,1.0.15,1.0.17,1.0.19,1.0.21,1.0.23 (12 disks) - shared - node 2 (owner)

 

Shelf 1

 

1.1.0,1.1.2,1.1.4,1.1.6,1.1.8,1.1.10,1.1.12,1.1.14,1.1.16,1.1.18,1.1.20,1.1.22 (12 disks) - spare (non shared) - node 1 (owner)

 

1.1.1,1.1.3,1.1.5,1.1.7,1.1.9,1.1.11,1.1.13,1.1.15,1.1.17,1.1.19,1.1.21,1.1.23 (12 disks) - spare (non shared) - node 2 (owner)

 

So both nodes' aggr0 aggregates are on shared (partitioned) drives.
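(The ownership and partitioning state above can be checked with something like the following; a minimal sketch, output omitted:)

netappds010::> storage disk show -fields owner,container-type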

 

Now, normally in a smaller system like this we create an active-passive ADP data aggregate by assigning all the disks to node 1.

In this case, not all the disks were shared, so I tried creating one big aggregate of two RAID groups (20+2).
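For comparison, if all 48 disks had been shared, that one big aggregate could have been built with something like this (a rough sketch, not what I actually ran; disk count and RAID group size per the two 20+2 groups mentioned above):

netappds010::> storage aggregate create -aggregate netappds010_n1_aggr1 -node netappds010-n1 -diskcount 44 -maxraidsize 22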

Since not all disks were shared (in ADP), I had a really hard time planning the aggregates. I manually allocated disks to the aggregates so that both aggregates could have shared disks, and then tried to add the spare (non-ADP) disks so that those disks would also be converted to shared, but this doesn't seem to be working.
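The "convert to shared" part of that attempt basically came down to adding a whole spare to a partition-based aggregate and expecting ONTAP to partition it, roughly like this (a sketch from memory, with disk 1.1.0 as one example):

netappds010::> storage aggregate add-disks -aggregate netappds010_n1_aggr1 -disklist 1.1.0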

 

I didn't want to perform the initial setup again just to make all the disks shared.

 

Finally, I tried the configuration below:

 

 

Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
netappds010_n1_aggr0
           368.4GB   17.86GB   95% online       1 netappds010-n1    raid_dp,
                                                                   normal
netappds010_n1_aggr1
           28.47TB   28.47TB    0% online       0 netappds010-n1    raid_dp,
                                                                   normal
netappds010_n2_aggr0
           368.4GB   17.86GB   95% online       1 netappds010-n2    raid_dp,
                                                                   normal
netappds010_n2_aggr1
           29.41TB   29.41TB    0% online       0 netappds010-n2    raid_dp,
                                                                   normal

 

That is:

 

1) Both nodes' aggr0 on ADP config.

2) Assigned the data partitions of all node 2 disks to node 1 and created an ADP data aggregate (one RAID group, 20+2).

3) Assigned all the non-ADP disks to node 2 and created a normal data aggregate (one RAID group, 20+2); see the command sketch just below.
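A rough sketch of the commands behind steps 2 and 3 (from memory; one example disk per assign command, repeated for the remaining disks, and -force may be needed when moving ownership between nodes):

netappds010::> storage disk assign -disk 1.0.1 -owner netappds010-n1 -data true
netappds010::> storage aggregate create -aggregate netappds010_n1_aggr1 -node netappds010-n1 -diskcount 22 -maxraidsize 22
netappds010::> storage disk assign -disk 1.1.0 -owner netappds010-n2
netappds010::> storage aggregate create -aggregate netappds010_n2_aggr1 -node netappds010-n2 -diskcount 22 -maxraidsize 22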

 

The spare disks look like this:

 

netappds010::> storage aggregate show-spare-disks

Original Owner: netappds010-n1
 Pool0
  Root-Data Partitioned Spares
                                                              Local    Local
                                                               Data     Root Physical
 Disk             Type   Class          RPM Checksum         Usable   Usable     Size Status
 ---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
 1.0.20           SAS    performance  10000 block            1.58TB  53.88GB   1.64TB not zeroed
 1.0.22           SAS    performance  10000 block            1.58TB  53.88GB   1.64TB not zeroed

Original Owner: netappds010-n2
 Pool0
  Spare Pool

                                                             Usable Physical
 Disk             Type   Class          RPM Checksum           Size     Size Status
 ---------------- ------ ----------- ------ -------------- -------- -------- --------
 1.1.22           SAS    performance  10000 block            1.63TB   1.64TB not zeroed
 1.1.23           SAS    performance  10000 block            1.63TB   1.64TB not zeroed

Original Owner: netappds010-n2
 Pool0
  Root-Data Partitioned Spares
                                                              Local    Local
                                                               Data     Root Physical
 Disk             Type   Class          RPM Checksum         Usable   Usable     Size Status
 ---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
 1.0.21           SAS    performance  10000 block                0B  53.88GB   1.64TB zeroed
 1.0.23           SAS    performance  10000 block                0B  53.88GB   1.64TB zeroed
6 entries were displayed.
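Side note on the output above: several spares show "not zeroed"; I assume they can be pre-zeroed in the background with something like:

netappds010::> storage disk zerospares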

 

My questions are:

 

1) Is the above kind of configuration (with both nodes' aggr0 as ADP aggregates, node 1's data aggregate on ADP, and node 2's data aggregate as non-ADP) fine, or is there anything problematic here?

2) Do the spare disks covering the aggregates look fine?

 

Please share your comments

 

2 Replies

dbenadib
Hi,

Personally, and depending on the system you are working with, I would have recommended partitioning all drives so you can use a better number of drives per aggregate.
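For example (rough numbers, assuming root-data partitioning on all 48 drives and a couple of spares kept per node): roughly 44 data partitions could go into a single active-passive data aggregate (two 20+2 RAID groups) instead of the current 22 + 22 split, giving one larger pool of spindles and free space.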

GidonMarcus

I think your configuration is solid.

 

Good luck.

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK