ONTAP Discussions

Root Volume Sizing on 2650 with ADP

JCutter03

Hey everyone, we just got a new 2650 in and we are working on setting it up. It will be a two-node cluster: on one node we want an all-SSD flash aggregate to host our VMware environment over NFS, and on the other node we will host NFS, iSCSI, and CIFS on SAS drives with an SSD Flash Pool assigned to it.

Here comes our problem. When we initialize the 2650, ADP takes 10 SAS drives for node 1's root volume. All that space will be wasted because we have no intention of using spinning aggregates on node 1. We really want it to use only 4 SAS drives and avoid eating 6 extra drives. What's the best way to go about this? I know there is a way to move root aggregates, but we tried the 9.0 method and it doesn't seem to work correctly. My idea was to power off all the shelves, leave only 8 drives plugged in, and reset everything with boot menu option 4. In theory it would take 4 drives for node 1 and 4 drives for node 2, and I would be where I want to be. Does anyone see anything wrong with this? Also, should the root volume really be 95% full even right after initial setup?

I called support and they told me they can't help since this is technically a new setup, so we would have to engage professional services, even though the guides on moving root volumes say to work with support if you don't go with the built-in setup.

Any help would be greatly appreciated.

Thanks!

7 REPLIES

GidonMarcus

Hi

 

You can assign the ADP data partitions to either node while still using the root partitions on the other; you just also need to make sure you have "spare" root partitions on that node as well (you don't need a whole spare disk).

I believe ADP will always take the first disks in the internal shelf regardless of what they are, so if you want them to be flash you'll need to move drives around (I didn't cross-reference that, so sorry if I'm misleading you on that point).
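For what it's worth, in my experience the data-partition ownership can be changed from the CLI with something along these lines (advanced privilege; <disk> and <node> are placeholders, and you should double-check the exact syntax against the docs for your ONTAP release):

CLUSTER::> set -privilege advanced
CLUSTER::*> storage disk show -partition-ownership
CLUSTER::*> storage disk removeowner -disk <disk> -data true
CLUSTER::*> storage disk assign -disk <disk> -owner <node> -data true

That only moves the data partition; the root partition stays with its current owner.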

Gidi

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK

JCutter03

Sorry, I'm not quite sure I understand.

 

The image below is a screenshot of the aggregate on node 1 that got created with the 10 disks. How can I create a new aggregate on node 2 using all that spare space? The only thing I can think of is that maybe you are talking about using that spare space while its home stays on node 1, which is what I wanted to avoid. I want controller 1 entirely dedicated to VMware storage and controller 2 dedicated to CIFS/NFS/iSCSI.

 

I could also be completely misunderstanding. Is there something I'm missing?

[Screenshot: aggr01.png]

GidonMarcus

It's much better (space-wise, and so you don't "waste" SSD capacity) to just have the SAS drives partitioned.

I actually have a very similar configuration on one of my clusters (just with SAS and SATA rather than SSD).

 

You can see that I'm using the SAS (in your case it would be SSD) on one node, and the partitioned SATA for data on the other node, while the root partitions are spread across both.

CLUSTER::*> aggr show -fields aggregate,node,diskcount
aggregate       diskcount node
--------------- --------- ----------
NODE1_SATA_01   22        NODE1
NODE1_SATA_ROOT 10        NODE1
NODE2_SAS_01    68        NODE2
NODE2_SATA_ROOT 10        NODE2
4 entries were displayed.


CLUSTER::> storage disk show -fields data-owner,root-owner,container-name -container-type shared
disk   container-name                 data-owner root-owner
------ ------------------------------ ---------- ----------
1.0.0  NODE2_SATA_ROOT                NODE1      NODE2
1.0.1  NODE1_SATA_01, NODE1_SATA_ROOT NODE1      NODE1
1.0.2  NODE2_SATA_ROOT                NODE1      NODE2
1.0.3  NODE1_SATA_01, NODE1_SATA_ROOT NODE1      NODE1
1.0.4  NODE1_SATA_01, NODE2_SATA_ROOT NODE1      NODE2
1.0.5  NODE1_SATA_01, NODE1_SATA_ROOT NODE1      NODE1
1.0.6  NODE1_SATA_01, NODE2_SATA_ROOT NODE1      NODE2
1.0.7  NODE1_SATA_01, NODE1_SATA_ROOT NODE1      NODE1
1.0.8  NODE1_SATA_01, NODE2_SATA_ROOT NODE1      NODE2
1.0.9  NODE1_SATA_01, NODE1_SATA_ROOT NODE1      NODE1
1.0.10 NODE1_SATA_01, NODE2_SATA_ROOT NODE1      NODE2
1.0.11 NODE1_SATA_01, NODE1_SATA_ROOT NODE1      NODE1
1.0.12 NODE1_SATA_01, NODE2_SATA_ROOT NODE1      NODE2
1.0.13 NODE1_SATA_01, NODE1_SATA_ROOT NODE1      NODE1
1.0.14 NODE1_SATA_01, NODE2_SATA_ROOT NODE1      NODE2
1.0.15 NODE1_SATA_01, NODE1_SATA_ROOT NODE1      NODE1
1.0.16 NODE1_SATA_01, NODE2_SATA_ROOT NODE1      NODE2
1.0.17 NODE1_SATA_01, NODE1_SATA_ROOT NODE1      NODE1
1.0.18 NODE1_SATA_01, NODE2_SATA_ROOT NODE1      NODE2
1.0.19 NODE1_SATA_01, NODE1_SATA_ROOT NODE1      NODE1
1.0.20 NODE1_SATA_01                  NODE1      NODE2
1.0.21 NODE1_SATA_01                  NODE1      NODE1
1.0.22 NODE1_SATA_01                  NODE1      NODE2
1.0.23 NODE1_SATA_01                  NODE1      NODE1
24 entries were displayed.
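Once the data partitions are owned by the node you want, you can then build the data aggregate from them with something like the following (the aggregate name and disk count are just placeholders for your setup, so check the available spare partitions first):

CLUSTER::> storage aggregate show-spare-disks
CLUSTER::> storage aggregate create -aggregate <aggr_name> -node <node> -diskcount <count>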

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK

JCutter03

I was also trying to understand the root aggregate/volume a little bit more. I see in the documentation that it's always supposed to be at 95% full, but according to the Hardware Universe (see picture) the minimum root aggregate size is supposed to be 431GB and the minimum root volume size is supposed to be 350GB. Mine were created smaller than both of those. Should I be worried?

 

[Screenshot: 2017-12-14 09_42_19-9.2_ONTAP-FAS.pdf.png]

 

[Screenshot: 2017-12-14 09_44_56-ilatcl1.am.mot.com - PuTTY.png]

 

[Screenshot: 2017-12-14 09_46_51-ilatcl1 - NetApp OnCommand System Manager.png]

GidonMarcus

Mine also isn't 1:1 with the minimum size. Don't worry about what you can't change (this is something the admin really has no way to control; ADP has no customization options for anything regarding sizes).
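If you want to compare what ADP actually created against the Hardware Universe numbers, something like this should show it (the node root volume is normally called vol0; field names can vary a little between releases):

CLUSTER::> storage aggregate show -aggregate aggr0* -fields size,usedsize,percent-used
CLUSTER::> volume show -volume vol0 -fields size,used,percent-used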

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK

JCutter03

How do you actually change the data owner?

 

Here is the output of mine:

 

ilatcl1::> storage disk show -fields data-owner,root-owner,container-name
disk container-name data-owner root-owner
------- ---------------- ---------- ----------

Info: This cluster has partitioned disks. To get a complete list of spare disk
capacity use "storage aggregate show-spare-disks".
1.0.0 aggr0_ilatcl1_02 ilatcl1-02 ilatcl1-02
1.0.1 aggr0_ilatcl1_01 ilatcl1-01 ilatcl1-01
1.0.2 aggr0_ilatcl1_02 ilatcl1-02 ilatcl1-02
1.0.3 aggr0_ilatcl1_01 ilatcl1-01 ilatcl1-01
1.0.4 aggr0_ilatcl1_02 ilatcl1-02 ilatcl1-02
1.0.5 aggr0_ilatcl1_01 ilatcl1-01 ilatcl1-01
1.0.6 aggr0_ilatcl1_02 ilatcl1-02 ilatcl1-02
1.0.7 aggr0_ilatcl1_01 ilatcl1-01 ilatcl1-01
1.0.8 aggr0_ilatcl1_02 ilatcl1-02 ilatcl1-02
1.0.9 aggr0_ilatcl1_01 ilatcl1-01 ilatcl1-01
1.0.10 aggr0_ilatcl1_02 ilatcl1-02 ilatcl1-02
1.0.11 aggr0_ilatcl1_01 ilatcl1-01 ilatcl1-01
1.0.12 aggr0_ilatcl1_02 ilatcl1-02 ilatcl1-02
1.0.13 aggr0_ilatcl1_01 ilatcl1-01 ilatcl1-01
1.0.14 aggr0_ilatcl1_02 ilatcl1-02 ilatcl1-02
1.0.15 aggr0_ilatcl1_01 ilatcl1-01 ilatcl1-01
1.0.16 aggr0_ilatcl1_02 ilatcl1-02 ilatcl1-02
1.0.17 aggr0_ilatcl1_01 ilatcl1-01 ilatcl1-01
1.0.18 aggr0_ilatcl1_02 ilatcl1-02 ilatcl1-02
1.0.19 aggr0_ilatcl1_01 ilatcl1-01 ilatcl1-01
1.0.20 - ilatcl1-02 ilatcl1-02
1.0.21 - ilatcl1-02 ilatcl1-01
1.0.22 - ilatcl1-02 ilatcl1-02
1.0.23 - ilatcl1-02 ilatcl1-01
1.10.0 n1_vm_1 - -
1.10.1 n1_vm_1 - -
1.10.2 n1_vm_1 - -
1.10.3 n1_vm_1 - -
1.10.4 n1_vm_1 - -
1.10.5 Pool0 - -
1.10.6 Pool0 - -
1.10.7 Pool0 - -
1.10.8 Pool0 - -
1.10.9 Pool0 - -
1.10.10 Pool0 - -
1.10.11 Pool0 - -
1.11.0 n2_nas_1 - -
1.11.1 n2_nas_1 - -
1.11.2 n2_nas_1 - -
1.11.3 n2_nas_1 - -
1.11.4 n2_nas_1 - -
1.11.5 n2_nas_1 - -
1.11.6 n2_nas_1 - -
1.11.7 n2_nas_1 - -
1.11.8 n2_nas_1 - -
1.11.9 n2_nas_1 - -
1.11.10 n2_nas_1 - -
1.11.11 n2_nas_1 - -
1.11.12 n2_nas_1 - -
1.11.13 n2_nas_1 - -
1.11.14 n2_nas_1 - -
1.11.15 n2_nas_1 - -
1.11.16 n2_nas_1 - -
1.11.17 n2_nas_1 - -
1.11.18 n2_nas_1 - -
1.11.19 n2_nas_1 - -
1.11.20 n2_nas_1 - -
1.11.21 n2_nas_1 - -
1.11.22 n2_nas_1 - -
1.11.23 Pool0 - -
60 entries were displayed.
