Hey everyone, we just got a new 2650 in and we are working on setting it up. We will have a 2-node cluster. On one node we want to assign an all-SSD flash aggregate to host our VMware environment on NFS, and on the other node we will host NFS, iSCSI, and CIFS on SAS drives with an SSD Flash Pool assigned to it.
Here comes our problem. When we initialize the 2650, it takes 10 SAS drives for node 1's root volume using ADP. All that space will be wasted because we have no intention of using spinning aggregates on node 1. We really want to assign only 4 SAS drives and avoid eating 6 extra drives. What's the best way to go about this? I know there is a way to move root aggregates, but we tried the 9.0 method and it doesn't seem to work correctly. My idea was to turn off all the shelves so only 8 drives are left plugged in and reset everything with option 4. Theoretically it would take 4 drives for node 1 and 4 drives for node 2, and I would be where I want to be. Anyone see anything wrong with this? Also, should the root volume be 95% full even right after initial setup?
I called support and they told me they can't help since this is technically a new setup, and we would have to engage professional services, even though the guides on moving root volumes say to work with support if you don't go with the built-in setup.
you can assign the ADP data partition to any of the nodes while still leaving the root partition on the other. you just also need to make sure you have "spare" root partitions on that node as well (you don't need a whole spare disk).
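The reassignment described above can be sketched in the ONTAP CLI. This is only a sketch: the node name `node2` and disk name `1.0.0` are placeholders, partition reassignment requires advanced privilege, and you should confirm the `-data` parameter is available on your ONTAP version before relying on it.

```
::> set -privilege advanced

::*> storage disk show -partition-ownership

(reassign only the data partition of an internal disk to node 2;
 the root partition stays with its current owner)
::*> storage disk assign -disk 1.0.0 -owner node2 -data true

(confirm node 1 still has spare root partitions after the moves)
::*> storage aggregate show-spare-disks -original-owner node1
```

You may also want `storage disk option modify -node * -autoassign off` first, so autoassignment doesn't undo the manual partition ownership changes.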
i believe ADP will always take the first disks on an internal shelf regardless of what they are. so if you want them to be flash you'll need to move drives around (i didn't cross-reference that, so sorry if i'm misleading on that point).
In the image below is a screenshot of the aggregate on node 1 that got created with the 10 disks. How can I create a new aggregate on node 2 using all that spare capacity? The only thing I can see is that maybe you are talking about using those spare data partitions but keeping their home on node 1, which is what I wanted to avoid. I wanted controller 1 entirely dedicated to VMware storage and controller 2 dedicated to CIFS/NFS/iSCSI.
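If the data partitions do get reassigned to node 2 as suggested above, building a data aggregate from them might look like the following. The aggregate name `aggr1_node2`, the node name, and the disk count are placeholders for illustration; with ADP, `-diskcount` counts partitions, not whole disks, so check the spare list first.

```
(see what spare data partitions node 2 actually owns)
::> storage aggregate show-spare-disks -original-owner node2

(build a RAID-DP aggregate on node 2 from those data partitions)
::> storage aggregate create -aggregate aggr1_node2 -node node2
        -diskcount 9 -raidtype raid_dp
```

ONTAP prints a proposed layout and asks for confirmation before creating the aggregate, so you can sanity-check which partitions it picked.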
I could also be completely misunderstanding. Is there something I'm missing?
Also, I was trying to understand the root aggregate/volume a little bit more. I see in the documentation it's always supposed to be at 95% full, but according to the Hardware Universe (see picture) the minimum root aggregate size is supposed to be 431GB and the minimum root volume size is supposed to be 350GB. It created mine lower than both of those. Should I be worried?
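For anyone wanting to check their own sizes against Hardware Universe, these read-only commands show what ADP actually created. The `aggr0*` and `vol0` name patterns are the usual defaults but are assumptions here; substitute your actual root aggregate and root volume names.

```
(root aggregate sizes on both nodes)
::> storage aggregate show -aggregate aggr0* -fields size,usedsize

(root volume size and fill percentage)
::> volume show -volume vol0 -fields size,used,percent-used
```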
mine is also not 1:1 with the minimum sizes. don't worry about what you can't change (this is something the admin really has no way to control; ADP has no customization options for anything regarding sizes).