I'm newish to NetApp and have done loads of the online training videos. None of them go into the whole root aggregates topic, though.
So I have a FAS2552 with 2 controllers and 24 drives: 4 SSD and 20 SAS. The aggregates (set up by an external contractor) are the following:
Aggregate      Size     Available  Used%  State   #Vols  Nodes    RAID Status
-------------  -------  ---------  -----  ------  -----  -------  -------------------------------
aggr0_node_01  904.9GB  552.9GB    39%    online  1      Node-01  raid_dp, hybrid_enabled, normal
aggr0_node_02  122.8GB  5.95GB     95%    online  1      Node-02  raid_dp, normal
aggr1          11.43TB  11.43TB    0%     online  0      Node-01  mixed_raid_type, hybrid, normal
So we have one aggregate with a root volume, 39% used at 904.9GB, using 16 drives on node 1, and another root aggregate on node 2 which is 122.8GB at 95% used, on 4 drives. Then there's aggr1 for the SVM, providing 11.43TB with a Flash Pool. Is this the correct way of doing this? I simply want one big LUN which will be presented to an ESXi cluster of servers.
I can't find anything so far that explains the principles behind this. I'm reading things like "the aggregates should be split evenly over the disks", etc. Why are the root aggregates so uneven in size across the controllers? I'm assuming these were set up by the software when it was initialised?
I was wondering what the maximum size the node needs for the root aggregate is, and whether it's easy to change them. The NetApp isn't in a live environment yet, which is why I'm checking now. I just want to make sure it's set up correctly. One of the root aggregates is almost at capacity too...
Note: The KB works well for FAS (non-AFF), too. You'll just end up with root-data, not root-data-data, partitions. When it's done, each node will "own" half of the 20 disks from a container perspective, and the smaller P2 root partitions will be used to build the root aggregates (and leave 1 spare). The large P1 data partitions will be "spare" for creating data aggregates from either node, as you see fit.
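Once it's done, you can sanity-check how the partitions and root aggregates ended up with commands along these lines from the cluster shell (exact fields and output vary by ONTAP version, so treat these as a sketch rather than copy-paste):

```
cluster::> storage aggregate show -fields size, availsize, raidtype
cluster::> storage disk show -partition-ownership
```

The second command should show each SAS drive as shared, with the P1 (data) and P2 (root) partitions owned by the appropriate node.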
It depends on whether you want to have both nodes serving data (active-active) or you're happy relegating the 2nd node to a passive "backup" role.
If active-active, you'd have one or more data aggregates configured on each node. This carries a bit more "parity partition" tax than pooling all data partitions into one big aggregate, but you get more data-serving performance with both nodes active.
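To make that parity tax concrete, here's a back-of-the-envelope Python sketch for the 20 SAS drives. The assumptions are mine, not from the thread: RAID-DP (2 parity partitions per aggregate, single RAID group), one P1 data partition per drive, and one spare data partition kept per node that owns an aggregate. Adjust to your actual layout.

```python
# Back-of-the-envelope: usable data partitions from 20 root-data
# partitioned SAS drives, active-passive vs active-active.
# Assumptions (hypothetical, adjust to your system): RAID-DP with a
# single RAID group per aggregate, 1 spare per aggregate-owning node.

DATA_PARTITIONS = 20   # one P1 data partition per SAS drive
RAID_DP_PARITY = 2     # parity partitions per RAID-DP group

def usable(aggregates, spares):
    """Data partitions left after spares and per-aggregate RAID-DP parity."""
    remaining = DATA_PARTITIONS - spares
    per_aggr = remaining // aggregates
    return aggregates * (per_aggr - RAID_DP_PARITY)

# Active-passive: one big aggregate on node 1, one spare.
active_passive = usable(aggregates=1, spares=1)   # 19 - 2 = 17

# Active-active: one aggregate per node, one spare per node.
active_active = usable(aggregates=2, spares=2)    # 2 * (9 - 2) = 14

print(active_passive, active_active)  # 17 14
```

So in this sketch, going active-active costs 3 partitions of capacity in exchange for having both controllers serving data.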