Please know I am new to NetApp; this is the first time I am doing an installation (there has to be a first time for everything). We have an AFF A200 system with 900GB SSD drives. I have set up a cluster and added the two Service Processors, and the IP addressing / naming is correct. So far so good; auto-assignment is disabled, and the drives are assigned to the two nodes in a shared environment.
When I run the command "disk show", I see that 10 out of 12 drives are assigned to the nodes; 2 aren't, but those are for parity if I am correct. That also seems to be right. Now, when I enter the command "aggr show-spare-disks", I see 18 entries. Shouldn't that also be 12? See below:
To start with, these disks are partitioned as "Root-Data-Data", which means every drive has three partitions: one holding the controller's root and two holding user data. In this configuration, Node 1 owns 6 data partitions and Node 2 owns the remaining 18, for a total of 24 data partitions, at two per drive across 12 drives.
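As a quick sanity check on that arithmetic (nothing NetApp-specific here, just the counts described above; the 6/18 split is what this particular system shows):

```python
# Root-Data-Data partitioning: each drive carries 1 root + 2 data partitions.
drives = 12
data_partitions_per_drive = 2

total_data_partitions = drives * data_partitions_per_drive  # 24 data partitions

# Ownership split observed on this system (from the post above):
node1_data = 6
node2_data = total_data_partitions - node1_data  # the remaining 18

print(total_data_partitions, node1_data, node2_data)  # → 24 6 18
```

So the 18 entries from "aggr show-spare-disks" are spare data partitions owned by one node, not whole drives.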
Hopefully that helps to start making sense of what you see...
From the "960GB" base-10 marketing capacity, we get 894GiB of base-2 formatted capacity. Around 144GiB of that is then used for various overheads, including the root partition, leaving two 375GiB data partitions per drive, or 750GiB total.
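The conversion can be worked through like this (a rough sketch of the arithmetic above; the 144GiB overhead figure is the approximate value quoted, not an exact spec):

```python
# Vendor-marketed "960GB" is base 10; formatted capacity is reported in GiB (base 2).
marketed_bytes = 960 * 10**9
gib = 2**30

formatted_gib = marketed_bytes / gib        # ~894 GiB formatted
overhead_gib = 144                          # approximate: root partition + other overheads
data_gib = formatted_gib - overhead_gib     # ~750 GiB left for data
per_partition_gib = data_gib / 2            # two data partitions per drive, ~375 GiB each

print(round(formatted_gib), round(data_gib), round(per_partition_gib))  # → 894 750 375
```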
Starting on Page 49 of this document - https://library.netapp.com/ecm/ecm_download_file/ECMLP2496263 - there are commands showing how to display which partitions are owned by each node and how to assign ownership. After that, we would recommend moving to the GUI to create the data aggregates that will store user data (make sure you set the RAID group size).
Hope this helps, please feel free to post followups!
Thank you both for the reply and for taking the time to explain this to me. These explanations / links helped me understand the technology, how it works, and why I see these 2 data partitions per drive.
Now, when I use the command "disk show -partition-ownership", I see that AMS_Node-02 has more data partitions assigned to it. Wouldn't it be better to assign Data1 from each drive to Node 1 and Data2 to Node 2, so it's equally split across the nodes?