ONTAP Discussions

ONTAP 9.1 SSD Drives

jpdeboer
3,748 Views

Hi,

 

Please note I am new to NetApp; this is the first time I am doing an installation (there has to be a first time for everything). We have an AFF A200 system with 900GB SSD drives. I have set up a cluster and added the 2 Service Processors, and the IP addressing / naming is correct. So far so good: auto-assignment is not enabled, the drives are assigned to the 2 nodes, and it is a shared environment.

 

When I run the command "disk show" I see that 10 out of 12 drives are assigned to the nodes and 2 aren't, but those are for parity if I understand correctly. That seems right, but when I enter the command "aggr show-spare-disks" I see 18 entries. Shouldn't that also be 12? See below:

 

===========================================================================================================================================

AMS_Cluster::> aggr show-spare-disks

Original Owner: AMS_Node-01
Pool0
Root-Data1-Data2 Partitioned Spares
Local Local
Data Root Physical
Disk Type Class RPM Checksum Usable Usable Size Status
---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
1.0.1 SSD solid-state - block 375.2GB 0B 894.3GB zeroed
1.0.3 SSD solid-state - block 375.2GB 0B 894.3GB zeroed
1.0.5 SSD solid-state - block 375.2GB 0B 894.3GB zeroed
1.0.7 SSD solid-state - block 375.2GB 0B 894.3GB zeroed
1.0.9 SSD solid-state - block 375.2GB 0B 894.3GB zeroed
1.0.11 SSD solid-state - block 375.2GB 143.7GB 894.3GB zeroed

Original Owner: AMS_Node-02
Pool0
Root-Data1-Data2 Partitioned Spares
Local Local
Data Root Physical
Disk Type Class RPM Checksum Usable Usable Size Status
---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
1.0.0 SSD solid-state - block 750.3GB 0B 894.3GB zeroed
1.0.1 SSD solid-state - block 375.2GB 0B 894.3GB zeroed
1.0.2 SSD solid-state - block 750.3GB 0B 894.3GB zeroed
1.0.3 SSD solid-state - block 375.2GB 0B 894.3GB zeroed
1.0.4 SSD solid-state - block 750.3GB 0B 894.3GB zeroed
1.0.5 SSD solid-state - block 375.2GB 0B 894.3GB zeroed
1.0.6 SSD solid-state - block 750.3GB 0B 894.3GB zeroed
1.0.7 SSD solid-state - block 375.2GB 0B 894.3GB zeroed
1.0.8 SSD solid-state - block 750.3GB 0B 894.3GB zeroed
1.0.9 SSD solid-state - block 375.2GB 0B 894.3GB zeroed
1.0.10 SSD solid-state - block 750.3GB 143.7GB 894.3GB zeroed
1.0.11 SSD solid-state - block 375.2GB 0B 894.3GB zeroed
18 entries were displayed.

=========================================================================================================================================

 

Also, for most of the drives I can only use 375GB out of 894.3GB. Does this have to do with the partitioning?

 

What I would like to accomplish is to create 2 data aggregates, with the drives shared across them, so that both have the same amount of capacity available. From there I will create the needed SVMs.

 

Could someone help with this and explain a little about what I am doing wrong?

 

Thanks in advance, let me know if more information is needed.

3 REPLIES

AlexDawson
3,670 Views

Hi there!

 

I'll paste your output in a monospaced font, as it makes it a little easier to understand:

 

 

===========================================================================================================================================
AMS_Cluster::> aggr show-spare-disks

Original Owner: AMS_Node-01
 Pool0
  Root-Data1-Data2 Partitioned Spares
                                                             Local    Local
                                                              Data     Root Physical
Disk             Type   Class          RPM Checksum         Usable   Usable     Size Status
---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
1.0.1            SSD    solid-state      - block           375.2GB       0B  894.3GB zeroed
1.0.3            SSD    solid-state      - block           375.2GB       0B  894.3GB zeroed
1.0.5            SSD    solid-state      - block           375.2GB       0B  894.3GB zeroed
1.0.7            SSD    solid-state      - block           375.2GB       0B  894.3GB zeroed
1.0.9            SSD    solid-state      - block           375.2GB       0B  894.3GB zeroed
1.0.11           SSD    solid-state      - block           375.2GB  143.7GB  894.3GB zeroed

Original Owner: AMS_Node-02
 Pool0
  Root-Data1-Data2 Partitioned Spares
                                                             Local    Local
                                                              Data     Root Physical
Disk             Type   Class          RPM Checksum         Usable   Usable     Size Status
---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
1.0.0            SSD    solid-state      - block           750.3GB       0B  894.3GB zeroed
1.0.1            SSD    solid-state      - block           375.2GB       0B  894.3GB zeroed
1.0.2            SSD    solid-state      - block           750.3GB       0B  894.3GB zeroed
1.0.3            SSD    solid-state      - block           375.2GB       0B  894.3GB zeroed
1.0.4            SSD    solid-state      - block           750.3GB       0B  894.3GB zeroed
1.0.5            SSD    solid-state      - block           375.2GB       0B  894.3GB zeroed
1.0.6            SSD    solid-state      - block           750.3GB       0B  894.3GB zeroed
1.0.7            SSD    solid-state      - block           375.2GB       0B  894.3GB zeroed
1.0.8            SSD    solid-state      - block           750.3GB       0B  894.3GB zeroed
1.0.9            SSD    solid-state      - block           375.2GB       0B  894.3GB zeroed
1.0.10           SSD    solid-state      - block           750.3GB  143.7GB  894.3GB zeroed
1.0.11           SSD    solid-state      - block           375.2GB       0B  894.3GB zeroed
18 entries were displayed.
=========================================================================================================================================

To start with, these disks are partitioned as "Root-Data1-Data2" (ADPv2), which means every drive has three partitions: a small one for holding the controller root aggregates and two for holding user data. In this configuration, Node 1 has 6 data partitions and Node 2 has the remaining 18, for a total of 24 data partitions, at two per drive across 12 drives. That is also why "aggr show-spare-disks" lists 18 entries rather than 12: it reports spare partition capacity per disk per owner, and the odd-numbered disks appear under both owners because each node owns a data partition on them.
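If you want to see exactly which partitions are owned by which node (rather than the per-disk spare summary), something like the following should show it. I am quoting these from memory and the -original-owner filter is optional, so the exact options may differ slightly on 9.1:

AMS_Cluster::> storage disk show -partition-ownership
AMS_Cluster::> storage aggregate show-spare-disks -original-owner AMS_Node-01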

 

Hopefully that helps to start making sense of what you see...

 

From the "960GB" base10 marketing capacity, we get 894GiB, base 2, formatted capacity. Then around 144GiB is used for various overheads etc, as well as the root partition, leaving two 375GB partitions per drive, or 750GiB total. 

 

Starting on page 49 of this document - https://library.netapp.com/ecm/ecm_download_file/ECMLP2496263 - there are commands showing how to display which partitions are owned by each node and how to assign ownership. After that, we would recommend moving to the GUI to create the data aggregates that will store user data (make sure you set the RAID group size).
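If you would rather create the aggregates from the CLI, it would look roughly like the lines below. The aggregate names, disk count and RAID group size here are placeholders only (and on partitioned systems -diskcount counts data partitions rather than whole drives, as far as I recall), so check them against the guide for your layout before running anything:

AMS_Cluster::> storage aggregate create -aggregate aggr1_node1 -node AMS_Node-01 -diskcount 11 -maxraidsize 11
AMS_Cluster::> storage aggregate create -aggregate aggr1_node2 -node AMS_Node-02 -diskcount 11 -maxraidsize 11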

 

Hope this helps, please feel free to post followups!

 

jpdeboer
3,643 Views

Hi There,

 

Thank you both for the replies and for taking the time to explain this to me. These explanations / links made me understand the technology, how it works, and why I see these 2 data partitions per drive.

 

Now when I use the command "disk show -partition-ownership" I see that AMS_Node-02 has more data partitions assigned to it. Wouldn't it be better to assign Data1 from each drive to Node 1 and Data2 to Node 2, so it is split equally over the nodes?
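From the guide Alex linked, I am guessing the reassignment would go something like this per drive, but I have not run it yet, so please correct me if the flags are wrong:

AMS_Cluster::> storage disk removeowner -disk 1.0.0 -data1 true
AMS_Cluster::> storage disk assign -disk 1.0.0 -owner AMS_Node-01 -data1 true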

 

Thanks

 

colsen
3,649 Views

As always, Alex has it covered.

 

I honestly haven't found a great KB or doc on root-data-data (ADPv2) partitioning on NetApp's site, but Sami Tururen put this out on his site:

 

https://dontpokethepolarbear.wordpress.com/2016/06/23/netapp-aff-and-advanced-drive-partition-v2-part-1/

 

It just explains the whole thing in a way that made it easier for my slow neurons to comprehend.

 

 

Good luck!

 

Chris
