Hi,
Thanks for this update.
I am still trying to get my head around this issue you raised, but it's interesting.
I am sure there must be some events in the event logs about whatever transitional state made that disk look like that.
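If you still have the rough window of time, something along these lines should surface the relevant disk/RAID events (the node name and time range here are just placeholders, not from your setup):

   event log show -node <node_name> -time "MM/DD/YYYY HH:MM:SS".."MM/DD/YYYY HH:MM:SS"

You could narrow it further with -message-name if you know which event family you are after.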
I was reading about spare partitions:
https://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-cm-psmg%2FGUID-1C0DF65F-4EB1-4729-A0FC-A2A8A6278664.html
I noted something interesting (unrelated to this, but educational):
You must leave a disk with both the root and data partition available as spare for every node.
Original Owner: c1-01
 Pool0
  Shared HDD Spares
                                                            Local    Local
                                                             Data     Root Physical
 Disk                        Type   RPM Checksum           Usable   Usable     Size
 --------------------------- ----- ------ -------------- -------- -------- --------
 1.0.1                       BSAS    7200 block           753.8GB  73.89GB  828.0GB
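For reference, that docs example looks like the spare-disk view from the CLI; something like the following should show the equivalent on your side (the node name is just a placeholder):

   storage aggregate show-spare-disks -original-owner <node_name>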
The theory you were told by support is believable:
For example, this is the spare disk available on my cluster node, ready to be added to data_aggr (output below).
Basically, root usable is 'zero' here, as expected. Most likely, when your disk was pulled in to be added to data_aggr, it was in that transitional state where it had just been assigned and briefly showed no data usable and no root usable at the same time. Makes sense... :)
Original Owner:
 Pool0
  Partitioned Spares
                                                               Local    Local
                                                                Data     Root Physical
 Disk             Type   Class          RPM Checksum          Usable   Usable     Size Status
 ---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
 3.1.22           SSD    solid-state      - block            1.72TB       0B   3.49TB zeroed
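If you want to double-check which node currently owns the root and data partitions of that spare, something like this should do it (the disk name is taken from the output above; the partition-ownership view is what I would try, but treat the exact parameter as my assumption):

   storage disk show -disk 3.1.22 -partition-ownership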