AFF
Hi Experts,
Clus1::> storage aggregate show-spare-disks -original-owner n1
Original Owner: n1
Pool0
Root-Data1-Data2 Partitioned Spares
                                                             Local    Local
                                                              Data     Root Physical
Disk             Type   Class          RPM Checksum         Usable   Usable     Size Status
---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
1.0.23           SSD-NVM solid-state      - block           862.9GB  62.35GB   1.75TB zeroed
clus1::> storage aggregate show-spare-disks -original-owner n2
Original Owner: n2
Pool0
Root-Data1-Data2 Partitioned Spares
                                                             Local    Local
                                                              Data     Root Physical
Disk             Type   Class          RPM Checksum         Usable   Usable     Size Status
---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
1.0.5            SSD-NVM solid-state      - block                0B  62.35GB   1.75TB zeroed
1.0.23           SSD-NVM solid-state      - block           862.9GB       0B   1.75TB zeroed
2 entries were displayed.
This output is from an AFF A400 system with 12 SSDs using root-data1-data2 partitioning. Do I need to align the spares on n2?
If yes, what is the correct CLI:
storage disk replace -disk 1.0.5 -replacement 1.0.23 -action start
or
storage disk replace -disk 1.0.23 -replacement 1.0.5 -action start
I also noticed that the spare-core status is false. What needs to be done so that it is enabled? Please advise.
clus1::> storage aggregate show-spare-disks -fields is-sparecore
original-owner    disk   is-sparecore
----------------- ------ ------------
n2                1.0.5  false
n1                1.0.23 false
n2                1.0.23 false
3 entries were displayed.
How about starting with this output instead:
storage aggregate show-spare-disks
Spares should automatically be distributed appropriately.
The spare-core status indicates whether a spare core is present on any of these disks. A spare core is generated when a panic occurs. This is normal output.
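If you want to confirm whether any cores have actually been saved on the system, one generic check (a standard admin command, nothing specific to your configuration assumed) is:

clus1::> system node coredump show

If no panic has occurred, you would expect no cores listed there, which is consistent with is-sparecore showing false everywhere.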
Here is the requested output. I am referring to the ONTAP 9.8 "Disk and aggregate management" documentation, page 16.
It says:
"When you add partitioned disks to an aggregate, you must leave a disk with both the root and data
partition available as spare for every node. If you do not and your node experiences a disruption,
ONTAP cannot dump the core to the spare data partition."
1) Does this mean that on node n2, disk 1.0.5 needs both the root and data partitions available as spare?
Clus1::> storage aggregate show-spare-disks
Original Owner: n1
Pool0
Root-Data1-Data2 Partitioned Spares
                                                             Local    Local
                                                              Data     Root Physical
Disk             Type   Class          RPM Checksum         Usable   Usable     Size Status
---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
1.0.23           SSD-NVM solid-state      - block           862.9GB  62.35GB   1.75TB zeroed
Original Owner: n2
Pool0
Root-Data1-Data2 Partitioned Spares
                                                             Local    Local
                                                              Data     Root Physical
Disk             Type   Class          RPM Checksum         Usable   Usable     Size Status
---------------- ------ ----------- ------ -------------- -------- -------- -------- --------
1.0.5            SSD-NVM solid-state      - block                0B  62.35GB   1.75TB zeroed
1.0.23           SSD-NVM solid-state      - block           862.9GB       0B   1.75TB zeroed
3 entries were displayed.
A spare core is when we dump a core file to a spare disk following a panic.
In the event we don't have a spare disk available, a spray core is performed instead. It seems you have a spare disk on each node, which should be sufficient for a coredump as needed.
Except... the AFF A400 platform (like most of the newer models) now dumps the core into a special partition on the boot device. So the guidance about needing spare partitions for coredumps no longer applies. That said, you still want spare partitions for RAID recovery operations.
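If you want to double-check how coredumps are handled on your nodes, you can review the coredump configuration (a standard command; the exact fields shown will vary by platform and ONTAP release):

clus1::> system node coredump config show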