ONTAP Discussions

Broken disks after assigning ownership to them

NetappNewbie-Roy

Hi NetApp team,

I assigned SSD disks with "disk assign -disk 1.0.10 -owner AFF-02", and then they all became broken. How can I fix this? I want to create a data aggregate from these disks. Thanks.

 

 

AFF::> storage disk show -broken
Original Owner: AFF-02
  Checksum Compatibility: block
                                                                          Usable Physical
    Disk            Outage Reason HA Shelf Bay Chan   Pool  Type    RPM     Size     Size
    --------------- ------------- ------------ ---- ------ ----- ------ -------- --------
    1.0.6           bad label     0a     0   6    B  Pool0   SSD      -  372.4GB  372.6GB
    1.0.7           bad label     0b     0   7    A  Pool0   SSD      -  372.4GB  372.6GB
    1.0.8           bad label     0a     0   8    B  Pool0   SSD      -  372.4GB  372.6GB
    1.0.9           bad label     0b     0   9    A  Pool0   SSD      -  372.4GB  372.6GB
    1.0.10          bad label     0a     0  10    B  Pool0   SSD      -  372.4GB  372.6GB
    1.0.11          bad label     0b     0  11    A  Pool0   SSD      -  372.4GB  372.6GB
6 entries were displayed.

                                                               

1 ACCEPTED SOLUTION

nitish

Option 1

 

Unfail the failed drives to return them to the spare pool:
system node run localhost
priv set advanced
disk unfail -s <disk_name>
Repeat the disk unfail step for each failed disk.

Reassign the disks to the storage node

 

Check that the disks have been added as spares:

aggr status -s
All disks should show as spares

disk zero spares
The bad labels will clear once disk zeroing completes.
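Putting the steps above together, a full session for the six disks from the question might look like the sketch below. This is an illustrative transcript, not verified output: the node name (AFF-02) and disk IDs are taken from the question, and the nodeshell on some ONTAP versions displays disk names in the older port-based form (e.g. 0a.0.6) rather than the cluster-mode form shown here, so adjust the names to whatever "disk show" reports in your nodeshell.

```
AFF::> system node run -node AFF-02
AFF-02> priv set advanced          # disk unfail requires advanced privilege
AFF-02*> disk unfail -s 1.0.6     # -s returns the unfailed disk to the spare pool
AFF-02*> disk unfail -s 1.0.7
AFF-02*> disk unfail -s 1.0.8
AFF-02*> disk unfail -s 1.0.9
AFF-02*> disk unfail -s 1.0.10
AFF-02*> disk unfail -s 1.0.11
AFF-02*> priv set admin
AFF-02> aggr status -s             # all six disks should now be listed as spares
AFF-02> disk zero spares           # bad labels clear once zeroing completes
```

Once the spares have zeroed, they can be used to build the data aggregate from the cluster shell as usual.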

 

Option 2

 

 

When disks are added to a storage system running an unsupported Data ONTAP version, they fail with bad label errors. The disks are tagged as Bad Label in the broken disk pool and cannot be used.

 

Resolution

Upgrade Data ONTAP to the minimum required version indicated in the Disk Drive and Firmware Matrix on the NetApp Support site.

