ONTAP Hardware

Replacement of DISK

saru8441
3,218 Views

Hello everyone.
I have hardly touched NetApp storage, but the person in charge suddenly quit and now I have to deal with it.
My knowledge is limited, so I would appreciate your help.

A disk failed, and in a hurry I replaced it with an HDD that had been used elsewhere.
At that point it showed "bad label"; I later found out it was displayed because the disk had been used in another chassis.
The following command then cleared the bad label and the disk was recognized.

 

storage disk unfail -s 1.1.15
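
For reference, a minimal check of the result (assuming clustered ONTAP 9 syntax) would be something like:

storage disk show -disk 1.1.15 -fields container-type,owner

which should now list the disk with container-type "spare" and the expected owner node.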


Up to this point there was no problem, but the disk is now recognized as a spare, which I think differs from the original state.
Could you tell me how to restore it to its original state?
The original state, I believe, was:
1.1.13 3.63TB 1 13 FSAS aggregate aggr2 main-01
1.1.14 3.63TB 1 14 FSAS aggregate aggr2 main-01
1.1.15 3.63TB 1 15 FSAS aggregate aggr2 main-01
1.1.16 3.63TB 1 16 FSAS aggregate aggr2 main-01
The current status is:
1.1.13 3.63TB 1 13 FSAS aggregate aggr2 main-01
1.1.14 3.63TB 1 14 FSAS aggregate aggr2 main-01
1.1.15 3.63TB 1 15 FSAS spare Pool0 main-01
1.1.16 3.63TB 1 16 FSAS aggregate aggr2 main-01

And for the spare information:
main::*> storage aggregate show-spare-disks
Original Owner: main-01
Pool0
Spare Pool
Disk             Type   Class     RPM   Checksum  Usable Size  Physical Size  Status
---------------- ------ --------- ----- --------- ------------ -------------- ------
1.1.15           FSAS   capacity  7200  block     3.63TB       3.64TB         zeroed
Original Owner: main-01
Pool0
Root-Data Partitioned Spares
Disk             Type   Class     RPM   Checksum  Local Data Usable  Local Root Usable  Physical Size  Status
---------------- ------ --------- ----- --------- ------------------ ------------------ -------------- ------
1.0.15           FSAS   capacity  7200  block     3.58TB             53.88GB            3.64TB         zeroed
Original Owner: main-02
Pool0
Root-Data Partitioned Spares
Disk             Type   Class     RPM   Checksum  Local Data Usable  Local Root Usable  Physical Size  Status
---------------- ------ --------- ----- --------- ------------------ ------------------ -------------- ------
1.0.22           FSAS   capacity  7200  block     0B                 53.88GB            3.64TB         zeroed
3 entries were displayed.

 

That is the spare information as it currently stands.
Is something wrong here?

I often maintain RAID on servers, so I handled this replacement the same way.

And I am not good at English, so I am using translation software.

6 REPLIES

akiendl
3,181 Views

Spare disks do not take the place of the failed drive. The spare disk that jumped in stays in the aggregate, and the new disk becomes the spare.
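
To see which disk or partition actually took the failed disk's place, one option (assuming clustered ONTAP 9) is to list the aggregate's members:

storage aggregate show-status -aggregate aggr2

And if matching the original slot layout really matters, a spare can be copied back into position with storage disk replace, roughly along these lines (the member disk name below is just a placeholder):

storage disk replace -disk <aggr2 member disk> -replacement 1.1.15 -action start

That said, leaving the new disk as the spare is also perfectly normal.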

 

ak.

saru8441
3,153 Views

Thank you for the reply.
I understand that the spare disk ends up in a different location.
Right now three disks are listed as spares, including the disk I replaced and installed.
Is this normal?

My understanding is that if two spares were configured, there should still be two spares.

The disk that failed this time was a data disk, so I believe the data area of the following spare disk was used in its place:
Original Owner: main-02
Pool0
Root-Data Partitioned Spares
Disk             Type   Class     RPM   Checksum  Local Data Usable  Local Root Usable  Physical Size  Status
---------------- ------ --------- ----- --------- ------------------ ------------------ -------------- ------
1.0.22           FSAS   capacity  7200  block     0B                 53.88GB            3.64TB         zeroed

I was wondering whether the replacement disk also has to become a root-data partitioned spare.

Is the current situation a problem?
I would appreciate your help. Thank you.

AlexDawson
3,096 Views

Please provide the output of 

storage aggregate show

 

saru8441
3,068 Views

Thank you.

 

main::> storage aggregate show


Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_main_01
368.4GB 15.83GB 96% online 2 main-01 raid_tec,
normal
aggr0_main_02
368.4GB 15.83GB 96% online 2 main-02 raid_tec,
normal
aggr1 64.42TB 1.46TB 98% online 4 main-01 raid_tec,
normal
aggr2 65.36TB 21.15TB 68% online 6 main-01 raid_tec,
normal
4 entries were displayed.

saru8441
2,974 Views

Is it OK to leave things as they are now?
Can anyone tell me?
Thank you in advance.

Amador
2,703 Views

This issue could be related to several factors that need specific analysis (the specific disk model, where the disk was used before, the previous disk configuration, the changes already made to that disk, etc.).

 

Disk ownership, sanitization, partitioning, etc. can produce different behaviours (see, for example, "Disks are not owned or report a Bad label after running the disk sanitize release command" in the NetApp Knowledge Base).
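
For instance, a quick ownership check and, if needed, an assignment would look something like this (assuming clustered ONTAP 9 syntax; adjust the disk and node names to your system):

storage disk show -container-type unassigned
storage disk assign -disk 1.1.15 -owner main-01

But given the points above, it is safer to have support confirm the right steps before changing anything.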

 

Since this requires further steps, and the wrong action could create problems in the currently stable configuration, it is better to open a support case with the NetApp Technical Support Center.
