Hi,
I am trying to configure two nodes (with three stacks of disks) that we just added to our lab environment. These nodes were decommissioned from our production environment. The disks in two of the stacks are showing as broken, and one stack has an amber light on all but three disks.
Can someone please direct me on how to solve this, as I am new to NetApp storage?
xxxxxxxx::*> storage disk show -container-type broken
Usable Disk Container Container
Disk Size Shelf Bay Type Type Name Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
4.10.3 836.9GB 10 3 SAS broken - yyyyy
4.10.4 836.9GB 10 4 SAS broken - yyyyy
4.10.5 836.9GB 10 5 SAS broken - yyyyy
4.10.6 836.9GB 10 6 SAS broken - yyyyy
.
.
5.50.10 3.63TB 50 10 FSAS broken - yyyyy
5.50.11 3.63TB 50 11 FSAS broken - yyyyy
5.50.12 3.63TB 50 12 FSAS broken - yyyyy
.
.
.
Thanks.
4 REPLIES
Which OS and mode did these disks come from, and which OS are they on now?
Were any steps performed beforehand (removing them from the aggregate, removing ownership)?
Gidi
Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK
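For reference, a minimal cluster-shell sketch for checking ownership of the broken disks and reassigning it if needed; the disk and node names are placeholders:
storage disk show -container-type broken -fields owner
storage disk removeowner -disk <disk name>
storage disk assign -disk <disk name> -owner <node name>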
COG has accepted the solution
- Mark as New
- Bookmark
- Subscribe
- Mute
- Subscribe to RSS Feed
- Permalink
- Report Inappropriate Content
Hi Gidi,
Thanks for reaching out. We resolved the problem with the following commands:
set -privilege advanced
disk unfail -s <disk name>
disk zerospares
The disks are now spares and owned by one of the nodes.
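As a concrete sketch of that sequence, using the full cluster-shell command names and the first broken disks from the original listing (the prompt shown, and repeating unfail once per disk, are assumptions):
xxxxxxxx::> set -privilege advanced
xxxxxxxx::*> storage disk unfail -s -disk 4.10.3
xxxxxxxx::*> storage disk unfail -s -disk 4.10.4
(repeat for each disk reported by: storage disk show -container-type broken)
xxxxxxxx::*> storage disk zerospares
xxxxxxxx::*> storage disk show -container-type spare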
Good morning,
I have tried the commands indicated, but I have not been able to solve it; the disks still show as broken.
Any other option?
Hi JorgeRemeseiro,
Which command failed?
Thanks!
Jianping
