AFF
In the past couple of months, we have had a couple of disk drives fail, with replacement parts ordered and shipped to us. But there is no NetApp command to tell from the command line which disk has failed... so I end up walking down to the basement and checking for an amber light. Why? Why is there no disk show command with status=failed???
Grrr
Solved! See The Solution
Try...
cluster::> storage disk show -container-type broken
(There are several other variations too, e.g. -container-type, -container-name.)
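For example, to see every disk's container type and owner at a glance (assuming your release supports the standard -fields parameter):

cluster::> storage disk show -fields container-type,owner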
MDC-1N01::> storage disk show -container-type broken
There are no entries matching your query.
MDC-1N01::>
This is a false positive after changing out a failed disk, and the amber light is on for both the disk and the shelf. I am looking for a command to determine the status of the amber light on a disk (or any disk) without having to march down to the datacenter to check.
Here are a few commands to help understand storage/disk faults.
::> storage disk show -broken
::> storage disk error show
::> node run -node <name> -command "storage show fault -v"
If there is an amber light on after the disk change, the disk probably isn't assigned (see the quick check below).
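Assuming the -container-type filter behaves the same on your release, this lists any disk that is sitting unassigned:

cluster::> storage disk show -container-type unassigned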
As for disk status in the shelf, check out this command:
CLUSTER::> storage shelf show -bay
Shelf Name: 2.0
Stack ID: 2
Shelf ID: 0
Shelf UID: 00:00:00:00:00:00:00:00
Serial Number: xxxxxxxxxxxxx
Module Type: IOM6E
Model: DS4246
Shelf Vendor: NETAPP
Disk Count: 24
Connection Type: SAS
Shelf State: Online
Status: Normal
Bays:
      Has   Operational
 ID   Disk  Bay Type     Status
---   ----  -----------  -----------
  0   true  single-disk  normal
  1   true  single-disk  normal
  2   true  single-disk  normal
  3   true  single-disk  normal
  4   true  single-disk  normal
  5   true  single-disk  normal
  6   true  single-disk  normal
  7   true  single-disk  normal
  8   true  single-disk  normal
  9   true  single-disk  normal
 10   true  single-disk  normal
 11   true  single-disk  normal
 12   true  single-disk  normal
 13   true  single-disk  normal
 14   true  single-disk  normal
 15   true  single-disk  normal
 16   true  single-disk  normal
 17   true  single-disk  normal
 18   true  single-disk  normal
 19   true  single-disk  normal
 20   true  single-disk  normal
 21   true  single-disk  normal
 22   true  single-disk  normal
 23   true  single-disk  normal
Errors:
------
-
storage shelf show -instance will give you a lot of detail about the shelves / disks
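If you only care about one shelf, you can also narrow it down (2.0 here is just the shelf name from the output above; the -shelf parameter should exist on current releases, but check storage shelf show ? if not):

cluster::> storage shelf show -shelf 2.0 -instance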
You can run the commands below:
cluster::> storage disk show -broken
or
cluster::> system node run -node NODENAME -command "vol status -f"
If you have quite a few shelves/disks, I always turn the LED on (led_on) just to make sure the light is lit and the disk is easily identified.
system node run -node NODENAME (this will take you to the old nodeshell prompt, like back in 7-Mode)
priv set -advanced
led_on diskID
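And when you're done, turn the light back off from the same nodeshell prompt:

led_off diskID

Newer ONTAP releases may also have a cluster-shell command for this (storage disk set-led), but I'd run storage disk set-led ? first to confirm it exists and what the exact syntax is on your version.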
Once the disk has been replaced you may need to assign disk ownership if auto assign is not enabled.
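If you do end up assigning by hand, it's along these lines (the disk and node names below are just placeholders, and the autoassign field name may differ slightly between releases):

cluster::> storage disk assign -disk 2.0.11 -owner node-01
cluster::> storage disk option show -fields autoassign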
@pippen23 Following up to see if you are still looking for the solution.
Working with support, it turns out I may have a bad bay in a shelf... ouch!
Failing over the cluster and doing a giveback fixed the issue; we needed to do a SCSI reset, which cleared the error.
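For anyone landing here later, the takeover/giveback cycle on an HA pair looks roughly like this (the node name is a placeholder, and I'd only do this in a maintenance window, ideally with support on the line):

cluster::> storage failover takeover -ofnode node-01
cluster::> storage failover show
cluster::> storage failover giveback -ofnode node-01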