I have an old NetApp running ONTAP 7.3.
The system has not been used for some time; before it was shut down it was working fine.
Now when I try to start it, it boots with an error that there are not enough spare disks for the aggregate.
After some debugging I found that all the disk LEDs are flashing green except for two, which stay solid green.
When I run "disk show -v" the missing disks are not displayed, but when I do "sysconfig -a" the missing disks are listed together with all the other disks.
The weird thing is that they are shown as 0.0GB 0B/sect instead of 272.0GB 520B/sect.
Does this mean that the disks are broken and should be replaced?
I would expect an amber LED when a disk is malfunctioning, or is that a wrong assumption?
They might be inaccessible rather than actually failed. In a DS14 shelf the disks are chained to one another and managed by the individual shelf modules. Try reseating the disks and the modules. If that doesn't work, swap the disks around (preferably while ONTAP is halted) to rule out the bays/modules (ONTAP has no dependency on disk location, since it uses software-based disk assignment).
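Since ownership is software-based, a quick sanity check after swapping disks around is to see whether they show up as unowned and reassign them. A minimal sketch of a 7-mode session; the disk name 0a.17 is a hypothetical example:

```
netapp> disk show -n        # list disks that are currently unowned
netapp> disk assign 0a.17   # hypothetical disk name: assign it to this controller
```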
I think Gidi means reseating IOM-modules if reseating the disks does not solve the problem.
I would suggest running the 'led_on [diskname]' command prior to reseating the disks. This will turn on the amber indicator on the disk you point out.
You could also try the 'disk unfail -s [diskname]' command.
This will change the state of the failed disk to spare.
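Put together, a 7-mode session might look like the sketch below. The disk name 0a.16 is a hypothetical example, and note that led_on and disk unfail are advanced-privilege commands:

```
netapp> priv set advanced
netapp*> led_on 0a.16          # light the amber LED on the suspect disk
netapp*> disk unfail -s 0a.16  # return the failed disk to the spare pool
netapp*> priv set admin
```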
Reseating the disks didn't help, and neither did reseating the modules.
I moved them to another location, which caused weird behaviour: before, they were displayed as 0.0GB 0B/sect with a serial number, but after moving them the serial number is gone as well.
I cannot run the 'disk unfail' command as I don't see the disks and I don't have a disk name to unfail.
Will try to share some output in the coming days.