I have an old NetApp running ONTAP 7.3.
The system has not been used for some time; before it was shut down it was working fine.
Now when I try to start it, it boots with an error that there are not enough spare disks for the aggregate.
After some debugging I found that all the disk LEDs are flashing green except for 2, which stay solid green.
When I run "disk show -v" the missing disks are not displayed, but when I run "sysconfig -a" they are listed together with all the other disks.
The weird thing is that they are shown as 0.0GB 0B/sect instead of 272.0GB 520B/sect.
Does this mean that the disks are broken and should be replaced?
I would expect an amber LED when a disk is malfunctioning, or is that a wrong assumption?
Thanks!
1 ACCEPTED SOLUTION
AvdS has accepted the solution
Issue solved by adding 2 new disks; case closed.
Thanks for the help and suggestions.
8 REPLIES
Hi
They might just be inaccessible rather than actually failed. In a DS14 shelf the disks are chained one to another and managed by the shelf modules. Try reseating the disks and the modules. If that doesn't help, swap the disks around (preferably while ONTAP is halted) to rule out the bays/modules; ONTAP has no dependency on disk location, since it uses software-based ownership assignment.
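If the disks show up after swapping but are unowned, something like the following should tell you whether the shelf actually sees them and who owns them (from memory, not verified on 7.3, and [diskname] is just a placeholder such as 0a.17):
disk show -v (all disks and their current owners)
disk show -n (disks that are visible but not owned by either controller)
fcadmin device_map (which loop positions/bays the shelves report as populated)
disk assign [diskname] (assign an unowned disk to this controller)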
Gidi
Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK
Hi Gidi, thanks for your reply.
What exactly do you mean by reseating the modules?
Do you mean the disk together with its caddy?
I think Gidi means reseating the shelf I/O modules if reseating the disks does not solve the problem.
I would suggest running the 'led_on [diskname]' command prior to reseating the disks. This will turn on the amber indicator on the disk you point out.
You could also try the 'disk unfail -s [diskname]' command.
This will change the state of the failed disk to spare.
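From memory (not verified on 7.3, and [diskname] is just a placeholder such as 0a.17), both commands need advanced privilege, so the rough sequence would be:
priv set advanced
led_on [diskname] (turns on the amber locate LED on that drive)
disk unfail -s [diskname] (puts the failed disk back into the spare pool)
priv set admin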
Could you send us the results of these commands?
sysconfig -a
sysconfig -r
rdfile /etc/messages
Reseating the disks didn't help, and neither did reseating the modules.
I moved them to another location, which caused weird behaviour: before, they were displayed as 0.0GB 0B/sect with a serial number, but after moving them the serial number is gone as well.
I cannot run the 'disk unfail' command as I don't see the disks and don't have a disk name to unfail.
I will try to share some output in the coming days.
You should replace them with new disks.
Yeah, I'm going to get some refurbished disks.
