ONTAP Hardware

disk unfail: Disk is not currently failed 7mode FAS2220




There are 12 disks in the FAS2220, running NetApp Release 8.2.5P5 7-Mode.


sysconfig is listing one of the disks as failed:

>00.5 : NETAPP X302_HJUPI01TSSM NA04 847.5GB 512B/sect (Failed)

This disk is listed in sysconfig -r as a partner disk:

partner 0b.00.5 0b 0 5 SA:B 0 BSAS 7200 0/0 847884/1736466816


Unfortunately, I cannot unfail it.

*>disk unfail -s 0b.00.5
*>disk unfail: Disk is not currently failed.


> aggr status -s

shows no spares.


Is there any way to unfail that disk?

I keep replacing disks on that device with no luck. I have tried 3 different new disks from stock.

Could it be a socket issue? I cannot see any errors. It looks like new disks placed in the socket are not accepted.








Hello Marius,

I see that this seems to be related to another question you asked about disks showing as Failed when initially installed. Please take a look at the following questions and respond to either of your postings, and one of us will respond as soon as possible. As to the question of slot/socket issues: given the age of the unit, I would say that is a possibility. However, I would still lean towards a batch of faulty drives, or seating issues with the contact points given the age of the unit.


1.) Is there any pattern?

I.e., do some slots work with some disks but not others? Are any slots not taking any disks at all? Do disks sometimes work, then stop working after reseating?








Thank you for your message.

This post is specifically about one case where I cannot unfail the disk: the command output states that the disk is not currently failed, yet from the sysconfig output I can see that it is failed. Why is it showing that kind of output?


*>disk unfail -s 0b.00.5
*>disk unfail: Disk is not currently failed.


The other post is general, covering all the other scenarios where I cannot see any pattern: sometimes taking the disk out and putting it back in after a while helps, sometimes the disk size and block/sector values are recognized properly and sometimes not, and sometimes taking another disk from the same stock helps. I was wondering if there is any common known issue with these devices, as I have never before faced a situation where new disks are not accepted by so many storage arrays. If it is not a common issue, then the problem seems to be in the disk stock we have. The disks are original, from a NetApp partner, but many of them still do not work properly.


Given the erratic and inconsistent behavior of the drives, I would speculate that it is a bad batch. Do you have any recourse, or a way to get the vendor involved? Usually, if you are experiencing a software glitch or bad hardware, the issue presents consistently.

A bad batch of drives would also correlate with the unfail behavior you are seeing.


Since it belongs to the partner node, did you try unfailing it from the partner node’s CLI?
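For reference, a minimal sketch of what that could look like when logged in to the partner node, assuming the disk is visible there under the same ID 0b.00.5 (disk unfail requires advanced privilege in 7-Mode, and the prompt name here is just a placeholder):

```
partner> priv set advanced
partner*> disk show -v            # confirm which node owns 0b.00.5
partner*> disk unfail -s 0b.00.5  # -s makes the disk a spare after unfailing
partner*> priv set admin
```

If disk show -v reports the disk as owned by the partner node, the unfail should be attempted there rather than from the node that only sees it as a partner disk.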