This seems to be related to another question you asked regarding disks showing Failed when initially installed. Please take a look at the following questions and respond to either of your postings, and one of us will respond as soon as possible. As to the question of slot/socket issues: given the age of the unit, I would say that is a possibility. However, I would still lean towards a batch of faulty drives, or seating issues with the contact points given the unit's age.
1.) Is there any pattern?
I.e., do some slots work with some disks but not others? Are any slots not taking any disks at all? Do disks sometimes work and then stop working after reseating?
This post is specifically for one case where I cannot unfail the disk: the command output states that the disk is not failed, but from the sysconfig output I can see that it is failed. Why is it showing that kind of output?
*> disk unfail -s 0b.00.5
disk unfail: Disk is not currently failed.
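For what it's worth, a few commands can help cross-check what ONTAP actually records for that disk. This is a sketch assuming Data ONTAP 7-Mode (which matches your `*>` advanced-privilege prompt); command availability and output may differ on your release:

```
priv set advanced    # disk unfail is an advanced-privilege command
sysconfig -r         # RAID view: check whether 0b.00.5 is listed under "Broken disks"
aggr status -f       # lists disks that RAID currently considers failed
disk show -v         # ownership view: serial numbers and owning controller
priv set admin       # return to normal privilege when done
```

If sysconfig reports the disk as failed but `disk unfail` insists it is not, comparing the RAID view (`sysconfig -r` / `aggr status -f`) against the ownership view (`disk show -v`) can at least narrow down which layer disagrees about the disk's state.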
The other post is general, for all the other scenarios where I cannot see any pattern: sometimes taking the disk out and putting it back in after a while helps, sometimes the disk size and blc/sec are recognized properly and sometimes not, and sometimes taking another disk from the same stock helps. I was wondering if there is any common known issue with these devices, as I have never before faced a situation where new disks are not accepted by so many storage arrays. If it's not a common issue, it seems that the problem is in the disk stock we have. The disks are original, from a NetApp partner, but many of them still do not work properly.
Given the erratic and inconsistent behavior of the drives, I would speculate that it's a bad batch of drives. Do you have any recourse, or a way to get the vendor involved? With a software glitch or bad hardware, the issue would usually present consistently.
A bad batch of drives would also be consistent with the unfail behavior you're seeing.