Hello, we have a local failover cluster containing two nodes:
OnCommand / OCUM is complaining that DC1-NAPP-C1-N2 does not have enough spare disks (I have attached the disk details of both nodes to this post). I think it's because DC1-NAPP-C1-N2 uses 1.63 TB SAS drives in its root aggregate and has no drive of this type in its hot-spare pool. Happy to be corrected on this, as I'm still not sure how the spare disk logic works.
I'm unsure how we've ended up in this situation, as all failed disks are replaced with the exact same drive type, and the replacement disk presumably goes into the spare pool. In any case, how can I fix this? From what I can tell, I would need to reassign a disk from partner DC1-NAPP-C1-N1, but there are no available spares of that type to give, so I'd need to convert a data disk to a spare. Is that possible without losing data?
At least one matching or appropriate hot spare per disk type, per node, is recommended.
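As a sketch (verify the exact syntax against your ONTAP release's command reference), you can confirm what each node actually holds as spares with commands along these lines:

```
::> storage aggregate show-spare-disks
::> storage disk show -container-type spare
```

The first shows spares grouped per owning node and disk type; the second lists every disk currently in the spare container, so a missing type for one node stands out quickly.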
In your case, it appears this particular type (10K 1.6 TB) has no spares. Unfortunately, this disk type has been used in the large aggregate "Aggr1_SAS_N1", which is composed of 4 RAID groups. I observed there are 3 RAID groups of 23 disks, and the 4th RAID group has 19, so the 4th was probably added later to further grow the aggregate. My guess is that everything was fine before this last RAID group (rg3) was added, and that this is where the spare consideration was missed. Is that right?
It's not a good situation to be in, honestly. You cannot simply take a data disk away from a RAID group, unless you reduce the RAID protection from RAID-DP to RAID4, which reduces redundancy. I don't know whether you can afford to move the volumes from a smaller aggregate to another aggregate, then destroy it and rebuild it with a smaller RAID group size. Also, glancing through the list, I don't find any disk of appropriate capacity/performance to use for mixing. I don't have a clear recommendation here; let's see if anyone else has advice for you. In the meantime, you can read the following KBs.
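For reference only, and as a hedged sketch rather than a recommendation (the placeholder vserver/volume/aggregate names are mine, and RAID4 conversion deliberately gives up a parity disk's worth of protection), the two operations mentioned above look roughly like this:

```
::> storage aggregate modify -aggregate Aggr1_SAS_N1 -raidtype raid4
    (frees one parity disk per RAID group back to the spare pool,
     at the cost of reduced redundancy)

::> volume move start -vserver <svm> -volume <vol> -destination-aggregate <other_aggr>
    (evacuate volumes nondisruptively before destroying/rebuilding
     a smaller aggregate)
```

Please double-check both against the ONTAP documentation for your version before running anything in production.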
Hello, thanks for getting back to me. Yes, we have had extra disk shelves added in the past, but that was approximately 4 years ago; this lack-of-spares issue has, I think, only been occurring over the last 3 months. As I say, I've no idea how we've managed to get into this situation.
This has already caused a problem for us: two disks in the same RAID group failed (actually one didn't fail, it was just left in maintenance for a very long time), and because there were no spare disks of that type, the node shut itself down after 24 hours!
6) Now on N-2, after the move, you get 3x 10K spares. (There will be no 15K spare for the newly created root_aggr; however, you will have a higher-capacity 10K drive available in case a drive fails in the root aggregate, and you need to make sure the disk-RPM-mixing option is enabled.) The rule for spares is: hot spare drives should be equal to or larger in capacity than the drives they are protecting.
7) Next, move 1x 10K to N-1.
8) Now both N-1 and N-2 have 2x 10K spares. I think this should at least make things safer and may stop the alerts.
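A rough sketch of how the disk moves and the RPM-mixing option above would be done (the disk name here is a placeholder, and the option name should be verified for your ONTAP version; also watch that disk auto-assign doesn't immediately reclaim the disk):

```
::> storage disk removeowner -disk <disk_name>
::> storage disk assign -disk <disk_name> -owner DC1-NAPP-C1-N1

::> storage raid-options modify -node DC1-NAPP-C1-N2
        -name raid.mix.hdd.rpm.capacity -value on
    (allows mixing drives of different RPM in capacity-based
     aggregates, such as a root aggregate protected by a
     faster/larger spare)
```

If auto-assign is enabled cluster-wide, it may be cleaner to assign the disk to the new owner immediately after removing ownership, as shown, rather than leaving it unowned.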
Hello, that's great, thanks for the reply. I think if we have to, we'd go with plan B. However, there might be a reason we're down a spare disk; I noticed the following message in the syslogs:
adapterName="0c", debug_string="One or more (1) PHYs on expander 5:00a098:000e7ee:3f are in a bad state."
After further investigation I was able to see that the system can't see the disk in shelf 13, bay 21, despite the fact it is physically present (but with no lights on it); the disks are listed as 4.13.20 to 4.13.22.
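For anyone hitting the same expander PHY error, a sketch of the kind of commands that show what the node can actually see (shelf and disk names taken from this thread; parameter names may vary slightly between ONTAP releases):

```
::> storage shelf show -shelf <shelf_name>
::> storage disk show -disk 4.13.*
```

A disk that is physically seated but missing from the second command's output (with no bay LEDs lit) is consistent with a bad PHY on the shelf expander rather than a failed drive.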
I've logged it with our support, will update here once I have a resolution.