ONTAP Discussions
Hi,
we ran into an issue where two member disks of an aggregate failed. We replaced them with two disks, but ONTAP failed to use them as replacements, so their status showed as "FAILED", and it looks like the "new" disks are part of a different, foreign aggregate.
Our aggregate went into degraded mode.
After that, one of the volumes went offline. When I try to bring it online, we get the following message: "Unable to set volume attribute "state" for volume "vol name" on Vserver "name". Reason: Volume is inconsistent."
We assigned two spare disks to the impacted node, and it is now in normal mode.
Machine: FAS8020
ONTAP version: 9.5P16
RAID Configuration: mixed_raid_type (Data RAID group size of 20 disks)
RAID Status: hybrid, normal
Aggregate Type: Flash Pool
Two questions:
1) Is there any option to bring the volume online and save the data?
2) Why do the replacement disks show as part of a foreign, unknown aggregate, and how do I take them out?
Solved! See The Solution
As you mentioned, "we replaced them with two disk". Most likely those two disks were part of another system/aggregate at some point. For such disks, you should remove their ownership information so they can be properly integrated into the new system.
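Assuming the disks show up with container type "unknown", a typical sequence looks like the following (the disk name 1.0.12 and node name node1 are placeholders; substitute your own, and note that removing ownership may require advanced privilege):

::> storage disk show -container-type unknown
::> set -privilege advanced
::*> storage disk removeowner -disk 1.0.12
::*> storage disk assign -disk 1.0.12 -owner node1

After assignment the disks should appear as spares on the owning node.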
The volume most likely went offline because its hosting aggregate went down due to the simultaneous failure of two disks in a RAID group. More info in the given links.
Resolving volume offline issues:
Performing diagnostic actions for volume offline conditions:
Determining if a volume is offline because of broken disks in an aggregate:
What is the current state of the volume:
::> volume show -vserver svm1 -volume vol1 -fields state,is-inconsistent
Please open a support case if the volume is offline/inconsistent.
Two questions:
1) Is there any option to bring the volume online and save the data?
=> The reason your volume went offline is to "save" your data from further corruption until you resolve the issue.
2) Why do the replacement disks show as part of a foreign, unknown aggregate, and how do I take them out?
=> This is because the disks were owned by a different system/cluster.
Yes, the disks were owned by a different system.
I assigned two spare disks to the node, reconstruction ran, and the aggregate now
looks healthy and works correctly.
But the volume is still offline and won't come online.
How can I save my data now and bring the volume online?
And how can I reformat both of those disks and assign them as spares or something?
I hope you have already reached out to support for this issue.
Please note: it is advisable to run wafliron prior to bringing an inconsistent volume online. Bringing an inconsistent volume online increases the risk of further file system corruption.
Volume Showing WAFL Inconsistent: (Contact Support)
https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/Volume_Showing_WAFL_Inconsistent
Only for use with the assistance of NetApp Technical Support:
https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/What_is_wafliron
Regarding those two disks: you can take them back to the old system, erase/zero them, disable disk auto-assign, remove their ownership, and then bring them to the new system.
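A rough sketch of that sequence on the old system (node and disk names are placeholders for yours):

::> storage disk zerospares
::> storage disk option modify -node node1 -autoassign off
::> storage disk removeowner -disk 1.0.12

Note that zerospares zeroes all not-yet-zeroed spares owned by the node, so run it while the disks are still owned as spares. After removing ownership, move the disks physically, and on the new system assign them with "storage disk assign -disk <disk> -owner <node>".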