We are struggling with the command "disk unpartition 0d.01.21". Here is the error:
Disk 0d.01.21 must not have file system partitions. All partitions must be spare
Do not proceed with unpartition if disk has file system partitions.
Abort unpartition (y/n)? n
Jan 11 13:28:50 [localhost:raid.unpartition.disk.fail:notice]: Disk unpartition failed on Disk 0d.01.21 Shelf 1 Bay 21 [NETAPP X371_S163A960ATE NA50] S/N [S396NA0HA06398], error CR_DISK_WRONG_STATE, additional error info ().
disk unpartition: 0d.01.21 has partitions that are not spare.
We can't find any solution on Google, so any help would be much appreciated 🙂
Hi there - there are no documented procedures for this, but I'd start with this:
Was HA disabled, both nodes halted, then one brought up in maint mode?
Then I would run "storage disk removeowner disk_name -data true" for all of the drives, and then "storage disk removeowner disk_name -root true" for all of them, then reboot and zero one node, then the other.
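In case it helps, a rough sketch of that sequence for a single disk from the clustershell (the disk name is just a placeholder, and this assumes the aggregates using those partitions are already gone - repeat for every partitioned drive):

storage disk removeowner 0d.01.21 -data true   # release the data partition
storage disk removeowner 0d.01.21 -root true   # release the root partition
storage disk show -fields owner                # check what is still assigned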
Ok, so you'd unassign the data partitions in fully booted ONTAP (the presumption here being that you have no data on the data aggregates that you care about).
If that still doesn't let you unpartition the disk, I'd suggest the original poster open a support ticket - we might need to escalate to engineering to find the right mix of commands. The good thing about AFF is that reformatting is pretty quick 🙂
This week I had a similar problem when I tried to reinstall a cDOT cluster. In my case it was a 2554 single-node system with SATA disks, which shipped with ONTAP 9.0.
I booted the system and did "Install new software first" to load ONTAP 9.1. After this I went to maintenance mode to unpartition the disks and remove the ownership. I ran into the following error:
*> disk unpartition 0a.00.12
Disk 0a.00.12 must not have file system partitions. All partitions must be spare
Do not proceed with unpartition if disk has file system partitions.
Abort unpartition (y/n)? n
disk unpartition: 0a.00.12 has partitions that are not spare.
Jan 19 10:18:43 [pfhn-cl3-01:raid.unpartition.disk.fail:notice]: Disk unpartition failed on Disk 0a.00.12 Shelf 0 Bay 12 [NETAPP X306_HMRKP02TSSM NA00] S/N [P5JTUG5V], error CR_DISK_WRONG_STATE, additional error info ().
I had 4 disks that could not be unpartitioned, and the reinstall did not go through.
Finally I netbooted ONTAP 9.0, went to maintenance mode and was able to unpartition the disks. After this the reinstall was possible.
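Roughly, the commands in 9.0 maintenance mode were (shown for one of the affected disks, repeated for each):

*> disk unpartition 0a.00.12        # succeeds once booted from 9.0
*> disk remove_ownership 0a.00.12   # then clear the ownership
*> disk show                        # verify nothing is still assigned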
It seems that in my case ONTAP 9.1 cannot deal correctly with partitions created prior to ONTAP 9.1 - so it would be interesting to know whether a netboot of ONTAP 9.0 would also solve the problem in your case.
If it does, that would point to a general issue in ONTAP 9.1 in combination with disk unpartitioning.
There was a bug in 9.1RC2 for disk unpartition... 9.1GA fixes it, or you can revert to 9.0 for the disk unpartition.
High-level procedure (this zeroes out the system):
On each node:
boot_ontap maint # maint mode boot
aggr offline aggr0
aggr destroy aggr0
disk unpartition all
disk show # if any disks are still assigned, run "disk remove_ownership" and check again
setenv allow-root-data1-data2-partitions? True
On node1 only:
boot_ontap menu # menu boot
44/7 # hidden zero option
After completion, run option 4 on node1, then node2.
You will have the correct partitions... but I found that on a larger system they are split across 3 shelves instead of 2 - still 48 drives partitioned as expected. The physical unpartitioned drive assignment was not symmetric, but that is easy to fix afterwards.
Also note that if you have more than 2 shelves in an SSD stack (2 is best practice, 4 is the max), the best practice is to split the disk assignment between the nodes in the HA pair instead of assigning whole shelves to each node - a rough example is below.
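For example, from the clustershell such a split might look roughly like this (node names and disk IDs here are made up for illustration):

storage disk removeowner 0d.01.12                  # release a drive that is owned by the wrong node
storage disk assign -disk 0d.01.0 -owner node-01   # first half of the shelf to node 1
storage disk assign -disk 0d.01.12 -owner node-02  # second half of the shelf to node 2
storage disk show -fields owner                    # confirm the split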
Just today I spent an unforgettable 5 hours with partitions. An AFF8080 was upgraded from 9.0 to 9.1, and after that the partitions could no longer be unpartitioned. The error is just the same: ...disk has partitions that are not spare.
Solution in our case:
1. Reboot both nodes to maintenance mode. *> halt -> Ctrl+C -> maintenance (option 5).
2. Delete all aggregates (root too). *>aggr status -> *>aggr offline -> *>aggr destroy -> *>aggr status
3. Remove ownership. *> disk remove_ownership all (run it several times; check with *> disk show, it must come back empty).
4. Assign all disks to one node. *>disk assign all
5. Reboot this node to maintenance mode from the previous image (9.0). LOADER> boot_backup -> Ctrl+C -> maintenance (option 5).
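From there, the remaining cleanup should be roughly the same as in the earlier posts (a sketch only - the disk name is a placeholder, repeat for every stuck disk):

*> disk unpartition 0a.00.12     # now succeeds under 9.0
*> disk remove_ownership all     # clear ownership again
*> disk show                     # must come back empty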