Got a 2-node switchless pair of 8060s on 9.1-latest. Stacks of 4243 SAS shelves with IOM3s, so I can't go any further forward on ONTAP versions yet.
We've got 2 shelves of 224Cs looped up but currently unused. The end goal is to get rid of the 4243s and be entirely on SSD storage.
Since we're going to end up all-SSD, what I'd love to get to is root-data partitioning, so we don't waste space on the parity tax for the root aggrs.
The Aggregate Relocation (ARL) guide warns about reusing heads and tells you to wipeconfig; I'm reasonably comfortable that I can do that and rejoin. But the root-data partitioning documentation makes it sound dire: it reads as though you must nuke everything in an HA pair in order to get the nodes to switch over to using it.
It would seem to me that if I ARL all of node1's aggregates over to node2, then node1 can be wipeconfig'ed, booted, root-data partitioned on the SSDs, and then allowed to come back online and rejoin.
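For concreteness, the relocation step in that plan would be something like this from the clustershell (node and aggregate names here are placeholders for your own):

```
# Move all of node1's data aggregates over to its HA partner
storage aggregate relocation start -node node1 -destination node2 \
    -aggregate-list aggr1_node1,aggr2_node1

# Watch until the relocation reports done before touching node1
storage aggregate relocation show
```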
I don't have extra nodes to try this on or swing the data over to, and I can't take an HA-pair outage to 'build them right from the beginning.' And I can't roll forward past 9.1 unless I start using the SSDs as normal disks, because I need to retire all my old shelves.
Does it sound plausible to do this as ARL + wipeconfig -> partitioning, or is there a lurking disaster I'm just not seeing?
The problem is that you can only partition disks during the initial install, and that wipes out the node configuration, so the node loses its identity and won't be able to rejoin the cluster.
What may work: relocate the aggregates, remove the node from the cluster, reinitialize with root on SSD, and add it back. But that is far more involved, and I would not bet that it will run smoothly.
I would suggest opening a support ticket (or talking to your NetApp contact) and asking for a procedure to manually partition the disks. Then just relocate mroot using the standard migrate-root command and move the volumes to the new aggregates.
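For anyone following along, the standard commands being referred to look roughly like this (node, disk, and volume names are placeholders; migrate-root lives at the advanced privilege level, and expect the node to reboot as part of that job):

```
# migrate-root requires advanced privilege
set -privilege advanced

# Rebuild the node's root aggregate on the listed SSDs/partitions
system node migrate-root -node node1 -disklist 1.10.0,1.10.1,1.10.2 -raid-type raid_dp

# Then move data volumes onto the new SSD aggregates
volume move start -vserver svm1 -volume vol1 -destination-aggregate ssd_aggr1
```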
Note: The last supported ONTAP release for IOM3-based shelves is 9.3, actually. 9.3P11 is a good one...
The short answer is "no." Root-data partitioning is an HA-scoped operation. Both nodes in the HA pair need to be initialized at the same time, so ARL-type operations won't help here.
If you figure out a way to move your data/volumes somewhere else (e.g. swing gear), you can reinitialize the system with the IOM12 shelves using boot menu 9 (root-data partitioning operations), available with ONTAP 9.2 and later.
Root-data partitioning is an HA-scoped operation. Both nodes in the HA pair need to be initialized at the same time.
Could you elaborate? Nodes are normally initialized one at a time, and it is the first node that partitions the disks, right? When the nodes are initialized there is no HA yet; HA is established much later, once both nodes are up.
So in the dual-chassis case, if we connect the SSD shelves to only one controller and start initialization, I would expect it to partition the SSDs. Won't that happen? What, then, are the conditions under which installation partitions disks?
In the single-chassis case the controller will see both the existing disks and the empty SSDs, so I do not know what happens.
boot menu 9 (root-data partitioning operations), available with ONTAP 9.2 and later.
As far as I know, this just adds an easy way to remove partitions and select ADP vs. non-partitioned disks; you still have to go through a full configuration wipe to actually create the partitions.
Sure... you can initialize a single-node system with ADP. When initializing an ADP-eligible system pre-9.2, you always want to start initialization on node A first; once its disks are zeroing, you can start the init on node B.
But if you are dealing with an HA pair, you need to initialize them "holistically." Here's a KB that shows how you'd go about converting/re-initializing pre-9.2.
With boot menu option 9, a 9a + 9b sequence will completely initialize a system with ADP, if it is eligible and sufficient disks are available.
Unfortunately, with a 2-node cluster I don't have that option. We're probably going to have to see about loaner gear.
I swear I've heard that ~70+% of customers are switchless 2-nodes; I'm shocked that this is impossible by design, with no maintmode way to fix it one node at a time.
And I can't get Support to even understand this question, to see if there's a "yes there's a way, but we can't talk about it in public" answer.
So, in case anyone comes across this later:
I can't believe that, with such a high percentage of 2-node switchless clusters in the field, you can't partition from maintmode on one node, but here we are. I guess nobody upgrades gear piecemeal/just-in-time anymore.
Thank you to the folks who replied in this thread. You didn't solve it for me, but you got me more words/leads than Support could be bothered to send, and in way less time.
- The process basically is: get hold of some gear, spin up a 4-node switched cluster, move all the data over, destroy and re-create your original nodes, and move it all back.
Actually, you only need the swing controllers for a couple of hours, long enough to partition the SSDs. Then you can reconnect the SSD shelves to the original cluster, reassign ownership, and move the data (including mroot).
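If it helps, the ownership-reassignment half of that is roughly as follows (system IDs and names are placeholders; the partitioned disks come back owned by the swing controllers, so ownership has to be changed before the original nodes can use them):

```
# From maintenance mode on an original controller: take over every disk
# owned by a swing controller's system ID (find sysids with `disk show -v`)
disk reassign -s <swing_sysid> -d <original_sysid>

# Or, from the clustershell, per disk:
storage disk assign -disk 1.10.0 -owner node1 -force
```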