If you don't know the procedure yet for getting Root Disk Partitioning up on a filer that you are *wiping*, here it is:
(This assumes a new install and no data/config that you want to keep).
1. Make sure that disk auto-assign is enabled. (I don't know how to do this from maint mode, but it should hopefully be on by default; there is a rough command sketch covering this and the maintenance-mode steps just after this list.)
2. Halt both controllers
3. Boot into Maintenance Mode on one controller
4. Remove all the aggregates from all of the disks on the internal shelf.
5. If you previously tried disk partitioning on these disks, remove the partitions from maint mode too (there is a new "disk unpartition" command).
6. Remove ownership from ALL of the disks in the internal shelf.
7. Halt and reboot the first node.
8. Access the boot menu (Ctrl+C) and select Option 4.
9. When the node reboots and starts zeroing disks, it will create partitions on the internal shelves and zero them. Half of the disks will be assigned to each of the nodes.
10. As soon as the first node has started zeroing its disks, you can boot the second node and select Option 4.
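For reference, here is a rough sketch of the commands behind steps 1 to 6. It is written from my notes against clustered ONTAP 8.3, so treat the exact names and flags as a starting point and check them on your version; aggr0 and the disk names are just placeholders for whatever your system shows.
## Step 1 - check (and if necessary enable) disk auto-assign from the clustershell, before halting:
MYCLUSTER::> storage disk option show -fields autoassign
MYCLUSTER::> storage disk option modify -node MYCLUSTER-01 -autoassign on
## Step 2 - halt the controllers (run for each node, if they are already running clustered ONTAP):
MYCLUSTER::> system node halt -node MYCLUSTER-01 -inhibit-takeover true
## Steps 4-6 - from Maintenance Mode on the first controller:
*> aggr status                       ## list any existing aggregates
*> aggr offline aggr0                ## take each aggregate offline...
*> aggr destroy aggr0                ## ...then destroy it
*> disk unpartition 0a.00.1          ## only if the disk was previously partitioned
*> disk show -a                      ## list all disks and their current owners
*> disk remove_ownership 0a.00.1     ## repeat for every disk on the internal shelf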
After zeroing, you should get, on each node, something like:
Nov 25 13:20:45 [localhost:raid.autoPart.start:notice]: System has started auto-partitioning 6 disks.
....Nov 25 13:20:46 [localhost:raid.partition.disk:notice]: Disk partition successful on Disk 0a.00.1 Shelf 0 Bay 1 [NETAPP X487_SLTNG600A10 NA00] S/N [S0M3FK9P0000M507GA26], partitions created 2, partition sizes specified 1, partition spec summary [2]=37660227.
....Nov 25 13:20:47 [localhost:raid.partition.disk:notice]: Disk partition successful on Disk 0a.00.3 Shelf 0 Bay 3 [NETAPP X487_SLTNG600A10 NA00] S/N [S0M3FNFH0000M5075L03], partitions created 2, partition sizes specified 1, partition spec summary [2]=37660227.
....Nov 25 13:20:49 [localhost:raid.partition.disk:notice]: Disk partition successful on Disk 0a.00.5 Shelf 0 Bay 5 [NETAPP X487_SLTNG600A10 NA00] S/N [S0M3FKAS0000M507GA0U], partitions created 2, partition sizes specified 1, partition spec summary [2]=37660227.
....Nov 25 13:20:50 [localhost:raid.partition.disk:notice]: Disk partition successful on Disk 0a.00.7 Shelf 0 Bay 7 [NETAPP X487_SLTNG600A10 NA00] S/N [S0M3FMZ30000M507CL7X], partitions created 2, partition sizes specified 1, partition spec summary [2]=37660227.
....Nov 25 13:20:52 [localhost:raid.partition.disk:notice]: Disk partition successful on Disk 0a.00.9 Shelf 0 Bay 9 [NETAPP X487_SLTNG600A10 NA00] S/N [S0M3FLQW0000M507G9TN], partitions created 2, partition sizes specified 1, partition spec summary [2]=37660227.
....Nov 25 13:20:53 [localhost:raid.partition.disk:notice]: Disk partition successful on Disk 0a.00.11 Shelf 0 Bay 11 [NETAPP X487_SLTNG600A10 NA00] S/N [S0M3FN6D0000M507CJ3J], partitions created 2, partition sizes specified 1, partition spec summary [2]=37660227.
Nov 25 13:20:55 [localhost:raid.autoPart.done:notice]: Successfully auto-partitioned 6 of 6 disks.
Note: this was a system with 12 disks on the internal shelves.
When the disks are zeroed and ONTAP has booted, you'll get the node setup prompts. You will find an active-active configuration with half of the data partitions assigned to one node, and the other half assigned to the other.
If you want to re-assign all of the data partitions to the first node, wait until both nodes are booted, and then you can do something like:
## show the current assignment of root and data partitions:
## (note that the data-owner alternates between nodes)
MYCLUSTER::> disk show -shelf 00 -fields root-owner,data-owner
disk data-owner root-owner
------ -------------- --------------
1.0.0 MYCLUSTER-02 MYCLUSTER-02
1.0.1 MYCLUSTER-01 MYCLUSTER-01
1.0.2 MYCLUSTER-02 MYCLUSTER-02
1.0.3 MYCLUSTER-01 MYCLUSTER-01
1.0.4 MYCLUSTER-02 MYCLUSTER-02
1.0.5 MYCLUSTER-01 MYCLUSTER-01
1.0.6 MYCLUSTER-02 MYCLUSTER-02
1.0.7 MYCLUSTER-01 MYCLUSTER-01
1.0.8 MYCLUSTER-02 MYCLUSTER-02
1.0.9 MYCLUSTER-01 MYCLUSTER-01
1.0.10 MYCLUSTER-02 MYCLUSTER-02
1.0.11 MYCLUSTER-01 MYCLUSTER-01
12 entries were displayed.
## re-assign the data partitions from MYCLUSTER-02 to MYCLUSTER-01:
## LEAVE AT LEAST ONE DISK PER NODE where the DATA partition and the ROOT partition are owned by the same node
## This is required so the system can write a core dump during a panic: the node must own the whole disk it dumps to.
MYCLUSTER::*> disk assign -data -owner MYCLUSTER-01 -force -disk 1.0.0
MYCLUSTER::*> disk assign -data -owner MYCLUSTER-01 -force -disk 1.0.2
MYCLUSTER::*> disk assign -data -owner MYCLUSTER-01 -force -disk 1.0.4
MYCLUSTER::*> disk assign -data -owner MYCLUSTER-01 -force -disk 1.0.6
MYCLUSTER::*> disk assign -data -owner MYCLUSTER-01 -force -disk 1.0.8
## show assignments again
MYCLUSTER::*> disk show -shelf 00 -fields root-owner,data-owner
disk data-owner root-owner
------ -------------- --------------
1.0.0 MYCLUSTER-01 MYCLUSTER-02
1.0.1 MYCLUSTER-01 MYCLUSTER-01
1.0.2 MYCLUSTER-01 MYCLUSTER-02
1.0.3 MYCLUSTER-01 MYCLUSTER-01
1.0.4 MYCLUSTER-01 MYCLUSTER-02
1.0.5 MYCLUSTER-01 MYCLUSTER-01
1.0.6 MYCLUSTER-01 MYCLUSTER-02
1.0.7 MYCLUSTER-01 MYCLUSTER-01
1.0.8 MYCLUSTER-01 MYCLUSTER-02
1.0.9 MYCLUSTER-01 MYCLUSTER-01
1.0.10 MYCLUSTER-02 MYCLUSTER-02 ## Note: left as a spare for node 2, with both data and root partitions owned by MYCLUSTER-02
1.0.11 MYCLUSTER-01 MYCLUSTER-01 ## Note: left as a spare for node 1, with both data and root partitions owned by MYCLUSTER-01
12 entries were displayed.
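To sanity-check the result, something like the following should list the spare disks and spare root/data partitions grouped by owning node. I'm assuming the 8.3 command name here, so check it against your version.
## list spare disks and spare partitions, grouped by owning node
MYCLUSTER::> storage aggregate show-spare-disks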
My thanks to Jawwad Memon at NetApp for explaining this procedure to me.
I am also trying to work out how to do this for existing systems without re-building the cluster (it should be possible to partition the internal disks and write a new root aggregate to them one by one). I'll post that when I have it, and NetApp should also be producing a TR or support article on it.