The documentation describes how to convert a FAS2240 into a shelf while retaining the data on the internal disks. If you do not need the data, you can simply replace the controllers with IOM modules and connect it to the existing FAS. All further work (reassigning and zeroing the disks) can then be done online from that system.
Is it not possible to boot the 2240 into maintenance mode and zero the disks?
Not really; the disk zero_spares command is not available in maintenance mode (I do not think you can even destroy an aggregate in maintenance mode, which you would have to do before zeroing its disks). What you can do is reassign ownership to the new system en masse, which is easier than doing it disk by disk online.
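As a rough sketch, the en-masse reassignment from maintenance mode looks like this (the system IDs are placeholders; check disk show output and the exact syntax on your release first):

```
*> disk show -v
*> disk reassign -s <old_system_id> -d <new_system_id>
```

This moves ownership of every disk owned by the old system ID in one step, instead of assigning each disk individually.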
My understanding is that if I do that, the disks will show up as 'spare' when I attach them to the new filer.
No; they won't show up at all, because they belong to a different filer. Special boot menu option 4 also always creates a root aggregate and volume. So you will need to manually assign each disk (if the disks are already owned you cannot use wildcards here), destroy the root aggregate, and zero the disks that belonged to it. That is exactly what you would do if you had connected the same disks without going through the special boot menu, so this step won't hurt, but it is completely redundant.
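A sketch of that cleanup from the new cluster's shell, assuming cDOT; the disk and aggregate names are placeholders for whatever your system reports:

```
::> storage disk assign -disk 1.1.0 -owner node1 -force
::> storage disk assign -disk 1.1.1 -owner node1 -force
    (repeat for each foreign disk)
::> storage aggregate delete -aggregate <foreign_root_aggr>
::> storage disk zerospares
```

The -force flag is typically needed because the disks still carry ownership information from the old controller.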
So you are willing to clear the disks; in that case the main thing is disk ownership. If the FAS2552 is already running cDOT, then simply remove ownership of all the disks on the FAS2240 in maintenance mode. Convert the shelf to a DS2246, connect it to the FAS2552, take ownership, delete the foreign aggregate if it is recognized, and simply zero the spares.
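Sketched end to end, with node and aggregate names as placeholders (verify the exact command forms on your ONTAP release):

```
On the FAS2240, in maintenance mode:
*> disk remove_ownership all

After converting the shelf and cabling it to the FAS2552:
::> storage disk assign -all true -node node1
::> storage aggregate delete -aggregate <foreign_aggr>
::> storage disk zerospares
```

Because ownership was removed first, the disks arrive unowned and can be claimed in bulk rather than one at a time.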
Before doing the conversion I removed the ownership for all the disks in the 2240.
After replacing the controller cards with IOM6 cards and cabling it all up I now have a filer with 2 disk shelves.
However, the 24 disks in shelf 1 show as evenly split between the two nodes, with the even-numbered disks in an aggregate called aggr0 on node1 and the rest in an aggregate called aggr0 on node2 (aside from the disks marked as spares).
I can see the aggregates listed when I do a 'storage disk show -shelf 1' but I can't see them when I do 'storage aggregate show'.
What is the correct way to make these disks available?
Once again, thanks to all who helped with this particular task.
The link provided by aborzenkov has a note at the end pointing out that if you are running Data ONTAP 8.3 or later, you need to refer to KB 1013046.
As we are running 8.3.2, I followed the procedure described in the KB.
I then had 24 new spare disks and a choice.
I could create 2 new RAID groups and allocate them to the existing aggregates (or create new aggregates), or, since the existing rg0 RAID groups only had 11 disks each, I could add disks to each RAID group and let ONTAP partition them.
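The second option boils down to one command per aggregate; the aggregate names and disk count here are placeholders for illustration (confirm against the ADP FAQ and a simulation with -simulate if your release supports it):

```
::> storage aggregate add-disks -aggregate aggr0_node1 -diskcount 12
::> storage aggregate add-disks -aggregate aggr0_node2 -diskcount 12
```

ONTAP then grows the existing RAID groups rather than burning disks on new parity sets.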
This is documented in the Advanced Drive Partitioning FAQ that I got from NetApp.
The advantage of doing this is that you don't lose 4 disks to parity roles. You do lose 2 disks to additional spares, but I can live with that.
Even with the partitioning, I still end up with roughly 2 TB of extra disk space.