ONTAP Hardware

FAS3270 Clustered ONTAP reset from incorrect configuration


I inherited this FAS3270: two controllers that I installed in one chassis, plus two 2426 shelves.

I had issues migrating from 7-Mode to cDOT, but got it done eventually.

Here is the problem: I can't create SVMs because it tells me there are no data aggregates, and I can't create LIFs because there are no SVMs. I guess my problem is that I assigned all disks automagically to the root aggregates: Node 1 owns shelf 1 and Node 2 owns shelf 2. From reading, I understand I need to destroy what I created. Not a biggie.
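For reference, the dependency chain described above runs aggregate → SVM → LIF, so a data aggregate has to exist first. A hedged sketch in clustered ONTAP CLI, per the ONTAP 9.1 command set (node names, aggregate names, ports, addresses, and disk counts below are placeholders, not the poster's actual configuration):

```
::> storage aggregate create -aggregate aggr1_n1 -node cluster1-01 -diskcount 8
::> vserver create -vserver svm1 -rootvolume svm1_root -aggregate aggr1_n1 \
      -rootvolume-security-style unix
::> network interface create -vserver svm1 -lif svm1_data1 -role data \
      -data-protocol nfs -home-node cluster1-01 -home-port e0c \
      -address 192.0.2.10 -netmask 255.255.255.0
```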

My question is: do I need to lose six disks for root aggregates? Are there better options?

Could someone give me a quick step-by-step on how to proceed from here?

I really would like to avoid reloading 9.1.






You cannot remove disks from an existing aggregate.

So, unfortunately, you will have to re-install the system.

And yes, you need to lose disks for the root aggregates. You could maybe try to enable Advanced Drive Partitioning if it's a test system, but it is not supported on mid-range systems.

You will have to go to the boot menu, enter maintenance mode, and then restart the configuration.
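The reinitialization path described above can be sketched as follows; this is a hedged outline only (the node name is a placeholder, and boot menu option 4 wipes ALL disks owned by that node, so it is destructive by design):

```
::> system node halt -node cluster1-01 -inhibit-takeover true

(at the LOADER prompt, boot, then press Ctrl-C when prompted
 to enter the boot menu)

Selection (1-8)? 4    # "Clean configuration and initialize all disks"
```

After both nodes are wiped and rebooted, cluster setup runs again and the disks can be reassigned and laid out properly before any data aggregates are created.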




If there are at least two spares, it is possible to move the root volume to a new aggregate, avoiding a full re-initialization, then destroy the existing root aggregate and reuse its disks for the SFO aggregates. However, it will take roughly the same amount of time and is more involved than a simple reinstall.
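If the system is already on ONTAP 9.0 or later, the root move described above maps to the advanced-privilege `system node migrate-root` command. A hedged sketch (disk names are placeholders, the node reboots disruptively during the migration, and spare-count requirements depend on the chosen RAID type):

```
::> set -privilege advanced
::*> system node migrate-root -node cluster1-01 \
       -disklist 1.1.10,1.1.11,1.1.12 -raid-type raid_dp
```

Once the new root aggregate is active, the old root aggregate can be destroyed and its disks reused.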


I had set up Advanced Drive Partitioning on a FAS2520 that's in production now. A really good way to save space.

I'm quite sure it's not supported on the FAS3270.

I feel like crying thinking about losing 6 x 2 TB disks just for root ...


The system is for testing and archiving, so not really production.

And no, there are no spare disks left. I was trying to move the root volume to the other aggregate/shelf, but I can't figure it out.

At least temporarily, so I can destroy the aggregate and recreate it properly.


Once I receive the twinax cables for the cluster network, I will proceed with the reset.


Thank you for the quick responses ...

I will keep you posted ...