ONTAP Discussions

Clean installing cDOT 9.1.x onto our FAS2240


Hello guys.


Recently we replaced one of our old FAS2240 storage systems with a new FAS2650. With that being said, we are thinking of using the FAS2240 for testing purposes and such. We have acquired a license to use cDOT 9 onwards, but I was wondering how to clean install a brand new cDOT 9 OS onto the FAS2240.


We don't mind the data being erased on the FAS2240.


From a little research we did, it seems like you can only get to cDOT 9 by upgrading from 8.3.x.


Can you not cleanly install cDOT 9 onto a NetApp storage system?


If anyone knows how, or has any solution to my dilemma, that would be more than appreciated.


kind regards.





You can likely install 9.1 clean using netboot.



But I suggest you follow the standard procedure and go via 8.3.2, then upgrade to 9.1, and only after that start your configuration.


Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK

There is no point in jumping through 8.3; any cDOT version is installed identically. Also, following this KB article will create a traditional root aggregate, wasting quite some space; if going to cDOT, one should really start with ADP.


Cheers mate for the reply.


Which do you reckon would be the best way to go about this?


I mean, reading up on a few articles and documents, it seems like going from 8.1.x to 8.3.x and then up to 9.1 is the best way to go about this.


But if replacing the boot drive is easier and faster, that would be better - though it seems like going to 8.3.x, then setting up ADP, and then going up to 9.1 is the way to go?


Kind regards


Have each node at the LOADER> prompt and run set-defaults; then netboot them to 9.1. At the boot menu, install the new software; when that is done, go to the boot menu again and enter maintenance mode on one node, remove all disk ownership (both nodes), reboot again, and at option 4 (wipe config) it will set up ADP. A disk wipe takes about 2 2/3 hours per TB of capacity - so if it's 600GB drives, it'll probably take 2 hours. Don't bother with intermediate ONTAPs.
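For reference, the steps above would look roughly like this at the console on each node. This is just a sketch - the interface name, IP addresses, and web server URL are placeholders for your own netboot server, and the exact boot menu wording varies by release:

```
LOADER-A> set-defaults
LOADER-A> setenv bootarg.init.boot_clustered true
LOADER-A> ifconfig e0a -addr=192.168.0.50 -mask=255.255.255.0 -gw=192.168.0.1
LOADER-A> netboot http://192.168.0.10/netboot/kernel

Then from the Ctrl-C boot menu:
  7) Install new software first   -> point it at your 9.1 image URL
  (reboot) 5) Maintenance mode    -> remove disk ownership, both nodes
  (reboot) 4) Clean configuration and initialize all disks  -> sets up ADP
```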


I'm pretty sure that on the FAS2240, set-defaults does not include bootarg.init.boot_clustered. Is it no longer needed with ONTAP 9.x?


I updated @wolfy by message - you are correct - the full steps in https://kb.netapp.com/app/answers/answer_view/a_id/1031513/loc/en_US include setting the boot_clustered arg.
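In case it helps anyone following along, setting and saving that argument at the loader is just this (from memory - verify against the KB article):

```
LOADER> setenv bootarg.init.boot_clustered true
LOADER> printenv bootarg.init.boot_clustered
LOADER> saveenv
```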


Thanks @AlexDawson, I tried to do what you told me last night but ran into a dilemma lol

I ran set-defaults on both nodes and set the bootarg.init argument on both nodes. Right after that I ran boot_ontap on node 1 and installed the new software (ONTAP 9.1), so my FAS had gone from 8.1.1 straight to 9.1 (it said it had image1 and an alternate image2, and I'm assuming image2 is the ONTAP 9.1).


So, after this do I need to run boot_ontap on node 2 as well? Because when I rebooted both nodes, one booted up with ONTAP 9.1 and the other booted with 8.1.1.


Another problem I'm having is that I had created an NFS partition that had been mounted in vCenter, but since I updated the version to 9.1 the mounted datastore is now (inactive), and I can't unmount it because it thinks it is still being used.


So I'm really confused about what is happening lol


This keeps looping on node 1, which has been updated to 9.1:

Starting AUTOBOOT press Ctrl-C to abort...
Loading X86_64/freebsd/image2/kernel:0x200000/10377184 0xbe57e0/6364864 Entry at 0x80294c20
Loading X86_64/freebsd/image2/platform.ko:0x11f8000/2512792 0x145e798/393448 0x14be880/541680 
Starting program at 0x80294c20
NetApp Data ONTAP 9.1
Trying to mount root from msdosfs:/dev/da0s1 [ro]...
md0 attached to /X86_64/freebsd/image2/rootfs.img
Trying to mount root from ufs:/env/md0.uzip []...
mountroot: waiting for device /env/md0.uzip ...
Copyright (C) 1992-2016 NetApp.
All rights reserved.
Writing loader environment to the boot device.
Loader environment has been saved to the boot device.
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
No SVM keys found.
Firewall rules loaded.
 The active image supports clustered Data ONTAP only.
 This image cannot be used to boot a 7-mode system.
 Use "boot_backup" at the loader prompt to boot the
 node in 7-mode.
Uptime: 1m14s
ugen0.2: <Micron Technology> at usbus0 (disconnected)
System rebooting...

and node 2, which is still on 8.1.1, says the following:

LOADER-B> boot_ontap
Loading X86_64/freebsd/image1/kernel:0x100000/8252680 0x8ded08/1275584 Entry at 0x801582e0
Loading X86_64/freebsd/image1/platform.ko:0xa17000/613472 0xb49908/638088 0xaacc60/38888 0xbe5590/41184 0xab6448/80684 0xac9f74/61528 0xad8fe0/132480 0xbef670/148824 0xaf9560/1560 0xc13bc8/4680 0xaf9b78/288 0xc14e10/864 0xaf9c98/1656 0xc15170/4968 0xafa310/960 0xc164d8/2880 0xafa6d0/184 0xc17018/552 0xafa7a0/448 0xb22ae0/12458 0xb49816/237 0xb25b90/80184 0xb394c8/66382 
Starting program at 0x801582e0
NetApp Data ONTAP 8.1.1 Cluster-Mode
Could not get list of management ports for this platform!
Copyright (C) 1992-2012 NetApp.
All rights reserved.
md1.uzip: 26368 x 16384 blocks
md2.uzip: 3584 x 16384 blocks
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
Warning: Data ONTAP has detected an attempt to switch the value of
bootarg.init.boot_clustered. Usage of this boot argument requires Technical
Support assistance. If this was an accidental change, ensure
bootarg.init.boot_clustered matches the root aggregate of the node.



It really looks like I messed up somewhere along the repurposing process...


Hi there!


You need to do the netboot procedure on both nodes to install ONTAP, so they both get 9.1 installed. Then reboot again and boot 9.1 on both nodes, pressing Ctrl-C when prompted to get the boot menu; enter maintenance mode and unassign the disks, then reboot again and choose option 4 (wipe config) from the Ctrl-C boot menu. It is a bit complex - but what you're doing isn't a common operation, so don't worry too much - you'll get through it.
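If it helps, the maintenance-mode part of that could look something like this on each node (commands from memory - double-check that ownership is really gone on both nodes before you start the wipe):

```
*> disk show -a               (list all disks and their current owners)
*> disk remove_ownership all  (release ownership of every disk this node owns)
*> halt                       (back to LOADER; then reboot and pick option 4)
```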


@wolfy wrote:

Another problem I'm having is that I had created an NFS partition that had been mounted in vCenter, but since I updated the version to 9.1 the mounted datastore is now (inactive), and I can't unmount it because it thinks it is still being used.

The described procedure is destructive. All existing data on the disks is lost. The NFS partition you created no longer exists. You need to set the filer up completely from scratch after installing 9.1 and creating the cluster.