ONTAP Discussions

Shelf Firmware Upgrade not so _pgrade

scotthoward

Hello all--

I have a 3020 system in my lab (single controller) with one DS14MK2 shelf of attached storage.  I just received it and have been going through it to upgrade the firmware and Data ONTAP.  I am running into an issue when I try to upgrade the attached shelf firmware (ESH2 modules); the following is the output from the "storage download shelf" command:

"Mon Mar 28 20:17:02 PDT [sfu.partnerNotResponding:error]: Partner either responded in the negative, or did not respond in 20 seconds. Aborting shelf firmware update."

Since the system is not clustered, I am a little mystified as to why it is complaining about a missing partner.  When I started this project I wiped the disks (initialized with option 4a) and upgraded to 7.3.5.1, thinking that would remove any of the previous config.

Does anyone have troubleshooting steps to offer, or a silver-bullet solution?  I've come up with zilch thus far.

Thanks!

Scott


4 REPLIES

scottgelb (Accepted Solution)

What slot is the NVRAM card in?  On the 3020, the NVRAM card is installed in slot 3 for a clustered pair and in slot 1 for a single node.  If the NVRAM card is in slot 3, halt and move the card to slot 1.
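Roughly, you can confirm where the card is seated from the CLI before opening the chassis; this is just a sketch using standard 7-mode commands, and the exact output wording varies by release:

    filer> sysconfig -a     # lists each PCI slot and the card in it; find the NVRAM entry and note its slot
    filer> halt             # if NVRAM shows up in slot 3, halt before powering off and reseating the card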

There is also a chance that the system was clustered and the partner system ID is still set in the firmware.  If that is the case (and if moving the NVRAM card above doesn't fix it), halt to the CFE> prompt, type "unsetenv partner-sysid" and then "bye", and the system should have no remnants of the cluster.
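In case it helps, the console sequence would look roughly like this (a sketch; I'm assuming "printenv" is available at the CFE prompt to check the variable first):

    filer> halt                      # drop out of Data ONTAP to the firmware prompt
    CFE> printenv partner-sysid      # see whether a stale partner system ID is set
    CFE> unsetenv partner-sysid      # clear the leftover cluster variable
    CFE> bye                         # boot back up; the partner remnant should be gone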

Also, make sure cluster is not licensed, by checking with "license".  If cluster is licensed, run "license delete cluster" and then "reboot".
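For the license check, something along these lines (again just a sketch of the 7-mode commands):

    filer> license                   # list installed licenses and look for a cluster entry
    filer> license delete cluster    # remove the cluster license if it is there
    filer> reboot                    # reboot so the change takes effect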

scotthoward

Thanks Scott!

How do I tell which is PCI slot 1?

scottgelb

The slot numbers are stamped into each slot... check the back carefully and you will find it.

scotthoward

Thanks.  I actually found it right after I typed this...duh. 

Anyway, the slot 3 vs. slot 1 piece was, in fact, the special sauce.  Appreciate the help. 

Take care,

Scott
