Hello all--
I have a 3020 system in my lab (single controller) with one attached DS14mk2 shelf. I just received it and have already upgraded the system firmware and Data ONTAP. Now I am running into an issue when I try to upgrade the shelf firmware (ESH2 modules); the following is the output from "storage download shelf":
"Mon Mar 28 20:17:02 PDT [sfu.partnerNotResponding:error]: Partner either responded in the negative, or did not respond in 20 seconds. Aborting shelf firmware update."
Since the system is not clustered, I am a little mystified as to why it is complaining about a missing partner. When I started this project I initialized the disks (option 4a) and upgraded to 7.3.5.1, thinking that would remove any previous configuration.
Does anyone have any troubleshooting steps to offer or silver bullet solutions? I've come up with zilch thus far.
Thanks!
Scott
Solved! See the solution below.
1 ACCEPTED SOLUTION
What slot is the NVRAM card in? On a clustered 3020 the NVRAM card is installed in slot 3; on a single node it belongs in slot 1. If the card is in slot 3, halt the system and move it to slot 1.
There is also a chance that the system was previously clustered and the partner system ID is still set in the firmware environment. If moving the NVRAM card doesn't fix it, halt to the CFE> prompt, type "unsetenv partner-sysid" and then "bye", and the system should boot with no remnants of the cluster.
Also, make sure cluster is not licensed; check with "license", and if it is, run "license delete cluster" and "reboot".
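In case it helps, the whole cleanup sequence would look roughly like this (typed from memory, so treat it as a sketch; the exact prompts and license output will vary on your unit):

    filer> halt                        (drops to the firmware prompt)
    CFE> printenv                      (look for partner-sysid in the list)
    CFE> unsetenv partner-sysid        (clear the stale partner system ID)
    CFE> bye                           (boot back into Data ONTAP)
    filer> license                     (check whether cluster is licensed)
    filer> license delete cluster
    filer> reboot

After that, retry "storage download shelf".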
4 REPLIES
Thanks Scott!
How do I tell which is PCI slot 1?
The slot numbers are stamped into the chassis next to each slot... check the back carefully and you will find it.
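If you'd rather check from the console, "sysconfig" lists what is installed in each slot; from memory the output looks something like this (card names will differ on your system):

    filer> sysconfig
        slot 0: System Board
        slot 1: NVRAM
        ...

On a single controller the NVRAM card should show up under slot 1.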
Thanks. I actually found it right after I typed this...duh.
Anyway, the slot 3 vs. slot 1 piece was, in fact, the special sauce. Appreciate the help.
Take care,
Scott
