<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Node Panic message in ONTAP Discussions</title>
    <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441306#M41715</link>
    <description>&lt;P&gt;Agreed. In some cases (depending on various other factors, including bugs), even mgwd unresponsiveness can trigger a node panic, so it could be anything.&amp;nbsp; Please take your time to assess, and then go for an upgrade.&lt;/P&gt;</description>
    <pubDate>Mon, 30 Jan 2023 14:05:43 GMT</pubDate>
    <dc:creator>Ontapforrum</dc:creator>
    <dc:date>2023-01-30T14:05:43Z</dc:date>
    <item>
      <title>Node Panic message</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441290#M41705</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;We have a node panic that occurs every weekend at the same time. We don't have support, so I can't create a case.&lt;/P&gt;&lt;P&gt;But I would at least like to search for a possible cause myself. I've found a tool&amp;nbsp;&lt;/P&gt;&lt;DIV class=""&gt;ONTAP - Panic Message Analyzer -&amp;nbsp;&amp;nbsp;&lt;A href="https://mysupport.netapp.com/site/bugs-online/pma" target="_blank" rel="noopener"&gt;NetApp Support Site - Bugs Online - PMA &lt;/A&gt;&lt;/DIV&gt;&lt;DIV class=""&gt;but it needs a "panic message".&lt;/DIV&gt;&lt;DIV class=""&gt;Where can I find it, and which logs should I look at for more details?&lt;/DIV&gt;&lt;DIV class=""&gt;&amp;nbsp;&lt;/DIV&gt;&lt;DIV class=""&gt;Regards,&lt;/DIV&gt;&lt;DIV class=""&gt;Alexey&lt;/DIV&gt;</description>
      <pubDate>Wed, 04 Jun 2025 09:53:19 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441290#M41705</guid>
      <dc:creator>AlexeyF</dc:creator>
      <dc:date>2025-06-04T09:53:19Z</dc:date>
    </item>
    <item>
      <title>Re: Node Panic message</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441292#M41707</link>
      <description>&lt;P&gt;When a system panic or crash occurs, the memory contents are saved as core files on the system. NetApp can analyze these core files to diagnose the reason for the panic and suggest corrective action. However, in the absence of a support contract, try to obtain the related EMS and audit logs to get some hints.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;A href="https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/Overview_of_ONTAP_Logs" target="_blank"&gt;https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/Overview_of_ONTAP_Logs&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As you mentioned, this occurs every weekend at the same time, which suggests a bug may be associated with it. However, mapping a bug to this behavior needs investigation and some leads. Could you share the following:&lt;/P&gt;&lt;P&gt;1) Filer model? (Is all firmware, including the DQP, up to date?)&lt;BR /&gt;2) Data ONTAP / ONTAP version?&lt;BR /&gt;3) Which protocols are served?&lt;BR /&gt;4) If CIFS/SMB is used, what is the output of the following command?&lt;BR /&gt;:&amp;gt;options cifs.smb2.durable_handle.enable&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jan 2023 11:35:28 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441292#M41707</guid>
      <dc:creator>Ontapforrum</dc:creator>
      <dc:date>2023-01-30T11:35:28Z</dc:date>
    </item>
    <item>
      <title>Re: Node Panic message</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441294#M41709</link>
      <description>&lt;P&gt;Thank you for your answer.&lt;/P&gt;&lt;P&gt;In EMS I can see that everything starts with the following error:&lt;/P&gt;&lt;P&gt;1/29/2023 08:00:52 CLUSTER002 ALERT vifmgr.cluscheck.droppedall: Total packet loss when pinging from cluster lif CLUSTER-02_clus1 (node NODE002) to cluster lif CLUSTER-01_clus2 (node NODE001).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The ports should be connected directly (I will verify this).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Logical Status Network Current Current Is&lt;BR /&gt;Vserver Interface Admin/Oper Address/Mask Node Port Home&lt;BR /&gt;----------- ---------- ---------- ------------------ ------------- ------- ----&lt;BR /&gt;Cluster&lt;BR /&gt;CLUSTER-01_clus1 up/up 169.254.238.124/16 NODE1 e0a true&lt;BR /&gt;CLUSTER-01_clus2 up/up 169.254.18.86/16 NODE1 e0c true&lt;BR /&gt;CLUSTER-02_clus1 up/up 169.254.50.129/16 NODE2 e0a true&lt;BR /&gt;CLUSTER-02_clus2 up/up 169.254.2.44/16 NODE2 e0c true&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) FAS8040&lt;/P&gt;&lt;P&gt;2) NetApp Release 9.5P16: Tue Jan 12 10:52:07 UTC 2021&lt;/P&gt;&lt;P&gt;3) CIFS, NFS, FC&lt;/P&gt;&lt;P&gt;4)&amp;nbsp;Option "cifs.smb2.durable_handle.enable" is not supported&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jan 2023 11:49:30 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441294#M41709</guid>
      <dc:creator>AlexeyF</dc:creator>
      <dc:date>2023-01-30T11:49:30Z</dc:date>
    </item>
    <item>
      <title>Re: Node Panic message</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441296#M41710</link>
      <description>&lt;P&gt;I can see that at the same time the node2 usage is 100%.&lt;/P&gt;&lt;P&gt;That could possibly be the reason it is not responding to ping.&amp;nbsp;&lt;/P&gt;&lt;P&gt;Or a consequence... because I don't see any extreme increase in load on node2 at that moment...&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AlexeyF_0-1675081040006.png" style="width: 400px;"&gt;&lt;img src="https://community.netapp.com/t5/image/serverpage/image-id/25091i7AE0028AA9E0C834/image-size/medium?v=v2&amp;amp;px=400" role="button" title="AlexeyF_0-1675081040006.png" alt="AlexeyF_0-1675081040006.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jan 2023 12:21:33 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441296#M41710</guid>
      <dc:creator>AlexeyF</dc:creator>
      <dc:date>2023-01-30T12:21:33Z</dc:date>
    </item>
    <item>
      <title>Re: Node Panic message</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441298#M41711</link>
      <description>&lt;P&gt;Thanks. I assumed it was a 7-Mode filer.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Considering this is ONTAP (cDOT) and it happens on weekends, it may be worth checking the overall "&lt;STRONG&gt;load&lt;/STRONG&gt;" on the FAS8040. How is it performing over the weekends, especially when there are more workloads in terms of backup/mirror transfers? If the load is constantly high to the point that the system is freezing, try to redistribute the workload as much as possible (lower-priority workloads can probably be moved around), or perhaps stagger the transfers that happen during the weekends and reduce their frequency.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;The following &lt;STRONG&gt;bug&lt;/STRONG&gt; (in the link below)&lt;STRONG&gt; is found in the version your filer is currently on, i.e. 9.5P16.&lt;/STRONG&gt; However, as I said, only a core-file analysis could give us a clearer picture here. Still, there is no harm in upgrading ONTAP to a higher version that contains the bug fix. So take a look at the following links and see if you can do an upgrade.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;A href="https://mysupport.netapp.com/site/bugs-online/product/ONTAP/BURT/1273914" target="_blank"&gt;https://mysupport.netapp.com/site/bugs-online/product/ONTAP/BURT/1273914&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;&lt;A href="https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/RPANIC%3AProcess_mgwd_unresponsive_for_xxx_seconds_(mgwd_startup%3A_(xxx))_in_process_nodewatchdog_on_release_9.x" target="_blank"&gt;https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/RPANIC%3AProcess_mgwd_unresponsive_for_xxx_seconds_(mgwd_startup%3A_(xxx))_in_process_nodewatchdog_on_release_9.x&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, check the recommended release for the ONTAP version you have, and try to get it up to the recommended release, or a higher release if supported. I think it can go up to 9.8 (&lt;A href="https://hwu.netapp.com/Controller/Index?platformTypeId=2032" target="_blank"&gt;https://hwu.netapp.com/Controller/Index?platformTypeId=2032&lt;/A&gt;)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://kb.netapp.com/Support_Bulletins/Customer_Bulletins/SU2" target="_blank"&gt;https://kb.netapp.com/Support_Bulletins/Customer_Bulletins/SU2&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jan 2023 12:40:56 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441298#M41711</guid>
      <dc:creator>Ontapforrum</dc:creator>
      <dc:date>2023-01-30T12:40:56Z</dc:date>
    </item>
    <item>
      <title>Re: Node Panic message</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441304#M41713</link>
      <description>&lt;P&gt;&lt;a href="https://community.netapp.com/t5/user/viewprofilepage/user-id/73493"&gt;@Ontapforrum&lt;/a&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Excess load was my first guess too, but as you can see from the graphs below, we don't observe anything exceptional at the time when node usage climbs to 100%.&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="AlexeyF_0-1675085325160.png" style="width: 400px;"&gt;&lt;img src="https://community.netapp.com/t5/image/serverpage/image-id/25092i16836E2B9E06A309/image-size/medium?v=v2&amp;amp;px=400" role="button" title="AlexeyF_0-1675085325160.png" alt="AlexeyF_0-1675085325160.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Thank you for the links. We are definitely determined to upgrade to ONTAP 9.8; it is just on hold at the moment while we wait for the support contract to be renewed. We don't want to risk an upgrade in case of any unpredictable behaviour by the disk array.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jan 2023 13:32:16 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441304#M41713</guid>
      <dc:creator>AlexeyF</dc:creator>
      <dc:date>2023-01-30T13:32:16Z</dc:date>
    </item>
    <item>
      <title>Re: Node Panic message</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441306#M41715</link>
      <description>&lt;P&gt;Agreed. In some cases (depending on various other factors, including bugs), even mgwd unresponsiveness can trigger a node panic, so it could be anything.&amp;nbsp; Please take your time to assess, and then go for an upgrade.&lt;/P&gt;</description>
      <pubDate>Mon, 30 Jan 2023 14:05:43 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-Panic-message/m-p/441306#M41715</guid>
      <dc:creator>Ontapforrum</dc:creator>
      <dc:date>2023-01-30T14:05:43Z</dc:date>
    </item>
  </channel>
</rss>