<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic CPU Spike - Network Change in Network and Storage Protocols</title>
    <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/CPU-Spike-Network-Change/m-p/445952#M9900</link>
    <description>&lt;P&gt;We had a single 1Gb switch carrying both management and data traffic for our NetApp storage and ESXi servers. All VM data resides on the NetApp storage.&lt;/P&gt;&lt;P&gt;After buying a new 10Gb switch, we moved the data ports on both devices to it, but the management ports remained on the 1Gb switch.&lt;/P&gt;&lt;P&gt;Since the change, CPU usage on some of these VMs has been spiking, and we suspect the network change is the cause. However, I believe the NetApp storage should auto-negotiate the link speed.&lt;/P&gt;&lt;P&gt;Are there any issues we might have overlooked?&lt;/P&gt;</description>
    <pubDate>Wed, 04 Jun 2025 09:47:05 GMT</pubDate>
    <dc:creator>Zadok</dc:creator>
    <dc:date>2025-06-04T09:47:05Z</dc:date>
    <item>
      <title>CPU Spike - Network Change</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/CPU-Spike-Network-Change/m-p/445952#M9900</link>
      <description>&lt;P&gt;We had a single 1Gb switch carrying both management and data traffic for our NetApp storage and ESXi servers. All VM data resides on the NetApp storage.&lt;/P&gt;&lt;P&gt;After buying a new 10Gb switch, we moved the data ports on both devices to it, but the management ports remained on the 1Gb switch.&lt;/P&gt;&lt;P&gt;Since the change, CPU usage on some of these VMs has been spiking, and we suspect the network change is the cause. However, I believe the NetApp storage should auto-negotiate the link speed.&lt;/P&gt;&lt;P&gt;Are there any issues we might have overlooked?&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jun 2025 09:47:05 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/CPU-Spike-Network-Change/m-p/445952#M9900</guid>
      <dc:creator>Zadok</dc:creator>
      <dc:date>2025-06-04T09:47:05Z</dc:date>
    </item>
    <item>
      <title>Re: CPU Spike - Network Change</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/CPU-Spike-Network-Change/m-p/445956#M9902</link>
      <description>&lt;P&gt;Historically, NetApp recommended that flow control be disabled on all network ports within a Data ONTAP cluster. That is no longer the case; the guidance has since changed, and the current recommended best practice is as follows:&lt;/P&gt;&lt;P&gt;• Disable flow control on cluster network ports in the Data ONTAP cluster.&lt;BR /&gt;• &lt;STRONG&gt;Flow control&lt;/STRONG&gt; on the remaining network ports (the ports that provide &lt;STRONG&gt;data&lt;/STRONG&gt;, management, and intercluster connectivity) should be configured to &lt;STRONG&gt;match&lt;/STRONG&gt; the settings in the &lt;STRONG&gt;rest of your environment&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;In short, the flow control setting should be the same end-to-end: either disable it everywhere or set a fixed speed, but &lt;STRONG&gt;do not&lt;/STRONG&gt; let it auto-negotiate.&lt;/P&gt;&lt;P&gt;I don't know how a VM's CPU usage could be affected by this; I haven't come across that. Throughput could be an issue, though.&lt;/P&gt;</description>
      <pubDate>Mon, 17 Jul 2023 13:46:16 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/CPU-Spike-Network-Change/m-p/445956#M9902</guid>
      <dc:creator>Ontapforrum</dc:creator>
      <dc:date>2023-07-17T13:46:16Z</dc:date>
    </item>
  </channel>
</rss>

