<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Node's Performance Capacity &gt;100%, but Aggregate's Performance Capacity is ~30%, latency is low in ONTAP Discussions</title>
    <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153870#M34446</link>
    <description>&lt;P&gt;OK, so it looks like you're doing close to 3 GB/s in this cluster. That's pretty impressive. Most of those are writes. It looks like Exempt and Nwk_Exempt are probably the busiest, so I suspect that's due to the write workload. My honest assessment from just those commands is that you are doing quite a bit of work and are at the comfortable limit of what the controller can handle without adding more.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you want to open a case you can, or if you have a hostname/serial and a perf archive, I can take a deeper look at some actual numbers.&lt;/P&gt;</description>
    <pubDate>Wed, 29 Jan 2020 00:57:51 GMT</pubDate>
    <dc:creator>paul_stejskal</dc:creator>
    <dc:date>2020-01-29T00:57:51Z</dc:date>
    <item>
      <title>Node's Performance Capacity &gt;100%, but Aggregate's Performance Capacity is ~30%, latency is low</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153792#M34424</link>
      <description>&lt;P&gt;Do we have a performance concern?&lt;/P&gt;
&lt;P&gt;In the last 4 days, the node's Performance Capacity has been very high, constantly reaching &amp;gt;100% or even &amp;gt;150%, while the AFF/SSD aggregate's Performance Capacity is only about 30%.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;My understanding is that Performance Capacity just tells you that you cannot add more workloads. Does that necessarily tell you whether you are having a performance issue?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;An application is slow. It is based on VMs and an NFS datastore. But the datastore volume seems okay; latency is not too high, only &amp;lt;5 ms. The latency graph of the volume doesn't match the slow response.&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;So we are not sure whether we have resource contention, and whether the node's Performance Capacity is indicating an issue.&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jun 2025 11:21:34 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153792#M34424</guid>
      <dc:creator>heightsnj</dc:creator>
      <dc:date>2025-06-04T11:21:34Z</dc:date>
    </item>
    <item>
      <title>Re: Node's Performance Capacity &gt;100%, but Aggregate's Performance Capacity is ~30%, latency is low</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153802#M34425</link>
      <description>&lt;P&gt;I guess you are referring to the graph in OnCommand.&lt;/P&gt;
&lt;P&gt;100% just suggests that, from this point on, latency will increase exponentially.&lt;/P&gt;</description>
      <pubDate>Sat, 25 Jan 2020 13:57:13 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153802#M34425</guid>
      <dc:creator>kahuna</dc:creator>
      <dc:date>2020-01-25T13:57:13Z</dc:date>
    </item>
    <item>
      <title>Re: Node's Performance Capacity &gt;100%, but Aggregate's Performance Capacity is ~30%, latency is low</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153803#M34426</link>
      <description>&lt;P&gt;As I said, the node's performance capacity reached &amp;gt;150% and stayed there for many days, but the latency graph doesn't show anything too high, only about 5 ms.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Any other ideas?&lt;/P&gt;</description>
      <pubDate>Sat, 25 Jan 2020 21:47:20 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153803#M34426</guid>
      <dc:creator>heightsnj</dc:creator>
      <dc:date>2020-01-25T21:47:20Z</dc:date>
    </item>
    <item>
      <title>Re: Node's Performance Capacity &gt;100%, but Aggregate's Performance Capacity is ~30%, latency is low</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153804#M34427</link>
      <description>&lt;P&gt;Just focus on latency.&lt;/P&gt;</description>
      <pubDate>Sat, 25 Jan 2020 23:29:38 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153804#M34427</guid>
      <dc:creator>kahuna</dc:creator>
      <dc:date>2020-01-25T23:29:38Z</dc:date>
    </item>
    <item>
      <title>Re: Node's Performance Capacity &gt;100%, but Aggregate's Performance Capacity is ~30%, latency is low</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153820#M34437</link>
      <description>&lt;P&gt;Please give the output of:&lt;/P&gt;
&lt;P&gt;::&amp;gt; set d -c off; qos statistics workload resource cpu show -node XXXXXXXXXXXX&lt;/P&gt;
&lt;P&gt;::&amp;gt; qos statistics volume latency show -volume XXXXXXXX -vserver XXXXXXXXXXXX&lt;/P&gt;
&lt;P&gt;::&amp;gt; qos statistics volume characteristics show -volume XXXXXXXXXXXXX -vserver XXXXXXXXXXXXXX&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;What application is this? Also, I'd recommend upgrading to AIQUM 9.7, as you can add VM-level monitoring. This will help you compare VM and filer latencies and IOPS.&lt;/P&gt;</description>
      <pubDate>Mon, 27 Jan 2020 16:35:18 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153820#M34437</guid>
      <dc:creator>paul_stejskal</dc:creator>
      <dc:date>2020-01-27T16:35:18Z</dc:date>
    </item>
    <item>
      <title>Re: Node's Performance Capacity &gt;100%, but Aggregate's Performance Capacity is ~30%, latency is low</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153869#M34445</link>
      <description>&lt;P&gt;&lt;a href="https://community.netapp.com/t5/user/viewprofilepage/user-id/45689"&gt;@paul_stejskal&lt;/a&gt;&amp;nbsp;,&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Please find the outputs of the three commands you recommended, and let me know your thoughts. Thanks!&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;STRONG&gt;&lt;U&gt;OUTPUT OF COMMAND1:&lt;/U&gt;&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Workload ID CPU Wafl_exempt Kahuna Network Raid Exempt Protocol&lt;BR /&gt;--------------- ----- ----- ----------- ------ ------- ----- ------ --------&lt;BR /&gt;-total- (2000%) - 773% 63% 0% 203% 0% 507% 0%&lt;BR /&gt;System-Default 1 599% 0% 0% 165% 0% 434% 0%&lt;BR /&gt;_WAFL 7 85% 48% 0% 0% 0% 37% 0%&lt;BR /&gt;_WAFL_SCAN 19 44% 13% 0% 0% 0% 31% 0%&lt;BR /&gt;User-Default 2 37% 0% 0% 37% 0% 0% 0%&lt;BR /&gt;_USERSPACE_APPS 14 1% 0% 0% 0% 0% 1% 0%&lt;BR /&gt;_CLOUD 25 1% 0% 0% 0% 0% 1% 0%&lt;BR /&gt;-total- (2000%) - 688% 72% 0% 200% 0% 416% 0%&lt;BR /&gt;System-Default 1 508% 0% 0% 159% 0% 349% 0%&lt;BR /&gt;_WAFL 7 79% 54% 0% 0% 0% 25% 0%&lt;BR /&gt;_WAFL_SCAN 19 53% 16% 0% 0% 0% 37% 0%&lt;BR /&gt;User-Default 2 41% 0% 0% 41% 0% 0% 0%&lt;BR /&gt;_USERSPACE_APPS 14 1% 0% 0% 0% 0% 1% 0%&lt;BR /&gt;_CLOUD 25 1% 0% 0% 0% 0% 1% 0%&lt;BR /&gt;-total- (2000%) - 670% 67% 0% 180% 0% 421% 0%&lt;BR /&gt;System-Default 1 498% 0% 0% 145% 0% 353% 0%&lt;BR /&gt;_WAFL 7 80% 51% 0% 0% 0% 28% 0%&lt;BR /&gt;_WAFL_SCAN 19 49% 15% 0% 0% 0% 34% 0%&lt;BR /&gt;User-Default 2 35% 0% 0% 35% 0% 0% 0%&lt;BR /&gt;_USERSPACE_APPS 14 2% 0% 0% 0% 0% 2% 0%&lt;BR /&gt;-total- (2000%) - 535% 79% 0% 168% 0% 288% 0%&lt;BR /&gt;System-Default 1 371% 0% 0% 132% 0% 239% 0%&lt;BR /&gt;_WAFL_SCAN 19 65% 21% 0% 0% 0% 44% 0%&lt;BR /&gt;_WAFL 7 59% 57% 0% 0% 0% 2% 0%&lt;BR /&gt;User-Default 2 36% 0% 0% 36% 0% 0% 0%&lt;BR /&gt;_USERSPACE_APPS 14 1% 0% 0% 0% 0% 1% 0%&lt;BR /&gt;Workload ID CPU Wafl_exempt Kahuna Network Raid Exempt Protocol&lt;BR /&gt;--------------- ----- ----- ----------- ------ ------- ----- ------ --------&lt;BR /&gt;-total- (2000%) - 404% 78% 0% 164% 0% 162% 0%&lt;BR /&gt;System-Default 1 233% 0% 0% 122% 0% 111% 0%&lt;BR /&gt;_WAFL_SCAN 19 66% 21% 0% 0% 0% 45% 0%&lt;BR /&gt;_WAFL 7 60% 56% 0% 0% 0% 4% 0%&lt;BR /&gt;User-Default 2 41% 0% 0% 41% 0% 0% 0%&lt;BR /&gt;-total- (2000%) - 457% 79% 0% 202% 0% 176% 0%&lt;BR /&gt;System-Default 1 285% 0% 0% 157% 0% 128% 0%&lt;BR 
/&gt;_WAFL_SCAN 19 66% 21% 0% 0% 0% 45% 0%&lt;BR /&gt;_WAFL 7 56% 56% 0% 0% 0% 0% 0%&lt;BR /&gt;User-Default 2 45% 0% 0% 45% 0% 0% 0%&lt;BR /&gt;_USERSPACE_APPS 14 1% 0% 0% 0% 0% 1% 0%&lt;BR /&gt;-total- (2000%) - 407% 76% 0% 208% 0% 121% 0%&lt;BR /&gt;System-Default 1 228% 0% 0% 160% 0% 68% 0%&lt;BR /&gt;_WAFL_SCAN 19 64% 20% 0% 0% 0% 44% 0%&lt;BR /&gt;_WAFL 7 61% 55% 0% 0% 0% 5% 0%&lt;BR /&gt;User-Default 2 47% 0% 0% 47% 0% 0% 0%&lt;BR /&gt;_USERSPACE_APPS 14 1% 0% 0% 0% 0% 1% 0%&lt;BR /&gt;-total- (2000%) - 455% 75% 0% 249% 1% 130% 0%&lt;BR /&gt;System-Default 1 274% 0% 0% 192% 0% 82% 0%&lt;BR /&gt;_WAFL_SCAN 19 65% 21% 0% 0% 0% 44% 0%&lt;BR /&gt;User-Default 2 57% 0% 0% 57% 0% 0% 0%&lt;BR /&gt;_WAFL 7 53% 53% 0% 0% 0% 0% 0%&lt;BR /&gt;Workload ID CPU Wafl_exempt Kahuna Network Raid Exempt Protocol&lt;BR /&gt;--------------- ----- ----- ----------- ------ ------- ----- ------ --------&lt;BR /&gt;-total- (2000%) - 633% 59% 0% 277% 0% 297% 0%&lt;BR /&gt;System-Default 1 446% 0% 0% 218% 0% 228% 0%&lt;BR /&gt;_WAFL 7 79% 45% 0% 0% 0% 34% 0%&lt;BR /&gt;User-Default 2 59% 0% 0% 59% 0% 0% 0%&lt;BR /&gt;_WAFL_SCAN 19 44% 13% 0% 0% 0% 31% 0%&lt;BR /&gt;_USERSPACE_APPS 14 1% 0% 0% 0% 0% 1% 0%&lt;BR /&gt;_CLOUD 25 1% 0% 0% 0% 0% 1% 0%&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;OUTPUT OF COMMAND2:&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Workload ID Latency Network Cluster Data Disk QoS NVRAM Cloud FlexCache SM Sync&lt;BR /&gt;--------------- ------ ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ---------- ----------&lt;BR /&gt;-total- - 577.00us 129.00us 167.00us 194.00us 66.00us 13.00us 0ms 8.00us 0ms 0ms&lt;BR /&gt;XXXXXXX.. 25859 387.00us 70.00us 146.00us 171.00us 0ms 0ms 0ms 0ms 0ms 0ms&lt;BR /&gt;-total- - 578.00us 124.00us 177.00us 181.00us 82.00us 0ms 0ms 14.00us 0ms 0ms&lt;BR /&gt;XXXXXXX.. 25859 298.00us 59.00us 156.00us 83.00us 0ms 0ms 0ms 0ms 0ms 0ms&lt;BR /&gt;-total- - 559.00us 121.00us 169.00us 188.00us 67.00us 0ms 0ms 14.00us 0ms 0ms&lt;BR /&gt;XXXXXXXXXX.. 25859 309.00us 68.00us 149.00us 90.00us 2.00us 0ms 0ms 0ms 0ms 0ms&lt;BR /&gt;-total- - 891.00us 131.00us 207.00us 358.00us 181.00us 0ms 0ms 14.00us 0ms 0ms&lt;BR /&gt;XXXXXXXX.. 25859 372.00us 78.00us 165.00us 129.00us 0ms 0ms 0ms 0ms 0ms 0ms&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;U&gt;&lt;STRONG&gt;OUTPUT OF COMMAND3&lt;/STRONG&gt;&lt;/U&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Workload ID IOPS Throughput Request Size Read Concurrency&lt;BR /&gt;--------------- ------ -------- ---------------- ------------ ---- -----------&lt;BR /&gt;-total- - 159792 3479.28MB/s 22831B 43% 160&lt;BR /&gt;XXXXXXX.. 25859 44 139.13KB/s 3213B 3% 0&lt;BR /&gt;-total- - 129506 2289.89MB/s 18540B 33% 113&lt;BR /&gt;XXXXXXX.. 25859 40 99.44KB/s 2545B 5% 0&lt;BR /&gt;-total- - 146848 3286.96MB/s 23470B 42% 132&lt;BR /&gt;XXXXXXXX.. 25859 56 126.82KB/s 2332B 1% 0&lt;BR /&gt;-total- - 162914 4078.31MB/s 26249B 37% 194&lt;BR /&gt;XXXXXXXX.. 25859 41 92.88KB/s 2301B 0% 0&lt;BR /&gt;-total- - 151828 3366.92MB/s 23253B 40% 147&lt;BR /&gt;XXXXXXXX.. 25859 31 114.78KB/s 3750B 0% 0&lt;BR /&gt;-total- - 139929 3091.28MB/s 23164B 33% 140&lt;BR /&gt;XXXXXXXXX.. 25859 32 121.75KB/s 3895B 1% 0&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jan 2020 00:20:41 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153869#M34445</guid>
      <dc:creator>heightsnj</dc:creator>
      <dc:date>2020-01-29T00:20:41Z</dc:date>
    </item>
    <item>
      <title>Re: Node's Performance Capacity &gt;100%, but Aggregate's Performance Capacity is ~30%, latency is low</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153870#M34446</link>
      <description>&lt;P&gt;OK, so it looks like you're doing close to 3 GB/s in this cluster. That's pretty impressive. Most of those are writes. It looks like Exempt and Nwk_Exempt are probably the busiest, so I suspect that's due to the write workload. My honest assessment from just those commands is that you are doing quite a bit of work and are at the comfortable limit of what the controller can handle without adding more.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;If you want to open a case you can, or if you have a hostname/serial and a perf archive, I can take a deeper look at some actual numbers.&lt;/P&gt;</description>
      <pubDate>Wed, 29 Jan 2020 00:57:51 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153870#M34446</guid>
      <dc:creator>paul_stejskal</dc:creator>
      <dc:date>2020-01-29T00:57:51Z</dc:date>
    </item>
    <item>
      <title>Re: Node's Performance Capacity &gt;100%, but Aggregate's Performance Capacity is ~30%, latency is low</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153933#M34452</link>
      <description>&lt;P&gt;&lt;SPAN&gt;This is the approach I would like to see. I will send you a private message with the information you need, because I don't want to share the organization's info in public.&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Now, why are you saying we are doing close to 3 GB/s in this cluster? Is it because of the following output:&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;-total- - 159792 3479.28MB/s 22831B 43% 160&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;Is 3Gb/s on this volume considered to be good? I checked several other volumes; they are all around 3Gb/s.&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2020 13:12:04 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153933#M34452</guid>
      <dc:creator>heightsnj</dc:creator>
      <dc:date>2020-01-30T13:12:04Z</dc:date>
    </item>
    <item>
      <title>Re: Node's Performance Capacity &gt;100%, but Aggregate's Performance Capacity is ~30%, latency is low</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153959#M34456</link>
      <description>&lt;P&gt;Gigabytes, not gigabits. B vs. b. Big difference (a factor of 8)!&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;That output is the overall throughput across the cluster.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;And 3 GB/s honestly isn't bad, but I don't know how many nodes that is, nor the model #.&lt;/P&gt;</description>
      <pubDate>Thu, 30 Jan 2020 18:08:34 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/Node-s-Performance-Capacity-gt-100-but-Aggregate-s-Performance-Capacity-is-30/m-p/153959#M34456</guid>
      <dc:creator>paul_stejskal</dc:creator>
      <dc:date>2020-01-30T18:08:34Z</dc:date>
    </item>
  </channel>
</rss>