<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: FAS3140 Performance issue.- in Network and Storage Protocols</title>
    <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/FAS3140-Performance-issue/m-p/8184#M763</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Ariel,&lt;/P&gt;&lt;P&gt;You haven't stated which drives the volumes are on, but on average you get 75 to 100 IOPS from SATA drives. Always work with the 75 figure, because that is the minimum you can expect. With 24 drives, assuming RAID-DP and two spares, you get 20*75=1500 IOPS. You are averaging 1,144 IOPS for 76% utilization.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Assuming the same layout for the 450GB drives, and that they are 15k SAS drives, you are looking at 175-200 IOPS per drive; again using 175 with RAID-DP and two spares, 20*175=3500 IOPS, so you should not be disk bound.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;What are "sysstat -c 60 -s -x 1" and "sysstat -M -c 20 -s -x 1" showing? The first command grabs 60 seconds' worth of general statistics, in one-second intervals, and then prints a summary of high/low/average information. The second shows how the multiple CPUs are really being used; the generic "sysstat -x" output should not be used to monitor actual CPU usage on multi-core systems.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Also, are you running NFS on the volume(s)? NFS mount settings are very important; options like noac and actimeo=0 in the client mount settings will bring a 3140 to its knees.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt; - Scott&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Wed, 01 Feb 2012 15:09:31 GMT</pubDate>
    <dc:creator>columbus_admin</dc:creator>
    <dc:date>2012-02-01T15:09:31Z</dc:date>
    <item>
      <title>FAS3140 Performance issue.-</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/FAS3140-Performance-issue/m-p/8180#M762</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi guys, we are facing a performance issue on our storage system. We have two FAS3140 controllers with an average CPU usage of 80%, and the user experience is very slow. I ran a few commands and I'm seeing huge access times on the volumes (a short example):&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;CNTL-2&amp;gt; stats show -i 10 volume:NH2_7K_SDC_NAS:read_latency volume:NH2_7K_SDC_NAS:write_latency&lt;/P&gt;&lt;P&gt;Instance read_latency write_latenc&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; us&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; us&lt;/P&gt;&lt;P&gt;NH2_7K_SDC_NAS&amp;nbsp;&amp;nbsp;&amp;nbsp; 176746.92&amp;nbsp;&amp;nbsp;&amp;nbsp; 255912.20&lt;/P&gt;&lt;P&gt;NH2_7K_SDC_NAS&amp;nbsp;&amp;nbsp;&amp;nbsp; 197769.45&amp;nbsp;&amp;nbsp;&amp;nbsp; 257111.93&lt;/P&gt;&lt;P&gt;NH2_7K_SDC_NAS&amp;nbsp;&amp;nbsp;&amp;nbsp; 166181.92&amp;nbsp;&amp;nbsp;&amp;nbsp; 438517.48&lt;/P&gt;&lt;P&gt;NH2_7K_SDC_NAS&amp;nbsp;&amp;nbsp;&amp;nbsp; 208290.45&amp;nbsp;&amp;nbsp;&amp;nbsp; 340686.83&lt;/P&gt;&lt;P&gt;NH2_7K_SDC_NAS&amp;nbsp;&amp;nbsp;&amp;nbsp; 173304.80&amp;nbsp;&amp;nbsp;&amp;nbsp; 237109.22&lt;/P&gt;&lt;P&gt;NH2_7K_SDC_NAS&amp;nbsp;&amp;nbsp;&amp;nbsp; 210693.25&amp;nbsp;&amp;nbsp;&amp;nbsp; 275884.64&lt;/P&gt;&lt;P&gt;NH2_7K_SDC_NAS&amp;nbsp;&amp;nbsp;&amp;nbsp; 162281.29&amp;nbsp;&amp;nbsp;&amp;nbsp; 300198.90&lt;/P&gt;&lt;P&gt;NH2_7K_SDC_NAS&amp;nbsp;&amp;nbsp;&amp;nbsp; 156559.38&amp;nbsp;&amp;nbsp;&amp;nbsp; 283601.79&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The same happens on controller #1 for another volume.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I don't know whether this is related to storage space and fragmentation, or whether it's an IOPS problem, because I'm seeing a few volumes being accessed a lot:&lt;/P&gt;&lt;P&gt;CNTL-2&amp;gt; stats show -i 10 volume:SARM_TMG_B:read_ops volume:SARM_TMG_B:write_ops volume:SARM_TMG_B:total_ops&lt;/P&gt;&lt;P&gt;Instance read_ops write_ops total_ops&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /s&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /s&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /s&lt;/P&gt;&lt;P&gt;SARM_TMG_B&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 50&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 930&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1013&lt;/P&gt;&lt;P&gt;SARM_TMG_B&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 99&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1140&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1279&lt;/P&gt;&lt;P&gt;SARM_TMG_B&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 93&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 963&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1092&lt;/P&gt;&lt;P&gt;SARM_TMG_B&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 89&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1028&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1156&lt;/P&gt;&lt;P&gt;SARM_TMG_B&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 92&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1009&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1138&lt;/P&gt;&lt;P&gt;SARM_TMG_B&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 86&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1057&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1184&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Our disk infrastructure is: 2xDS4243 SAS (24*450GB) and 1xDS4243 SATA (24*1TB) per head unit (and we have two).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any ideas?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;KR,&lt;/P&gt;&lt;P&gt;AL&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 05 Jun 2025 06:35:56 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/FAS3140-Performance-issue/m-p/8180#M762</guid>
      <dc:creator>LIGUORIARIEL</dc:creator>
      <dc:date>2025-06-05T06:35:56Z</dc:date>
    </item>
    <item>
      <title>Re: FAS3140 Performance issue.-</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/FAS3140-Performance-issue/m-p/8184#M763</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Ariel,&lt;/P&gt;&lt;P&gt;You haven't stated which drives the volumes are on, but on average you get 75 to 100 IOPS from SATA drives. Always work with the 75 figure, because that is the minimum you can expect. With 24 drives, assuming RAID-DP and two spares, you get 20*75=1500 IOPS. You are averaging 1,144 IOPS for 76% utilization.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Assuming the same layout for the 450GB drives, and that they are 15k SAS drives, you are looking at 175-200 IOPS per drive; again using 175 with RAID-DP and two spares, 20*175=3500 IOPS, so you should not be disk bound.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;What are "sysstat -c 60 -s -x 1" and "sysstat -M -c 20 -s -x 1" showing? The first command grabs 60 seconds' worth of general statistics, in one-second intervals, and then prints a summary of high/low/average information. The second shows how the multiple CPUs are really being used; the generic "sysstat -x" output should not be used to monitor actual CPU usage on multi-core systems.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Also, are you running NFS on the volume(s)? NFS mount settings are very important; options like noac and actimeo=0 in the client mount settings will bring a 3140 to its knees.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt; - Scott&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 01 Feb 2012 15:09:31 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/FAS3140-Performance-issue/m-p/8184#M763</guid>
      <dc:creator>columbus_admin</dc:creator>
      <dc:date>2012-02-01T15:09:31Z</dc:date>
    </item>
  </channel>
</rss>

