<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Latency between FAS2050 and VMWare Cluster in VMware Solutions Discussions</title>
    <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6733#M609</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Thomas,&lt;/P&gt;&lt;P&gt;You asked how many IOPS the FAS2050 can do; that depends on how much latency you are willing to accept, and I agree that 200ms is too much.&lt;/P&gt;&lt;P&gt;How many disks are in the aggregate, and what type?&lt;/P&gt;&lt;P&gt;Your aggregate seems full, and that is potentially part of the problem. Any filesystem that gets full has to shuffle blocks around before it can write stripes, so an aggregate above 90% is a concern. Can you tell us exactly how full it is? Please run the following commands:&lt;/P&gt;&lt;P&gt;df -Ag aggr_name&lt;/P&gt;&lt;P&gt;aggr show_space -g&lt;/P&gt;&lt;P&gt;snap reserve -A aggr_name&lt;/P&gt;&lt;P&gt;Let's see if we can free up some space.&lt;/P&gt;&lt;P&gt;How many VMs and datastores are you running on this box? Keep in mind the FAS2050 is an entry-level controller.&lt;/P&gt;&lt;P&gt;Finally, I'd like you to collect data using the "statit" command. Please do the following:&lt;/P&gt;&lt;P&gt;priv set advanced&lt;/P&gt;&lt;P&gt;statit -b&amp;nbsp;&amp;nbsp;&amp;nbsp; (begin)&lt;/P&gt;&lt;P&gt;wait 2 minutes&lt;/P&gt;&lt;P&gt;statit -e&amp;nbsp; (end)&lt;/P&gt;&lt;P&gt;Collect the output in a text file and attach it to this thread, please.&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;Eric&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Mon, 07 Mar 2011 19:24:52 GMT</pubDate>
    <dc:creator>eric_barlier</dc:creator>
    <dc:date>2011-03-07T19:24:52Z</dc:date>
    <item>
      <title>Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6673#M592</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Fairly new to NetApp and in need of some help.&lt;/P&gt;&lt;P&gt;We are starting to notice some latency between our FAS2050 and our VMware cluster. Our 2050 is on ONTAP 7.2.4L1, capacity on the aggregates is above 90%, and we are running ESX 3.5 Update 5. A couple of questions:&lt;/P&gt;&lt;P&gt;1) What is the max IOPS that a 2050 controller head can handle?&lt;/P&gt;&lt;P&gt;2) Is there anything on the controller side that would cause the VMware guests to see upwards of 200ms latency?&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Thomas&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 05 Jun 2025 06:59:08 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6673#M592</guid>
      <dc:creator>thmaine</dc:creator>
      <dc:date>2025-06-05T06:59:08Z</dc:date>
    </item>
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6678#M593</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Thomas,&lt;/P&gt;&lt;P&gt;There are countless things which may impact your performance:&lt;/P&gt;&lt;P&gt;- too little free space in the aggregate&lt;/P&gt;&lt;P&gt;- too much fragmentation (possibly a result of the little space available)&lt;/P&gt;&lt;P&gt;- too few spindles&lt;/P&gt;&lt;P&gt;- networking issues&lt;/P&gt;&lt;P&gt;- the filer simply not coping with the workload&lt;/P&gt;&lt;P&gt;To be perfectly honest with you, the FAS2050 is a rather slow box (to say the least), e.g. in terms of its CPU capabilities.&lt;/P&gt;&lt;P&gt;Can you post the output of the command below (ideally run for a few minutes while the performance issue is occurring, then end with Ctrl-C)?&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;sysstat -x -s 5&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;Regards,&lt;BR /&gt;Radek&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 03 Mar 2011 10:41:58 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6678#M593</guid>
      <dc:creator>radek_kubka</dc:creator>
      <dc:date>2011-03-03T10:41:58Z</dc:date>
    </item>
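    <!--
      A minimal sketch of the capture Radek asks for above, assuming a Data ONTAP 7-mode
      console session on the FAS2050. The 5-second interval and the flags come straight
      from the post; the run length is up to the operator.

        filer> sysstat -x -s 5
        (let it run for a few minutes while the latency is visible, then press Ctrl-C;
         with -s, sysstat prints summary averages when it is interrupted)
    -->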
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6683#M594</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Radek,&lt;/P&gt;&lt;P&gt;Thank you for the response. A few comments on your post:&lt;/P&gt;&lt;P&gt;- The aggregates are around 90% full, so space is at a premium; we are looking at adding more.&lt;/P&gt;&lt;P&gt;- Are there any checks I can run for fragmentation or VM misalignment?&lt;/P&gt;&lt;P&gt;- We should have enough disk spindles: we have 20 15K SAS drives and a shelf (14 drives) of 15K Fibre Channel drives. If anything, the controller is running out of IOPS.&lt;/P&gt;&lt;P&gt;- We are investigating any networking issues. We have two Dell switch stacks, one for the storage network and another for all other traffic.&lt;/P&gt;&lt;P&gt;- I will gather the sysstat data today and post it shortly.&lt;/P&gt;&lt;P&gt;Thomas Maine&lt;/P&gt;&lt;P&gt;Technical Services Manager&lt;/P&gt;&lt;P&gt;Bond International Software, Inc.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 03 Mar 2011 14:07:43 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6683#M594</guid>
      <dc:creator>thmaine</dc:creator>
      <dc:date>2011-03-03T14:07:43Z</dc:date>
    </item>
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6688#M595</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;PRE&gt;&lt;P&gt;Are there any checks I can run to check for fragmentation&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;&lt;SPAN&gt;The reallocate command will do the trick, both for checking &amp;amp; fixing problems - &lt;/SPAN&gt;&lt;A href="http://now.netapp.com/NOW/knowledge/docs/ontap/rel80/html/ontap/cmdref/man1/na_reallocate.1.htm" target="_blank"&gt;http://now.netapp.com/NOW/knowledge/docs/ontap/rel80/html/ontap/cmdref/man1/na_reallocate.1.htm&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;See this thread for some interesting angles, as the topic is not that straightforward and not well documented: &lt;/SPAN&gt;&lt;A href="https://community.netapp.com/message/20969#20969" target="_blank"&gt;https://community.netapp.com/message/20969#20969&lt;/A&gt;&lt;/P&gt;&lt;PRE&gt;&lt;P&gt;vm misalignment?&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;&lt;SPAN&gt;This is well documented - &lt;/SPAN&gt;&lt;A href="http://media.netapp.com/documents/tr-3747.pdf" target="_blank"&gt;http://media.netapp.com/documents/tr-3747.pdf&lt;/A&gt;&lt;SPAN&gt; (page 31 talks about detection)&lt;/SPAN&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 03 Mar 2011 14:30:16 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6688#M595</guid>
      <dc:creator>radek_kubka</dc:creator>
      <dc:date>2011-03-03T14:30:16Z</dc:date>
    </item>
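    <!--
      A minimal sketch of a one-off fragmentation check with the reallocate command referenced
      above (Data ONTAP 7-mode syntax; /vol/vm_datastore is a placeholder volume name, and the
      exact options should be confirmed against the linked na_reallocate man page):

        filer> reallocate measure -o /vol/vm_datastore   # one-shot scan; logs an optimization score
        filer> reallocate status -v                      # check progress and the reported score
    -->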
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6692#M596</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;One more thing:&lt;/P&gt;&lt;P&gt;Have you considered upgrading your ONTAP and/or ESX versions?&lt;/P&gt;&lt;P&gt;On the ONTAP side, there are many areas where performance has improved between 7.2.x and 7.3.x (e.g. the read-ahead algorithms, which make a difference to CPU utilisation).&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 03 Mar 2011 14:39:11 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6692#M596</guid>
      <dc:creator>radek_kubka</dc:creator>
      <dc:date>2011-03-03T14:39:11Z</dc:date>
    </item>
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6697#M597</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;We actually have a couple of projects on the schedule over the next month to upgrade our ESX and NetApp environments.&lt;/P&gt;&lt;P&gt;Thomas Maine&lt;/P&gt;&lt;P&gt;Technical Services Manager&lt;/P&gt;&lt;P&gt;Bond International Software, Inc.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 03 Mar 2011 16:11:13 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6697#M597</guid>
      <dc:creator>thmaine</dc:creator>
      <dc:date>2011-03-03T16:11:13Z</dc:date>
    </item>
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6702#M598</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Average output on c1&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;25%   180     0     0     278 31349  1361   1646      6     0 28154     7   85%   0%  -   11%      0    98     0     0&lt;/P&gt;&lt;P&gt;37%   178     0     0     260 29367  1253   7374   8445     0 26804     7   98%  44%  Tf  19%      0    82     0     0&lt;/P&gt;&lt;P&gt;26%   208     0     0     280 36640  1748   2035   2478     0 33487     7   89%  12%  :    9%      0    72     0     0&lt;/P&gt;&lt;P&gt;37%   410     0     0     488 49182  3968   4137      6     0 43429     6   76%   0%  -   14%      0    78     0     0&lt;/P&gt;&lt;P&gt;36%   204     0     0     289 36248  1029   4967  10134     0 32847     7   98%  44%  T   17%      0    85     0     0&lt;/P&gt;&lt;P&gt;31%   196     0     0     288 47934  1261    978     11     0 43346     7   88%   0%  -    7%      0    92     0     0&lt;/P&gt;&lt;P&gt;35%   162     0     0     240 39398  1186   5438   5878     0 36333     7   97%  42%  Tf  12%      0    78     0     0&lt;/P&gt;&lt;P&gt;30%   205     0     0     244 46653  1206   1208   2078     0 43149     4   88%   8%  :   10%      0    39     0     0&lt;/P&gt;&lt;P&gt;31%   314     0     0     415 39727  1191   2193     11     0 34537     6   91%   0%  -   15%      0   101     0     0&lt;/P&gt;&lt;P&gt;48%   188     0     0     249 35331  1225   7724  11444     0 32375     6   98%  69%  T   28%      0    61     0     0&lt;/P&gt;&lt;P&gt;21%   214     0     0     272 30676  1167   1310     11     0 27682     6   85%   0%  -    8%      0    58     0     0&lt;/P&gt;&lt;P&gt;31%   276     0     0     344 30569  1186   3984   2429     0 27342     5   97%  21%  Tf  13%      0    68     0     0&lt;/P&gt;&lt;P&gt;35%   269     0     0     317 50644  1141   1437   6476     0 46819     5   93%  30%  :   10%      0    48     0     0&lt;/P&gt;&lt;P&gt;32%   226     0     0     292 51672  1608   1474     11     0 47881     5   83%   0%  -   10%      0    66     0     0&lt;/P&gt;&lt;P&gt;40%   288     0     0     317 37889  1043   6662   8482     0 34157     5   98%  57%  T   15%      0    29     0     0&lt;/P&gt;&lt;P&gt;40%   475     0     0     649 48276  3486   4014    118     0 42952     6   80%   0%  -   15%      0   174     0     0&lt;/P&gt;&lt;P&gt;33%   161     0     0     230 40437  1527   3428    846     0 37067     5   97%  10%  Tf  13%      0    69     0     0&lt;/P&gt;&lt;P&gt;32%   228     0     0     298 42014  1358   1842   8757     0 38247     5   92%  41%  :   13%      0    70     0     0&lt;/P&gt;&lt;P&gt;35%   353     0     0     427 49049  1545   1543      6     0 43293     5   88%   0%  -   10%      0    74     0     0&lt;/P&gt;&lt;P&gt;CPU   NFS  CIFS  HTTP   Total    Net kB/s   Disk kB/s     Tape kB/s Cache Cache  CP   CP Disk    FCP iSCSI   FCP  kB/s&lt;/P&gt;&lt;P&gt;                                  in   out   read  write  read write   age   hit time  ty util                 in   out&lt;/P&gt;&lt;P&gt;44%   209     0     0     346 40401  2027   8354   9430     0 36975     9   96%  58%  Tf  22%      0   137     0     0&lt;/P&gt;&lt;P&gt;38%   313     0     0     440 41285  1805   2978   1417     0 32781     5   93%   6%  :   21%      0   127     0     0&lt;/P&gt;&lt;P&gt;32%   262     0     0     364 43827  1927   2158      6     0 39531     6   83%   0%  -   12%      0   102     0     0&lt;/P&gt;&lt;P&gt;46%   177     0     0     203 40369  1027   9911  16051     0 36622     5   99%  92%  Tf  19%      0    26     0     
0&lt;/P&gt;&lt;P&gt;26%   243     0     0     286 39857  1299   1373     18     0 36451     5   92%   1%  :    7%      0    43     0     0&lt;/P&gt;&lt;P&gt;29%   234     0     0     380 41367  1666   1513      6     0 36517     5   86%   0%  -    9%      0   146     0     0&lt;/P&gt;&lt;P&gt;39%   192     0     0     266 39504  1206   4408  10552     0 36359     5   98%  54%  T   16%      0    74     0     0&lt;/P&gt;&lt;P&gt;28%   235     0     0     274 46429  1145    858     11     0 42480     5   92%   0%  -    6%      0    39     0     0&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Average output from c2&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;54%   140     0     0     226  3963 37786  38384   7512     0     0     6   99%  26%  Tf  39%      0    86     0     0&lt;/P&gt;&lt;P&gt;47%    78     0     0     174  2188 37788  37647   3578     0     0     6   99%  15%  :   44%      0    96     0     0&lt;/P&gt;&lt;P&gt;CPU   NFS  CIFS  HTTP   Total    Net kB/s   Disk kB/s     Tape kB/s Cache Cache  CP   CP Disk    FCP iSCSI   FCP  kB/s&lt;/P&gt;&lt;P&gt;                                  in   out   read  write  read write   age   hit time  ty util                 in   out&lt;/P&gt;&lt;P&gt;48%    82     0     0     271  3286 44996  45144      6     0     0     6   99%   0%  -   34%      0   189     0     0&lt;/P&gt;&lt;P&gt;50%    75     0     0     247  3404 38288  41668  10059     0     0     5   99%  48%  T   29%      0   172     0     0&lt;/P&gt;&lt;P&gt;47%    72     0     0     247  3440 39371  39317     11     0     0     5   99%   0%  -   39%      0   175     0     0&lt;/P&gt;&lt;P&gt;56%    89     0     0     200  3844 39358  42454   5833     0     0     5   99%  34%  Tf  43%      0   111     0     0&lt;/P&gt;&lt;P&gt;50%    66     0     0     178  3454 43506  42507   4923     0     0     5   99%  21%  :   39%      0   112     0     0&lt;/P&gt;&lt;P&gt;55%   139     0     0     312  2931 45593  45019     11     0     0     5   97%   0%  -   46%      0   173     0     0&lt;/P&gt;&lt;P&gt;55%   117     0     0     213  2983 37411  41009   9738     0     0     6   99%  57%  T   41%      0    96     0     0&lt;/P&gt;&lt;P&gt;58%    77     0     0     514  5567 44210  43752     11     0     0     6   95%   0%  -   49%      0   437     0     0&lt;/P&gt;&lt;P&gt;60%   100     0     0     320  4590 40898  44822   2408     0     0     7   98%  21%  Ts  43%      0   220     0     0&lt;/P&gt;&lt;P&gt;51%    77     0     0     297  3911 34172  35075  11124     0     0     7   99%  39%  :   54%      0   220     0     0&lt;/P&gt;&lt;P&gt;49%    98     0     0     167  2145 42473  43370     11     0     0     7   99%   0%  -   43%      0    69     0     0&lt;/P&gt;&lt;P&gt;50%    88     0     0     188  2359 31868  35439  11345     0     0     7   99%  60%  Tf  42%      0   100     0     0&lt;/P&gt;&lt;P&gt;51%    82     0     0     264  4454 38671  37736    770     0     0     7   98%   5%  :   44%      0   182     0     0&lt;/P&gt;&lt;P&gt;54%    74     0     0     256  4307 40893  39259     11     0     0     6   97%   0%  -   54%      0   182     0     0&lt;/P&gt;&lt;P&gt;52%    70     0     0     274  2955 39878  45548  12642     0     0     6   98%  53%  T   31%      0   204     0     0&lt;/P&gt;&lt;P&gt;48%    83     0     0     273  3663 51264  51033      6     0     0     6   97%   0%  -   20%      0   190     0     0&lt;/P&gt;&lt;P&gt;55%   104     0     0     264  4086 37853  41557  10845     0     0     6   98%  40%  Tf  44%      0   160     0     0&lt;/P&gt;&lt;P&gt;45%    83  
   0     0     285  4398 35637  36226   1370     0     0     6   98%   6%  :   38%      0   202     0     0&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thomas Maine&lt;/P&gt;&lt;P&gt;Technical Services Manager&lt;/P&gt;&lt;P&gt;Bond International Software, Inc.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 03 Mar 2011 18:37:43 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6702#M598</guid>
      <dc:creator>thmaine</dc:creator>
      <dc:date>2011-03-03T18:37:43Z</dc:date>
    </item>
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6707#M599</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thanks for the links; I will review them today.&lt;/P&gt;&lt;P&gt;Thomas Maine&lt;/P&gt;&lt;P&gt;Technical Services Manager&lt;/P&gt;&lt;P&gt;Bond International Software, Inc.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 03 Mar 2011 18:42:14 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6707#M599</guid>
      <dc:creator>thmaine</dc:creator>
      <dc:date>2011-03-03T18:42:14Z</dc:date>
    </item>
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6711#M601</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;High cpu on c2&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;CPU   NFS  CIFS  HTTP   Total    Net kB/s   Disk kB/s     Tape kB/s Cache Cache  CP   CP Disk    FCP iSCSI   FCP  kB/s&lt;/P&gt;&lt;P&gt;                                  in   out   read  write  read write   age   hit time  ty util                 in   out&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;100%    56     0     0     292   475  2659   7082     11     0     0     2s  52%   0%  -   20%      0   236     0     0&lt;/P&gt;&lt;P&gt;100%    64     0     0     353  3076  2228   7096   4050     0     0     2s  55%  23%  T   26%      0   289     0     0&lt;/P&gt;&lt;P&gt;100%    83     0     0     319  1076  2551   5927      6     0     0     2s  51%   0%  -   20%      0   236     0     0&lt;/P&gt;&lt;P&gt;100%    86     0     0     356  1196  2567   9550   3130     0     0     2s  53%  19%  Tf  30%      0   270     0     0&lt;/P&gt;&lt;P&gt;100%    75     0     0     366  1513  2511   6976   3683     0     0     2s  52%  19%  :   24%      0   291     0     0&lt;/P&gt;&lt;P&gt;100%   191     0     0     467  1703  2950   7578      6     0     0     2s  51%   0%  -   25%      0   276     0     0&lt;/P&gt;&lt;P&gt;100%   223     0     0     807  1710  5685  13415   6738     0     0     2s  56%  35%  T   31%      0   584     0     0&lt;/P&gt;&lt;P&gt;100%    79     0     0     699  1421  6027   9835     11     0     0     2s  52%   0%  -   21%      0   620     0     0&lt;/P&gt;&lt;P&gt;100%    91     0     0     643  1215  5705  12610   5710     0     0     3s  55%  33%  T   28%      0   552     0     0&lt;/P&gt;&lt;P&gt;100%    67     0     0     631  1563  5077   9263     11     0     0     2s  52%   0%  -   24%      0   564     0     0&lt;/P&gt;&lt;P&gt;100%    60     0     0     660  4912  6663  12511   3534     0     0     3s  54%  11%  Ts  28%      0   600     0     0&lt;/P&gt;&lt;P&gt;100%    83     0     0     835  2668  5683  10466   5217     0     0     2s  54%  17%  :   26%      0   752     0     0&lt;/P&gt;&lt;P&gt;100%    90     0     0     572  1463  4574   9668     11     0     0     2s  52%   0%  -   24%      0   482     0     0&lt;/P&gt;&lt;P&gt;100%    67     0     0     729  1798  5733  11699   7022     0     0     2s  55%  28%  T   26%      0   662     0     0&lt;/P&gt;&lt;P&gt;100%    61     0     0     794  2113  6594   9783      6     0     0     2s  52%   0%  -   27%      0   733     0     0&lt;/P&gt;&lt;P&gt;100%    76     0     0     704  1472  5600  11387   5690     0     0     2s  54%  24%  T   29%      0   628     0     0&lt;/P&gt;&lt;P&gt;100%    67     0     0     893  4991  6321   9518     11     0     0     2s  52%   0%  -   21%      0   826     0     0&lt;/P&gt;&lt;P&gt;100%    88     0     0    1190 10357  4213  11498   8079     0     0     4s  58%  30%  Fs  36%      0  1102     0     0&lt;/P&gt;&lt;P&gt;100%    67     0     0     970  9307  5603   9428   6875     0     0     2s  53%  23%  :   22%      0   903     0     0&lt;/P&gt;&lt;P&gt;100%    77     0     0     823  2242  6150  13613  15297     0     0     3s  56%  63%  F   33%      0   746     0     0&lt;/P&gt;&lt;P&gt;100%    68     0     0     798  2207  5700   9341      6     0     0     2s  52%   0%  -   20%      0   730     0     0&lt;/P&gt;&lt;P&gt;99%    91     0     0     782  2175  5566  10245     13     0     0     3s  53%   3%  Tn  23%      0   691     0     0&lt;/P&gt;&lt;P&gt;62%    84     0     0     510  1461  2948  12586  12811     0     0     8s  96%  89%  Z   51%    
  0   426     0     0&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thomas Maine&lt;/P&gt;&lt;P&gt;Technical Services Manager&lt;/P&gt;&lt;P&gt;Bond International Software, Inc.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 03 Mar 2011 20:09:13 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6711#M601</guid>
      <dc:creator>thmaine</dc:creator>
      <dc:date>2011-03-03T20:09:13Z</dc:date>
    </item>
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6716#M603</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;OK, fantastic.&lt;/P&gt;&lt;P&gt;When you look at the output with average CPU load, your disks are actually working harder &amp;amp; there is more network traffic than when the CPU peaks. Normally that would indicate some internal process is hammering the CPU.&lt;/P&gt;&lt;P&gt;My first guess is that a deduplication scan may be causing this CPU peak.&lt;/P&gt;&lt;P&gt;Regards,&lt;BR /&gt;Radek&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 03 Mar 2011 21:11:08 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6716#M603</guid>
      <dc:creator>radek_kubka</dc:creator>
      <dc:date>2011-03-03T21:11:08Z</dc:date>
    </item>
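    <!--
      If a deduplication scan is the suspect, as Radek guesses above, a quick 7-mode check
      might look like the sketch below; this is an assumption about how to verify the guess,
      not something stated in the thread, and /vol/vm_datastore is a placeholder volume name:

        filer> sis status                       # shows whether a dedup scan is idle or running
        filer> sis config /vol/vm_datastore     # shows the dedup schedule for the volume
    -->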
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6720#M604</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Can you explain the scoring for reallocating? Is there a command you can run that will tell you if the volume needs to be reallocated?&lt;/P&gt;&lt;P&gt;"It says that these volumes are not in need of a reallocate (scoring 1 or 2)"&lt;/P&gt;&lt;P&gt;Thomas Maine&lt;/P&gt;&lt;P&gt;Technical Services Manager&lt;/P&gt;&lt;P&gt;Bond International Software, Inc.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 04 Mar 2011 01:25:13 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6720#M604</guid>
      <dc:creator>thmaine</dc:creator>
      <dc:date>2011-03-04T01:25:13Z</dc:date>
    </item>
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6725#M606</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;The man page I was referring to earlier describes the scoring:&lt;/P&gt;&lt;P&gt;&lt;EM&gt;The threshold when a LUN, file or volume is considered unoptimized enough that a reallocation should be performed is given as a number from 3 (moderately optimized) to 10 (very unoptimized).&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;That said, I have seen people on this forum saying they got results as high as 20-something.&lt;/P&gt;&lt;P&gt;From the scores you have, and also looking at the sysstat outputs (where the disks do not seem that busy), fragmentation is not the culprit you are looking for.&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;Radek&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 07 Mar 2011 10:10:31 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6725#M606</guid>
      <dc:creator>radek_kubka</dc:creator>
      <dc:date>2011-03-07T10:10:31Z</dc:date>
    </item>
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6728#M607</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Could you please suggest what might cause such heavy filer CPU load? Is there any way to check it (networking, disk checksum calculations, etc.)? We're also having performance issues, in particular the one described in &lt;A href="https://community.netapp.com/thread/13413" target="_blank"&gt;thread 13413&lt;/A&gt;.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 07 Mar 2011 15:37:15 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6728#M607</guid>
      <dc:creator>p_maniawski</dc:creator>
      <dc:date>2011-03-07T15:37:15Z</dc:date>
    </item>
    <item>
      <title>Re: Latency between FAS2050 and VMWare Cluster</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6733#M609</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Thomas,&lt;/P&gt;&lt;P&gt;You asked how many IOPS the FAS2050 can do; that depends on how much latency you are willing to accept, and I agree that 200ms is too much.&lt;/P&gt;&lt;P&gt;How many disks are in the aggregate, and what type?&lt;/P&gt;&lt;P&gt;Your aggregate seems full, and that is potentially part of the problem. Any filesystem that gets full has to shuffle blocks around before it can write stripes, so an aggregate above 90% is a concern. Can you tell us exactly how full it is? Please run the following commands:&lt;/P&gt;&lt;P&gt;df -Ag aggr_name&lt;/P&gt;&lt;P&gt;aggr show_space -g&lt;/P&gt;&lt;P&gt;snap reserve -A aggr_name&lt;/P&gt;&lt;P&gt;Let's see if we can free up some space.&lt;/P&gt;&lt;P&gt;How many VMs and datastores are you running on this box? Keep in mind the FAS2050 is an entry-level controller.&lt;/P&gt;&lt;P&gt;Finally, I'd like you to collect data using the "statit" command. Please do the following:&lt;/P&gt;&lt;P&gt;priv set advanced&lt;/P&gt;&lt;P&gt;statit -b&amp;nbsp;&amp;nbsp;&amp;nbsp; (begin)&lt;/P&gt;&lt;P&gt;wait 2 minutes&lt;/P&gt;&lt;P&gt;statit -e&amp;nbsp; (end)&lt;/P&gt;&lt;P&gt;Collect the output in a text file and attach it to this thread, please.&lt;/P&gt;&lt;P&gt;Cheers,&lt;/P&gt;&lt;P&gt;Eric&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 07 Mar 2011 19:24:52 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Latency-between-FAS2050-and-VMWare-Cluster/m-p/6733#M609</guid>
      <dc:creator>eric_barlier</dc:creator>
      <dc:date>2011-03-07T19:24:52Z</dc:date>
    </item>
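    <!--
      A minimal sketch of the space check and statit capture Eric describes above
      (Data ONTAP 7-mode console; aggr_name is a placeholder for the real aggregate name):

        filer> df -Ag aggr_name            # aggregate usage in GB
        filer> aggr show_space -g          # per-volume space breakdown within the aggregates
        filer> snap reserve -A aggr_name   # aggregate snapshot reserve setting

        filer> priv set advanced
        filer*> statit -b                  # begin collection
        (wait about 2 minutes under normal load)
        filer*> statit -e                  # end collection and print per-disk statistics
        filer*> priv set admin
    -->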
  </channel>
</rss>

