<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: ESXI 4 multipath full bandwidth utilization in VMware Solutions Discussions</title>
    <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22067#M2185</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Eric,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;What IO patterns are you simulating? Are they large sequential reads or small random IOs? That could make a big difference. Also, how many hard disk drives back the LUN?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Wei&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Tue, 23 Mar 2010 18:16:30 GMT</pubDate>
    <dc:creator>lwei</dc:creator>
    <dc:date>2010-03-23T18:16:30Z</dc:date>
    <item>
      <title>ESXI 4 multipath full bandwidth utilization</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22055#M2182</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;DIV class="jive-rendered-content"&gt;&lt;P&gt;I've followed most of the online guides on how to set up ESXi for iSCSI multipathing, but apparently I'm missing something.&amp;nbsp; Technically, multipathing is working, but I'm unable to utilize more than 50% of both NICs.&amp;nbsp; I have set up a datastore that points to a NetApp LUN.&amp;nbsp; I'm running a sqlio test from within a virtual machine, writing to a hard drive on the NetApp datastore.&amp;nbsp; I've also tried the test using 2 different virtual machines at the same time and still only get 50% total on each 1Gb connection.&amp;nbsp; The sqlio process is running from an internal drive on the ESXi server to make sure that's not causing a bottleneck.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Here is my configuration:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;NetApp 3140&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;- Both interfaces connected on the first controller&lt;/P&gt;&lt;P&gt;- Unique IP address on each interface&lt;/P&gt;&lt;P&gt;- Not using switch trunks or virtual interfaces&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;ESXi 4&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;- 2 VMkernel ports on 2 separate 1Gb NIC ports&lt;/P&gt;&lt;P&gt;- Unique IP address on each VMkernel port&lt;/P&gt;&lt;P&gt;- vSwitch set up for IP hash load balancing&lt;/P&gt;&lt;P&gt;- Storage path selection set to Round Robin (VMware)&lt;/P&gt;&lt;P&gt;- Storage array type set to VMW_SATP_ALUA&lt;/P&gt;&lt;P&gt;- 4 paths found that correspond to the 2 IP addresses assigned to the controller on the NetApp&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;HP ProCurve 2848 switch&lt;BR /&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;- Tried with trunking on and off&lt;/P&gt;&lt;P&gt;- All devices are connected directly to this switch.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any ideas?&amp;nbsp; Thank you&lt;/P&gt;&lt;/DIV&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 05 Jun 2025 07:18:42 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22055#M2182</guid>
      <dc:creator>eric_lackey</dc:creator>
      <dc:date>2025-06-05T07:18:42Z</dc:date>
    </item>
    <item>
      <title>Re: ESXI 4 multipath full bandwidth utilization</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22059#M2183</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;&lt;SPAN style="font-size: 8pt;"&gt;How many iSCSI sessions did you see on the FAS3140? What I/O pattern were you simulating using sqlio? Thanks,&amp;nbsp;&amp;nbsp; -Wei&lt;/SPAN&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 12 Feb 2010 06:28:12 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22059#M2183</guid>
      <dc:creator>lwei</dc:creator>
      <dc:date>2010-02-12T06:28:12Z</dc:date>
    </item>
    <item>
      <title>Re: ESXI 4 multipath full bandwidth utilization</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22062#M2184</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Sorry for the long delay... just getting back to work on this issue.&amp;nbsp; The NetApp is showing 4 sessions connected.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; 43&amp;nbsp;&amp;nbsp;&amp;nbsp; 2003&amp;nbsp;&amp;nbsp; iqn.1998-01.com.vmware:localhost-78826b22 / 00:02:3d:00:00:01 / ESXI2&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; 44&amp;nbsp;&amp;nbsp;&amp;nbsp; 2003&amp;nbsp;&amp;nbsp; iqn.1998-01.com.vmware:localhost-78826b22 / 00:02:3d:00:00:02 / ESXI2&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; 45&amp;nbsp;&amp;nbsp;&amp;nbsp; 2001&amp;nbsp;&amp;nbsp; iqn.1998-01.com.vmware:localhost-78826b22 / 00:02:3d:00:00:03 / ESXI2&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; 46&amp;nbsp;&amp;nbsp;&amp;nbsp; 2001&amp;nbsp;&amp;nbsp; iqn.1998-01.com.vmware:localhost-78826b22 / 00:02:3d:00:00:04 / ESXI2&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 23 Mar 2010 17:57:24 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22062#M2184</guid>
      <dc:creator>eric_lackey</dc:creator>
      <dc:date>2010-03-23T17:57:24Z</dc:date>
    </item>
    <item>
      <title>Re: ESXI 4 multipath full bandwidth utilization</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22067#M2185</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Eric,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;What IO patterns are you simulating? Are they large sequential reads or small random IOs? That could make a big difference. Also, how many hard disk drives back the LUN?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Wei&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 23 Mar 2010 18:16:30 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22067#M2185</guid>
      <dc:creator>lwei</dc:creator>
      <dc:date>2010-03-23T18:16:30Z</dc:date>
    </item>
    <item>
      <title>Re: ESXI 4 multipath full bandwidth utilization</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22071#M2186</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I'm running sqlio with 64 threads.&amp;nbsp; If I run one instance of this I get about 116MB per second.&amp;nbsp; If I run two instances at the same time, I get about 60MB/s each.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;My command looks something like this...&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;sqlio -t64 -b8 -s15 c:\temp\test.dat&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I'm running the sqlio command from local storage on the ESXi server.&amp;nbsp; That is drive letter E on the server I'm testing.&amp;nbsp; The drive letter I'm writing to, C:\, is on the NetApp.&amp;nbsp; The LUN is on a FlexVol which is on an aggregate with 14 drives.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I've tested iSCSI multipathing from a Windows 2008 box and get closer to 225MB/s, so I'm fairly certain the NetApp is set up correctly and can perform much better than it is.&amp;nbsp; Both of these servers are using the same network configuration (switches, etc.).&amp;nbsp; I think the bottleneck is somewhere on the ESX server.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 23 Mar 2010 18:22:39 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22071#M2186</guid>
      <dc:creator>eric_lackey</dc:creator>
      <dc:date>2010-03-23T18:22:39Z</dc:date>
    </item>
    <item>
      <title>Re: ESXI 4 multipath full bandwidth utilization</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22076#M2187</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;225MB/s seems a pretty good number. I agree with you that it's likely an ESX issue. On the other hand, 225MB/s divided by 4 sessions is ~56MB/s; that's still only ~50% of the bandwidth of each 1Gb link.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;&lt;P&gt;Wei&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 23 Mar 2010 20:28:23 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22076#M2187</guid>
      <dc:creator>lwei</dc:creator>
      <dc:date>2010-03-23T20:28:23Z</dc:date>
    </item>
    <item>
      <title>Re: ESXI 4 multipath full bandwidth utilization</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22081#M2188</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Ok, I think I've figured this out.&amp;nbsp; I missed the part of the docs about changing the round robin settings in ESXi.&amp;nbsp; By default, the round robin policy sends 1000 IO operations down a path before switching to the next one.&amp;nbsp; So path #1 gets 1000 IOs, then path #2 gets 1000 IOs.&amp;nbsp; The NetApp can finish handling those 1000 IOs before ESX comes back to the same path, so at times a path just sat idle waiting for more work.&amp;nbsp; The solution is to modify this setting so that ESX switches paths much sooner.&amp;nbsp; From the documents I read, this number should be around 3, and that also gave me the best results in my testing.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;To apply this setting, you need to get the LUN or device ID from the vSphere client or CLI.&amp;nbsp; Then log in to the ESX server with SSH and run the following command...&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;gt; esxcli nmp roundrobin setconfig -d &amp;lt;device id&amp;gt; --iops 3 --type iops&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;To verify that it took, you can run this command.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;gt; esxcli nmp roundrobin getconfig --device &amp;lt;device id&amp;gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The only thing that stinks is that the setting doesn't persist across reboots, so you have to put that command in your /etc/rc.local file on the ESX server.&amp;nbsp; If anyone knows of a better way, please let me know.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;With this setting in place, I've hit as high as 200MB per second in my tests.&amp;nbsp; I'll try it again tonight when there is nothing else going on; that should give me a better indication of the maximum speed.&amp;nbsp; But at 200MB/s, I'm getting pretty close to saturating my 2Gb connection, so I'm very pleased with that.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 23 Mar 2010 22:27:42 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22081#M2188</guid>
      <dc:creator>eric_lackey</dc:creator>
      <dc:date>2010-03-23T22:27:42Z</dc:date>
    </item>
    <item>
      <title>Re: ESXI 4 multipath full bandwidth utilization</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22085#M2189</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thanks for sharing this. It's good to know.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 23 Mar 2010 22:32:20 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22085#M2189</guid>
      <dc:creator>lwei</dc:creator>
      <dc:date>2010-03-23T22:32:20Z</dc:date>
    </item>
    <item>
      <title>Re: ESXI 4 multipath full bandwidth utilization</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22090#M2190</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We are in the same situation with some of our customers. All of their NetApp storage arrays are in production, so we can't test this method.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Is it possible to change this NMP setting from 1000 to 3 on a storage array that is in production (i.e., change the value hot)?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;If we change it, does it apply only to the LUN concerned, or to all LUNs using the round robin policy?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Is 3 the best practice for NetApp? I have seen that it is the best practice for EMC, but they also say that each array vendor has its own recommended setting.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you in advance for your answer.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Have a nice day.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Yannick N.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 26 Nov 2010 16:37:53 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/ESXI-4-multipath-full-bandwidth-utilization/m-p/22090#M2190</guid>
      <dc:creator>yannick_n</dc:creator>
      <dc:date>2010-11-26T16:37:53Z</dc:date>
    </item>
  </channel>
</rss>

