<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Volume IOPs monitoring in NetApp 6210 and 3170 in Network and Storage Protocols</title>
    <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5292#M543</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for your response.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;What I want to do is get IOPS for each volume separately, and in aggregate, using a third-party NMS (Zabbix) via SNMP. I need to know the OIDs and MIBs that I can download to do this.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Riaz.. &lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Fri, 01 Feb 2013 17:15:00 GMT</pubDate>
    <dc:creator>RJAVEDBUTT</dc:creator>
    <dc:date>2013-02-01T17:15:00Z</dc:date>
    <item>
      <title>Volume IOPs monitoring in NetApp 6210 and 3170</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5282#M538</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I want to monitor my NetApp 6210 and 3170 SAN storage. I am pretty much done with everything. I am using an open-source third-party NMS and doing SNMP-based monitoring of my storage infrastructure. I am stuck at the points below:&lt;/P&gt;&lt;P&gt;1. Number of IOPS per volume.&lt;/P&gt;&lt;P&gt;2. Aggregate number of IOPS on all volumes.&lt;/P&gt;&lt;P&gt;3. Number of IOPS on each physical disk.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;A prompt response in this regard will be highly appreciated.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Riaz...&lt;/P&gt;&lt;P&gt;&lt;A class="jive-link-email-small" href="mailto:rizi.jbutt@gmail.com" target="_blank"&gt;rizi.jbutt@gmail.com&lt;/A&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 05 Jun 2025 06:11:10 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5282#M538</guid>
      <dc:creator>RJAVEDBUTT</dc:creator>
      <dc:date>2025-06-05T06:11:10Z</dc:date>
    </item>
    <item>
      <title>Re: Volume IOPs monitoring in NetApp 6210 and 3170</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5286#M540</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;You can use the NetApp "stats" command, which is available in Data ONTAP on any filer.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;You can find the available measurable objects on the filer by entering:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;step1:&lt;/STRONG&gt;&lt;BR /&gt;toaster&amp;gt;stats list objects&lt;/P&gt;&lt;P&gt;As you are interested in '&lt;STRONG&gt;lun&lt;/STRONG&gt;', '&lt;STRONG&gt;volume&lt;/STRONG&gt;' &amp;amp; '&lt;STRONG&gt;aggr&lt;/STRONG&gt;'-related stats, you will see them among the objects listed by the previous command.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;To see the 'counters' available for the objects listed in step1:&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;step2:&lt;/STRONG&gt;&lt;BR /&gt;toaster&amp;gt;stats list counters volume&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Counters for object name: volume&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; instance_name&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; node_name&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; instance_uuid&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; vserver_name&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; vserver_uuid&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avg_latency&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; total_ops&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; read_data&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; read_latency&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; read_ops&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; write_data&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 
write_latency&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; write_ops&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; other_latency&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; other_ops&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;toaster&amp;gt;stats list counters lun&lt;BR /&gt;Counters for object name: lun&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; instance_name&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; node_name&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; display_name&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; read_ops&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; write_ops&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; other_ops&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; read_data&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; write_data&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; queue_full&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avg_latency&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; total_ops&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avg_read_latency&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avg_write_latency&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avg_other_latency&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; queue_depth_lun&lt;/P&gt;&lt;P&gt;toaster&amp;gt; stats list counters aggregate&lt;BR /&gt;Counters for object name: aggregate&lt;BR 
/&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; instance_name&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; node_name&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; total_transfers&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user_reads&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user_writes&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; cp_reads&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user_read_blocks&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user_write_blocks&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; cp_read_blocks&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; total_transfers_hdd&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user_reads_hdd&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user_writes_hdd&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; cp_reads_hdd&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user_read_blocks_hdd&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user_write_blocks_hdd&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; cp_read_blocks_hdd&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; total_transfers_ssd&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user_reads_ssd&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user_writes_ssd&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; cp_reads_ssd&lt;BR 
/&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user_read_blocks_ssd&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user_write_blocks_ssd&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; cp_read_blocks_ssd&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;You can go one step further and determine what instances are available for the object, say 'volume'. The instances are basically the volumes available on that filer.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;toaster&amp;gt;stats list instances volume&lt;BR /&gt;Instances for object name: volume&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; vol0&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; vol_QT_CIFS&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; vol_show&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; centos_iscsi&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; cl_test_clone_centos_iscsi_20130204124118&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; vol_dell&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; centos_nfs&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; vol_iscsi_win_test&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Now, start gathering system statistics in the background, using '-I' as an identifier. 
For example, to observe stats for volume 'vol_iscsi_win_test':&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;step3:&lt;/STRONG&gt;&lt;BR /&gt;toaster&amp;gt; stats start -I volstats volume:vol_iscsi_win_test&lt;/P&gt;&lt;P&gt;Note: 'volstats' is just the name I have given; you can use any meaningful name you like.&lt;/P&gt;&lt;P&gt;To see the results while I/O is in progress:&lt;/P&gt;&lt;P&gt;toaster&amp;gt; stats show -I volstats&lt;BR /&gt;StatisticsID: volstats&lt;BR /&gt;volume:vol_iscsi_win_test:instance_name:vol_iscsi_win_test&lt;BR /&gt;volume:vol_iscsi_win_test:node_name:&lt;BR /&gt;volume:vol_iscsi_win_test:instance_uuid:368fa20c-6263-11e2-ad8d-123478563412&lt;BR /&gt;volume:vol_iscsi_win_test:vserver_name:&lt;BR /&gt;volume:vol_iscsi_win_test:vserver_uuid:&lt;BR /&gt;volume:vol_iscsi_win_test:avg_latency:4926032.41us&lt;BR /&gt;volume:vol_iscsi_win_test:total_ops:8/s&lt;BR /&gt;volume:vol_iscsi_win_test:read_data:0b/s&lt;BR /&gt;volume:vol_iscsi_win_test:read_latency:0us&lt;BR /&gt;volume:vol_iscsi_win_test:read_ops:0/s&lt;BR /&gt;volume:vol_iscsi_win_test:write_data:494556b/s&lt;BR /&gt;volume:vol_iscsi_win_test:write_latency:5479065.05us&lt;BR /&gt;volume:vol_iscsi_win_test:write_ops:7/s&lt;BR /&gt;volume:vol_iscsi_win_test:other_latency:8969.92us&lt;BR /&gt;volume:vol_iscsi_win_test:other_ops:0/s&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Note: I am running these commands on my simulator, so don't read too much into the values here.&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;To stop the background stats and print the result on the console, enter:&lt;BR /&gt;toaster&amp;gt;stats stop -I volstats &lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Similarly, for LUNs:&lt;/P&gt;&lt;P&gt;toaster&amp;gt; stats list instances lun&lt;BR /&gt;Instances for object name: lun&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/vol_iscsi_win_test/iometer-BWKgW]BaIqNQ&lt;BR 
/&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/cl_test_clone_centos_iscsi_20130204124118/lun_centos_0-BWKgW]BaIqOk&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/centos_iscsi/lun_centos_0-BWKgW]BaIqOA&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/centos_iscsi/lun_centos_1-BWKgW]BaIqOC&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/cl_test_clone_centos_iscsi_20130204124118/lun_centos_1-BWKgW]BaIqOl&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/centos_iscsi/lun_centos-BWKgW]BaIqNE&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/cl_test_clone_centos_iscsi_20130204124118/lun_centos-BWKgW]BaIqOj&lt;/P&gt;&lt;P&gt;toaster&amp;gt;stats start -I lun_stats lun:/vol/vol_iscsi_win_test/iometer-BWKgW]BaIqNQ&lt;BR /&gt;toaster&amp;gt;stats show -I lun_stats&lt;BR /&gt;toaster&amp;gt;stats stop -I lun_stats&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;There is already a wonderful blog post on this topic:&lt;/P&gt;&lt;P&gt;Performance "stats" without PerfStat or Ops Mgr:&lt;BR /&gt;&lt;A _jive_internal="true" href="https://community.netapp.com/groups/chris-kranz-hardware-pro/blog/2009/04/01/performance-stats-without-perfstat-or-ops-mgr" target="_blank"&gt;https://community.netapp.com/groups/chris-kranz-hardware-pro/blog/2009/04/01/performance-stats-without-perfstat-or-ops-mgr&lt;/A&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 01 Feb 2013 15:45:56 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5286#M540</guid>
      <dc:creator>ASHWINPAWARTESL</dc:creator>
      <dc:date>2013-02-01T15:45:56Z</dc:date>
    </item>
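    <!-- Editorial example, not part of the original thread: a minimal Python sketch of how the "stats show -I volstats" output quoted in the reply above could be parsed into per-volume counters for an external NMS such as Zabbix. The "object:instance:counter:value" line format is taken from the sample output in that reply; the helper name and the assumption that counter names are stable across releases are mine.

```python
# Hypothetical helper (not from the thread): turn the text printed by
# "stats show -I volstats" into {instance: {counter: raw_value}} so an
# external script can report a single per-volume counter to an NMS.

def parse_stats_show(text):
    """Parse 'object:instance:counter:value' lines for 'volume' objects."""
    stats = {}
    for line in text.splitlines():
        parts = line.strip().split(":")
        if len(parts) != 4:
            continue  # skips headers such as "StatisticsID: volstats"
        obj, instance, counter, value = parts
        if obj != "volume":
            continue
        stats.setdefault(instance, {})[counter] = value
    return stats

# Sample taken verbatim (abridged) from the reply above.
sample = """StatisticsID: volstats
volume:vol_iscsi_win_test:total_ops:8/s
volume:vol_iscsi_win_test:read_ops:0/s
volume:vol_iscsi_win_test:write_ops:7/s"""

counters = parse_stats_show(sample)
print(counters["vol_iscsi_win_test"]["total_ops"])  # prints 8/s
```

A Zabbix external check could run a command like this over SSH, parse the result with such a helper, and print one counter value per item.
-->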
    <item>
      <title>Re: Volume IOPs monitoring in NetApp 6210 and 3170</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5292#M543</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks for your response.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;What I want to do is get IOPS for each volume separately, and in aggregate, using a third-party NMS (Zabbix) via SNMP. I need to know the OIDs and MIBs that I can download to do this.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Riaz.. &lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 01 Feb 2013 17:15:00 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5292#M543</guid>
      <dc:creator>RJAVEDBUTT</dc:creator>
      <dc:date>2013-02-01T17:15:00Z</dc:date>
    </item>
    <item>
      <title>Re: Volume IOPs monitoring in NetApp 6210 and 3170</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5298#M545</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;IOPS, latency, etc. are not available via SNMP, only via the ONTAP API.&lt;/P&gt;&lt;P&gt;Roll your own, or go with something that has this built in and automated (e.g. &lt;A href="http://www.logicmonitor.com/monitoring/storage/netapp-filers/" title="http://www.logicmonitor.com/monitoring/storage/netapp-filers/" target="_blank"&gt;http://www.logicmonitor.com/monitoring/storage/netapp-filers/&lt;/A&gt;).&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 01 Feb 2013 19:43:53 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5298#M545</guid>
      <dc:creator>steve_francis</dc:creator>
      <dc:date>2013-02-01T19:43:53Z</dc:date>
    </item>
    <item>
      <title>Re: Volume IOPs monitoring in NetApp 6210 and 3170</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5303#M547</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Steve,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I can get IOPS at the protocol level, i.e. the number of IOPS for NFS, iSCSI, and FC, but I want IOPS at the volume level. I think that if it is available at the protocol level, there must be some way to get it at the volume level as well.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 04 Feb 2013 16:45:41 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5303#M547</guid>
      <dc:creator>RJAVEDBUTT</dc:creator>
      <dc:date>2013-02-04T16:45:41Z</dc:date>
    </item>
    <item>
      <title>Re: Volume IOPs monitoring in NetApp 6210 and 3170</title>
      <link>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5308#M548</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;You would think so. But unfortunately it's not true. &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Having been through the NetApp MIBs comprehensively several times, I can state categorically that there is no way to get per-volume IOPS or latency.&lt;/P&gt;&lt;P&gt;Stuff like this:&lt;/P&gt;&lt;P&gt;&lt;IMG src="http://community.netapp.com/legacyfs/online/18618_IOps.png" width="450" /&gt;&lt;/P&gt;&lt;P&gt;can only be obtained via the API.&lt;/P&gt;&lt;P&gt;(The above is from LogicMonitor.)&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 05 Feb 2013 17:30:10 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Network-and-Storage-Protocols/Volume-IOPs-monitoring-in-NetApp-6210-and-3170/m-p/5308#M548</guid>
      <dc:creator>steve_francis</dc:creator>
      <dc:date>2013-02-05T17:30:10Z</dc:date>
    </item>
  </channel>
</rss>

