<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Constraints based tool for volume migration suggestions to optimize IO &amp; space in Active IQ Unified Manager Discussions</title>
    <link>https://community.netapp.com/t5/Active-IQ-Unified-Manager-Discussions/Constraints-based-tool-for-volume-migration-suggestions-to-optimize-IO-amp-space/m-p/59926#M12487</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;Does a tool exist that makes volume/vFiler migration suggestions based on optimizing the overall performance and space goals of a set of filers?&lt;/P&gt;&lt;P&gt;This would be analogous to how VMware DRS makes prioritized vMotion suggestions based on certain criteria (CPU usage being the primary one) - in the case of the NetApp tool, the primary resources would be IOPS (observed average and peak) and storage space (observed current usage plus growth rate).&lt;/P&gt;&lt;P&gt;Inputs (all known to DFM):&lt;/P&gt;&lt;P&gt;1) aggregates: sizes and IO characteristics (how many IOPS can this aggregate deliver at 5, 10, or 20 ms latency?), based on disk type and number of disks&lt;/P&gt;&lt;P&gt;2) volumes: sizes and IO characteristics (average and average peak IOPS)&lt;/P&gt;&lt;P&gt;3) administrative goals (similar to VMware DRS rules), where the admin can configure rules to pin a vFiler/volume to a certain aggregate, or to require that vFilers/volumes not be located on the same aggregate or cluster (e.g. for fault tolerance)&lt;/P&gt;&lt;P&gt;Output:&lt;/P&gt;&lt;P&gt;The system would use a constraint-based approach to perform &lt;A href="http://en.wikipedia.org/wiki/Combinatorial_optimization" target="_blank"&gt;combinatorial optimization&lt;/A&gt; on the inputs and recommend migrations that optimize volume/vFiler placement against each aggregate's calculated ability to provide IOPS at 5, 10, 20 ms, etc.&lt;/P&gt;&lt;P&gt;Assumptions:&lt;/P&gt;&lt;P&gt;- temp space: to rearrange volumes/vFilers on a fully (or nearly fully) allocated set of aggregates, a certain amount of temporary space will be needed&lt;/P&gt;&lt;P&gt;- all IOs are equal: the VMware concept of prioritizing via shares is not addressed directly; instead, the admin can dictate that a given volume/vFiler sticks to a given aggregate&lt;/P&gt;&lt;P&gt;This tool could work in different use cases:&lt;/P&gt;&lt;P&gt;1) fully allocated systems (suggest migrations to increase IO efficiency)&lt;/P&gt;&lt;P&gt;2) upgrades (how best to rearrange volumes on new filer aggregates)&lt;/P&gt;&lt;P&gt;3) presales sizing (inputs would be estimates of dataset IO and sizes)&lt;/P&gt;&lt;P&gt;Is there a tool like this in the works?&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;&lt;P&gt;Fletcher.&lt;/P&gt;&lt;P&gt;&lt;A class="jive-link-external-small" href="http://vmadmin.info" target="_blank"&gt;http://vmadmin.info&lt;/A&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Thu, 05 Jun 2025 07:06:21 GMT</pubDate>
    <dc:creator>fletch2007</dc:creator>
    <dc:date>2025-06-05T07:06:21Z</dc:date>
    <item>
      <title>Constraints based tool for volume migration suggestions to optimize IO &amp; space</title>
      <link>https://community.netapp.com/t5/Active-IQ-Unified-Manager-Discussions/Constraints-based-tool-for-volume-migration-suggestions-to-optimize-IO-amp-space/m-p/59926#M12487</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;Does a tool exist that makes volume/vFiler migration suggestions based on optimizing the overall performance and space goals of a set of filers?&lt;/P&gt;&lt;P&gt;This would be analogous to how VMware DRS makes prioritized vMotion suggestions based on certain criteria (CPU usage being the primary one) - in the case of the NetApp tool, the primary resources would be IOPS (observed average and peak) and storage space (observed current usage plus growth rate).&lt;/P&gt;&lt;P&gt;Inputs (all known to DFM):&lt;/P&gt;&lt;P&gt;1) aggregates: sizes and IO characteristics (how many IOPS can this aggregate deliver at 5, 10, or 20 ms latency?), based on disk type and number of disks&lt;/P&gt;&lt;P&gt;2) volumes: sizes and IO characteristics (average and average peak IOPS)&lt;/P&gt;&lt;P&gt;3) administrative goals (similar to VMware DRS rules), where the admin can configure rules to pin a vFiler/volume to a certain aggregate, or to require that vFilers/volumes not be located on the same aggregate or cluster (e.g. for fault tolerance)&lt;/P&gt;&lt;P&gt;Output:&lt;/P&gt;&lt;P&gt;The system would use a constraint-based approach to perform &lt;A href="http://en.wikipedia.org/wiki/Combinatorial_optimization" target="_blank"&gt;combinatorial optimization&lt;/A&gt; on the inputs and recommend migrations that optimize volume/vFiler placement against each aggregate's calculated ability to provide IOPS at 5, 10, 20 ms, etc.&lt;/P&gt;&lt;P&gt;Assumptions:&lt;/P&gt;&lt;P&gt;- temp space: to rearrange volumes/vFilers on a fully (or nearly fully) allocated set of aggregates, a certain amount of temporary space will be needed&lt;/P&gt;&lt;P&gt;- all IOs are equal: the VMware concept of prioritizing via shares is not addressed directly; instead, the admin can dictate that a given volume/vFiler sticks to a given aggregate&lt;/P&gt;&lt;P&gt;This tool could work in different use cases:&lt;/P&gt;&lt;P&gt;1) fully allocated systems (suggest migrations to increase IO efficiency)&lt;/P&gt;&lt;P&gt;2) upgrades (how best to rearrange volumes on new filer aggregates)&lt;/P&gt;&lt;P&gt;3) presales sizing (inputs would be estimates of dataset IO and sizes)&lt;/P&gt;&lt;P&gt;Is there a tool like this in the works?&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;&lt;P&gt;Fletcher.&lt;/P&gt;&lt;P&gt;&lt;A class="jive-link-external-small" href="http://vmadmin.info" target="_blank"&gt;http://vmadmin.info&lt;/A&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 05 Jun 2025 07:06:21 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Active-IQ-Unified-Manager-Discussions/Constraints-based-tool-for-volume-migration-suggestions-to-optimize-IO-amp-space/m-p/59926#M12487</guid>
      <dc:creator>fletch2007</dc:creator>
      <dc:date>2025-06-05T07:06:21Z</dc:date>
    </item>
    <item>
      <title>Re: Constraints based tool for volume migration suggestions to optimize IO &amp; space</title>
      <link>https://community.netapp.com/t5/Active-IQ-Unified-Manager-Discussions/Constraints-based-tool-for-volume-migration-suggestions-to-optimize-IO-amp-space/m-p/59931#M12489</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;As of today, there is no tool that gives automated advice on migrations. But, as you said, all of this data is available in DFM, and the DFM SDK is available, with which customers can build their own migration suggestor to suit their needs.&lt;/P&gt;&lt;P&gt;Also, DFM database access is provided via SQL views, which can likewise be used in making this decision.&lt;/P&gt;&lt;P&gt;Even the Performance Advisor data can be exported to do IO profiling.&lt;/P&gt;&lt;P&gt;Have you taken a look at the Performance Advisor view in the NMC? It has in-depth performance information about the controller and its objects.&lt;/P&gt;&lt;P&gt;Some people create custom views like the ones below to help them make this decision. The outputs below come from running the CLI command "dfm perf view describe" against the custom views that were created.&lt;/P&gt;&lt;P&gt;The Custom System Summary View is very similar to the default system summary view, except that it includes some extra counters to show read/write information for network, ops, and latency:&lt;/P&gt;&lt;P&gt;View Name: Custom System Summary View&lt;/P&gt;&lt;P&gt;Applies To: Object type (filer)&lt;/P&gt;&lt;P&gt;Chart Details:&lt;/P&gt;&lt;P&gt;Chart Name: Network Throughput&lt;/P&gt;&lt;P&gt;Chart Type: simple chart&lt;/P&gt;&lt;P&gt;Counters in this Chart:&lt;/P&gt;&lt;P&gt;Counter: system:net_data_recv&lt;/P&gt;&lt;P&gt;Counter: system:net_data_sent&lt;/P&gt;&lt;P&gt;Chart Name: Average Latency per Protocol&lt;/P&gt;&lt;P&gt;Chart Type: simple chart&lt;/P&gt;&lt;P&gt;Counters in this Chart:&lt;/P&gt;&lt;P&gt;Counter: nfsv3:nfsv3_read_latency&lt;/P&gt;&lt;P&gt;Counter: nfsv3:nfsv3_write_latency&lt;/P&gt;&lt;P&gt;Counter: cifs:cifs_latency&lt;/P&gt;&lt;P&gt;Counter: nfsv3:nfsv3_avg_op_latency&lt;/P&gt;&lt;P&gt;Chart Name: All Protocol Ops&lt;/P&gt;&lt;P&gt;Chart Type: simple chart&lt;/P&gt;&lt;P&gt;Counters in this Chart:&lt;/P&gt;&lt;P&gt;Counter: system:nfs_ops&lt;/P&gt;&lt;P&gt;Counter: system:cifs_ops&lt;/P&gt;&lt;P&gt;Counter: nfsv3:nfsv3_write_ops&lt;/P&gt;&lt;P&gt;Counter: nfsv3:nfsv3_read_ops&lt;/P&gt;&lt;P&gt;Chart Name: CPU Utilization&lt;/P&gt;&lt;P&gt;Chart Type: simple chart&lt;/P&gt;&lt;P&gt;Counters in this Chart:&lt;/P&gt;&lt;P&gt;Counter: system:cpu_busy&lt;/P&gt;&lt;P&gt;The All Volumes Summary View is a bar chart that summarizes throughput, ops, and latency for all volumes on a physical storage system. Set the perfMaxObjectInstancesInBarChart option to 500 to make sure all volumes are included. These bar charts can be converted to line graphs so that you can see historically which volumes on a given physical storage system are driving the most I/O over time:&lt;/P&gt;&lt;P&gt;View Name: All Volumes Summary View&lt;/P&gt;&lt;P&gt;Applies To: Object type (filer)&lt;/P&gt;&lt;P&gt;Chart Details:&lt;/P&gt;&lt;P&gt;Chart Name: IOPs&lt;/P&gt;&lt;P&gt;Chart Type: bar&lt;/P&gt;&lt;P&gt;Number of object instances: All&lt;/P&gt;&lt;P&gt;Top or Bottom Instances: Top&lt;/P&gt;&lt;P&gt;Counters in this Chart:&lt;/P&gt;&lt;P&gt;Counter: volume:total_ops&lt;/P&gt;&lt;P&gt;Chart Name: Throughput&lt;/P&gt;&lt;P&gt;Chart Type: bar&lt;/P&gt;&lt;P&gt;Number of object instances: All&lt;/P&gt;&lt;P&gt;Top or Bottom Instances: Top&lt;/P&gt;&lt;P&gt;Counters in this Chart:&lt;/P&gt;&lt;P&gt;Counter: volume:throughput&lt;/P&gt;&lt;P&gt;Chart Name: Latency&lt;/P&gt;&lt;P&gt;Chart Type: bar&lt;/P&gt;&lt;P&gt;Number of object instances: All&lt;/P&gt;&lt;P&gt;Top or Bottom Instances: Top&lt;/P&gt;&lt;P&gt;Counters in this Chart:&lt;/P&gt;&lt;P&gt;Counter: volume:avg_latency&lt;/P&gt;&lt;P&gt;The All Aggregates Summary View is similar to the volume summary, but at the aggregate level. The idea here is to compare which aggregates are the busiest, both in terms of total transfers and disk-busy percentage, to help identify whether disk utilization is a potential bottleneck on the system:&lt;/P&gt;&lt;P&gt;View Name: All Aggregates Summary View&lt;/P&gt;&lt;P&gt;Applies To: Object type (filer)&lt;/P&gt;&lt;P&gt;Chart Details:&lt;/P&gt;&lt;P&gt;Chart Name: Transfers&lt;/P&gt;&lt;P&gt;Chart Type: bar&lt;/P&gt;&lt;P&gt;Number of object instances: All&lt;/P&gt;&lt;P&gt;Top or Bottom Instances: Top&lt;/P&gt;&lt;P&gt;Counters in this Chart:&lt;/P&gt;&lt;P&gt;Counter: aggregate:total_transfers&lt;/P&gt;&lt;P&gt;Chart Name: Avg Disk Busy&lt;/P&gt;&lt;P&gt;Chart Type: bar&lt;/P&gt;&lt;P&gt;Number of object instances: All&lt;/P&gt;&lt;P&gt;Top or Bottom Instances: Top&lt;/P&gt;&lt;P&gt;Counters in this Chart:&lt;/P&gt;&lt;P&gt;Counter: aggregate:pa_avg_disk_busy&lt;/P&gt;&lt;P&gt;Hope this helps; nevertheless, it is a tool you can build right away.&lt;/P&gt;&lt;P&gt;Regards,&lt;/P&gt;&lt;P&gt;adai&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 01 Nov 2010 07:57:07 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Active-IQ-Unified-Manager-Discussions/Constraints-based-tool-for-volume-migration-suggestions-to-optimize-IO-amp-space/m-p/59931#M12489</guid>
      <dc:creator>adaikkap</dc:creator>
      <dc:date>2010-11-01T07:57:07Z</dc:date>
    </item>
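    <!-- The greedy, constraint-based placement pass discussed in this thread could be sketched roughly as below. This is illustrative only: the class names, fields, and numbers are hypothetical stand-ins, not part of DFM or any NetApp SDK, and real sizes and IOPS would come from DFM / Performance Advisor exports rather than being hard-coded.

```python
from dataclasses import dataclass

@dataclass
class Aggregate:
    name: str
    capacity_gb: float
    iops_budget: float    # estimated IOPS the aggregate can serve at the target latency
    used_gb: float = 0.0
    used_iops: float = 0.0

@dataclass
class Volume:
    name: str
    size_gb: float
    avg_iops: float
    home: str             # aggregate the volume currently lives on
    pinned: bool = False  # admin rule: must stay on its home aggregate

def suggest_migrations(aggrs, vols, separate=()):
    """Greedily rebalance unpinned volumes, busiest first.

    `separate` is an iterable of frozensets of volume names that must not
    share an aggregate (a simple stand-in for anti-affinity rules)."""
    by_name = {a.name: a for a in aggrs}
    placed = {a.name: set() for a in aggrs}  # aggregate -> volumes placed there
    moves = []
    # Pinned volumes consume capacity on their home aggregate but never move.
    for v in vols:
        if v.pinned:
            home = by_name[v.home]
            home.used_gb += v.size_gb
            home.used_iops += v.avg_iops
            placed[home.name].add(v.name)
    # Place the busiest unpinned volumes on the least IOPS-utilised aggregate
    # that satisfies the space and anti-affinity constraints.
    for v in sorted((v for v in vols if not v.pinned),
                    key=lambda v: v.avg_iops, reverse=True):
        def feasible(a):
            if a.used_gb + v.size_gb > a.capacity_gb:
                return False
            # Reject aggregates already holding a volume this one must avoid.
            return all(not ((pair - {v.name}) <= placed[a.name])
                       for pair in separate if v.name in pair)
        candidates = [a for a in aggrs if feasible(a)]
        # If nothing is feasible, keep the volume at home (still account for it).
        target = (min(candidates, key=lambda a: a.used_iops / a.iops_budget)
                  if candidates else by_name[v.home])
        target.used_gb += v.size_gb
        target.used_iops += v.avg_iops
        placed[target.name].add(v.name)
        if target.name != v.home:
            moves.append((v.name, v.home, target.name))
    return moves

if __name__ == "__main__":
    aggrs = [Aggregate("aggr0", 1000, 1000), Aggregate("aggr1", 1000, 1000)]
    vols = [Volume("v1", 400, 800, "aggr0"),
            Volume("v2", 400, 700, "aggr0"),
            Volume("v3", 100, 50, "aggr1", pinned=True)]
    print(suggest_migrations(aggrs, vols))  # -> [('v2', 'aggr0', 'aggr1')]
```

Each returned tuple is (volume, source aggregate, suggested target). A real tool in the spirit of this thread would also model peak IOPS, growth rate, the latency curves per aggregate, and the temporary space needed to stage moves.
    -->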
  </channel>
</rss>

