<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Filerview Extremely slow on a FAS6030 (Data Protection)</title>
    <link>https://community.netapp.com/t5/Data-Protection/Filerview-Extremely-slow-on-a-FAS6030/m-p/35892#M8830</link>
    <description>FilerView on a FAS6030 cluster running Data ONTAP 7.2.6.1P3D8 is extremely slow once logged in, some CLI commands ("snap list", df) lag as well, and DFM periodically reports SNMP timeouts; only one head in the cluster is affected.</description>
    <pubDate>Thu, 05 Jun 2025 07:03:22 GMT</pubDate>
    <dc:creator>rluthersutt</dc:creator>
    <dc:date>2025-06-05T07:03:22Z</dc:date>
    <item>
      <title>Filerview Extremely slow on a FAS6030</title>
      <link>https://community.netapp.com/t5/Data-Protection/Filerview-Extremely-slow-on-a-FAS6030/m-p/35892#M8830</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I have a FAS6030 cluster running Data ONTAP 7.2.6.1P3D8 that is exhibiting extremely poor response in FilerView. Browsing to the FilerView page and logging in is very quick, but once you try to view a list of volumes, add a SnapMirror relationship, or perform similar tasks, it hangs. I often have to select an option and leave the browser window for 5-10 minutes or more before the page finally shows up and I can work. We have also received periodic alerts in DFM indicating the filer has stopped communicating via SNMP. Some CLI commands are also slow, including "snap list" and running df on an aggregate, although not nearly as badly as FilerView. Only one of the head units in the cluster is experiencing this extreme slowness, even though both show similar statistics in sysstat and statit.&lt;/P&gt;&lt;P&gt;CPU load is not horrible; it ranges between 22% and 65% when running sysstat -x 1. In some cases it even appears that the filer not experiencing the slowness is under the higher load. Disk utilization doesn't seem to be a large problem either; no disk on the system shows utilization higher than 40-45%. We are using FlexShare on both heads.&lt;/P&gt;&lt;P&gt;If anyone has any ideas as to what could be causing these performance issues, I would greatly appreciate it. Also, while I realize upgrading to Data ONTAP 7.3.x or higher would probably help, that just isn't an option at the moment: we have a number of SQL servers running older versions of SnapDrive that we have had problems upgrading, and that has put a hold on the ONTAP upgrades.&lt;/P&gt;&lt;P&gt;Thanks!&lt;/P&gt;&lt;P&gt;Bob&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 05 Jun 2025 07:03:22 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Filerview-Extremely-slow-on-a-FAS6030/m-p/35892#M8830</guid>
      <dc:creator>rluthersutt</dc:creator>
      <dc:date>2025-06-05T07:03:22Z</dc:date>
    </item>
    <item>
      <title>Re: Filerview Extremely slow on a FAS6030</title>
      <link>https://community.netapp.com/t5/Data-Protection/Filerview-Extremely-slow-on-a-FAS6030/m-p/35896#M8831</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;We're seeing the same problems, though usually through the CLI. Here are the results of some timing tests run a few minutes ago. The general test process, on a UNIX host with ssh-enabled access to a filer, was to run&lt;/P&gt;&lt;PRE&gt;$ time ssh &amp;lt;filername&amp;gt; snap list &amp;lt;volname&amp;gt;&lt;/PRE&gt;&lt;P&gt;twice. The times reported are in the stock format of the UNIX time command. The overall results were consistently inconsistent: sometimes both runs were slow, sometimes both were fast, sometimes one but not the other. A representative sample of result pairs and some analysis follow.&lt;/P&gt;&lt;P&gt;Version: NetApp Release 7.3.1.1: Mon Apr 20 22:58:46 PDT 2009&lt;/P&gt;&lt;P&gt;We ran this on all 15 volumes of a pair of servers. There seems to be no correspondence with volume size, number of snapshots, etc. Delays occurred running the commands on both src_filer and mirror_filer. There were no cases where the second query was significantly longer than the first.&lt;/P&gt;&lt;P&gt;Five of the volumes had been queried a few minutes before with a 'snap list volname'. Four of those five were fast on both queries; one showed a delay on the first query (20%). Of the ten that were not queried a few minutes before, five were slow on the first query (50%). Mind you, there aren't enough of these tests to be statistically significant, but they're a pretty solid lead, IMHO.&lt;/P&gt;&lt;P&gt;Details on a few query pairs follow.&lt;/P&gt;&lt;P&gt;Both queries fast (i.e., what we'd normally expect):&lt;/P&gt;&lt;PRE&gt;
2010/12/22 16:03:11: Doing vol_B
Volume vol_B
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  1% ( 1%)    1% ( 1%)  Jul 14 11:01  hourly.0
  1% ( 0%)    1% ( 0%)  Jul 14 00:00  nightly.0
  1% ( 0%)    1% ( 0%)  Jul 13 23:01  hourly.1

real    0m0.366s
user    0m0.020s
sys     0m0.000s

Volume vol_B
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  1% ( 1%)    1% ( 1%)  Jul 14 11:01  hourly.0
  1% ( 0%)    1% ( 0%)  Jul 14 00:00  nightly.0
  1% ( 0%)    1% ( 0%)  Jul 13 23:01  hourly.1

real    0m0.371s
user    0m0.020s
sys     0m0.000s
&lt;/PRE&gt;&lt;P&gt;First query slow, second fast:&lt;/P&gt;&lt;PRE&gt;
2010/12/22 16:00:55: Doing vol_A
Volume vol_A
working......

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Dec 22 20:08  mirrorfiler(0101184681)_vol_A.12461 (snapmirror)
  0% ( 0%)    0% ( 0%)  Dec 22 11:00  hourly.0
  1% ( 1%)    1% ( 1%)  Dec 22 00:00  nightly.0
  2% ( 0%)    1% ( 0%)  Dec 21 23:01  hourly.1

real    2m15.385s
user    0m0.010s
sys     0m0.000s

Volume vol_A
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Dec 22 20:08  mirrorfiler(0101184681)_vol_A.12461 (snapmirror)
  0% ( 0%)    0% ( 0%)  Dec 22 11:00  hourly.0
  1% ( 1%)    1% ( 1%)  Dec 22 00:00  nightly.0
  2% ( 0%)    1% ( 0%)  Dec 21 23:01  hourly.1

real    0m0.366s
user    0m0.010s
sys     0m0.010s
&lt;/PRE&gt;&lt;P&gt;First query slow, second slower than expected:&lt;/P&gt;&lt;PRE&gt;
2010/12/22 16:03:30: Doing vol_G
Volume vol_G
working.......................................................................................

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Dec 22 20:39  src_filer(0101184645)_vol_G.11455
  0% ( 0%)    0% ( 0%)  Dec 22 19:39  src_filer(0101184645)_vol_G.11454

real    1m40.033s
user    0m0.000s
sys     0m0.000s

Volume vol_G
working.......................................................................................

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Dec 22 20:39  src_filer(0101184645)_vol_G.11455
  0% ( 0%)    0% ( 0%)  Dec 22 19:39  src_filer(0101184645)_vol_G.11454

real    0m7.699s
user    0m0.010s
sys     0m0.010s
&lt;/PRE&gt;&lt;P&gt;And here is the one previously queried volume that still showed a delay on its first query:&lt;/P&gt;&lt;PRE&gt;
2010/12/22 16:05:17: Doing vol_H
Volume vol_H
working....

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Dec 22 11:00  hourly.0
  1% ( 1%)    1% ( 0%)  Dec 22 00:01  nightly.0
  1% ( 0%)    1% ( 0%)  Dec 21 23:01  hourly.1

real    0m28.653s
user    0m0.020s
sys     0m0.000s

Volume vol_H
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Dec 22 11:00  hourly.0
  1% ( 1%)    1% ( 0%)  Dec 22 00:01  nightly.0
  1% ( 0%)    1% ( 0%)  Dec 21 23:01  hourly.1

real    0m0.385s
user    0m0.020s
sys     0m0.000s
&lt;/PRE&gt;
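&lt;P&gt;In case anyone wants to reproduce this, here's a minimal sketch of the test loop, assuming passwordless ssh from the test host to the filer; the filer name and volume list below are placeholders, not our real names:&lt;/P&gt;&lt;PRE&gt;
#!/bin/bash
# Sketch of the timing test described above.
# FILER and VOLUMES are placeholders -- substitute your own.
FILER=src_filer
VOLUMES="vol_A vol_B vol_G vol_H"

for vol in $VOLUMES; do
    echo "$(date '+%Y/%m/%d %H:%M:%S'): Doing $vol"
    # Query each volume twice back to back: a slow first pass followed
    # by a fast second pass suggests something is being cached filer-side.
    for pass in 1 2; do
        time ssh "$FILER" snap list "$vol"
    done
done
&lt;/PRE&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>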
      <pubDate>Wed, 22 Dec 2010 22:35:41 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Filerview-Extremely-slow-on-a-FAS6030/m-p/35896#M8831</guid>
      <dc:creator>steve_simmons</dc:creator>
      <dc:date>2010-12-22T22:35:41Z</dc:date>
    </item>
  </channel>
</rss>