
Interpreting sysstat output

brendanheading

Hi guys,

We have a FAS2020 with two controllers, each with 13 disks (RAID-DP, with two hot spares per controller, leaving 9 data disks).

One controller seems fine. The other controller seems to have rather bad write performance. So we ran sysstat, expecting it to show we'd maxed out the filer on CPU or I/O. But that appears not to be the case; the filer looks reasonably quiet.

Here's a brief grab of sysstat -x 1 on the "fast" controller.

=========

CPU   NFS  CIFS  HTTP   Total    Net kB/s   Disk kB/s     Tape kB/s Cache Cache  CP   CP Disk    FCP iSCSI   FCP  kB/s iSCSI  kB/s
                                  in   out   read  write  read write   age   hit time  ty util                 in   out    in   out
11%    54    29     0      83   397    23   2308      0     0     0     2s  99%   0%  -   56%      0     0     0     0     0     0
16%    59    16     0      75    16    15   3440   4292     0     0     2s  99%  59%  H   59%      0     0     0     0     0     0
10%    34    28     0      62    89    20   2484      8     0     0     3   98%   0%  -   54%      0     0     0     0     0     0
31%    27    29     0      56   231    64   2384      0     0     0     3   99%   0%  -   50%      0     0     0     0     0     0
14%    39    30     0      69   251    35   2216     24     0     0     2s  99%   0%  -   49%      0     0     0     0     0     0
  9%    30    28     0      58    35    11   2176      0     0     0     2s  99%   0%  -   45%      0     0     0     0     0     0
10%    42    29     0      71    73    14   2616      0     0     0     2s  98%   0%  -   55%      0     0     0     0     0     0
13%    52    21     0      73    32    45   2448     32     0     0     2s  98%   9%  Hn  49%      0     0     0     0     0     0
10%    20    29     0      49    22     9   3068   3580     0     0     3   98%  37%  :   44%      0     0     0     0     0     0
  8%    28    17     0      45    19     8   1888      0     0     0     3   99%   0%  -   46%      0     0     0     0     0     0
  8%    14    28     0      42    12    76   2044     24     0     0     2s  99%   0%  -   40%      0     0     0     0     0     0
  8%    19    29     0      48    11     8   2156      0     0     0     2s  98%   0%  -   47%      0     0     0     0     0     0
  7%     9    30     0      39   112    11   1672      8     0     0     2s  99%   0%  -   37%      0     0     0     0     0     0
  9%    29    28     0      57   252    15   2124     24     0     0     2s  99%   0%  -   47%      0     0     0     0     0     0
19%    93   125     0     218   921  6171   8324      0     0     0     3   87%   0%  -   47%      0     0     0     0     0     0
28%     6   143     0     149   209  7064   7832      0     0     0     3   94%   0%  -   55%      0     0     0     0     0     0
30%    38    14     0      52    24    32   3952   4364     0     0     2s  98%  58%  Hs  79%      0     0     0     0     0     0
10%    25    16     0      41    22     9   2260   1128     0     0     2s  99%  20%  :   48%      0     0     0     0     0     0
  9%    30    28     0      58   204    17   1964      0     0     0     3s  99%   0%  -   42%      0     0     0     0     0     0
  9%    16    28     0      44   152    12   2336     24     0     0     4s  98%   0%  -   48%      0     0     0     0     0     0

And here's a grab from the "slow" controller where we are having trouble.

CPU   NFS  CIFS  HTTP   Total    Net kB/s   Disk kB/s     Tape kB/s Cache Cache  CP   CP Disk    FCP iSCSI   FCP  kB/s iSCSI  kB/s
                                  in   out   read  write  read write   age   hit time  ty util                 in   out    in   out
27%    42   592     0     634   615  1088   4800   9396     0     0     2   96%  95%  Hf  94%      0     0     0     0     0     0
22%   270   157     0     427  1789   960   3856    956     0     0     2   98%  89%  :  100%      0     0     0     0     0     0
14%    58    56     0     114   361   136   2900      0     0     0     2s  98%   0%  -   74%      0     0     0     0     0     0
14%    99    79     0     178   612   353   2952      8     0     0     2s  96%   0%  -   66%      0     0     0     0     0     0
13%    42   108     0     150   243   232   2648     24     0     0     2s  98%   0%  -   76%      0     0     0     0     0     0
12%    34   128     0     162   331   282   2432      0     0     0     2s  97%   0%  -   58%      0     0     0     0     0     0
12%    74    92     0     166   130   258   2424      0     0     0     2s  98%   0%  -   65%      0     0     0     0     0     0
21%    25   124     0     149   354   241   4424   5644     0     0     2   97%  15%  Hn  90%      0     0     0     0     0     0
18%    27    81     0     108   194   214   3976   4716     0     0     2   98% 100%  :v  86%      0     0     0     0     0     0
16%    27    61     0      88   193   257   3017     24     0     0     2s  98%   6%  :   63%      0     0     0     0     0     0
25%    42    68     0     110   339   380   2480     24     0     0     2s  96%   0%  -   51%      0     0     0     0     0     0
23%   169    53     0     222   381   594   3760      0     0     0     2s  97%   0%  -   74%      0     0     0     0     0     0
18%   129    72     0     201  1173   740   4048      0     0     0     2   96%   0%  -   80%      0     0     0     0     0     0
13%    58   109     0     167   344  1149   3888     32     0     0     2   94%   0%  -   56%      0     0     0     0     0     0
13%    24    86     0     110   139   473   2772      0     0     0     2s  97%   0%  -   60%      0     0     0     0     0     0
14%    35   116     0     151   392   405   3112      0     0     0     2s  97%   0%  -   67%      0     0     0     0     0     0
24%    49    94     0     143   360   284   4480   7052     0     0     2s  98%  89%  Hs  89%      0     0     0     0     0     0
16%    24    59     0      83   194   117   3560   3296     0     0     2   97% 100%  :f  90%      0     0     0     0     0     0

We also have a third filer, which is currently being used as a SnapMirror destination (backup/DR) - no active volumes are provisioned on it. I thought it would be worth including as a control. The incoming SnapMirror updates run once per hour, and SnapMirror was idle when this measurement was taken.

CPU   NFS  CIFS  HTTP   Total    Net kB/s   Disk kB/s     Tape kB/s Cache Cache  CP   CP Disk    FCP iSCSI   FCP  kB/s iSCSI  kB/s
                                  in   out   read  write  read write   age   hit time  ty util                 in   out    in   out
19%     0     0     0       0     1     0   3460     80     0     0    37s  98%  40%  :   77%      0     0     0     0     0     0
20%     0     0     0       0     2     2   3356      0     0     0    37s  98%   0%  -   81%      0     0     0     0     0     0
17%     0     0     0       0     2     2   3244      0     0     0    38s  97%   0%  -   77%      0     0     0     0     0     0
26%     0     0     0       0    11    21   3288      0     0     0    39s  97%   0%  -   78%      0     0     0     0     0     0
19%     0     0     0       0     1     0   3152      0     0     0    38s  98%   0%  -   74%      0     0     0     0     0     0
25%     0     0     0       0     5    47   3056      0     0     0    39s  98%   0%  -   80%      0     0     0     0     0     0
19%     0     0     0       0     1     1   2932      0     0     0    39s  98%   0%  -   72%      0     0     0     0     0     0
15%     0     0     0       0     1     0   2936      0     0     0     2s  97%   0%  -   74%      0     0     0     0     0     0
15%     0     0     0       0     1     0   2913      0     0     0    39s  97%   0%  -   73%      0     0     0     0     0     0
17%     0     0     0       0     1     0   3300      0     0     0     2s  97%   0%  -   80%      0     0     0     0     0     0
19%     0     0     0       0     2    20   3428    704     0     0    40s  98%  26%  T   76%      0     0     0     0     0     0
19%     0     0     0       0     1     0   3356      0     0     0    41s  98%   0%  -   83%      0     0     0     0     0     0
21%     0     0     0       0     1     0   3448      0     0     0    41s  98%   0%  -   76%      0     0     0     0     0     0
18%     0     0     0       0     1     0   3560      0     0     0    43s  98%   0%  -   78%      0     0     0     0     0     0
17%     0     0     0       0     1     0   3368      0     0     0    42s  97%   0%  -   77%      0     0     0     0     0     0
18%     0     0     0       0     1     0   3516      0     0     0    43s  98%   0%  -   73%      0     0     0     0     0     0
18%     0     0     0       0     3     3   3708      0     0     0     2s  97%   0%  -   74%      0     0     0     0     0     0
25%     0     0     0       0     5    47   3396      0     0     0    44s  98%   0%  -   75%      0     0     0     0     0     0
16%     0     0     0       0     1     0   2705      0     0     0    46s  98%   0%  -   60%      0     0     0     0     0     0
19%     0     0     0       0     1     0   3464      0     0     0     2s  99%   0%  -   73%      0     0     0     0     0     0

It does look as if the slower controller is busier with operations, particularly CIFS operations - which leads me to believe that something CIFS-related is doing a lot of random I/O. Do you agree, and if so, can you suggest how I can profile the volumes to identify this issue?

I'm still a bit unsure about drawing too many conclusions here:

- why is there constant disk read activity of around 2-4 MB/sec going on that apparently is not related to data being shipped out? I would have thought that disk reads would correspond to net out. I can understand why disk write shows spikes, because that's the writes logged in NVRAM being committed to disk at the consistency points.

- why are there disk reads going on on a filer which is doing little or nothing in terms of IOPS? And, accordingly, why is the cache age periodically low (indicating cache churn)?

- why do those disk reads of 2-4 MB/sec appear to be sufficient to send the disk utilization levels to the upper end of the scale? (some rough arithmetic on this below)

I know the FAS2020 is not a speed demon, but it is very surprising indeed that it is getting stuck at 4-5 MB/sec of throughput. There must be something going on here beyond a simple question of spindles.
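A rough back-of-envelope calculation on that last question (hedging heavily, since sysstat doesn't show the read size and I'm assuming small random reads of around 4 kB on 7,200 rpm drives): 2,500 kB/sec of reads would be roughly 600 read ops/sec, or about 70 ops/sec per spindle across 9 data disks, which is close to what a single 7,200 rpm drive can sustain for random I/O. So if the reads really are small and random, a few MB/sec pushing disk util towards 100% might not be unreasonable - whereas the same throughput in large sequential reads would barely register.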

6 Replies

andrc

TBH your best bet with performance issues is to open a case with NetApp GSC, run Perfstat (http://now.netapp.com/NOW/download/tools/perfstat/) and send them the output. They have some tools that can trawl through the Perfstat output and come up with possible causes and recommendations.
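If it helps, running it from an admin host usually looks something like the below. I'm quoting the script name and flags from memory, so check the README that ships with the download - it varies between Perfstat versions:

perfstat.sh -f <filer-hostname> -t 5 -i 12 > perfstat.out

As far as I remember, -f names the controller to collect from, -t is the minutes per iteration and -i the number of iterations, with everything captured into one big text file you can attach to the case. <filer-hostname> and the output filename are just placeholders.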

Sysstat can be a good tool for highlighting obvious bottlenecks, but it's not perfect and can sometimes be misleading. I wouldn't worry too much about what you see there unless you are actually experiencing a performance issue.

peter_lehmann

Hi

Something is happening on the disk backend... Find out what it is, and the problem could go away.

If you have a system with this little load and the disk util column is not dropping, then something is running in the background (dedupe, scrub, snapshots, snapmirror, snapvault, ndmp, dump, etc.).
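Off the top of my head, the status commands to check for each of those would be something like the following (7-Mode syntax, from memory, so double-check against your release):

sis status          (dedupe)
aggr scrub status   (RAID scrubs)
snap list           (snapshots, per volume)
snapmirror status   (mirror transfers)
snapvault status    (vault transfers)
ndmpd status        (NDMP sessions)
backup status       (dumps)

If one of those shows something active while the disk util column is pegged, that is probably your culprit.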

Peter

rmharwood

To me, those disk read and write operations and that network I/O do not look large enough to indicate that the filer is busy servicing CIFS. I also don't think you have dedupe or snapvault or ndmp running, as these would show large disk and network I/O. I would agree with the earlier poster that a perfstat collection would help NetApp determine what's going on. I have seen times when deleting large files or snapshots has made a filer very busy with disk I/O when nothing else is seemingly going on.
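If you do want to sanity-check the CIFS side first, something like the following should help (7-Mode syntax from memory, so treat it as a rough sketch):

cifs stat
options cifs.per_client_stats.enable on
cifs top

cifs stat shows the mix of CIFS operation types since the counters were last reset, and cifs top should show which clients are generating most of them - although I believe the per-client stats option has to be switched on before cifs top reports anything useful.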

Is it just write performance that is bad or is it affecting reads also?

If you have one filer acting OK and the other not, I would suggest looking at network stats as well. Check the output of tools like "netdiag" and "ifstat" to see if there are any network receive or transmit errors. Mismatches in network speed/duplex or flow control can also have a very detrimental effect.
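For example (hedging a little, as the exact output varies between ONTAP versions):

ifstat -a
netdiag
ifconfig -a

ifstat -a gives per-interface receive/transmit counters including errors, netdiag runs its own set of network sanity checks, and ifconfig -a lets you confirm the negotiated speed/duplex and flow control settings against what the switch side reports.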

Hope this helps somewhat...

Richard

brendanheading

Thank you to everyone for your replies.

Aside from normal storage service (which is all CIFS or NFS; we aren't using block right now), we do not have SnapVault or NDMP at all (yet). We recently deployed a second FAS and we are doing hourly Volume SnapMirror (VSM) to it. As you can see from the output above, that FAS is almost completely idle.

Richard - I can't say exactly whether it is reads or writes that are mainly affected; I think it's both (not very helpful, I know). The tests that we have done so far are not terribly scientific; the one where I demonstrated a real problem was performing a "git clone" of a local copy of the Linux kernel. It would take 20 minutes on the OK controller and 40 minutes on the bad controller. The git clone operation writes out a big 600MB file (the metadata database) and then walks over that file to build indexes. I think I might be best off running some tests with bonnie++ or something so that we can measure random vs sequential reads/writes of various sizes.
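For what it's worth, the sort of bonnie++ run I have in mind would be something like the following, executed from an NFS client against a scratch directory on each controller (the mount point and user are just placeholders):

bonnie++ -d /mnt/filer_test -s 4g -n 0 -u nobody

where -d is the target directory, -s the file size used for the sequential read/write and seek tests (it should be at least twice the client's RAM so the client page cache doesn't flatter the numbers), -n 0 skips the small-file creation tests, and -u is the user to run as when started as root. I'd be glad of any suggestions for better parameters.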

The consensus seems to be that we need to get NetApp to examine the perfstat output, so I think we will raise a ticket before we do anything else.

Thanks for all your help again guys. If we find anything interesting we'll reply back.

vmsjaak13

Perhaps the "slow" filer is busy with "deswizzling" or "container block reclamation". That could explain the disk usage.

priv set advanced

wafl scan status
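And when you're done looking, drop back to the normal privilege level with:

priv set admin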

You've told us you're snapmirroring, so you will probably see ONTAP doing that on those volumes.

Especially on small filers and/or small aggregates, I don't recommend snapmirroring many volumes or having a short snapmirror interval.

Regards,

Niek

brendanheading

vmsjaak, I thought that deswizzling was an operation observed on a snapmirror destination? There are no snapmirror destination volumes in place on the slow filer. I will check though, thank you for the pointer.

We have about 20 VSM jobs going across the two controllers. We didn't notice any immediate impact before or after we implemented VSM (except when it was doing the initial seeding of course). These problems are fairly recent and appear to predate the VSM implementation.
