ONTAP Hardware

Slow general performance - troubleshooting a NetApp filer

ZELJKO_MIL

Greetings people.

One of my filers runs NetApp Release 8.0.3P3 7-Mode; it is a FAS2040.

The filer's capacity is now 98% full.

I get periodic slow response from the CIFS and NFS clients.

What can I do in general to troubleshoot this?

Thanks in advance.


JGPSHNTAP

Ok, let's tackle the first issue...

If your aggregate really is at 98%, that's obviously going to affect performance.

You are also running code that should be updated if possible.

What is your disk layout? Are you disk bound?

priv set diag

statit -b

Wait 9 seconds during peak workload

statit -e

That will show you what the disk utilizations (ut%) are.

Also, sysstat -m 1
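For reference, the whole collection sequence would look roughly like this (a minimal sketch; the wait should cover a representative busy window):

priv set diag        # enter diagnostic privilege level
statit -b            # begin collecting per-disk statistics
                     # ...wait through the peak period you want to measure...
statit -e            # end collection and print the report
sysstat -m 1         # per-CPU utilization at one-second intervals (Ctrl-C to stop)
priv set admin       # drop back to admin privilege

In the statit disk section, the ut% column is the one to watch; a handful of disks sitting near 100% while the rest idle points at a disk-bound or badly balanced aggregate.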

ZELJKO_MIL

Thanks for the fast reply.

To answer your questions: the disks are organized, or as you say bound, into one aggregate (aggr0). I know that is not well organized from a performance standpoint, but it was set up before my time.

As for the priv statistics, I will post them at peak time, along with the CPU stats.

What will I learn from the stats?

I also have one Linux CentOS NFS client that always gets only 1.2 MB/s copy speed, while the others get 80 MB/s, which is OK. No matter what I do on the client, it stays the same.

During peak hours, some clients can open the CIFS shares normally and some cannot, or it is really slow.

I also have an HA pair, and another filer with SnapMirror. Those are working OK, but the primary one gets slow.

I have now managed to free up some space, and will free up more, but space is always the problem.
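To see where the space is actually going before I delete more, I am checking roughly the following (standard 7-Mode commands, as far as I know):

df -A aggr0          # aggregate usage
df -h                # per-volume usage
snap list -A aggr0   # aggregate-level snapshots holding space
snap list            # volume snapshots
snap reserve         # snapshot reserve per volume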

What I wanted to do is troubleshoot and see if something else is impacting performance.

Thanks in advance.

ZELJKO_MIL

Another question.

Should I update Data ONTAP? Can I do it painlessly without NetApp support?

What is the downtime, and what would I gain with the newer version?

P.S. I still cannot see why this particular client (the CentOS one) is limited to 1.2 MB/s transfers to the NFS shares, while on the other filers the speed is normal (80-90 MB/s).

All the other workstation clients have normal speed there, with the same NIC cards, same OS, and same settings.

Thanks in advance.

JGPSHNTAP

Updating NetApp ONTAP is as easy as, well... anything.

I can't comment until I know how big your aggr is, how many disks you have, and some stats...

ZELJKO_MIL

JGPSHNTAP wrote:

Updating NetApp ONTAP is as easy as, well... anything.

I can't comment until I know how big your aggr is, how many disks you have, and some stats...

Thanks for the Support.

I have two FAS2040 Heads.

These heads have three DS4243 shelves with 24x450GB drives each.

The two heads are configured as an HA pair, and I have another FAS2040 head with SnapVault configured for some of the volumes. This third head has 24x1TB drives.

Total space assigned to aggr0 is 19 TB.

Here are some stats:

The CPU is not yet at peak:

ANY  AVG  CPU0 CPU1

20%  13%   13%  12%

17%  11%   12%  11%

17%  11%   13%  10%

23%  14%   15%  14%

21%  13%   13%  14%

22%  14%   14%  14%

22%  14%   14%  15%

Hostname: filer01    Memory: 2948 MB

  NetApp Release 8.0.3P3 7-Mode: Tue Jul  3 22:28:59 PDT 2012

    <5O>

  Start time: Thu Jul 10 09:10:27 CEST 2014

                       CPU Statistics

      13.104113 time (seconds)       100 %

       3.728640 system time           28 %

       0.189237 rupt time              1 %   (99924 rupts x 2 usec/rupt)

       3.539403 non-rupt system time  27 %

      22.479584 idle time            172 %

       0.393873 time in CP             3 %   100 %

       0.006789 rupt time in CP                2 %   (3255 rupts x 2 usec/rupt)

                       Multiprocessor Statistics (per second)

                          cpu0       cpu1      total

sk switches           20634.90   22372.59   43007.49

hard switches          9150.71   13200.59   22351.30

domain switches         777.77     455.89    1233.66

CP rupts                150.03      98.37     248.40

nonCP rupts            5076.96    2300.04    7377.00

IPI rupts                 0.00       0.00       0.00

grab kahuna               0.31       0.00       0.31

grab w_xcleaner          14.50       0.00      14.50

grab kahuna usec          5.88       0.00       5.88

grab w_xcleaner usec   1210.61       0.00    1210.61

CP rupt usec            369.43     148.58     518.08

nonCP rupt usec       11795.99    2126.89   13922.96

idle                 858542.05  856918.21 1715460.25

kahuna                73923.43    7077.78   81001.29

storage                8577.61    5161.59   13739.27

exempt                16320.22   13621.83   29942.13

raid                   9204.29   16599.60   25803.96

target                   10.23       8.55      18.77

netcache                  0.00       0.00       0.00

netcache2                 0.00       0.00       0.00

cifs                   9056.39    6552.45   15608.92

wafl_exempt               0.00       0.00       0.00

wafl_xcleaner             0.00       0.00       0.00

sm_exempt                22.82      33.65      56.47

cluster                   0.00       0.00       0.00

protocol                  0.00       0.00       0.00

nwk_exclusive             0.00       0.00       0.00

nwk_exempt                0.00       0.00       0.00

nwk_legacy             7955.14   86037.87   93993.08

nwk_ctx1                  0.00       0.00       0.00

nwk_ctx2                  0.00       0.00       0.00

nwk_ctx3                  0.00       0.00       0.00

nwk_ctx4                  0.00       0.00       0.00

hostOS                 4222.03    5712.40    9934.51

       2.865145 seconds with one or more CPUs active   ( 22%)

       2.286771 seconds with one CPU active            ( 17%)

       0.578374 seconds with both CPUs active          (  4%)

                       Domain Utilization of Shared Domains (per second)

      0.00 idle                              0.00 kahuna

      0.00 storage                           0.00 exempt

      0.00 raid                              0.00 target

      0.00 netcache                          0.00 netcache2

      0.00 cifs                              0.00 wafl_exempt

      0.00 wafl_xcleaner                     0.00 sm_exempt

      0.00 cluster                           0.00 protocol

      0.00 nwk_exclusive                     0.00 nwk_exempt

      0.00 nwk_legacy                        0.00 nwk_ctx1

      0.00 nwk_ctx2                          0.00 nwk_ctx3

      0.00 nwk_ctx4                          0.00 hostOS

                       Miscellaneous Statistics (per second)

  22351.30 hard context switches            77.00 NFS operations

   1022.12 CIFS operations                   0.00 HTTP operations

    906.59 network KB received           33831.29 network KB transmitted

  33001.85 disk KB read                    439.86 disk KB written

      9.92 NVRAM KB written                  0.00 nolog KB written

   8144.01 WAFL bufs given to clients        0.00 checksum cache hits  (   0%)

      1.68 no checksum - partial buffer      0.00 FCP operations

      0.00 iSCSI operations

                       WAFL Statistics (per second)

     28.01 name cache hits      (  75%)      9.31 name cache misses    (  25%)

  35474.13 buf hash hits        (  91%)   3675.03 buf hash misses      (   9%)

   2324.16 inode cache hits     ( 100%)      0.00 inode cache misses   (   0%)

   9401.86 buf cache hits       ( 100%)     41.28 buf cache misses     (   0%)

      3.13 blocks read                    8171.25 blocks read-ahead

    215.73 chains read-ahead                 2.14 dummy reads

   8121.11 blocks speculative read-ahead     80.36 blocks written

      1.37 stripes written                   0.00 blocks over-written

      0.15 wafl_timer generated CP           0.00 snapshot generated CP

      0.00 wafl_avail_bufs generated CP      0.00 dirty_blk_cnt generated CP

      0.00 full NV-log generated CP          0.00 back-to-back CP

      0.00 flush generated CP                0.00 sync generated CP

      0.00 deferred back-to-back CP          0.00 container-indirect-pin CP

      0.00 low mbufs generated CP            0.00 low datavecs generated CP

   1770.59 non-restart messages              0.61 IOWAIT suspends

      0.00 next nvlog nearly full msecs      0.00 dirty buffer susp msecs

      0.00 nvlog full susp msecs           521728 buffers

                       RAID Statistics (per second)

     23.43 xors                              0.00 long dispatches [0]

      0.00 long consumed [0]                 0.00 long consumed hipri [0]

      0.00 long low priority [0]             0.00 long high priority [0]

      0.00 long monitor tics [0]             0.00 long monitor clears [0]

      0.00 long dispatches [1]               0.00 long consumed [1]

      0.00 long consumed hipri [1]           0.00 long low priority [1]

      0.00 long high priority [1]            0.00 long monitor tics [1]

      0.00 long monitor clears [1]             18 max batch

      1.98 blocked mode xor                  9.00 timed mode xor

      0.00 fast adjustments                  0.00 slow adjustments

         0 avg batch start                      0 avg stripe/msec

    250.76 checksum dispatches           27197.34 checksum consumed

      1.68 tetrises written                  0.00 master tetrises

      0.00 slave tetrises                   13.28 stripes written

     10.15 partial stripes                   3.13 full stripes

     80.66 blocks written                   37.93 blocks read

      0.38 1 blocks per stripe size 2        1.76 2 blocks per stripe size 2

      4.96 1 blocks per stripe size 18       0.46 2 blocks per stripe size 18

      0.53 3 blocks per stripe size 18       0.46 4 blocks per stripe size 18

      0.31 5 blocks per stripe size 18       0.08 6 blocks per stripe size 18

      0.08 7 blocks per stripe size 18       0.15 8 blocks per stripe size 18

      0.08 9 blocks per stripe size 18       0.23 10 blocks per stripe size 18

      0.08 12 blocks per stripe size 18      0.53 13 blocks per stripe size 18

      0.38 14 blocks per stripe size 18      0.76 15 blocks per stripe size 18

      0.31 16 blocks per stripe size 18      0.38 17 blocks per stripe size 18

      1.37 18 blocks per stripe size 18

                       Network Interface Statistics (per second)

iface    side      bytes    packets multicasts     errors collisions  pkt drops

e0a      recv   11409.85      24.27       0.00       0.00                  0.00

         xmit 34637698.41   23371.44       0.08       0.00       0.00

e0b      recv    3638.55      16.71       0.00       0.00                  0.00

         xmit    3969.59      18.77       0.08       0.00       0.00

e0c      recv    2545.31      11.14       0.00       0.00                  0.00

         xmit    1009.76       1.37       0.08       0.00       0.00

e0d      recv  910815.56   12439.68       0.00       0.00                  0.00

         xmit     564.63       1.37       0.08       0.00       0.00

e0P      recv       0.00       0.00       0.00       0.00                  0.00

         xmit       0.00       0.00       0.00       0.00       0.00

vh       recv       0.00       0.00       0.00       0.00                  0.00

         xmit       0.00       0.00       0.00       0.00       0.00

lacp1    recv  935757.19   12589.02       9.16       0.00                  0.00

         xmit 34914103.61   23579.01       0.31       0.00       0.00

                       Disk Statistics (per second)

        ut% is the percent of time the disk was busy.

        xfers is the number of data-transfer commands issued per second.

        xfers = ureads + writes + cpreads + greads + gwrites

        chain is the average number of 4K blocks per command.

        usecs is the average disk round-trip time per 4K block.

disk             ut%  xfers  ureads--chain-usecs writes--chain-usecs cpreads-chain-usecs greads--chain-usecs gwrites-chain-usecs

/aggr0/plex0/rg0:

0d.01.6            1   3.13    0.61   1.00   125   1.98   4.31   393   0.53  10.00    71   0.00   ....     .   0.00   ....     .

0d.01.7            1   3.43    0.61   1.00     0   2.29   4.00   300   0.53  10.00   129   0.00   ....     .   0.00   ....     .

0d.01.8            2  10.61    9.54  26.86    74   0.69   3.11   571   0.38   2.60  1615   0.00   ....     .   0.00   ....     .

0d.02.10           2   7.40    6.41  39.63    59   0.46   4.00  1083   0.53   2.14  1867   0.00   ....     .   0.00   ....     .

0d.01.0            2   8.62    8.09  31.80    56   0.23   9.00   333   0.31   6.75   556   0.00   ....     .   0.00   ....     .

0d.02.11           2   8.17    7.63  32.42    60   0.31   7.00   714   0.23   3.33  1100   0.00   ....     .   0.00   ....     .

0d.01.1            1   8.40    7.86  31.90    48   0.31   6.50   846   0.23   3.00  1556   0.00   ....     .   0.00   ....     .

0d.02.12           2   7.63    7.10  33.63    67   0.31   6.50   615   0.23   3.00  1444   0.00   ....     .   0.00   ....     .

0d.01.2            2   8.17    7.56  33.73    57   0.31   6.75   741   0.31   2.75  1364   0.00   ....     .   0.00   ....     .

0d.02.13           2   8.93    8.17  31.86    60   0.38   5.00   680   0.38   2.60  1692   0.00   ....     .   0.00   ....     .

0d.01.3            2   8.17    7.56  33.73    63   0.31   6.75   741   0.31   2.25  2111   0.00   ....     .   0.00   ....     .

0d.02.14           2  10.53    9.77  25.56    71   0.38   5.40   963   0.38   2.20  1091   0.00   ....     .   0.00   ....     .

0d.01.4            2   7.94    7.02  36.55    54   0.46   4.50  1222   0.46   2.17  1615   0.00   ....     .   0.00   ....     .

0d.02.15           2   8.17    7.56  35.10    57   0.15  32.00   109   0.46  17.83   271   0.00   ....     .   0.00   ....     .

0d.01.5            2   7.86    7.33  35.15    63   0.23   9.00   296   0.31   6.50   692   0.00   ....     .   0.00   ....     .

0d.02.16           2   7.56    7.10  36.60    55   0.31   6.75   963   0.15   4.00  1000   0.00   ....     .   0.00   ....     .

0d.01.9            2   8.85    8.24  31.17    62   0.31   6.25   800   0.31   2.50  1400   0.00   ....     .   0.00   ....     .

0d.02.17           2   9.16    8.62  29.81    73   0.23   9.00   407   0.31   5.50   727   0.00   ....     .   0.00   ....     .

0d.02.6            0   0.99    0.00   ....     .   0.53   3.71  2115   0.46   2.33   786   0.00   ....     .   0.00   ....     .

0d.02.7            0   0.46    0.00   ....     .   0.31   6.75   741   0.15   4.00  1000   0.00   ....     .   0.00   ....     .

/aggr0/plex0/rg1:

0d.01.10           0   0.53    0.00   ....     .   0.23   1.00  5333   0.31   1.00  2750   0.00   ....     .   0.00   ....     .

0d.02.18           0   0.53    0.00   ....     .   0.23   1.00  6000   0.31   1.00  2000   0.00   ....     .   0.00   ....     .

0d.01.21           2   6.94    6.94  37.11    63   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.02.19           1   8.01    7.78  32.69    55   0.08   1.00  4000   0.15   1.00  5000   0.00   ....     .   0.00   ....     .

0d.01.12           1   6.79    6.79  37.26    54   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.02.20           1   6.72    6.72  37.99    52   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.01.13           2   8.40    8.40  31.25    60   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.02.21           1   9.16    9.16  27.88    57   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.02.22           1   8.09    8.09  31.46    46   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.01.22           1   8.40    8.40  31.16    56   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.01.15           1   7.33    7.33  35.83    55   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.02.23           1   7.48    7.48  33.97    49   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.01.16           1   8.47    8.47  30.32    57   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.01.17           1   7.71    7.71  32.36    54   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.01.18           1   8.32    8.32  30.17    51   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.01.11           1   7.56    7.56  34.03    39   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.01.20           1   8.85    8.85  28.93    49   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.01.14           1   6.11    6.11  42.24    54   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.02.9            0   0.38    0.00   ....     .   0.15   1.00  3500   0.23   1.00  4333   0.00   ....     .   0.00   ....     .

0d.01.23           0   0.00    0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

/aggr0/plex0/rg2:

0d.03.0            0   0.99    0.00   ....     .   0.46   7.50   600   0.53   4.14   793   0.00   ....     .   0.00   ....     .

0d.01.19           0   0.99    0.00   ....     .   0.46   7.50   756   0.53   4.14   690   0.00   ....     .   0.00   ....     .

0d.03.1            0   0.84    0.08   1.00  8000   0.31   6.75   852   0.46   2.00  1167   0.00   ....     .   0.00   ....     .

0d.03.2            0   0.92    0.08   1.00  6000   0.31   6.75   667   0.53   1.86   769   0.00   ....     .   0.00   ....     .

0d.03.3            0   0.69    0.00   ....     .   0.31   6.50   692   0.38   2.00   900   0.00   ....     .   0.00   ....     .

0d.03.4            0   0.76    0.08   1.00  6000   0.31   6.50   615   0.38   2.00   900   0.00   ....     .   0.00   ....     .

0d.03.5            1   1.30    0.15   1.00  9000   0.46   4.50  1000   0.69   3.11  1571   0.00   ....     .   0.00   ....     .

0d.03.6            0   0.84    0.00   ....     .   0.38   5.20   962   0.46   2.67  2125   0.00   ....     .   0.00   ....     .

0d.03.7            0   0.69    0.08   1.00  5000   0.23  10.67   375   0.38   4.40   500   0.00   ....     .   0.00   ....     .

0d.03.8            0   0.69    0.08   1.00  6000   0.31   6.75   630   0.31   5.25   571   0.00   ....     .   0.00   ....     .

0d.03.19           0   1.07    0.08   1.00  8000   0.38   5.40   667   0.61   3.00  1083   0.00   ....     .   0.00   ....     .

0d.03.10           0   0.99    0.23   1.00  4667   0.31   6.75   519   0.46   2.50   867   0.00   ....     .   0.00   ....     .

0d.03.21           0   0.69    0.08   1.00  5000   0.31   6.75   296   0.31   2.50  1500   0.00   ....     .   0.00   ....     .

0d.03.22           0   0.99    0.08   1.00  3000   0.38   5.40   630   0.53   3.14  1773   0.00   ....     .   0.00   ....     .

0d.03.12           0   1.14    0.15   1.00  6500   0.38   5.20   923   0.61   2.75  1136   0.00   ....     .   0.00   ....     .

0d.03.13           0   0.99    0.08   1.00  7000   0.38   5.60   857   0.53   3.86   778   0.00   ....     .   0.00   ....     .

0d.03.14           0   1.14    0.15   1.00  4500   0.38   5.20   769   0.61   2.50  1150   0.00   ....     .   0.00   ....     .

0d.03.15           0   0.92    0.15   1.00  4000   0.31   6.75   593   0.46   1.67  2200   0.00   ....     .   0.00   ....     .

0d.03.16           0   0.84    0.08   1.00  5000   0.31   7.00   500   0.46   2.83  1000   0.00   ....     .   0.00   ....     .

0d.03.17           0   0.76    0.00   ....     .   0.31   6.75   296   0.46   2.00  2000   0.00   ....     .   0.00   ....     .

/aggr0/plex0/rg3:

0d.03.23           0   0.23    0.00   ....     .   0.15  14.00   429   0.08  16.00   250   0.00   ....     .   0.00   ....     .

0d.03.20           0   0.23    0.00   ....     .   0.15  14.00   393   0.08  16.00   250   0.00   ....     .   0.00   ....     .

0d.03.18           0   0.38    0.08   1.00  7000   0.15  12.00   458   0.15  10.00   200   0.00   ....     .   0.00   ....     .

0d.03.11           0   0.31    0.00   ....     .   0.15  13.50   296   0.15   8.00   625   0.00   ....     .   0.00   ....     .

Aggregate statistics:

Minimum            0   0.00    0.00                0.00                0.00                0.00                0.00

Mean               1   4.50    3.89                0.23                0.23                0.00                0.00

Maximum            2  10.61    9.77                2.29                0.69                0.00                0.00

Spares and other disks:

0d.02.0            0   0.00    0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.02.1            0   0.00    0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.02.2            0   0.00    0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.02.3            0   0.00    0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.02.4            0   0.00    0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.02.5            0   0.00    0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.02.8            0   0.00    0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

0d.03.9            0   0.00    0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .   0.00   ....     .

                       FCP Statistics (per second)

      0.00 FCP Bytes recv                    0.00 FCP Bytes sent

      0.00 FCP ops

                       iSCSI Statistics (per second)

      0.00 iSCSI Bytes recv                  0.00 iSCSI Bytes xmit

      0.00 iSCSI ops

                       Interrupt Statistics (per second)

    398.50 int_1                          3177.48 Gigabit Ethernet (IRQ 4)

     11.52 Gigabit Ethernet (IRQ 5)         18.16 Gigabit Ethernet (IRQ 6)

     19.61 Gigabit Ethernet (IRQ 7)        398.50 int_8

    398.50 int_9                             0.00 RTC

      0.00 IPI                             999.99 Msec Clock

   5422.27 total

                       NVRAM Statistics (per second)

      0.00 total dma transfer KB             0.00 wafl write req data KB

      0.00 dma transactions                  0.00 dma descriptors

      0.15 waitdone preempts                 0.00 waitdone delays

      0.00 transactions not queued           5.19 transactions queued

      5.49 transactions done                 0.00 total waittime (MS)

    986.79 completion wakeups                0.38 nvdma completion wakeups

      0.23 nvdma completion waitdone         9.92 total nvlog KB

      0.00 nvlog shadow header array full      0.00 channel1 dma transfer KB

      0.00 channel1 dma transactions         0.00 channel1 dma descriptors

I would like to update Data ONTAP on all three heads; a how-to would be great.

I would also like to solve the performance issues. These statistics were not taken under heavy load, but I will send stats from peak load as well.

I still have this CentOS NFS client that gets only 1.2 MB/s transfer speed, and only against filer01. filer02 and filer03 give normal transfer speed when I mount them from this client.

I have started to free up space on the volumes.

Now aggr0 is only 93% full, and this will go lower as I clean up more data.

I have also noticed that ACP does not have full connectivity:

storage show acp

Alternate Control Path:  Enabled

Ethernet Interface:      e0P

ACP Status:              Active

ACP IP Address:          10.10.160.144

ACP Subnet:              10.10.160.0

ACP Netmask:             255.255.255.0

ACP Connectivity Status: Partial Connectivity

Shelf Module      Reset Cnt    IP Address      FW Version   Module Type  Status

----------------- ------------ --------------- ------------ ------------ -------

0d.01.A           000          10.10.160.246   01.20        IOM3         active

0d.01.B           000          10.10.160.244   01.20        IOM3         not-responding (last contact at: "Thu Jul 10 09:03:56 CEST 2014")

0d.02.A           000          10.10.160.13    01.20        IOM3         not-responding (last contact at: "Thu Jul 10 08:51:31 CEST 2014")

0d.02.B           000          10.10.160.18    01.20        IOM3         not-responding (last contact at: "Thu Jul 10 08:41:13 CEST 2014")

0d.03.A           000          10.10.160.95    01.20        IOM3         active

0d.03.B           000          10.10.160.134   01.20        IOM3         active

Or is this ACP state only temporary?

Any help is appreciated.

Thanks in advance.

JGPSHNTAP

Ok, so let's tackle this first.

If these are your filer stats under load, your filer is sleeping like a baby. It is neither CPU bound nor disk bound. As for upgrading, you need to have support on these boxes and run Upgrade Advisor from My Support.

For the HA pair, ACP is a plus; it helps with managing the shelves. For optimal cluster performance you want this at full connectivity, which it should reach if it's cabled up properly. (Google that part.)
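If you want to sanity-check ACP before re-cabling, something like this should do it (a sketch; option names may differ slightly by release):

options acp.enabled      # should report "on"
storage show acp         # per-module view, as you already posted
# modules showing "not-responding" usually mean the e0P ACP cabling chain is
# broken or loose between the controller and that IOM - reseat those cables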

I'm a little perplexed that the Linux box copies fine to two of the three heads but gets very slow speeds with the one head.

You can turn on NFS client statistics on that head, try the copy again, and see what's going on. Check your port configuration on the vif as well; make sure you don't have a duplex issue or something silly.
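Roughly what I'd run on that head (a sketch; substitute your real vif and port names, e0d/lacp1 are just taken from your statit output):

options nfs.per_client_stats.enable on   # per-client NFS statistics
nfsstat -z                               # zero the counters before the test copy
ifgrp status lacp1                       # "vif status lacp1" on older builds - member state
ifstat e0d                               # errors/collisions/drops on the member port
ifconfig -a                              # speed/duplex/flow control as negotiated

And on the CentOS box, ethtool on its NIC should confirm 1000/full on that end as well.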

ZELJKO_MIL

Thanks for the fast reply.

To further explain, the filer was not at full load, and yes, this is not much; it is sleeping.

As for upgrading, I have tried to contact local support in my country and am still waiting for an answer. Does this come free? So I cannot do this without them? The Upgrade Advisor?

I will Google why I do not have full ACP connectivity, and I will check the cables.

As for the Linux box, this is a mystery to me too. The cabling to the box is OK, and it is full duplex, 1000 Mbit. I have tried a lot of NFS settings, but it is still slow. This is for sure something silly.
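Here is roughly what I have checked on the client side so far (CentOS; the interface name, export path, and mount point are just examples from my setup):

ethtool eth0                       # reports Speed: 1000Mb/s, Duplex: Full
nfsstat -m                         # mount options actually negotiated (vers, proto, rsize, wsize)
cat /proc/mounts | grep nfs        # same information, raw

# one thing I still want to try: remount with explicit NFSv3/TCP and large
# rsize/wsize, to rule the negotiated options out:
umount /mnt/filer01
mount -t nfs -o vers=3,tcp,rsize=65536,wsize=65536 filer01:/vol/vol_data /mnt/filer01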

Could I turn on options nfs.per_client_stats.enable, and then what should I do to collect the info and post it here for further troubleshooting?

Thanks in advance.

JGPSHNTAP

A 2040 filer is pretty old, so you might not have support on it. But you should be able to go to now.netapp.com and click My Support. You can generate your own AutoSupports there as well.

Run a comparison of your NFS options on each controller to see if anything is different. What's weird is that you state it goes OK to the other node in the cluster.

Just turn on the NFS stats and see if they point anything out. No need to post them.
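For the comparison, something like this from an admin host with ssh access to both controllers is enough (a sketch; hostnames are examples):

ssh filer01 options nfs > filer01_nfs.txt
ssh filer02 options nfs > filer02_nfs.txt
diff filer01_nfs.txt filer02_nfs.txt
# the ip.* settings are worth the same treatment:
ssh filer01 options ip > filer01_ip.txt
ssh filer02 options ip > filer02_ip.txt
diff filer01_ip.txt filer02_ip.txt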

ZELJKO_MIL

Greetings.

I have turned on the per-client NFS stats and chosen to collect the stats for this Linux box.

I used dd to test the speed once more:

dd if=/dev/zero of=file.out bs=1MB count=10

10+0 records in

10+0 records out

10000000 bytes (10 MB) copied, 8.56381 s, 1.2 MB/s

And as you can see, 1.2 MB/s is too slow.
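For the next run I also want to try a larger transfer that is forced out to the filer, so the number is not affected by client-side caching (standard GNU dd flags; the path is just an example):

dd if=/dev/zero of=/mnt/filer01/file.out bs=1M count=500 conv=fdatasync
# conv=fdatasync flushes to the NFS server before dd reports the rate,
# so the MB/s reflects the network and filer rather than the client page cache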

During the copy I ran nfsstat -h 192.168.0.xxx.

Here is the output; I could not find anything fishy in it:

naqpunkt01*> nfsstat -h 192.168.0.xxx

Client: 192.168.0.xxx (xxx.intern)  ------------------------------------

Server rpc:

TCP:

calls       badcalls    nullrecv    badlen      xdrcall

375         0           0           0           0

UDP:

calls       badcalls    nullrecv    badlen      xdrcall

0           0           0           0           0

Server nfs:

calls       badcalls

375         0

Server nfs V2:

null       getattr    setattr    root       lookup     readlink   read

0 0%       0 0%       0 0%       0 0%       0 0%       0 0%       0 0%

wrcache    write      create     remove     rename     link       symlink

0 0%       0 0%       0 0%       0 0%       0 0%       0 0%       0 0%

mkdir      rmdir      readdir    statfs

0 0%       0 0%       0 0%       0 0%

Read request stats (version 2)

0-511      512-1023   1K-2047    2K-4095    4K-8191    8K-16383   16K-32767  32K-65535  64K-131071 > 131071

0          0          0          0          0          0          0          0          0          0

Write request stats (version 2)

0-511      512-1023   1K-2047    2K-4095    4K-8191    8K-16383   16K-32767  32K-65535  64K-131071 > 131071

0          0          0          0          0          0          0          0          0          0

Server nfs V3:

null       getattr    setattr    lookup     access     readlink   read

0 0%       0 0%       0 0%       0 0%       0 0%       0 0%       0 0%

write      create     mkdir      symlink    mknod      remove     rmdir

0 0%       0 0%       0 0%       0 0%       0 0%       0 0%       0 0%

rename     link       readdir    readdir+   fsstat     fsinfo     pathconf

0 0%       0 0%       0 0%       0 0%       0 0%       0 0%       0 0%

commit

0 0%

Read request stats (version 3)

0-511      512-1023   1K-2047    2K-4095    4K-8191    8K-16383   16K-32767  32K-65535  64K-131071 > 131071

0          0          0          0          0          0          0          0          0          0

Write request stats (version 3)

0-511      512-1023   1K-2047    2K-4095    4K-8191    8K-16383   16K-32767  32K-65535  64K-131071 > 131071

0          0          0          0          0          0          0          0          0          0

Server nfs V4: (375 calls, 1037 ops)

null           compound       badproc2       access         close          commit

0              375            0 0%           3 0%           1 0%           0 0%

create         delegpurge     delegret       getattr        getfh          link

0 0%           0 0%           0 0%           330 32%        1 0%           0 0%

lock           lockt          locku          lookup         lookupp        nverify

0 0%           0 0%           0 0%           0 0%           0 0%           0 0%

open           openattr       open_confirm   open_downgrade putfh          putpubfh

1 0%           0 0%           0 0%           0 0%           315 30%        0 0%

putrootfh      read           readdir        readlink       remove         rename

15 1%          0 0%           0 0%           0 0%           0 0%           0 0%

renew          restorefh      savefh         secinfo        setattr        setclntid

30 3%          0 0%           0 0%           0 0%           1 0%           15 1%

setclntid_cfm  verify         write          rlsowner

15 1%          0 0%           310 30%        0 0%

no delegation=0, read delegation=0

v4 acls set=0

Read request stats (version 4)

0-511      512-1023   1K-2047    2K-4095    4K-8191    8K-16383   16K-32767  32K-65535  64K-131071 > 131071

0          0          0          0          0          0          0          0          0          0

Write request stats (version 4)

0-511      512-1023   1K-2047    2K-4095    4K-8191    8K-16383   16K-32767  32K-65535  64K-131071 > 131071

0          0          0          0          0          0          19         291        0          0

Misaligned Read request stats

BIN-0    BIN-1    BIN-2    BIN-3    BIN-4    BIN-5    BIN-6    BIN-7

0        0        0        0        0        0        0        0

Misaligned Write request stats

BIN-0    BIN-1    BIN-2    BIN-3    BIN-4    BIN-5    BIN-6    BIN-7

291      0        0        0        0        0        0        0

tcp input flowcontrol receive=0, xmit=0

tcp input flowcontrol out, receive=0, xmit=0

Thanks for the support.

Cheers

JGPSHNTAP

I checked hwu.netapp.com and it looks like your 2040s can go to 8.1.4. I would still request Upgrade Advisor reports, even if you have to get them from the support staff, and do an upgrade.

Also, I'm not going to be able to troubleshoot from the client perspective. If the issue still persists after the upgrade and reboot, you need to open a case with NetApp support for further troubleshooting.

ZELJKO_MIL

Thanks for the information.

I have created a My Support account. So it is true, then, that I need to restart the heads after the software upgrade? OK, then I must plan a maintenance day.

As for the Linux box, is opening a case with NetApp free of charge for this kind of scenario?

Thank you so much for the support.

JGPSHNTAP

I'm not sure how critical your environment is, but in my opinion all enterprise storage units need some sort of maintenance contract.

Support is not free of charge; the forums are. But if you hit a maintenance issue on the 2040 without a contract, you are stuck, and if it's serving production data, that's a bad idea.

As for upgrades, these are non-disruptive if you do them correctly.

Basically, you copy the install file to the software directory (/etc/software) and issue:

software update (installfilename) -r

Once that is done, you do a cluster takeover and giveback on each node and voilà, you're done!
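Spelled out, the non-disruptive sequence on the HA pair looks roughly like this (a sketch; the image file name is an example, and the exact steps should come from your Upgrade Advisor plan):

# on each node, after copying the image into /etc/software:
software update 814_q_image.tgz -r   # -r installs without the automatic reboot

# then, from nodeA:
cf status      # confirm the pair is healthy and takeover is possible
cf takeover    # nodeB is taken over and reboots onto the new image
cf giveback    # once nodeB shows "waiting for giveback"
# nodeB is now on the new release; repeat takeover/giveback from nodeB to upgrade nodeA
version        # confirm the running release on both nodes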
