ONTAP Discussions

cDOT 8.2.3 - Difference between statistics periodic for a volume vs. the controller

protocult

I'm comparing my volume's average total ops against my controller's average total ops (output below). There's nothing else running on the controller, no SnapMirror or any other jobs, yet the controller's IOPS are almost twice the volume's. If nothing else is running on the controller, the total ops should be the same, or at least close, for both. Please help me understand why they differ and what I'm missing.

 

From the first command, my average total ops = 9778 (stats run for the controller).

From the second command, my average total ops = 5193 (stats run for a volume).
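
For reference, here's a quick recompute of both summary lines and the ratio (plain Python, values hand-copied from the two outputs pasted below, nothing pulled from the cluster):

# Recompute the CLI summary lines from the pasted samples.
node_total_ops = [10738, 8975, 10211, 10930, 15516, 10894,
                  4951, 6452, 10637, 9554, 8701]   # 11 node samples
vol_total_ops = [5815, 4096, 6846, 5880, 6823, 3855,
                 3364, 4187, 5879]                 # 9 volume samples

node_avg = sum(node_total_ops) / len(node_total_ops)
vol_avg = sum(vol_total_ops) / len(vol_total_ops)

print(round(node_avg))               # 9778, matches the CLI summary
print(round(vol_avg))                # 5194; the CLI summary truncates to 5193
print(round(node_avg / vol_avg, 2))  # 1.88 -- almost double, as described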

nas01::> statistics periodic -object node -instance node -counter "" nas01-ctrl2

nas01-ctrl2: node.node: 5/2/2015 10:25:05
  cpu    total                   data     data     data cluster  cluster  cluster     disk     disk
 busy      ops  nfs-ops cifs-ops busy     recv     sent    busy     recv     sent     read    write
 ---- -------- -------- -------- ---- -------- -------- ------- -------- -------- -------- --------
  30%    10738    10738        0   3%   44.9MB   16.2MB      0%   1.21MB    299KB   17.9MB   72.2MB
  33%     8975     8975        0   3%   38.7MB   17.6MB      0%    945KB    214KB   27.2MB    126MB
  32%    10211    10211        0   5%   69.5MB   16.0MB      0%    545KB    210KB   21.1MB   40.3MB
  36%    10930    10930        0   4%   57.2MB   41.5MB      0%   1.00MB    302KB   37.6MB    164MB
  36%    15516    15516        0   5%   68.2MB   22.5MB      0%    954KB    218KB   37.9MB   51.2MB
  31%    10894    10894        0   5%   69.9MB   15.1MB      0%   1.69MB    218KB   13.2MB    122MB
  23%     4951     4951        0   1%   17.7MB   5.39MB      0%   1.42MB    222KB   10.6MB   15.7KB
  30%     6452     6452        0   2%   35.1MB   4.14MB      0%    613KB    212KB   24.2MB    140MB
  29%    10637    10637        0   2%   30.8MB   9.27MB      0%    709KB    211KB   18.3MB   19.3MB
  35%     9554     9554        0   3%   39.4MB   10.4MB      0%    780KB    211KB   20.9MB   84.4MB
  28%     8701     8701        0   3%   46.1MB   28.1MB      0%    582KB    179KB   21.5MB   69.1MB
nas01-ctrl2: node.node: 5/2/2015 10:25:28
  cpu    total                   data     data     data cluster  cluster  cluster     disk     disk
 busy      ops  nfs-ops cifs-ops busy     recv     sent    busy     recv     sent     read    write
 ---- -------- -------- -------- ---- -------- -------- ------- -------- -------- -------- --------
Minimums:
  23%     4951     4951        0   1%   17.7MB   4.14MB      0%    545KB    179KB   10.6MB   15.7KB
Averages for 11 samples:
  31%     9778     9778        0   3%   47.0MB   16.9MB      0%    962KB    227KB   22.8MB   80.9MB
Maximums:
  36%    15516    15516        0   5%   69.9MB   41.5MB      0%   1.69MB    302KB   37.9MB    164MB

 

nas01::> statistics periodic -object volume -instance devqa_vol -counter total_ops

nas01: volume.devqa_vol: 5/2/2015 10:25:06
    total
      ops
 --------
     5815
     4096
     6846
     5880
     6823
     3855
     3364
     4187
     5879
phx-e2-nas01: volume.devqa_vol: 5/2/2015 10:25:26
    total
      ops
 --------
Minimums:
     3364
Averages for 9 samples:
     5193
Maximums:
     6846

 

Cheers

 

1 ACCEPTED SOLUTION

--John--

TR-4211: NetApp Storage Performance Primer might help.

 

Besides serving operating system functions for Data ONTAP, the memory in a NetApp controller also acts as a cache. Incoming writes are coalesced in main memory prior to being written to disk. Memory is also used as a read cache to provide extremely fast access time to recently read data.
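
As a rough illustration of the coalescing idea (plain Python, nothing from Data ONTAP itself), repeated client writes to the same block collapse into a single eventual disk write:

# Conceptual sketch only: overwrites of the same block are merged
# in memory, so many incoming ops can become one disk write.
class WriteCache:
    def __init__(self):
        self.dirty = {}            # block number -> latest data

    def write(self, block, data):
        self.dirty[block] = data   # rewriting a block replaces the
                                   # buffered copy; no extra disk I/O

    def flush(self):
        writes = list(self.dirty.items())
        self.dirty.clear()
        return writes              # one disk write per dirty block

cache = WriteCache()
for i in range(5):
    cache.write(block=7, data="version %d" % i)  # 5 client writes
print(len(cache.flush()))                        # -> 1 disk write

In ONTAP those batched flushes happen at consistency points, which is one reason incoming op counts and back-end disk writes don't line up one to one.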

 

Next, consider how data is written to the storage system. For most storage systems, writes must be placed into a persistent and stable location prior to acknowledging to the client or host that the write was successful. Waiting for the storage system to write an operation to disk for every write could introduce significant latency.

 

To solve this problem, NetApp storage systems use battery-backed RAM to create nonvolatile RAM (NVRAM) to log incoming writes. When controllers are configured in high-availability (HA) pairs, half of the NVRAM is used to mirror the remote partner node’s log, while the other half is used for logging local writes.
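
A toy model of that logging scheme (hypothetical classes, not ONTAP internals): the client is acknowledged once the write sits in local NVRAM and in the partner's mirror half, without waiting on disk:

# Toy model of NVRAM logging in an HA pair (hypothetical structure).
# Each node's NVRAM is split: one half logs local writes, the other
# half mirrors the partner's log.
class Node:
    def __init__(self, name):
        self.name = name
        self.local_log = []      # half of NVRAM: this node's writes
        self.partner_log = []    # other half: partner's mirrored writes
        self.partner = None

    def write(self, op):
        self.local_log.append(op)            # log locally...
        self.partner.partner_log.append(op)  # ...mirror to the partner...
        return "ack"                         # ...then acknowledge; the
                                             # disk write happens later

ctrl1, ctrl2 = Node("ctrl1"), Node("ctrl2")
ctrl1.partner, ctrl2.partner = ctrl2, ctrl1

ctrl1.write("write block 7")
print(ctrl2.partner_log)   # ['write block 7'] -- survives a ctrl1 failure

The point of the split is takeover: if one node fails, its partner still holds the mirrored log and can replay those writes.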
