
What is the maximum spec information that can be used as a CIFS server for FAS2554?

nsky

Hi,

 

I'm using FAS2554 only as a CIFS server.

When I check the status with sysstat, the CPU usage is constantly at 99%, and it takes a long time to access the shared folders from client terminals.

What is the maximum spec information that can be used as a CIFS server for FAS2554?

For example, the number of CIFS operations, the number of CIFS connections, etc.

Also, please tell us about the factors that increase the CPU usage rate.

 

Regards,


Fabian1993

Hi @nsky,

 

please give us your ONTAP version. Are we talking about 7-Mode or cDOT? Do you use compression or anything else?

 

nsky

Hi, Fabian1993

 

Thank you for your reply.

The ONTAP version currently in use is 8.3P2, and it is cDOT (not 7-Mode).

 

Since support for ONTAP 8.3P2 has ended, I would like to know the specification limits for ONTAP 9.1 on the FAS2554.

 

Regards,

Fabian1993

You can update your system to ONTAP 9.8; the upgrade path is 8.3 to 9.1 to 9.3 to 9.8.

 

To check the limits for each version, go to hwu.netapp.com.

nsky

Hi, Fabian1993

 

Thank you for the fast reply.

I checked hwu.netapp.com.

In ONTAP 9.1, the "Maximum number of connected shares" item in the CIFS Cluster Limits column was 40,000.

I checked the current "connected shares" with the following command and it was about 4,800.

 

::*> cifs stat -instance ksfshfa1 -counter connected_shares

    Counter                                                     Value
    -------------------------------- --------------------------------
    connected_shares                                             4816

 

"Connected_shares" is much lower than the spec value, but the CPU usage is still at 99% and access is slow.

What is the cause of high CPU usage?

Does ONTAP 9.1 improve CIFS protocol handling so that the CPU load is lower and access is faster?

 

Regards,

 

Fabian1993

Hi @nsky,

 

please give me more details about your system:

aggregates, volumes, efficiency settings, disk type, etc.

 

 

nsky

Hi, Fabian1993

 

Thank you for your reply.

The system information I'm using is listed below.

node1::> aggr show

Aggregate     Size Available Used% State   #Vols  Nodes            RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0_nd1   4.82TB   471.8GB   90% online       2 node1-01      raid_dp,
                                                                   normal
aggr0_nd2   4.82TB   471.8GB   90% online       2 node1-02      raid_dp,
                                                                   normal
aggr1_nd1  57.83TB   24.86TB   57% online      27 node1-01      raid_dp,
                                                                   normal
aggr1_nd2  57.83TB   24.90TB   57% online      27 node1-02      raid_dp,
                                                                   normal
aggr2_nd1  57.83TB   30.69TB   47% online       2 node1-01      raid_dp,
                                                                   normal
aggr2_nd2  57.83TB   30.72TB   47% online       2 node1-02      raid_dp,
                                                                   normal
6 entries were displayed.

node1::>
node1::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
fserver1  fserver1_root aggr1_nd1   online     RW          1GB    972.3MB    5%
fserver1  nd1_alog01   aggr1_nd1    online     RW       1000GB    410.2GB   58%
fserver1  nd1_vol01_01 aggr1_nd1   online     RW        903GB    86.80GB   90%
fserver1  nd1_vol02_01 aggr1_nd1   online     RW       1.17TB    99.67GB   91%
fserver1  nd1_vol03_01 aggr1_nd1    online     RW       2.10GB     1.58GB   24%
fserver1  nd1_vol04_01 aggr1_nd1    online     RW       1.43TB    720.4GB   50%
fserver1  nd1_vol05_01  aggr1_nd1    online     RW        716GB    260.8GB   63%
fserver1  nd1_vol06_01 aggr1_nd1    online     RW       1.70TB    272.2GB   84%
fserver1  nd1_vol07_01    aggr1_nd1    online     RW        767GB    325.9GB   57%
fserver1  nd1_vol08_01 aggr1_nd1    online     RW        725GB    270.7GB   62%
fserver1  nd1_vol09_01  aggr2_nd1    online     RW      26.66TB    20.92TB   21%
fserver1  nd1_vol10_01  aggr1_nd1    online     RW        987GB    48.22GB   95%
fserver1  nd1_vol11_01  aggr1_nd1    online     RW       1.34TB    126.0GB   90%
fserver1  nd1_mksys01  aggr1_nd1    online     RW          2GB    824.0MB   59%
fserver1  nd1_mksys02  aggr1_nd1    online     RW        316GB    41.02GB   87%
fserver1  nd1_nss01  aggr1_nd1    online     RW       1.26TB    137.5GB   89%
fserver1  nd1_vol12_01 aggr1_nd1    online     RW         53MB    49.46MB    6%
fserver1  nd1_vol13_01   aggr1_nd1    online     RW       1.03TB    474.1GB   54%
fserver1  nd1_vol14_01  aggr1_nd1    online     RW       2.01TB    118.2GB   94%
fserver1  nd1_vol15_01 aggr1_nd1  online     RW       1.15TB    291.3GB   75%
fserver1  nd1_vol16_01 aggr1_nd1    online     RW       1.82TB    231.5GB   87%
fserver1  nd1_vol17_01 aggr1_nd1   online     RW       5.58TB    664.9GB   88%
fserver1  nd1_vol18_01 aggr1_nd1    online     RW        600GB    472.6GB   21%
fserver1  nd1_soft01   aggr1_nd1    online     RW        258GB     6.10GB   97%
fserver1  nd1_vol19_01   aggr1_nd1    online     RW        914GB    405.7GB   55%
fserver1  nd1_vol20_01 aggr1_nd1   online     RW       5.13TB     2.56TB   49%
fserver1  nd1_vol21_01 aggr1_nd1    online     RW       1.71TB    80.74GB   95%
fserver2  fserver2_root aggr1_nd2   online     RW          1GB    972.3MB    5%
fserver2  nd2_alog01   aggr1_nd2    online     DP       1000GB    410.2GB   58%
fserver2  nd2_vol01_01 aggr1_nd2   online     DP        903GB    86.77GB   90%
fserver2  nd2_vol02_01 aggr1_nd2   online     DP       1.17TB    99.67GB   91%
fserver2  nd2_vol03_01 aggr1_nd2    online     DP       2.10GB     1.58GB   24%
fserver2  nd2_vol04_01 aggr1_nd2    online     DP       1.43TB    720.4GB   50%
fserver2  nd2_vol05_01  aggr1_nd2    online     DP        716GB    260.8GB   63%
fserver2  nd2_vol06_01 aggr1_nd2    online     DP       1.70TB    272.5GB   84%
fserver2  nd2_vol07_01    aggr1_nd2    online     DP        767GB    325.9GB   57%
fserver2  nd2_vol08_01 aggr1_nd2    online     DP        830GB    270.7GB   62%
fserver2  nd2_vol09_01  aggr2_nd2    online     DP      26.66TB    20.92TB   21%
fserver2  nd2_vol10_01  aggr1_nd2    online     DP        987GB    48.23GB   95%
fserver2  nd2_vol11_01  aggr1_nd2    online     DP       1.34TB    126.5GB   90%
fserver2  nd2_mksys01  aggr1_nd2    online     DP          2GB    824.0MB   59%
fserver2  nd2_mksys02  aggr1_nd2    online     DP        316GB    41.02GB   87%
fserver2  nd2_nss01  aggr1_nd2    online     DP       1.26TB    137.3GB   89%
fserver2  nd2_vol12_01 aggr1_nd2    online     DP         53MB    49.46MB    6%
fserver2  nd2_vol13_01   aggr1_nd2    online     DP       1.03TB    474.1GB   54%
fserver2  nd2_vol14_01  aggr1_nd2    online     DP       2.01TB    118.2GB   94%
fserver2  nd2_vol15_01 aggr1_nd2  online     DP       1.15TB    291.3GB   75%
fserver2  nd2_vol16_01 aggr1_nd2    online     DP       1.82TB    231.6GB   87%
fserver2  nd2_vol17_01 aggr1_nd2   online     DP       5.58TB    662.4GB   88%
fserver2  nd2_vol18_01 aggr1_nd2    online     DP        600GB    472.6GB   21%
fserver2  nd2_soft01   aggr1_nd2    online     DP        258GB     5.48GB   97%
fserver2  nd2_vol19_01   aggr1_nd2    online     DP        914GB    405.7GB   55%
fserver2  nd2_vol20_01 aggr1_nd2   online     DP       5.13TB     2.56TB   49%
fserver2  nd2_vol21_01 aggr1_nd2    online     DP       1.71TB    80.68GB   95%
node1  MDV_aud_263512698d4f4d308278617488bb4cbd aggr1_nd1 online RW 300GB 284.5GB  5%
node1  MDV_aud_409d0fdc58d842a98ce6b55eaad9731e aggr0_nd1 online RW 2GB 1.90GB  5%
node1  MDV_aud_544429ce61a6426687b541b2b0e6f44a aggr2_nd2 online RW 300GB 285.0GB  5%
node1  MDV_aud_5539cb529eb4410eb28c0cc4563e7b68 aggr2_nd1 online RW 300GB 284.2GB  5%
node1  MDV_aud_ab29285a00844cdcafc499a8ca5d2fc1 aggr1_nd2 online RW 300GB 285.0GB  5%
node1  MDV_aud_c6bb6bec6a8540149a8193efc71da485 aggr0_nd2 online RW 2GB 1.90GB  5%
node1-01 vol0       aggr0_nd1    online     RW       4.33TB     4.06TB    6%
node1-02 vol0       aggr0_nd2    online     RW       4.33TB     4.10TB    5%
62 entries were displayed.

node1::>
node1::> disk show
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
1.0.0                5.35TB     0   0 FSAS    aggregate   aggr0_nd1 node1-01
1.0.1                5.35TB     0   1 FSAS    aggregate   aggr0_nd1 node1-01
1.0.2                5.35TB     0   2 FSAS    aggregate   aggr0_nd1 node1-01
1.0.3                5.35TB     0   3 FSAS    aggregate   aggr1_nd1 node1-01
1.0.4                5.35TB     0   4 FSAS    spare       Pool0     node1-01
1.0.5                5.35TB     0   5 FSAS    aggregate   aggr1_nd1 node1-01
1.0.6                5.35TB     0   6 FSAS    aggregate   aggr1_nd1 node1-01
1.0.7                5.35TB     0   7 FSAS    aggregate   aggr1_nd1 node1-01
1.0.8                5.35TB     0   8 FSAS    aggregate   aggr1_nd1 node1-01
1.0.9                5.35TB     0   9 FSAS    aggregate   aggr1_nd1 node1-01
1.0.10               5.35TB     0  10 FSAS    aggregate   aggr1_nd1 node1-01
1.0.11               5.35TB     0  11 FSAS    aggregate   aggr1_nd1 node1-01
1.0.12               5.35TB     0  12 FSAS    aggregate   aggr1_nd1 node1-01
1.0.13               5.35TB     0  13 FSAS    aggregate   aggr1_nd1 node1-01
1.0.14               5.35TB     0  14 FSAS    aggregate   aggr1_nd1 node1-01
1.0.15               5.35TB     0  15 FSAS    aggregate   aggr1_nd1 node1-01
1.0.16               5.35TB     0  16 FSAS    aggregate   aggr1_nd1 node1-01
1.0.17               5.35TB     0  17 FSAS    spare       Pool0     node1-01
1.0.18               5.35TB     0  18 FSAS    spare       Pool0     node1-01
1.0.19               5.35TB     0  19 FSAS    spare       Pool0     node1-01
1.0.20               5.35TB     0  20 FSAS    aggregate   aggr2_nd1 node1-01
1.0.21               5.35TB     0  21 FSAS    aggregate   aggr2_nd1 node1-01
1.0.22               5.35TB     0  22 FSAS    aggregate   aggr2_nd1 node1-01
1.0.23               5.35TB     0  23 FSAS    aggregate   aggr2_nd1 node1-01
1.1.0                5.35TB     1   0 FSAS    aggregate   aggr2_nd1 node1-01
1.1.1                5.35TB     1   1 FSAS    aggregate   aggr2_nd1 node1-01
1.1.2                5.35TB     1   2 FSAS    aggregate   aggr2_nd1 node1-01
1.1.3                5.35TB     1   3 FSAS    aggregate   aggr2_nd1 node1-01
1.1.4                5.35TB     1   4 FSAS    aggregate   aggr2_nd1 node1-01
1.1.5                5.35TB     1   5 FSAS    aggregate   aggr2_nd1 node1-01
1.1.6                5.35TB     1   6 FSAS    aggregate   aggr2_nd1 node1-01
1.1.7                5.35TB     1   7 FSAS    aggregate   aggr2_nd1 node1-01
1.1.8                5.35TB     1   8 FSAS    aggregate   aggr2_nd1 node1-01
1.1.9                5.35TB     1   9 FSAS    aggregate   aggr2_nd1 node1-01
1.1.10               5.35TB     1  10 FSAS    aggregate   aggr1_nd1 node1-01
1.1.11               5.35TB     1  11 FSAS    spare       Pool0     node1-01
1.1.12               5.35TB     1  12 FSAS    aggregate   aggr0_nd2 node1-02
1.1.13               5.35TB     1  13 FSAS    aggregate   aggr0_nd2 node1-02
1.1.14               5.35TB     1  14 FSAS    aggregate   aggr0_nd2 node1-02
1.1.15               5.35TB     1  15 FSAS    aggregate   aggr1_nd2 node1-02
1.1.16               5.35TB     1  16 FSAS    aggregate   aggr1_nd2 node1-02
1.1.17               5.35TB     1  17 FSAS    spare       Pool0     node1-02
1.1.18               5.35TB     1  18 FSAS    aggregate   aggr1_nd2 node1-02
1.1.19               5.35TB     1  19 FSAS    aggregate   aggr1_nd2 node1-02
1.1.20               5.35TB     1  20 FSAS    aggregate   aggr1_nd2 node1-02
1.1.21               5.35TB     1  21 FSAS    aggregate   aggr1_nd2 node1-02
1.1.22               5.35TB     1  22 FSAS    aggregate   aggr1_nd2 node1-02
1.1.23               5.35TB     1  23 FSAS    aggregate   aggr1_nd2 node1-02
1.2.0                5.35TB     2   0 FSAS    aggregate   aggr1_nd2 node1-02
1.2.1                5.35TB     2   1 FSAS    aggregate   aggr1_nd2 node1-02
1.2.2                5.35TB     2   2 FSAS    aggregate   aggr1_nd2 node1-02
1.2.3                5.35TB     2   3 FSAS    spare       Pool0     node1-02
1.2.4                5.35TB     2   4 FSAS    spare       Pool0     node1-02
1.2.5                5.35TB     2   5 FSAS    aggregate   aggr1_nd2 node1-02
1.2.6                5.35TB     2   6 FSAS    aggregate   aggr1_nd2 node1-02
1.2.7                5.35TB     2   7 FSAS    aggregate   aggr1_nd2 node1-02
1.2.8                5.35TB     2   8 FSAS    aggregate   aggr2_nd2 node1-02
1.2.9                5.35TB     2   9 FSAS    aggregate   aggr2_nd2 node1-02
1.2.10               5.35TB     2  10 FSAS    aggregate   aggr2_nd2 node1-02
1.2.11               5.35TB     2  11 FSAS    aggregate   aggr2_nd2 node1-02
1.2.12               5.35TB     2  12 FSAS    aggregate   aggr2_nd2 node1-02
1.2.13               5.35TB     2  13 FSAS    aggregate   aggr2_nd2 node1-02
1.2.14               5.35TB     2  14 FSAS    aggregate   aggr2_nd2 node1-02
1.2.15               5.35TB     2  15 FSAS    aggregate   aggr2_nd2 node1-02
1.2.16               5.35TB     2  16 FSAS    aggregate   aggr2_nd2 node1-02
1.2.17               5.35TB     2  17 FSAS    aggregate   aggr2_nd2 node1-02
1.2.18               5.35TB     2  18 FSAS    aggregate   aggr2_nd2 node1-02
1.2.19               5.35TB     2  19 FSAS    aggregate   aggr2_nd2 node1-02
1.2.20               5.35TB     2  20 FSAS    aggregate   aggr2_nd2 node1-02
1.2.21               5.35TB     2  21 FSAS    aggregate   aggr2_nd2 node1-02
1.2.22               5.35TB     2  22 FSAS    spare       Pool0     node1-02
1.2.23               5.35TB     2  23 FSAS    spare       Pool0     node1-02
72 entries were displayed.

node1::>
node1::> vserver show
                               Admin      Operational Root
Vserver     Type    Subtype    State      State       Volume     Aggregate
----------- ------- ---------- ---------- ----------- ---------- ----------
fserver1    data    default    running    running     fserver1_  aggr1_nd1
                                                      root
fserver2    data    default    running    running     fserver2_  aggr1_nd2
                                                      root
node1    admin   -          -          -           -          -
node1-01 node    -          -          -           -          -
node1-02 node    -          -          -           -          -
5 entries were displayed.

node1::>
node1::> vserver show -vserver fserver1

                                    Vserver: fserver1
                               Vserver Type: data
                            Vserver Subtype: default
                               Vserver UUID: 6da0f27a-2f92-11e5-b8f9-00a0988545cd
                                Root Volume: fserver1_root
                                  Aggregate: aggr1_nd1
                                 NIS Domain: -
                 Root Volume Security Style: ntfs
                                LDAP Client: -
               Default Volume Language Code: ja_JP.PCK_v2.UTF-8
                            Snapshot Policy: default
                                    Comment:
                               Quota Policy: default
                List of Aggregates Assigned: -
 Limit on Maximum Number of Volumes allowed: unlimited
                        Vserver Admin State: running
                  Vserver Operational State: running
   Vserver Operational State Stopped Reason: -
                          Allowed Protocols: cifs
                       Disallowed Protocols: nfs, fcp, iscsi, ndmp
            Is Vserver with Infinite Volume: false
                           QoS Policy Group: -
                                Config Lock: false
                               IPspace Name: Default

node1::>
node1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
node1-01           true    true
node1-02           true    true
2 entries were displayed.

node1::>
node1::> cifs show -vserver fserver1

                                          Vserver: fserver1
                         CIFS Server NetBIOS Name: FSERVER1
                    NetBIOS Domain/Workgroup Name: DOM1
                      Fully Qualified Domain Name: DOM1.LOCAL
Default Site Used by LIFs Without Site Membership:
                             Authentication Style: domain
                CIFS Server Administrative Status: up
                          CIFS Server Description:
                          List of NetBIOS Aliases: -

node1::>

 

Performance information is listed below.

 

node1::> node run -node node1-01 sysstat -x 1
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 98%      0   9868      0   10149    2655   5826    6631  30012       0      0     9     95%  100%  :v   68%     281      0      0       0      0       0      0
 98%      0   9950      0   10286    2648   6218    4430     35       0      0     9     96%    7%  :    58%     336      0      0       0      0       0      0
 98%      0   9937      0   10265    2630   7390    3635     24       0      0     9     96%    0%  -    56%     328      0      0       0      0       0      0
 98%      0  10018      0   10299    2678   8508    4209      0       0      0     9     96%    0%  -    40%     281      0      0       0      0       0      0
 98%      0   9447      0    9745    8711   6254    2676      8       0      0     9     96%    0%  -    45%     298      0      0       0      0       0      0

node1::>
node1::*> statistics show -sample-id sample_54687

Object: cifs
Instance: fserver1
Start-time: 8/3/2021 10:39:03
End-time: 8/4/2021 10:08:00
Elapsed-time: 84537s
Cluster: node1

    Counter                                                     Value
    -------------------------------- --------------------------------
    cifs_ops                                                     6832
    connected_shares                                             4823
    connections                                                  1651
    established_sessions                                         2818

Object: cifs
Instance: fserver2
Start-time: 8/3/2021 10:39:03
End-time: 8/4/2021 10:08:00
Elapsed-time: 84537s
Cluster: node1

    Counter                                                     Value
    -------------------------------- --------------------------------
    cifs_ops                                                        0
    connected_shares                                                3
    connections                                                     3
    established_sessions                                            3
8 entries were displayed.

node1::*>

 

If you have any other necessary information, please let me know.

 

Regards,

AlexDawson

Hi there!

 

Can you please run sysstat for a longer period of time - 60 seconds or so? But in general, it looks like you've run out of disk performance capacity on the system.
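
For example, something like this should do (if I remember the sysstat flags correctly, -c limits it to 60 one-second samples):

    node1::> node run -node node1-01 sysstat -c 60 -x 1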

 

You've got 72 disks, 10 of which are spares, and unpartitioned root aggregates, so only 56 are holding your data. They are 6TB SATA drives, meaning at most you can hope for about ~5,600 IOPS out of the system (assuming 100 IOPS per disk), and sysstat is showing it doing about 10,000 IOPS. It might be slightly higher due to the larger drives, but not by much.
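
As a rough back-of-the-envelope (the ~100 IOPS per spindle figure is only a rule of thumb for 7.2k FSAS/SATA drives):

    72 disks total - 10 spares - 6 root-aggregate disks = 56 data disks
    56 data disks x ~100 IOPS each ≈ 5,600 backend IOPS
    sysstat "Total" column ≈ 10,000 ops/s, well above that estimate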

 

You're also running two data aggregates, one per node, so for any given data workload, it can only use half of the backend disk capacity. Depending on the workload, this may be best, as it allows CPU usage to spread between both controllers, but there are pluses and minuses to both options.

 

But my quick assessment remains the same - you're doing too much IO against too few disks. You can add more disks (up to 144 total for that platform), or use the workload analyser to determine if a flashpool would help (instructions on page 175 of https://library.netapp.com/ecm/ecm_download_file/ECMP1636022 ), but I don't see any quick fixes, sorry.
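
If it helps, from memory the automated workload analyzer (AWA) is driven from the nodeshell in advanced privilege, roughly as below - please treat this as a sketch and follow the exact procedure in the guide linked above:

    node1::> node run -node node1-01
    node1-01> priv set advanced
    node1-01*> wafl awa start aggr1_nd1
    (let it run over a representative busy period)
    node1-01*> wafl awa print
    node1-01*> wafl awa stop aggr1_nd1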

 

Hope this helps!

nsky

Hi,

 

Thank you for your fast reply.

I am sorry. The sysstat log is shown below.

 

 

node1::> node run -node node1-01 sysstat -x 1
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 99%      0   9523      0    9714   16215  13878   24803  49955       0      0     5     94%   71%  Hq   71%     191      0      0       0      0       0      0
 99%      0  10660      0   10911   14865  17339   11622  44122       0      0     5     95%  100%  :s   67%     251      0      0       0      0       0      0
 99%      0  10779      0   11039   15098  21431    9186  30549       0      0     5     96%  100%  :f   74%     260      0      0       0      0       0      0
 99%      0   9154      0    9368   14977  19927    5922  34035       0      0     5     95%  100%  :f   64%     214      0      0       0      0       0      0
 99%      0   9860      0   10115   15129  16899    5152  32369       0      0     5     95%  100%  :f   50%     255      0      0       0      0       0      0
 98%      0  10307      0   10555   17530   7386    7503  33394       0      0     5     94%  100%  :f   66%     248      0      0       0      0       0      0
 99%      0   9870      0   10258   14619  10392   12745  30342       0      0     5     93%  100%  :f   72%     388      0      0       0      0       0      0
 99%      0   9815      0   10237   15386  11598    9034  37011       0      0     5     94%  100%  :f   60%     422      0      0       0      0       0      0
 93%      0   8702      0    8988   14619  10799    6169  30192       0      0     5     94%  100%  :f   61%     286      0      0       0      0       0      0
 99%      0   9542      0    9897   17511   6773    4112  34087       0      0     5     93%  100%  :f   61%     355      0      0       0      0       0      0
 99%      0   9628      0    9896   14559   8859    4345    116       0      0     5     95%   43%  :    60%     268      0      0       0      0       0      0
 99%      0  10264      0   10590   14542   7754    3927      0       0      0     5     93%    0%  -    57%     326      0      0       0      0       0      0
 99%      0   9773      0   10117   15084  11163    6580      0       0      0     5     94%    0%  -    48%     344      0      0       0      0       0      0
 99%      0  10487      0   10734   14846   8607    4318     23       0      0     5     94%    0%  -    59%     247      0      0       0      0       0      0
 99%      0  10431      0   10808   15070   9801    5370      8       0      0     5     93%    0%  -    52%     377      0      0       0      0       0      0
 99%      0  10290      0   10867   14858  11604    6826      0       0      0     5     95%    0%  -    46%     577      0      0       0      0       0      0
 99%      0   8366      0    8711   60792   9831   17753  14824       0      0     5     94%   31%  Hn   57%     345      0      0       0      0       0      0
 99%      0   7818      0    8039   34942   9739   11583  88250       0      0     5     95%  100%  :s   67%     221      0      0       0      0       0      0
 99%      0  10610      0   10962   16704  11452   12047  40661       0      0     5     94%  100%  :s   77%     352      0      0       0      0       0      0
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 99%      0   9943      0   10278   14574  10211    9616  34806       0      0     5     94%  100%  :f   73%     335      0      0       0      0       0      0
 99%      0  10051      0   10389   14682  10721    5996  38098       0      0     5     93%  100%  :f   68%     338      0      0       0      0       0      0
 99%      0   9649      0    9986   14635   8674    4594  33322       0      0     5     93%  100%  :f   54%     337      0      0       0      0       0      0
 99%      0   9855      0   10141   14569  11468    9379  40751       0      0     5     94%  100%  :f   71%     286      0      0       0      0       0      0
 99%      0   8429      0    8815   13830   9029    5561  33451       0      0     5     94%  100%  :f   48%     386      0      0       0      0       0      0
 99%      0   9475      0    9757   14453   8971    4850  11288       0      0     5     95%   78%  :    64%     282      0      0       0      0       0      0
 99%      0  10058      0   10525   14543  11109    6833     23       0      0     5     94%    0%  -    62%     467      0      0       0      0       0      0
 99%      0  11120      0   11464   14919  12255    8571      0       0      0     5     93%    0%  -    68%     344      0      0       0      0       0      0
 99%      0  10998      0   11300   14546  13123   13818      0       0      0     5     95%    0%  -    60%     302      0      0       0      0       0      0
 99%      0  10788      0   11073   15117  25396   10564     24       0      0     5     94%    0%  -    55%     285      0      0       0      0       0      0
 99%      0  10247      0   10554   15283  13886    5918      8       0      0     5     94%    0%  -    53%     307      0      0       0      0       0      0
 99%      0  10655      0   10946   14773  13464    9328      0       0      0     5     94%    0%  -    53%     291      0      0       0      0       0      0
 99%      0  10934      0   11202   14922  14431    7829     23       0      0     5     96%    0%  -    58%     268      0      0       0      0       0      0
 99%      0  10938      0   11231   15056  14614    9252      0       0      0     5     94%    0%  -    63%     293      0      0       0      0       0      0
 99%      0   9231      0    9481   14474  11111   21770  15027       0      0     5     94%   37%  Hn   74%     250      0      0       0      0       0      0
 99%      0   9099      0    9356   14393  12056    9450  27887       0      0     5     93%  100%  :s   65%     257      0      0       0      0       0      0
 99%      0  11375      0   11694   15044  18042   10828  31500       0      0     5     94%  100%  :s   75%     319      0      0       0      0       0      0
 99%      0   9923      0   10190   15860  13203   17842  39739       0      0     5     93%  100%  :f   77%     267      0      0       0      0       0      0
 99%      0   9647      0    9907   14775  14922   13837  37908       0      0     5     94%  100%  :f   63%     260      0      0       0      0       0      0
 99%      0   9741      0   10015   14944  12935    8012  37807       0      0     5     94%  100%  :f   63%     274      0      0       0      0       0      0
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 98%      0  10736      0   11095   14996  11413    9023  43060       0      0     5     92%  100%  :f   74%     359      0      0       0      0       0      0
 99%      0  10650      0   11007   14700   9307    5590  43071       0      0     5     92%  100%  :f   75%     357      0      0       0      0       0      0
 99%      0  10776      0   11053   15422  12206    9532  30542       0      0     5     93%  100%  :f   67%     277      0      0       0      0       0      0
 99%      0   9784      0   10071   14640  10984    7011  31531       0      0     5     93%  100%  :f   61%     287      0      0       0      0       0      0
 99%      0   9483      0    9751   15391  11126    7787    126       0      0     5     95%   55%  :    63%     268      0      0       0      0       0      0
 99%      0   8852      0    9139   14523  10998    8075      0       0      0     5     92%    0%  -    68%     287      0      0       0      0       0      0
 99%      0  10117      0   10412   15439  11833    8735      8       0      0     5     92%    0%  -    74%     295      0      0       0      0       0      0
 99%      0  10173      0   10527   18793  12591    7611     16       0      0     5     93%    0%  -    67%     354      0      0       0      0       0      0
 99%      0   9249      0    9555   21114  10490    6461      0       0      0     5     93%    0%  -    63%     306      0      0       0      0       0      0
 99%      0  10721      0   11012   28768   9551    4043     32       0      0     5     93%    0%  -    65%     291      0      0       0      0       0      0
 99%      0  10085      0   10393   26983   8593    4379      0       0      0     5     94%    0%  -    63%     308      0      0       0      0       0      0
 99%      0   9564      0   10140   26506   8861   11996  13285       0      0     5     93%   23%  Hn   69%     576      0      0       0      0       0      0
100%      0   7429      0    7925   26556   8243   11372  45755       0      0     5     93%  100%  :s   69%     496      0      0       0      0       0      0
 99%      0   9837      0   10492   24493   9399    4760  35276       0      0     5     92%  100%  :s   75%     655      0      0       0      0       0      0
 99%      0   9928      0   10504   20493   9931    8991  41493       0      0     5     95%  100%  :f   73%     576      0      0       0      0       0      0
 99%      0  10624      0   11017   22427  10292   15893  33876       0      0     5     92%  100%  :f   91%     393      0      0       0      0       0      0
 99%      0  10020      0   10364   22728  12959   10035  36020       0      0    33s    94%  100%  :f   74%     344      0      0       0      0       0      0
 99%      0  10220      0   10481   22194  16098   15062  35022       0      0    33s    94%  100%  :f   66%     261      0      0       0      0       0      0
 99%      0  10466      0   10712   20587  18002   14246  38332       0      0    33s    93%  100%  :f   66%     246      0      0       0      0       0      0
 99%      0  10686      0   11027   22862  12662    7372  40383       0      0     6     95%  100%  :f   75%     341      0      0       0      0       0      0
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 99%      0  10142      0   10459   14621  17868    9769  33121       0      0     6     95%  100%  :f   75%     317      0      0       0      0       0      0
 99%      0   9155      0    9632   15180  13075    5602    119       0      0     6     98%   28%  :    63%     477      0      0       0      0       0      0
 99%      0   9843      0   10207   14760  14241    8499      0       0      0     6     97%    0%  -    91%     364      0      0       0      0       0      0
 99%      0  11621      0   11973   15497  15882   10301      0       0      0     6     95%    0%  -    62%     352      0      0       0      0       0      0
 99%      0  11605      0   11953   15155  15212    6576     32       0      0     6     94%    0%  -    65%     348      0      0       0      0       0      0
 99%      0  11114      0   11488   17055  12981    7503      0       0      0     6     95%    0%  -    61%     374      0      0       0      0       0      0
 99%      0  11402      0   11717   16486  14387    7525      0       0      0     6     95%    0%  -    60%     315      0      0       0      0       0      0
 99%      0  11862      0   12196   16769  11731    5095     24       0      0     6     95%    0%  -    63%     334      0      0       0      0       0      0
 99%      0  10352      0   10624   14833   9876   10911  18927       0      0     6     95%   43%  Hn   58%     272      0      0       0      0       0      0
 99%      0  10440      0   10852   17031   9344    6442  31577       0      0     6     95%  100%  :s   58%     412      0      0       0      0       0      0
 99%      0  10696      0   11201   15581  11654   12110  34010       0      0     6     95%  100%  :f   76%     505      0      0       0      0       0      0
 98%      0  11477      0   11817   15734  11264   11917  32500       0      0     6     94%  100%  :f   79%     340      0      0       0      0       0      0
 99%      0  10306      0   10719   17665  14521   20185  46718       0      0     6     95%  100%  :f   87%     413      0      0       0      0       0      0
 99%      0  10161      0   10582   18859  20413   16678  36167       0      0     6     94%  100%  :f   74%     421      0      0       0      0       0      0
 98%      0   8926      0    9448   16089  15312   10495  47077       0      0     6     95%  100%  :f   68%     522      0      0       0      0       0      0
 99%      0  10355      0   10638   16997  16185    9005  31647       0      0     1s    95%  100%  :f   75%     283      0      0       0      0       0      0
 99%      0  10654      0   10942   16215  16539   13761  35143       0      0     1s    95%  100%  :f   69%     288      0      0       0      0       0      0
 99%      0  11244      0   11537   16593  17785    9314  27572       0      0     1s    94%  100%  :f   69%     293      0      0       0      0       0      0
 99%      0   9393      0    9659   15806  18486   15572  24269       0      0     1s    94%  100%  :f   78%     266      0      0       0      0       0      0
 99%      0  10860      0   11087   16707  18782   13787    126       0      0     1s    96%   41%  :    71%     227      0      0       0      0       0      0
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 99%      0  11141      0   11589   16915  16564    6539      0       0      0     1s    95%    0%  -    65%     448      0      0       0      0       0      0
 98%      0  11329      0   11852   17429  23367   17145      0       0      0     1s    95%    0%  -    70%     523      0      0       0      0       0      0
 99%      0  10485      0   10925   17827  33176   24757     24       0      0     1s    96%    0%  -    71%     440      0      0       0      0       0      0
 99%      0  10181      0   10457   15890  28397   24536      0       0      0     1s    95%    0%  -    73%     276      0      0       0      0       0      0
 98%      0  11802      0   12367   15804  20735   15003      8       0      0     1s    95%    0%  -    64%     565      0      0       0      0       0      0
 98%      0  10598      0   11059   16266  34470   20946     24       0      0     1s    96%    0%  -    63%     461      0      0       0      0       0      0
 99%      0  10947      0   11197   15803  24684   23631      0       0      0     1s    95%    0%  -    69%     250      0      0       0      0       0      0
 99%      0  10787      0   11134   15545  24809   15762      0       0      0     1s    97%    9%  Hn   77%     347      0      0       0      0       0      0
 99%      0   8587      0    8912   15460  23630   30652  58905       0      0     1s    95%  100%  :s   82%     325      0      0       0      0       0      0
 99%      0  10855      0   11320   15758  23810   19341  27191       0      0     1s    96%  100%  :s   76%     465      0      0       0      0       0      0
 99%      0  10220      0   10619   16192  24942   19242  27215       0      0    31s    96%  100%  :f   84%     399      0      0       0      0       0      0
 99%      0  10057      0   10357   17095  26470   18380  33939       0      0    13s    96%  100%  :f   85%     300      0      0       0      0       0      0
 99%      0  10537      0   10813   15044  26155   22481  34457       0      0    13s    96%  100%  :f   78%     276      0      0       0      0       0      0
 99%      0  11084      0   11420   15175  23097   17634  42094       0      0     7s    96%  100%  :f   74%     336      0      0       0      0       0      0
 99%      0  10806      0   11051   16107  14516    7430  27226       0      0     7s    94%  100%  :f   80%     245      0      0       0      0       0      0
 99%      0  10415      0   10751   14797  14485    9050  41667       0      0     7s    95%  100%  :f   72%     336      0      0       0      0       0      0
 98%      0  10883      0   11308   15168  15487   11585  54716       0      0     7s    95%  100%  :f   76%     425      0      0       0      0       0      0
 99%      0   9011      0    9489   14424  13626   10624    365       0      0     7s    96%   82%  :    84%     478      0      0       0      0       0      0
 98%      0  10185      0   10550   14440  15152   10525      8       0      0     7s    96%    0%  -    85%     365      0      0       0      0       0      0
 98%      0  11283      0   11751   14855  16054   11037     16       0      0     7s    95%    0%  -    85%     468      0      0       0      0       0      0
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 98%      0  10485      0   10858   14771  13433   10332      8       0      0     7s    93%    0%  -    89%     373      0      0       0      0       0      0
 98%      0  11143      0   11473   14734  14472   10332      0       0      0     7s    94%    0%  -    87%     330      0      0       0      0       0      0
 98%      0   8538      0    9061   15035  11785    6815     23       0      0     7s    94%    0%  -    83%     523      0      0       0      0       0      0
 93%      0   7855      0    8399   14529  12076    7723      8       0      0     7s    94%    0%  -    77%     544      0      0       0      0       0      0
100%      0   7309      0    7782   17726  11689    5572      0       0      0    18s    95%    0%  -    54%     473      0      0       0      0       0      0
 99%      0   9081      0    9638   15935  16535   11219     24       0      0     9s    94%    0%  -    67%     557      0      0       0      0       0      0
 99%      0   9517      0   10246   14630  18190   12269      0       0      0     9s    95%    0%  -    75%     729      0      0       0      0       0      0
100%      0   8404      0    8830   14328  16426   13833  17403       0      0     9s    96%   40%  Tn   80%     426      0      0       0      0       0      0
 99%      0   7491      0    7892   14374  16425   19174  53313       0      0     9s    95%  100%  :s   76%     401      0      0       0      0       0      0
 99%      0  10615      0   11113   15033  19537   15666  28165       0      0     9s    95%  100%  :s   79%     498      0      0       0      0       0      0
 99%      0  10770      0   11179   15024  20655   17789  35380       0      0     9s    95%  100%  :s   79%     409      0      0       0      0       0      0

node1::>

 

 

Only aggr1_nd1 is used by fserver1, which is the SVM serving CIFS.

The aggregate aggr1_nd1 has 14 disks.

As you told me, assuming 100 IOPS per disk, aggr1_nd1 can only be expected to deliver about 1,400 IOPS.

Is this understanding correct?

If this understanding is correct, that is considerably less than the roughly 10,000 IOPS shown by sysstat.

 

Please let me know if there is a command to check the IOPS for the aggregate.

If possible, I would like to check the IOPS for aggregates and see if IOPS to disks is the bottleneck.

If IOPS to the aggregate turns out to be the bottleneck, I would like to consider adding more disks.

 

Regards,

 

Fabian1993

That's correct. To improve performance, it would be better to create a Flash Pool.

Expand your existing SATA aggregate with some SSDs.

You could also get some more IOPS by adding disks to the existing aggregate.

You can also try to move your CIFS volumes onto different aggregates, so that you get the performance of both controllers.
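
A non-disruptive volume move would look something like this (the volume and destination aggregate are only examples picked from your output above, not a recommendation of which volume to move):

    node1::> volume move start -vserver fserver1 -volume nd1_vol20_01 -destination-aggregate aggr2_nd1
    node1::> volume move show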

 

Try to analyze the system with the link from Alex.

AlexDawson

A couple of things:

 

  • There is no way to show the maximum throughput capacity of an aggregate (you can sample what it is currently doing, though - see the sketch after this list)
  • Backend IOPS and front-end IOPS are not necessarily identical; there is a lot of caching and re-ordering going on, so front-end can be higher than backend, sometimes by quite a lot
  • ONTAP writes data to disk during consistency points, every 10 seconds, or when the NVRAM is at high watermark.
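
If you want to see what an aggregate and its disks are currently doing (not a maximum), something along these lines should work - counter and field names differ a bit between releases, so treat this as a sketch:

    node1::> statistics aggregate show -aggregate aggr1_nd1
    node1::> statistics disk show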

The longer sysstat output gives more information. Reviewing https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/What_are_the_different_Consistency_Point_types_and_how_are_they_measur... - the system is continually flushing to disk because NVRAM is full, and then taking several seconds to complete each flush because the disks are busy. While this is happening, the system does not respond very well.

 

There is another measurement you can make, with statit (detailed here - https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/How_to_Assess_Disk_Response_Times_in_ONTAP ), but I'm pretty confident you're disk-IO bound.
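
From memory, statit is run from the nodeshell in advanced privilege, roughly like this (the KB article above has the exact procedure):

    node1::> node run -node node1-01
    node1-01> priv set advanced
    node1-01*> statit -b     (begin collecting)
    (wait 30-60 seconds during a busy period)
    node1-01*> statit -e     (end and print the report, including per-disk utilization and service times)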

 

nsky

Hi, everybody

 

Thank you for letting me know.

I would like to consider adding disks to the aggregate, using a Flash Pool, and load balancing across the two nodes.

 

Sorry for the basic question, but I have a question about calculating and checking NetApp FAS IOPS.

 

Question1:

I was told by Alex Dawson that "sysstat is showing it doing about 10,000 IOPS".

Which sysstat parameter was that calculated from?

 

Question2:

I understand that the "CIFS ops" parameter is the number of CIFS operations per second. How should I understand and handle this "CIFS ops" indicator when considering NetApp FAS performance due to access delays in NetApp FAS systems?

 

Best regards,

 

AlexDawson

The "Total" column is a number of IOPS.

 

You don't need to worry about the CIFS ops column; you need to worry about the Disk util and CP time columns, which are both way too high because you haven't got enough disks 🙂

nsky

Hi, AlexDawson

 

Thank you for your reply.

In the sysstat command manual, the description of "Total ops / s" is "The total number of operations per second (NFS + CIFS + HTTP)."

For this reason, I understood this number of operations to be, for example, the number of connection-establishment requests and read requests in the case of CIFS.

Does this "Total ops / s" mean total IOPS?

I want to understand the meaning of the items displayed by sysstat correctly.

 

Regards,

 

Fabian1993

Hi @nsky,

 

Check this out; it has a short description for each column:

 

https://library.netapp.com/ecmdocs/ECMP1196890/html/man1/na_sysstat.1.html

nsky

Hi, Fabian1993

 

Thank you for your reply.

I have a question.

I'm reading the manual you pointed me to.

The CIFS column is described as follows:

"The number of CIFS operations per second during that time."

 

I want to know if the numbers in the CIFS column indicate IOPS for CIFS.

In the manual, it is described as "CIFS operations", so I think it is different from IOPS.

 

Regards,

 

Fabian1993

Hi @nsky,

 

try using the per-vserver statistics, since you use a dedicated SVM for CIFS:

 

https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-cmpr-940/statistics__vserver__show.html
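
A minimal sketch of what that looks like for your SVM (exact options and counters vary a bit by ONTAP release, so check the man page linked above):

    node1::> statistics vserver show -vserver fserver1

or, with the object-based sampling you already used earlier:

    node1::*> statistics start -object cifs -instance fserver1 -sample-id cifs_sample1
    node1::*> statistics show -sample-id cifs_sample1
    node1::*> statistics stop -sample-id cifs_sample1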
