ONTAP Discussions

Measuring SAS and SATA performance, what commands on NetApp cDOT and Linux level respectively




To measure SAS and SATA aggregate performance (for instance response time, reads, and writes), what commands would you use at the cDOT and Linux level, respectively, also in an NFS volume environment?


Ideally the most common approaches, using native commands.





Can anybody please help?




You can try the command below:


node run <nodename> sysstat
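If you want a continuously updating view, the extended sysstat output is often more useful (a sketch; check the nodeshell man page on your release for the exact flags):

node run <nodename> sysstat -x 1

The -x flag adds extra columns such as disk kB/s and disk utilization, and the trailing 1 is the refresh interval in seconds.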




If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.


Also, try these commands:


statistics show -object volume -counter read_latency

statistics show -object volume -counter write_latency
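To narrow those down to a single volume, you can combine the counters and give an instance (a sketch; vol1 is a placeholder volume name):

statistics show -object volume -instance vol1 -counter read_latency|write_latency

In most releases these latencies are reported in microseconds, so divide by 1000 for milliseconds.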





For NFS volumes (this works with any volumes, but the counters change), you can use the following in diag mode:


::*> statistics show-periodic -object volume -instance <volume_name> -counter <use_the_following_counters>
nfs_other_latency nfs_other_ops
nfs_protocol_other_latency nfs_protocol_read_latency
nfs_protocol_write_latency nfs_read_data
nfs_read_latency nfs_read_ops
nfs_write_data nfs_write_latency
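Putting that together, a sample invocation might look like this (vol1 and svm1 are placeholder names, and counters are pipe-separated; depending on your ONTAP version you may also need to qualify the SVM):

::*> statistics show-periodic -object volume -instance vol1 -vserver svm1 -counter nfs_read_latency|nfs_write_latency|nfs_read_ops|nfs_write_ops -interval 5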


For aggregate total read and write ops, use the following:

::*> statistics aggregate show


Another helpful command:

::*> statistics show-periodic aggregate -instance <aggr_name> -counter <give_counter_name>
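For example, to watch reads and writes hitting one aggregate every five seconds (aggr1 is a placeholder, and counter names vary by release, so run the command without -counter first to list what is available on your system):

::*> statistics show-periodic -object aggregate -instance aggr1 -counter user_reads|user_writes|total_transfers -interval 5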






Hi my friend


This is neto from Brazil


How are you?


For blocks...


set diag -confirmations off; statistics show-periodic -node cluster:summary -object fcp_lif:vserver -instance <SVM name> -counter avg_latency|avg_write_latency|avg_read_latency|total_ops|write_ops|read_ops|write_data|read_data -interval 1






::*> statistics show-periodic aggregate -instance <aggr_name> -counter <give_counter_name>


Where can I find the list of counter names?

Without specifying the counter name, it will list them all; however, the names wrap awkwardly on the screen.



The counters available also depend on the ONTAP version you are using (8.1, 8.2, 8.3).

What ONTAP version are you using?



Another useful tool for monitoring performance is QoS on the CLI (available from 8.2 onwards).

To monitor performance, simply create a policy group:

--> qos policy-group create -policy-group <name> -max-throughput 0-INF

This creates a QoS policy without a limit.

Then apply this policy to the volume, Vserver, LUN, or file you would like to monitor:

--> vol modify -vserver <svm_name> -qos-policy-group <name>

and use "qos statistics XXXX show" to monitor latency, throughput and others.



To set up regular monitoring I would also recommend looking into OnCommand Performance Manager.



The metrics you are looking for are available in the current version, and the next version will bring even more enhancements.


hope that helps.


cheers chriz


P.S. if you feel this post is useful, please KUDO or "accept as a solution" so other people may find it faster.


To truly isolate disk performance between the two types of disk, you must run the same tests on each type.  That is fairly obvious, but another thing to consider is that the test should be run once before any measurements are recorded, in case the test's working set can be cached.  So run it once as a warm-up and then immediately run it against the target aggregates.  If the target aggregates are on different nodes, each node should get a warm-up run.


The QoS statistics will help you isolate latency from the disks themselves.  Assign a policy with no limit to the target volume, then look under "qos" -> "statistics" for the latency-from-disk output.  Follow Neto's and Chriz Ott's suggestions on looking at IOPS, and concentrate on the latency from disk in the QoS stats.


Always remember that performance is a combination of throughput and latency.  Measuring one without the other is not very useful in most cases.
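On the Linux client side (the original question asked about this too), the usual native tools are nfsiostat from the nfs-utils package for per-mount NFS statistics, and iostat -x from the sysstat package for local block devices. A sketch, assuming /mnt/nfs is your NFS mount point and 5 is the refresh interval in seconds:

nfsiostat 5 /mnt/nfs

iostat -x 5

nfsiostat reports ops/s plus average RTT and exec time per operation type, and iostat -x shows r/s, w/s, and await per device, which together give you both the throughput and the latency halves of the picture.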