Software Development Kit (SDK) and API Discussions

NetApp 9: disk average latency calculation: SDK vs CLI

liok

Hi guys,

 

On a FAS2650 storage system, I am trying to compute the average latency of each disk.

 

When querying the API with

 

<netapp>
  <perf-object-counter-list-info>
    <objectname>disk</objectname>
  </perf-object-counter-list-info>
</netapp>
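For context, such a query can be sent through the Python bindings of the NetApp Manageability SDK roughly like this (a sketch only; the hostname, credentials and ONTAPI version are placeholders):

from NaServer import NaServer
from NaElement import NaElement

# Placeholder connection details; adjust host, credentials and ONTAPI version.
s = NaServer("cluster-mgmt.example.com", 1, 130)
s.set_transport_type("HTTPS")
s.set_style("LOGIN")
s.set_admin_user("admin", "password")

# Ask for the counter metadata of the "disk" perf object.
req = NaElement("perf-object-counter-list-info")
req.child_add_string("objectname", "disk")

res = s.invoke_elem(req)
if res.results_status() == "failed":
    raise RuntimeError(res.results_reason())

# Dump the raw XML of the response; it contains the counter-info blocks shown below.
print(res.sprintf())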

 

here is what the response says about read latency:

 

<counter-info>
	<base-counter>user_read_blocks</base-counter>
	<desc>Average latency per block in microseconds for user read operations</desc>
	<is-deprecated>false</is-deprecated>
	<name>user_read_latency</name>
	<privilege-level>admin</privilege-level>
	<properties>average</properties>
	<unit>microsec</unit>
</counter-info>

 

and here is what it says about write latency:

 

<counter-info>
	<base-counter>user_write_blocks</base-counter>
	<desc>Average latency per block in microseconds for user write operations</desc>
	<is-deprecated>false</is-deprecated>
	<name>user_write_latency</name>
	<privilege-level>admin</privilege-level>
	<properties>average</properties>
	<unit>microsec</unit>
</counter-info>
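Since both counters have the "average" property, the raw values of the counters and of their base counters have to be sampled and then compared between two points in time. The raw values can be pulled with a perf-object-get-instances call, roughly like this (same Python bindings as above, still a sketch; the disk instance UUID is a placeholder, and real UUIDs can be listed with perf-object-instance-list-info-iter):

from NaServer import NaServer
from NaElement import NaElement

# Same placeholder connection details as in the previous sketch.
s = NaServer("cluster-mgmt.example.com", 1, 130)
s.set_transport_type("HTTPS")
s.set_style("LOGIN")
s.set_admin_user("admin", "password")

req = NaElement("perf-object-get-instances")
req.child_add_string("objectname", "disk")

# The two latency counters plus their base counters.
counters = NaElement("counters")
for name in ("user_read_latency", "user_read_blocks",
             "user_write_latency", "user_write_blocks"):
    counters.child_add_string("counter", name)
req.child_add(counters)

# Placeholder UUID of the disk instance to sample.
uuids = NaElement("instance-uuids")
uuids.child_add_string("instance-uuid", "<disk-instance-uuid>")
req.child_add(uuids)

res = s.invoke_elem(req)
if res.results_status() == "failed":
    raise RuntimeError(res.results_reason())

# Flatten the response into {counter name: raw value}.
sample = {}
for inst in res.child_get("instances").children_get():
    for c in inst.child_get("counters").children_get():
        sample[c.child_get_string("name")] = int(c.child_get_string("value"))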

 

Based on this information, my understanding is that the total average latency of a specific disk can be computed like this:

 

 

disk_total_average_latency =
    [   (user_read_latency(t2)  - user_read_latency(t1))  / (user_read_blocks(t2)  - user_read_blocks(t1))
      + (user_write_latency(t2) - user_write_latency(t1)) / (user_write_blocks(t2) - user_write_blocks(t1)) ] / 1000

 

The division by 1000 is there to convert the result from microseconds to milliseconds.
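In code, the same calculation looks like this (a minimal sketch; sample_t1 and sample_t2 are two raw samples of the four counters, like the dict built in the sketch above, taken at times t1 and t2):

def disk_total_average_latency_ms(sample_t1, sample_t2):
    """Apply the formula above to two raw counter samples; result is in milliseconds."""

    def delta_average(counter, base):
        # "average" counters: delta(counter) / delta(base counter) over the interval.
        d_base = sample_t2[base] - sample_t1[base]
        # If no blocks were read/written in the interval, report 0 latency.
        return (sample_t2[counter] - sample_t1[counter]) / d_base if d_base else 0.0

    read_latency_us = delta_average("user_read_latency", "user_read_blocks")
    write_latency_us = delta_average("user_write_latency", "user_write_blocks")

    # Divide by 1000 to convert microseconds to milliseconds.
    return (read_latency_us + write_latency_us) / 1000.0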

 

Now, when I apply this formula, I get very small values compared to what the CLI displays.

As an example, here is what the formula gives for one disk, sampled at 19:33:48 and 19:36:01:

 

total_average_latency (19:33:48) = 0.10 ms
total_average_latency (19:36:01) = 0.14 ms

 

And here is what the CLI displays for the same disk :

san::*> statistics disk show -disk DDD

san : 10/24/2018 19:34:51

                            Busy *Total Read Write  Read   Write Latency
               Disk   Node   (%)    Ops  Ops   Ops (Bps)   (Bps)    (us)
------------------- ------ ----- ------ ---- ----- ----- ------- -------
                DDD san-02     2     12    1    11 23552 2608128    6001


san::*> statistics disk show -disk DDD

san : 10/24/2018 19:36:21
                            Busy *Total Read Write   Read   Write Latency
               Disk   Node   (%)    Ops  Ops   Ops  (Bps)   (Bps)    (us)
------------------- ------ ----- ------ ---- ----- ------ ------- -------
                DDD san-02     4     22    6    15 965632 3582976    6555

So with the SDK, the order of magnitude is around 0.1 milliseconds, whereas with the CLI it is around 6 milliseconds.

 

Here are my questions:

- Am I missing anything, or doing anything wrong in my calculation?

- Is there a known issue with the values displayed in the CLI?

 

I hope my issue is detailed enough for you guys to help me.

Should you need any additional information, please let me know.

 

Thank you for your time.

Regards
