We recently had a CIFS client outage: clients were disconnected from the volume/CIFS share, and when we analyzed the data from NMC we saw excessive latency on the CIFS protocol (250 milliseconds, with a spike of 1500 milliseconds at around 1:30pm EST before that). IOPS were very low and stable, well within the PAM/disk capacity range, but the latencies were not.

The problem is identifying the root cause: both read and write latency on the na01_customer_data volume are low, but the "other_latency" figure in the volume latency view in NMC is high (110 milliseconds), NOT the read/write latencies. I need to work out what CIFS other_latency actually is, and I can't tell which volume/qtree or CIFS client is causing it, since none of them have high IOPS at the moment.
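For context on what the counter measures: "other" ops are the non-read/non-write (mostly metadata) operations, and other_latency is their average response time. As a hedged sketch only (assuming the usual pattern for ONTAP raw counters, where the latency counter accumulates microseconds and its base counter counts the ops), the average NMC displays could be derived from two samples like this; the sample numbers below are illustrative, not from the actual system:

```python
# Sketch: average "other" (non-read/write) latency between two samples of
# cumulative volume counters. Counter names (other_latency, other_ops) are
# assumptions based on typical ONTAP volume counter naming.

def avg_other_latency_ms(sample1: dict, sample2: dict) -> float:
    """Average latency of metadata/'other' ops over the interval, in ms."""
    d_lat_us = sample2["other_latency"] - sample1["other_latency"]  # accumulated microseconds
    d_ops = sample2["other_ops"] - sample1["other_ops"]             # ops completed in interval
    if d_ops == 0:
        return 0.0
    return d_lat_us / d_ops / 1000.0  # microseconds per op -> milliseconds

# Hypothetical interval: 500 metadata ops accumulating 55,000,000 us of latency
s1 = {"other_latency": 0, "other_ops": 0}
s2 = {"other_latency": 55_000_000, "other_ops": 500}
print(avg_other_latency_ms(s1, s2))  # 110.0 ms, i.e. the figure NMC shows
```

The point of the sketch: a small number of slow metadata operations (opens, lookups, attribute queries) can push other_latency to 110 ms while read/write latency and total IOPS both stay low, which would match what we're seeing.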
Please let me know your thoughts.
An NFS client accessing the same volume is also seeing this issue; we're observing rblk_nor/s of almost 6260 on it.