The WAFL layer shows the work done by WAFL: queue and service time on the d-blade (the node that owns the volume), including time spent waiting on disk. IO size is capped at 64KB; larger client IOs are split into smaller ones before they reach WAFL.
End-to-end QoS shows the work as seen by the client and includes network delay (noticeable with larger-block SAN IO), throttling, the n-blade/scsi-blade (the node that owns the LIF), and the d-blade (the node that owns the volume), including time spent waiting on disk. IO size is whatever the client requested (it can be MBs in size).
WAFL is a reasonable proxy for the internal performance of the system, while end-to-end QoS is better for estimating the end-user experience (which may include network or host issues external to the storage).
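To make the IO-size difference between the two layers concrete, here is a tiny illustrative sketch (not anything from ONTAP or Harvest, just arithmetic based on the 64KB ceiling mentioned above) showing how one large client IO is counted at each layer:

```python
# Illustrative only: how a single client IO maps to WAFL-layer IOs.
import math

WAFL_MAX_IO = 64 * 1024  # 64KB max IO size at the WAFL layer

def wafl_sub_ios(client_io_bytes: int) -> int:
    """Number of WAFL-layer IOs one client IO is split into."""
    return math.ceil(client_io_bytes / WAFL_MAX_IO)

# A 1MB client write is one IO at the end-to-end QoS layer,
# but 16 separate 64KB IOs at the WAFL layer:
print(wafl_sub_ios(1024 * 1024))  # 16
```

This is also why ops counts and average IO size can differ between the two layers for the same workload.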
In this post I have a few snippets that talk more about WAFL vs QoS which might be useful to read.
Regarding your screenshot, I'm not sure what is happening. I find it quite unlikely that a single volume could do 4,000MB/s of work, so I suspect some sort of counter/collection issue. Or maybe cloning (ODX, VAAI, VSC-driven, etc.) is being reported here and the throughput really is that high.
What is this volume used for (vol0 for an SVM, maybe)? Does it hold data, and do clients access it? Is it a mirror source or destination? Anything else unique about this volume?
Storage Architect, NetApp EMEA (and author of Harvest)
Blog: It all begins with data
If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!