I have a LUN that holds the SQL data file, and it is showing increased LUN latency.
The data below is from this month's storage report and shows LUN latency increasing for the database data file LUNs. Past experience has shown that users become affected at around the 11 ms mark, so action should be taken before we reach that point. A NetApp support case has been opened to try to address the performance issue.
How do I show whether this is a caching issue or a fragmentation issue? I have been looking through the perfstat reports and everything looks healthy apart from the latency. I'm not sure what "cp_dirty_allocation_blks" means, but it is 1000+.
 Read  Write   Read  Write  Average   Queue  Lun
  Ops    Ops     kB     kB  Latency  Length
    0      0     32      0     7.54    0.07  /vol/sqf02/diskf.lun
    0      0     37      0     7.00    1.00  /vol/sqf02/diske.lun

 Read  Write   Read  Write  Average   Queue  Lun
  Ops    Ops     kB     kB  Latency  Length
    1      0     80      0    11.31    0.08  /vol/sqf02/diskf.lun
    2      0    123      0    10.96    0.08  /vol/sqf02/diske.lun
 CPU  NFS  CIFS HTTP  Total    Net kB/s     Disk kB/s  Tape kB/s Cache Cache   CP  CP  Disk   FCP iSCSI     FCP kB/s iSCSI kB/s
                               in   out   read  write read write   age   hit time  ty  util                 in   out    in  out
 20%    0  1541    0   2140   376  5881  12354   7140    0     0    19   94%  16%   F   28%   595     2   9228  5290     1    0
 26%    0  1162    0   2172   498  3993  14646  20264    0     0    18   95%  39%  3f   31%  1006     2  18393  8112     1    0
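In case it helps with a longer capture, here is a rough Python sketch I have been using to pull the caching-related columns (cache age, cache hit, CP time/type, disk utilisation) out of saved sysstat output. The file name sysstat.txt is just an example, and it assumes the 22-column layout shown above, so it may need tweaking for other ONTAP versions:

#!/usr/bin/env python
"""Rough sketch: summarise caching-related columns from saved sysstat output.
Assumes the 22-column layout shown above; adjust the indexes if yours differs."""
import sys

# column positions in the sysstat data rows pasted above (0-based)
COLUMNS = {"cpu": 0, "cache_age": 11, "cache_hit": 12, "cp_time": 13,
           "cp_type": 14, "disk_util": 15}

def rows(path):
    with open(path) as fh:
        for line in fh:
            fields = line.split()
            # data rows have 22 whitespace-separated fields and start with the CPU %
            if len(fields) == 22 and fields[0].endswith("%"):
                yield {name: fields[idx] for name, idx in COLUMNS.items()}

if __name__ == "__main__":
    for r in rows(sys.argv[1] if len(sys.argv) > 1 else "sysstat.txt"):
        print("cpu=%(cpu)s cache_age=%(cache_age)s cache_hit=%(cache_hit)s "
              "cp_time=%(cp_time)s cp_type=%(cp_type)s disk_util=%(disk_util)s" % r)

The per-LUN counters for diske.lun are below: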
lun:sqf02/diske.lun-XXXXZZZZZ:display_name:/vol/sqf02/diske.lun
lun:sqf02/diske.lun-XXXXZZZZZ:read_ops:0/s
lun:sqf02/diske.lun-XXXXZZZZZ:write_ops:2/s
lun:sqf02/diske.lun-XXXXZZZZZ:other_ops:0/s
lun:sqf02/diske.lun-XXXXZZZZZ:read_data:36798b/s
lun:sqf02/diske.lun-XXXXZZZZZ:write_data:19872b/s
lun:sqf02/diske.lun-XXXXZZZZZ:queue_full:0/s
lun:sqf02/diske.lun-XXXXZZZZZ:avg_latency:22.17ms <-------------------- Why?
lun:sqf02/diske.lun-XXXXZZZZZ:total_ops:3/s
lun:sqf02/diske.lun-XXXXZZZZZ:scsi_partner_ops:0/s
lun:sqf02/diske.lun-XXXXZZZZZ:scsi_partner_data:0b/s
lun:sqf02/diske.lun-XXXXZZZZZ:read_align_histo.0:98%
lun:sqf02/diske.lun-XXXXZZZZZ:read_align_histo.1:0%
lun:sqf02/diske.lun-XXXXZZZZZ:read_align_histo.2:0%
lun:sqf02/diske.lun-XXXXZZZZZ:read_align_histo.3:0%
lun:sqf02/diske.lun-XXXXZZZZZ:read_align_histo.4:0%
lun:sqf02/diske.lun-XXXXZZZZZ:read_align_histo.5:0%
lun:sqf02/diske.lun-XXXXZZZZZ:read_align_histo.6:0%
lun:sqf02/diske.lun-XXXXZZZZZ:read_align_histo.7:0%
lun:sqf02/diske.lun-XXXXZZZZZ:write_align_histo.0:86%
lun:sqf02/diske.lun-XXXXZZZZZ:write_align_histo.1:0%
lun:sqf02/diske.lun-XXXXZZZZZ:write_align_histo.2:0%
lun:sqf02/diske.lun-XXXXZZZZZ:write_align_histo.3:0%
lun:sqf02/diske.lun-XXXXZZZZZ:write_align_histo.4:0%
lun:sqf02/diske.lun-XXXXZZZZZ:write_align_histo.5:0%
lun:sqf02/diske.lun-XXXXZZZZZ:write_align_histo.6:0%
lun:sqf02/diske.lun-XXXXZZZZZ:write_align_histo.7:0%
lun:sqf02/diske.lun-XXXXZZZZZ:read_partial_blocks:1%
lun:sqf02/diske.lun-XXXXZZZZZ:write_partial_blocks:13%
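For what it's worth, here is the rough script I use to summarise those per-LUN counters (average latency, ops, alignment histogram bucket 0, partial blocks). It just parses a saved dump of the lun:<path>:<counter>:<value> lines like the paste above; the file name lun_stats.txt is only an example, and my reading of histogram bucket 0 as the "aligned" bucket is an assumption on my part:

#!/usr/bin/env python
"""Rough sketch: summarise a saved dump of per-LUN counters like the paste above.
The file name and the reading of histogram bucket 0 as 'aligned I/O' are my assumptions."""
import re
import sys
from collections import defaultdict

def parse(path):
    luns = defaultdict(dict)
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line.startswith("lun:"):
                continue
            # lines look like lun:<instance>:<counter>:<value>
            try:
                _, instance, counter, value = line.split(":", 3)
            except ValueError:
                continue
            value = (value.split() or [""])[0]        # drop trailing annotations
            number = re.match(r"[-0-9.]+", value)     # strip units such as ms, /s, b/s, %
            luns[instance][counter] = float(number.group()) if number else value
    return luns

def report(luns):
    for instance, c in sorted(luns.items()):
        print(instance)
        print("  avg_latency          : %.2f ms" % c.get("avg_latency", 0.0))
        print("  read/write/other ops : %d / %d / %d per sec"
              % (c.get("read_ops", 0), c.get("write_ops", 0), c.get("other_ops", 0)))
        # my understanding: bucket 0 is the aligned bucket, so low values here
        # (or high partial_blocks) would point at misaligned I/O rather than caching
        print("  aligned reads/writes : %d%% / %d%%"
              % (c.get("read_align_histo.0", 0), c.get("write_align_histo.0", 0)))
        print("  partial reads/writes : %d%% / %d%%"
              % (c.get("read_partial_blocks", 0), c.get("write_partial_blocks", 0)))

if __name__ == "__main__":
    report(parse(sys.argv[1] if len(sys.argv) > 1 else "lun_stats.txt"))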
Thanks if you know the answer.
Bren