<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: bad performance read/write latency and snapmirror lag in ONTAP Discussions</title>
    <link>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113997#M24209</link>
    <description>&lt;P&gt;There are too many variables to really narrow it down. The snippet of Grafana output shows some pretty high disk utilization, so I'd start with that. Try disabling some tasks...is it OK to disable both SnapMirror and dedupe for a time and see whether performance returns to an acceptable level for the clients? If not, try disabling one or the other and see how it affects performance.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Reducing the frequency of the SnapMirror jobs could help; so could alternating dedupe and SnapMirror so that they aren't both running at the same time. You said the WAN isn't an issue, but you're averaging over 37MB of data coming into the system every second (net kB/s in)...if you're replicating all of that data, then you need a WAN pipe which can support at least that much bandwidth (&amp;gt; 300Mb/s). Check the SnapMirror transfer sizes to help determine how much bandwidth each volume needs, and divide that by how much throughput is available to determine windows.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Do a reallocate measure on the volumes to determine whether reallocation would help. The chains in your statit are OK - not great, but not terrible either - so it might be worth doing a reallocate measure on the aggregates as well. Be aware that reallocate will consume some IO, so it could impact latency if it's already bad.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you haven't opened a support case, I would do so. Reach out to your account team to have them escalate if needed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Andrew&lt;/P&gt;</description>
    <pubDate>Mon, 21 Dec 2015 15:50:15 GMT</pubDate>
    <dc:creator>asulliva</dc:creator>
    <dc:date>2015-12-21T15:50:15Z</dc:date>
    <item>
      <title>bad performance read/write latency and snapmirror lag</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113873#M24172</link>
      <description>&lt;P&gt;Brief overview:&lt;/P&gt;&lt;P&gt;NetApp 8.1.4P9&lt;/P&gt;&lt;P&gt;2 x FAS6020 in a FlexPod setup (configured in 2013)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Aggr 1 Fsata0 with Flash Pool (24 volumes) (124TB used) (NFS datastores + vfiler CIFS)&lt;/P&gt;&lt;P&gt;Aggr Sas0 (48TB used)&lt;/P&gt;&lt;P&gt;Aggr Sata (50TB)&lt;/P&gt;&lt;P&gt;Aggr Sata (45TB)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;NetApp connected to a Nexus 5K via 10GbE (FlexPod setup).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Summary of problems: bad read/write performance | SnapMirror taking ages to complete | throughput is very low.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We have been experiencing bad read/write latency since the summer. We upgraded to 8.1.4P9 in September, which made the problem go away for four weeks. Typical symptoms: users can't read or write small documents, such as a 1MB Word document, or it takes up to 1 minute to open them. The problem seems to be our netapp02 controller: pings from controller 1 to controller 2 show high latency, but when you ping anything else on the network from controller 2 you get sub-1ms responses.&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="netapp.png" style="width: 999px;"&gt;&lt;img src="https://community.netapp.com/t5/image/serverpage/image-id/4487i0BBD1BC797DD9197/image-size/large?v=v2&amp;amp;px=999" role="button" title="netapp.png" alt="netapp.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="snapmirrorlag.png" style="width: 879px;"&gt;&lt;img src="https://community.netapp.com/t5/image/serverpage/image-id/4488iD538B4F0F84F0E31/image-size/large?v=v2&amp;amp;px=999" role="button" title="snapmirrorlag.png" alt="snapmirrorlag.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper" image-alt="snapmirror.png" style="width: 691px;"&gt;&lt;img src="https://community.netapp.com/t5/image/serverpage/image-id/4489iDD7103B6CC97170C/image-size/large?v=v2&amp;amp;px=999" role="button" title="snapmirror.png" alt="snapmirror.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;SnapMirror lag times are horrendous; please see the screenshot.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;NetApp can't find anything wrong in the perfstats; we think it's a bug or our config is incorrect somewhere.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please could you guys help me investigate.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt; sysstat -m
 ANY  AVG  CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7
100%  59%   58%  61%  58%  59%  59%  61%  59%  59%
100%  53%   52%  54%  52%  51%  52%  55%  53%  52%
 99%  50%   49%  51%  49%  49%  50%  53%  50%  50%
100%  57%   56%  58%  56%  56%  57%  59%  57%  57%
 99%  51%   50%  51%  50%  50%  51%  53%  51%  51%
 99%  48%   47%  49%  47%  47%  48%  51%  48%  48%
 99%  51%   49%  52%  49%  49%  51%  54%  51%  50%
 99%  50%   48%  51%  48%  49%  50%  52%  49%  49%
 99%  50%   49%  52%  49%  48%  50%  52%  50%  49%
 99%  48%   47%  50%  47%  48%  48%  51%  48%  49%
 99%  52%   52%  53%  52%  52%  52%  54%  52%  52%
 99%  51%   49%  52%  50%  50%  51%  54%  51%  51%
100%  53%   52%  54%  51%  52%  52%  56%  52%  53%
100%  53%   52%  55%  52%  53%  53%  56%  54%  53%
 99%  49%   48%  51%  48%  49%  49%  52%  49%  49%&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jun 2025 22:29:36 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113873#M24172</guid>
      <dc:creator>Umar</dc:creator>
      <dc:date>2025-06-04T22:29:36Z</dc:date>
    </item>
    <item>
      <title>Re: bad performance read/write latency and snapmirror lag</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113878#M24173</link>
      <description>&lt;P&gt;Hi,&lt;BR /&gt;I would say that the aggregate with SATA disks + Flash Pool is slowing down your system a bit, as it's the most utilized based on the provided statit output.&lt;BR /&gt;Flash Pool caches reads and random overwrites (operations smaller than 16KB) - it makes sense to use it for CIFS shares with many small files, but not for datastores.&lt;BR /&gt;Datastores usually need quick read response, as the OS of your VMs lives on those volumes. For that purpose Flash Cache is the better solution (ideally with dedupe enabled).&lt;BR /&gt;Flash Cache is PCIe based, so it is much faster than accessing SSDs at the disk layer as Flash Pool does.&lt;BR /&gt;Also, in the case of Flash Pool, all hot blocks (the most accessed blocks) need to be written to the SSDs during the consistency point. That means you have to wait for those writes to complete before you get the benefit of the cache.&lt;BR /&gt;I would recommend (if you have Flash Cache installed) creating a new datastore on the aggregate with SAS disks and migrating some VMs with heavy load there.&lt;BR /&gt;&lt;BR /&gt;Another thing that is not best practice: you are mixing disk types within one controller. If you have SATA disks and SAS disks on one controller, you slow down the consistency point, because you still have to wait for the SATA disks to complete their writes.&lt;BR /&gt;&lt;BR /&gt;That's just my opinion &lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt;&lt;/P&gt;
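&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A quick way to check whether Flash Cache is actually absorbing reads - just a sketch using the standard 7-Mode counters, run on the busy controller:&lt;/P&gt;&lt;PRE&gt;netapp02&amp;gt; options flexscale.enable        # confirm Flash Cache (WAFL external cache) is enabled
netapp02&amp;gt; stats show -p flexscale-access  # per-second hit/miss rates for the cache&lt;/PRE&gt;</description>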
      <pubDate>Thu, 17 Dec 2015 20:50:01 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113878#M24173</guid>
      <dc:creator>MACAKIGO1</dc:creator>
      <dc:date>2015-12-17T20:50:01Z</dc:date>
    </item>
    <item>
      <title>Re: bad performance read/write latency and snapmirror lag</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113886#M24175</link>
      <description>&lt;P&gt;Do you have compression or deduplication jobs running at the same time as your SnapMirror transfers? How full are your aggregates? Have you done a reallocate measure to check for noncontiguous free space?&lt;/P&gt;
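&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;For reference, these are the 7-Mode commands I'd use to answer each of those (the volume name is a placeholder):&lt;/P&gt;&lt;PRE&gt;netapp02&amp;gt; sis status                       # dedupe/compression jobs and their current state
netapp02&amp;gt; df -A -h                         # aggregate capacity and percentage used
netapp02&amp;gt; reallocate measure -o /vol/vol1  # one-shot layout measurement; logs an optimization rating&lt;/PRE&gt;</description>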
      <pubDate>Thu, 17 Dec 2015 17:03:58 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113886#M24175</guid>
      <dc:creator>asulliva</dc:creator>
      <dc:date>2015-12-17T17:03:58Z</dc:date>
    </item>
    <item>
      <title>Re: bad performance read/write latency and snapmirror lag</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113927#M24186</link>
      <description>&lt;P&gt;Hi&amp;nbsp;&lt;a href="https://community.netapp.com/t5/user/viewprofilepage/user-id/8479"&gt;@asulliva&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We need to find out why it's taking so long to do a SnapMirror transfer (the WAN is not the problem).&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Do you have compression or deduplication jobs which are running at the same time as your SnapMirrors?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;We have SnapMirror transfers running continuously, and dedupe runs concurrently with these if it runs at all.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;Path                           State      Status     Progress
/vol/vol_188_data188_T3_01   Enabled    Idle       Idle for 93:49:11
/vol/vol_dart_data             Disabled   Idle       Idle for 2950:24:43
/vol/vol_188_documentumsql_backup Disabled   Idle       Idle for 3102:04:50
/vol/vol_188_188cvma           Disabled   Idle       Idle for 2943:03:06
/vol/vol_188direct_pool_1_j  Disabled   Idle       Idle for 2950:23:22
/vol/vol_188direct_n         Disabled   Idle       Idle for 2950:22:11
/vol/vol_188direct_pool2_d   Disabled   Idle       Idle for 2949:41:44
/vol/vol_188direct_pool3_e   Disabled   Idle       Idle for 2950:03:09
/vol/vol_188direct_q         Disabled   Idle       Idle for 2950:21:51
/vol/vol_188direct_r         Disabled   Idle       Idle for 2950:16:06
/vol/vol_188icedblive_s        Disabled   Idle       Idle for 2942:49:07
/vol/vol_188icedblive_r        Disabled   Idle       Idle for 2949:30:52
/vol/vol_188_dss_clust         Enabled    Idle       Idle for 66:35:05
/vol/vol_188_dss_rdm_map       Enabled    Idle       Idle for 43:32:23
/vol/vol_vfiler_medical_records_images0 Disabled   Idle       Idle for 2933:17:27
/vol/vol_vfiler_188doccache_cache Disabled   Idle       Idle for 2934:03:20
/vol/vol_188_dss_clust_file188 Enabled    Idle       Idle for 14:53:40
/vol/vol_188_vfilercifs_departments_01 Enabled    Idle       Idle for 120:30:15
/vol/vol_188_vfilercifs_applications_01 Enabled    Idle       Idle for 20:47:53
/vol/vol_vfiler_records_images1 Disabled   Idle       Idle for 2944:24:43
/vol/vol_vfiler_records_images2 Disabled   Idle       Idle for 2944:24:43
/vol/vol_188_vfilercifs_backups_01 Disabled   Idle       Idle for 3120:24:42
/vol/vol_188_vfilercifs_users_01 Enabled    Idle       Idle for 20:42:47
/vol/vol_188_data188_T2_01   Enabled    Idle       Idle for 12:23:52
/vol/vol_vfiler_images0_test Disabled   Idle       Idle for 2944:24:43
/vol/vol_vfiler_images1_train Disabled   Idle       Idle for 2944:23:40
/vol/vol_vfiler_medical_records_dart_images2_live Disabled   Idle       Idle for 2944:23:43
/vol/vol_188_data188_T2_04   Disabled   Idle       Idle for 3076:43:44
/vol/vol_vfiler_retinal_image188 Disabled   Idle       Idle for 2943:28:41
/vol/vol_188_data188_T4_01   Enabled    Idle       Idle for 165:00:53
/vol/vol_188_data188_T4_03   Enabled    Idle       Idle for 148:12:06
/vol/vol_188_vfilercifs_archive_01 Disabled   Idle       Idle for 2941:37:40
/vol/vol_188_vfilercifs_backups_02 Disabled   Idle       Idle for 3123:58:09
/vol/vol_188_data188_T4_06   Disabled   Idle       Idle for 3076:43:54&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;How full are your aggregates?&lt;/P&gt;&lt;P&gt;A NetApp consultant's health check (from Neos) said we can go up to 95% utilisation on a large aggregate; we are currently at 90% for fsata0.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Have you done a reallocate measure to check for noncontiguous free space?&lt;/P&gt;&lt;P&gt;No, we haven't done this.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;netapp02&amp;gt; sysstat -x 2
 CPU    NFS   CIFS   HTTP   Total     Net   kB/s    Disk   kB/s    Tape   kB/s  Cache  Cache    CP  CP  Disk   OTHER    FCP  iSCSI     FCP   kB/s   iSCSI   kB/s
                                       in    out    read  write    read  write    age    hit  time  ty  util                            in    out      in    out
 93%   7703   7416      0   15438   24008 114883  156912  66706       0      0     1     95%   69%  :    32%       3    316      0     155   2978       0      0
 94%   5332   6664      0   12511   24394 113986  191314  70969       0      0     1     96%   21%  Hn   45%       0    515      0     663   4527       0      0
 93%   6986   8064      0   15282   27105 101081  151629 119054       0      0     0s    94%  100%  :f   32%       0    232      0     120   2354       0      0
 91%   6623   7459      0   14320   24715 133290  197996 116298       0      0     0s    92%  100%  :f   31%       4    234      0     187   2642       0      0
 95%   6043   6910      0   13296   33680 123154  181144   9527       0      0    36s    95%   29%  Hn   36%       0    321      0     212   2306       0      0
 94%   4993   8246      0   13591   27255  66921  144321 145748       0      0     0s    96%  100%  :f   38%       2    350      0     187   3107       0      0
 91%   5346  10869      0   17902   25553  71091  155638 155790       0      0     0s    95%  100%  :v   35%       0   1687      0   14114   1678       0      0
 92%   4914   8204      0   13328   42350  78161  184114 119610       0      0     1     96%   46%  Hs   44%       0    210      0     200   1706       0      0
 84%   5295   7032      0   12760   26394  73475  142920 132128       0      0     1     95%  100%  :f   33%       4    407      0     292   3103       0      0
 88%   6525   8547      0   15443   38743  90686  115278  31376       0      0     1     96%   35%  :    43%     130    241      0     411   2083       0      0
 92%   6419   8500      0   15200   47308 118509  193627  80080       0      0     1     95%   18%  Hn   37%       2    279      0     598   2113       0      0
 92%   5533   8235      0   13942   20524  86481  170934 168575       0      0     1     93%  100%  :f   41%       0    174      0     133   1710       0      0
 92%   8000   6383      0   14973   48441 133375  165988  70304       0      0    47s    95%   66%  :    32%       0    590      0     632   4876       0      0
 97%   7145   6787      0   14285   30185 101420  175611  80692       0      0     1     97%   26%  Hs   42%       2    351      0     301   2997       0      0
 91%   6967   6911      0   14224   46559  96112  152779 122749       0      0     0s    94%  100%  :f   39%       0    346      0     645   2594       0      0
 88%   7634   7319      0   15310   28505 126086  160210 101060       0      0     1     95%  100%  :f   37%       2    355      0     198   3279       0      0
 97%   7885   6401      0   14636   42104 211693  299165 134110       0      0     0s    95%   58%  Hs   50%       3    347      0    1081   2406       0      0
 98%   7764   9916      0   18045  100886  88082  220507 176332       0      0    48s    93%  100%  :f   57%       1    364      0   31110  32819       0      0
 99%   8120   7809      0   16458   58469 115699  300433 213909       0      0     1     96%   99%  Zs   63%       5    524      0   42626  45206       0      0&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 21 Dec 2015 15:10:28 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113927#M24186</guid>
      <dc:creator>Umar</dc:creator>
      <dc:date>2015-12-21T15:10:28Z</dc:date>
    </item>
    <item>
      <title>Re: bad performance read/write latency and snapmirror lag</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113928#M24187</link>
      <description>&lt;P&gt;We have a 512GB Flash Cache card.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I understand your view on keeping a single disk type per controller, but wouldn't that impact the N+1 redundancy we were sold on?&lt;/P&gt;</description>
      <pubDate>Fri, 18 Dec 2015 15:58:13 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113928#M24187</guid>
      <dc:creator>Umar</dc:creator>
      <dc:date>2015-12-18T15:58:13Z</dc:date>
    </item>
    <item>
      <title>Re: bad performance read/write latency and snapmirror lag</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113994#M24206</link>
      <description>&lt;P&gt;&lt;a href="https://community.netapp.com/t5/user/viewprofilepage/user-id/8479"&gt;@asulliva&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&lt;a href="https://community.netapp.com/t5/user/viewprofilepage/user-id/12394"&gt;@MACAKIGO1&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Any advice, guys?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Do you think we need to stagger our SnapMirror schedule, given that all volumes currently start SnapMirror transfers every 15 minutes? Something like the sketch below is what I have in mind.&lt;/P&gt;
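&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;(Illustration only - the destination system and mirror volume names here are made up; the real /etc/snapmirror.conf on the destination would list every volume:)&lt;/P&gt;&lt;PRE&gt;# /etc/snapmirror.conf on the destination - minute fields staggered so
# the transfers don't all kick off on the same 15-minute boundary
netapp01:vol_188_data188_T2_01       dr01:vol_188_data188_T2_01_m       - 0,15,30,45 * * *
netapp01:vol_188_dss_clust           dr01:vol_188_dss_clust_m           - 5,20,35,50 * * *
netapp02:vol_188_vfilercifs_users_01 dr02:vol_188_vfilercifs_users_01_m - 10,25,40,55 * * *&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;Umar&lt;/P&gt;</description>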
      <pubDate>Mon, 21 Dec 2015 15:10:02 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113994#M24206</guid>
      <dc:creator>Umar</dc:creator>
      <dc:date>2015-12-21T15:10:02Z</dc:date>
    </item>
    <item>
      <title>Re: bad performance read/write latency and snapmirror lag</title>
      <link>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113997#M24209</link>
      <description>&lt;P&gt;There are too many variables to really narrow it down. The snippet of Grafana output shows some pretty high disk utilization, so I'd start with that. Try disabling some tasks...is it OK to disable both SnapMirror and dedupe for a time and see whether performance returns to an acceptable level for the clients? If not, try disabling one or the other and see how it affects performance.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Reducing the frequency of the SnapMirror jobs could help; so could alternating dedupe and SnapMirror so that they aren't both running at the same time. You said the WAN isn't an issue, but you're averaging over 37MB of data coming into the system every second (net kB/s in)...if you're replicating all of that data, then you need a WAN pipe which can support at least that much bandwidth (&amp;gt; 300Mb/s). Check the SnapMirror transfer sizes to help determine how much bandwidth each volume needs, and divide that by how much throughput is available to determine windows.&lt;/P&gt;
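&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;As a rough sketch of that window math (the numbers here are illustrative, not from your system):&lt;/P&gt;&lt;PRE&gt;netapp02&amp;gt; snapmirror status -l   # long form: last transfer size and duration per relationship

# Example: a volume moving ~20GB of changed data per update over a
# 300Mb/s (~37.5MB/s) WAN link:
#   20480MB / 37.5MB/s = ~546s = ~9 minutes per update
# so a 15-minute schedule leaves room for only one or two such volumes
# transferring at a time on that link.&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Do a reallocate measure on the volumes to determine whether reallocation would help. The chains in your statit are OK - not great, but not terrible either - so it might be worth doing a reallocate measure on the aggregates as well. Be aware that reallocate will consume some IO, so it could impact latency if it's already bad.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;If you haven't opened a support case, I would do so. Reach out to your account team to have them escalate if needed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Andrew&lt;/P&gt;</description>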
      <pubDate>Mon, 21 Dec 2015 15:50:15 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Discussions/bad-performance-read-write-latency-and-snapmirror-lag/m-p/113997#M24209</guid>
      <dc:creator>asulliva</dc:creator>
      <dc:date>2015-12-21T15:50:15Z</dc:date>
    </item>
  </channel>
</rss>