ONTAP Discussions

Intercluster snapmirror schedule

KernelPanic

Hello, we currently have a one-hour schedule for DP SnapMirror relationships between two clusters, and our aim is to reduce the schedule to 15 minutes. Can anyone tell me how to go about monitoring this (e.g. intercluster bandwidth used while transfers are happening) to see if it's possible?
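One rough way to sanity-check whether a 15-minute schedule is feasible is to compare the size of a typical incremental update against what the link can move in that window. A minimal sketch with hypothetical numbers — in practice you'd read the real transfer size and duration from `snapmirror show -fields last-transfer-size,last-transfer-duration`:

```python
# Rough feasibility check for a 15-minute SnapMirror schedule.
# The delta size below is a made-up example; substitute values from
# "snapmirror show -fields last-transfer-size,last-transfer-duration".

def required_seconds(transfer_bytes: int, link_bits_per_sec: float,
                     efficiency: float = 0.7) -> float:
    """Time to move one incremental update, using a share of the link.

    efficiency < 1 leaves headroom for protocol overhead and other
    traffic sharing the intercluster LIFs (assumed value, tune to taste).
    """
    usable_bps = link_bits_per_sec * efficiency
    return transfer_bytes * 8 / usable_bps

# Example: a 20 GiB delta over a 10 Gb/s intercluster link
delta_bytes = 20 * 1024**3
t = required_seconds(delta_bytes, 10e9)
print(f"{t:.0f} s per update")  # well under the 900 s (15 min) window
```

If the computed time approaches the schedule interval, transfers will start overlapping and lag will grow, which is exactly what OCUM's lag monitoring would then show.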

 

Using the cluster peer ping command I have determined that the RTT between the clusters is 0.2 ms. I did read the best practices doc (https://www.netapp.com/us/media/tr-4015.pdf), and it recommends the 'System Performance Modeler' for sizing guidelines, but it appears I'm not authorised to use it.
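For context, a 0.2 ms RTT on a 10 Gb link implies a very small bandwidth-delay product, so latency is unlikely to be the limiting factor here. A quick calculation (assumptions: the 10 Gb/s link speed stated in this thread, and the measured RTT):

```python
# Bandwidth-delay product: how much data is "in flight" on the link.
rtt_s = 0.0002        # 0.2 ms, from "cluster peer ping"
link_bps = 10e9       # 10 Gb/s intercluster link
bdp_bytes = link_bps * rtt_s / 8
print(f"BDP = {bdp_bytes / 1024:.0f} KiB")  # ≈ 244 KiB
```

A TCP window of a few hundred KiB is enough to keep this pipe full, so throughput will be bounded by disk, CPU, or schedule overlap rather than by the network round trip.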

 

We also don't have network compression enabled in the SnapMirror policy; should it be enabled?

The intercluster links are 10 Gb with an MTU of 1500; can/should this be raised to 9000?

 

Thanks for any help

 

 

1 ACCEPTED SOLUTION

GidonMarcus

Hi

 

As for monitoring: OCUM (OnCommand Unified Manager) does a pretty good job of monitoring lag times and failed runs.

Don't worry too much about sizing unless you have very large data sets (millions of files, TBs upon TBs of changes every day). If you see performance degradation, or the updates are not as fast as you would expect, capture a perfstat and let NetApp point to the bottleneck. One recommendation I can give: don't run it frequently (around 10-15 minutes or less) on volumes with millions of files (I'm now fighting a deswizzling issue on one of my clusters because of it).

 

 

As for jumbo MTU between sites: not recommended, as you may find it's not supported on all the network devices and service providers along the way, which can overload network equipment and slow down your connectivity. Even if you do set it up, the return is still pretty small; see:

"Partial obsolescence" - https://en.wikipedia.org/wiki/Jumbo_frame 

http://www.mirazon.com/jumbo-frames-do-you-really-need-them/

https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=2770&context=cstech
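To put a number on that "pretty small" return, here is a back-of-the-envelope wire-efficiency comparison. Assumptions: 40 bytes of TCP/IPv4 headers per packet and 38 bytes of Ethernet framing overhead (header, FCS, preamble, inter-frame gap) per frame, with no TCP options:

```python
# Per-frame overhead: how much of the wire actually carries payload.
def wire_efficiency(mtu: int, l3l4_headers: int = 40,
                    eth_overhead: int = 38) -> float:
    payload = mtu - l3l4_headers          # bytes of application data per frame
    return payload / (mtu + eth_overhead) # fraction of wire time spent on payload

std = wire_efficiency(1500)    # ~94.9%
jumbo = wire_efficiency(9000)  # ~99.1%
print(f"1500: {std:.1%}, 9000: {jumbo:.1%}, gain: {jumbo - std:.1%}")
```

So even in the best case, jumbo frames recover only about 4% of throughput — which is rarely worth the risk of a path-MTU mismatch on an intercluster link.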

 

 

As for compression: if your data is already compressed on disk, I think it stays compressed on the wire with DP-type mirroring. If you have 10 Gb links, I'd say don't bother with compression anyway...

 

 

Gidi

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK

