Hello, we currently have a one-hour schedule for DP SnapMirror relationships between two clusters, and our aim is to reduce the schedule to 15 minutes. Can anyone tell me how to go about monitoring this (e.g. intercluster bandwidth used while transfers are happening) to see if it's possible?
Using the cluster peer ping command, I have determined that the RTT between the clusters is 0.2 ms. I did read the best practices doc (https://www.netapp.com/us/media/tr-4015.pdf), which recommends the 'System Performance Modeler' for sizing guidelines, but it appears I'm not authorised to use it.
We also don't have network compression enabled in the snapmirror policy, should this be enabled?
The intercluster links are 10 Gb with an MTU of 1500; can/should this be raised to 9000?
Thanks for any help
1 ACCEPTED SOLUTION
KernelPanic has accepted the solution
Hi
As for monitoring: OCUM (OnCommand Unified Manager) does a pretty good job of monitoring lag times and failed transfers.
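Beyond OCUM, lag can also be checked straight from the CLI with `snapmirror show -fields lag-time` and post-processed. A minimal sketch of that post-processing is below; the sample rows and the `d:hh:mm:ss` / `h:mm:ss` lag string format are assumptions for illustration, not output captured from a live cluster.

```python
# Hypothetical sketch: flag SnapMirror relationships whose replication lag
# exceeds a target RPO, given (destination-path, lag-time) pairs as they
# might appear in `snapmirror show -fields destination-path,lag-time` output.

def parse_lag(lag: str) -> int:
    """Convert a lag string like '0:12:40' (h:mm:ss) or
    '1:02:10:05' (d:hh:mm:ss) into seconds."""
    parts = [int(p) for p in lag.split(":")]
    while len(parts) < 4:          # left-pad missing day/hour fields
        parts.insert(0, 0)
    d, h, m, s = parts
    return d * 86400 + h * 3600 + m * 60 + s

def over_rpo(rows, rpo_seconds=900):
    """Return destination paths whose lag exceeds the RPO (default 15 min)."""
    return [path for path, lag in rows if parse_lag(lag) > rpo_seconds]

# Sample data (made up): vol2's last update is lagging well past 15 minutes.
sample = [
    ("dst_svm:vol1_dp", "0:12:40"),
    ("dst_svm:vol2_dp", "0:48:05"),
]
print(over_rpo(sample))
```

Running this against real `snapmirror show` output (e.g. via SSH on a schedule) gives a quick alarm for relationships that can't keep up with a 15-minute cadence.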
Don't worry too much about sizing unless you have very large data sets (millions of files, TBs upon TBs of changes every day). If you see performance degradation, or the updates are not as fast as you would expect, capture a perfstat and let NetApp point to the bottleneck. One recommendation I can give: don't run it frequently (around every 10-15 minutes or less) on volumes with millions of files (I am now fighting a deswizzling issue on one of my clusters because of it).
As for raising the MTU between sites: not recommended, as you may find that jumbo frames are not supported on all the network devices and service-provider links along the path. That can overload the network equipment and slow down your connectivity. Even if you do set it up, the return is still pretty small; see:
"Partial obsolescence" - https://en.wikipedia.org/wiki/Jumbo_frame
http://www.mirazon.com/jumbo-frames-do-you-really-need-them/
https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=2770&context=cstech
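The "pretty small return" can be checked with back-of-the-envelope arithmetic: for bulk TCP transfers, goodput efficiency is TCP payload per frame divided by total bytes on the wire. The sketch below assumes plain untagged Ethernet and IPv4/TCP headers without options.

```python
# Goodput efficiency of standard vs jumbo frames for a bulk TCP stream.
# Assumes no VLAN tag and no IP/TCP options.

ETH_OVERHEAD = 38   # preamble 8 + Ethernet header 14 + FCS 4 + inter-frame gap 12
IP_TCP = 40         # IPv4 header 20 + TCP header 20

def efficiency(mtu: int) -> float:
    """Fraction of on-wire bytes that are TCP payload for a given MTU."""
    return (mtu - IP_TCP) / (mtu + ETH_OVERHEAD)

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {efficiency(mtu):.1%} goodput efficiency")
```

The difference works out to roughly four percentage points of throughput, which is why jumbo frames rarely justify the interoperability risk on a WAN path you don't fully control.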
As for compression: if your data is already compressed on disk, I believe it stays compressed on the wire with DP-type mirroring. If you have 10 Gb links with sub-millisecond RTT, I'd say don't bother with network compression anyway...
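For reference, if you did want to try it, network compression is a SnapMirror policy attribute in ONTAP 9. A hedged sketch is below; the vserver and policy names are hypothetical, and you should verify the option name in your ONTAP release before running it.

```shell
# Hypothetical example, assuming ONTAP 9.x on the destination cluster.
# Enable SnapMirror network compression on an existing policy:
snapmirror policy modify -vserver dst_svm -policy mirror_15min \
    -is-network-compression-enabled true

# Confirm the setting; relationships using the policy pick it up
# on their next transfer:
snapmirror policy show -policy mirror_15min \
    -fields is-network-compression-enabled
```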
Gidi
Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK