Hi
As for monitoring: OCUM does a pretty good job of monitoring lag times and failed transfers.
Don't worry too much about sizing unless you have very large data sets (millions of files, TBs upon TBs of changes every day). If you see performance degradation, or the updates are not as fast as you'd expect, capture a perfstat and let NetApp point to the bottleneck. One recommendation I can give: don't run it frequently (every 10-15 minutes or less) on volumes with millions of files (I'm now fighting a deswizzling issue on one of my clusters because of it).
As for jumbo MTU between sites - not recommended, as you may find it isn't supported on all the network devices and service providers along the way; that will overload the network equipment and slow down your connectivity. Even if you do get it set up, the return is still pretty small - see:
"Partial obsolescence" - https://en.wikipedia.org/wiki/Jumbo_frame
http://www.mirazon.com/jumbo-frames-do-you-really-need-them/
https://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=2770&context=cstech
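To put a number on "the return is still pretty small", here's a back-of-the-envelope calculation (my own sketch, assuming plain IPv4 + TCP with no options, and standard Ethernet per-frame overhead of 14 B header + 4 B FCS + 8 B preamble + 12 B inter-frame gap):

```python
# How much wire bandwidth actually carries TCP payload at a given MTU.
ETH_OVERHEAD = 14 + 4 + 8 + 12   # Ethernet header + FCS + preamble + inter-frame gap
IP_TCP = 20 + 20                 # IPv4 + TCP headers, no options

def efficiency(mtu: int) -> float:
    """Fraction of raw wire bandwidth that is TCP payload at this MTU."""
    payload = mtu - IP_TCP       # bytes of application data per frame
    wire = mtu + ETH_OVERHEAD    # bytes actually occupying the wire per frame
    return payload / wire

std = efficiency(1500)    # ~94.9%
jumbo = efficiency(9000)  # ~99.1%
print(f"1500 MTU: {std:.1%}, 9000 MTU: {jumbo:.1%}, gain: {jumbo / std - 1:.1%}")
```

So best case, jumbo frames buy you roughly a 4-5% throughput improvement - usually not worth the risk of a blackholed path when some hop in the middle can't pass 9000-byte frames.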
As for compression: if your data is already compressed on disk, I think it stays compressed on the wire with DP-type mirroring. If you have 10Gb links between the sites, I'd say don't bother with compression anyway...
Gidi