SnapMirror Replication Calculator

Posted ‎2009-03-03 10:24 AM — edited on ‎2014-09-26 12:54 PM by Community Manager

I created an Excel spreadsheet when I was tasked with designing a SnapMirror replication schedule for a customer that only had an 8 Mbit line. The problem was that replication would fail if more than one transfer happened at the same time, so I needed a way of calculating the rate of change and designing the replication schedule around it.

 

The spreadsheet will calculate the snapmirror.conf schedule, and it also gives you the commands to create, restrict and initialize the volumes.
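For reference, the output amounts to a standard 7-Mode setup along these lines (the filer, aggregate and volume names here are placeholders of my own, and you should check the exact syntax against your Data ONTAP release):

```
# On the destination filer: create a restricted target volume
dstfiler> vol create srcfiler_vol1 aggr1 500g
dstfiler> vol restrict srcfiler_vol1

# Baseline transfer from the source
dstfiler> snapmirror initialize -S srcfiler:vol1 dstfiler:srcfiler_vol1

# /etc/snapmirror.conf entry: replicate daily at 22:30
# (schedule fields: minute hour day-of-month day-of-week)
srcfiler:vol1 dstfiler:srcfiler_vol1 - 30 22 * *
```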

 

I’ve not protected the spreadsheet, so adapt it all you like, but I’d appreciate any feedback from people who have taken this and improved or adapted it.

 

Sections that are meant to be editable…

 

Source Filer: Enter the name of the SnapMirror source filer. The spreadsheet is designed to work per destination filer, so put in as many different sources as you need; each line is calculated independently.

 

Destination Filer: The spreadsheet is designed to be per destination filer, so just fill in the top record. You can change this per line if you want.

 

Volume: This is the name of the primary volume. The destination volume is created as sourcefiler_sourcevolume; this was built for a shared platform, so I needed to be able to differentiate volumes from different sources.

 

Size: The size in GB of the source volume. The spreadsheet shows most of its workings, so you can see how it breaks the GB size down into MB and then applies the rate of change.
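The workings boil down to some simple arithmetic, which can be sketched like this (the figures in the example are illustrative, not values from the spreadsheet):

```python
def transfer_minutes(size_gb, change_pct, line_mbit):
    """Estimate how long one SnapMirror update takes.

    size_gb    - source volume size in GB
    change_pct - daily rate of change as a percentage
    line_mbit  - link speed in megabits/sec (8 megabits = 1 megabyte)
    """
    size_mb = size_gb * 1024               # GB -> MB
    changed_mb = size_mb * change_pct / 100
    mb_per_sec = line_mbit / 8             # megabits/sec -> MB/sec
    return changed_mb / mb_per_sec / 60    # seconds -> minutes

# Example: a 100 GB volume with 5% daily change over an 8 Mbit line
print(round(transfer_minutes(100, 5, 8), 1))  # -> 85.3
```

This is also where the bandwidth constant described below comes in: the link speed in megabits is divided by 8 to get megabytes per second.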

 

Start Time (24 hour): This allows you to alter the start time of the replication schedule. You may have a tape backup that needs to run at midnight, or another operation, so you want to start the replication later in the morning, or at any time of day to be honest!

 

Rate of Change: This is calculated from the percentage value on the far right, but you may want to enter a more accurate rate of change manually if it’s known.

 

Over on the far right of the spreadsheet are some constants that apply to the whole page.

 

Global Daily Rate of Change: A percentage rate of change. This was useful as I didn’t know the actual rate of change at the time. Ideally you would know the exact rate of change and fill in the column to the left.

 

Bandwidth Available: This is in megabits/sec; the spreadsheet converts this into megabytes for its calculations.

 

Variant on Schedule: This allows you to put padding around each schedule in case of errors on the line or variations in the rate of change.
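To show how the padding feeds into staggered start times, here is a minimal sketch; `build_schedule`, the volume names and durations are my own illustration, not the spreadsheet's actual formulas:

```python
from datetime import datetime, timedelta

def build_schedule(start, transfers, variant_pct):
    """Stagger replication start times so transfers never overlap.

    start       - first start time as "HH:MM" (24 hour)
    transfers   - list of (volume, estimated_minutes) pairs
    variant_pct - padding added to each slot for errors or variation
    """
    t = datetime.strptime(start, "%H:%M")
    slots = []
    for volume, minutes in transfers:
        slots.append((volume, t.strftime("%H:%M")))
        # Reserve the estimated time plus the padding before the next slot
        t += timedelta(minutes=minutes * (1 + variant_pct / 100))
    return slots

# Three volumes starting at 22:00 with 20% padding
print(build_schedule("22:00", [("vol1", 30), ("vol2", 45), ("vol3", 60)], 20))
# -> [('vol1', '22:00'), ('vol2', '22:36'), ('vol3', '23:30')]
```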

 

Thin Provisioned Destination Volume: This is the size that the “vol create” command will thin provision the volumes to. I never create the destination volume at the same size as the source, and always thin provision, so that changes to the primary volume will automatically allow the destination to grow within the thin provisioned limits.
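In 7-Mode terms, thin provisioning the destination means creating it with no space guarantee, roughly as follows (names and the 1t size are placeholders; verify the option syntax on your release):

```
# -s none removes the space guarantee, so the 1t size is a thin limit
dstfiler> vol create srcfiler_vol1 -s none aggr1 1t
```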

 

Hopefully you all find this useful, and feedback is much appreciated.

 

 

Comments

Hi Chris,

Excellent document... but is there a command to accurately work out the daily rate of change for the whole filer? I have been manually measuring this by running the 'snap delta' command, but this will only show the hourly snapshot delta for each volume individually.

Thanks in advance,

You should be able to pull out some information from Operations Manager, but globally across the filer would probably be an issue. Unfortunately "snap delta" can be quite intensive on the filer as it fully inspects the snapshots, so this has never been included by default in any other way. Potentially you could script this and collate it within the script.

Sorry, not very helpful advice!
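As a starting point for the scripting idea above, here is a sketch that collates one volume's 'snap delta' output by summing its "KB changed" column. The sample text and the column layout are my assumptions about the 7-Mode output format, so verify them against your own filer before relying on this:

```python
import re

# Hypothetical 'snap delta' output for one volume (assumed format --
# the real column layout may differ on your Data ONTAP release).
SAMPLE = """\
Volume vol1
working...

From Snapshot   To                 KB changed  Time         Rate (KB/hour)
--------------- ------------------ ----------- ------------ ---------------
hourly.0        Active File System 524288      0d 01:00     524288.000
hourly.1        hourly.0           262144      0d 01:00     262144.000
"""

def total_kb_changed(snap_delta_output):
    """Sum the 'KB changed' column across every delta row.

    Data rows are recognised by the KB figure immediately followed by
    the elapsed-time field (e.g. '0d 01:00').
    """
    total = 0
    for line in snap_delta_output.splitlines():
        m = re.search(r"(\d+)\s+\d+d\s+\d\d:\d\d", line)
        if m:
            total += int(m.group(1))
    return total

print(total_kb_changed(SAMPLE))  # -> 786432 (KB, i.e. 768 MB changed)
```

Run this against the captured output of each volume in turn and add up the totals to get a filer-wide figure.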

Thanks Chris,

How would you best recommend measuring the rate of change?  Unfortunately, we do need to be quite strict as we're in the middle of deepest darkest Cambridgeshire and bandwidth is very expensive!  Currently the proposed pipe is 6mb (not a lot, I know).  I admit, it doesn't look good - according to your doco, 5% change on 200gb will take just shy of an hour!  Thanks again.

Sam

To be certain, I'd definitely recommend doing a "snap delta" against the volumes and adding it up. It's the best way to be sure, although I'd agree it's definitely the longest way!

jondtabar

6Mbps is not that slow.  We have some sites with 1Mbps links.  We calculate our rate of change via the snap delta command, and scheduled our snapmirror replications at different times throughout the night.  We also use Quality of Service (QoS) on each site's router to give SnapMirror transfers a low priority over any other WAN traffic.  Works out well for us.  Our data doesn't change very much (1-2GB per volume per night), so we don't have any issues.

It is all relative really. I've started another DR project with only an 8Mbps line, but the rate of change is 10-20GB per volume, across 50 volumes, so the line struggles. Also we need to complete replication overnight within a certain window before QoS reduces the traffic during the day. SnapMirror Compression has helped a lot, but it's still nice to have a helper when trying to schedule these things.

Going through 50-100 volumes, calculating the snap delta for them all and then scheduling them all can be a right pain! If you can get an idea of the average rate of change, and then just pump it into a canned spreadsheet that does the leg-work for you, you save yourself a fair amount of hassle, and you also have a reference document to when your replication will happen without having to reference the filer.

But I agree, if you have a small rate of change and a small number of volumes, it's much more accurate to do it manually, and you'll probably improve your replication window as you can calculate the replication times more accurately. If you have a large number, it can be unwieldy! And considering that snap delta can keep a filer busy for a while on bigger / fuller systems, it can take a while to generate this output!
