
SnapMirror Best Practices

moncyvarghese

Hi Gurus,

How do we schedule SnapMirror for an enterprise customer who has at least 100 volumes? And how do we know when SnapMirror adds a performance tax on the system? I have often heard support telling me that I will need to rework my SnapMirror schedules.

Any feedback on this will be highly appreciated.

regards

6 REPLIES

radek_kubka

Hi,

Have you seen this thread?

http://communities.netapp.com/message/13035#13035

Regards,

Radek

BrendonHiggins

We have 100+ SnapMirror relationships running, and on earlier releases of Data ONTAP (pre-7.3) you could see the CPU load these placed on the filers, but the issue is much reduced now. A filer can only run a limited number of concurrent SnapMirror transfers at any one time, so be sure to spread the schedules out; anything over the limit just queues until a slot is free. You can increase this limit by adding a 'free' NearStore personality to the filer, via a SIS license.
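
Scheduling lives in /etc/snapmirror.conf on the destination filer. A minimal sketch (hostnames and volume names are invented; the four schedule fields are minute, hour, day-of-month, day-of-week):

  # /etc/snapmirror.conf on the destination - stagger the start minutes
  # so you stay under the platform's concurrent-transfer limit
  fas-src:vol_sales fas-dst:vol_sales_mirror - 0  23 * *
  fas-src:vol_hr    fas-dst:vol_hr_mirror    - 15 23 * *
  fas-src:vol_eng   fas-dst:vol_eng_mirror   - 30 23 * *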

The last problem will be monitoring SnapMirror lag, i.e. making sure the backups are taking place within a useful time frame. We are currently looking into ApplianceWatch and SCOM to do this for us, but tuning the alerts is a challenge.
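
For a quick manual check, the long status listing includes a per-relationship Lag field, i.e. the time since the last successful transfer (volume name invented):

  fas-dst> snapmirror status -l vol_sales_mirror
  # look for the "Lag:" line in the output (hh:mm:ss)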

Hope it helps

Bren

martin_fisher

Hi moncyvarghese

Some simple rules for the SnapMirror relationships also help, e.g.:

  • Do you need to snapmirror all volumes? Are they all business critical?
  • Snapmirror out of hours (11pm to 4am, for example).
  • Use SnapMirror throttles if the transfers run within business hours and the appliance is using a shared network link (see the sketch after this list).
  • Snapmirror other, less critical volumes at longer intervals, e.g. 24 or 48 hours.
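
Throttles are set per relationship with the kbs= option in /etc/snapmirror.conf, in kilobytes per second (names below are invented):

  # hourly business-hours updates capped at roughly 5 MB/s
  fas-src:vol_crm  fas-dst:vol_crm_mirror  kbs=5120 0 8-18 * *
  # less critical volume, one unthrottled update at 01:00
  fas-src:vol_arch fas-dst:vol_arch_mirror -        0 1 * *

You can also adjust a transfer that is already running with 'snapmirror throttle <kbs> <destination>'.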

If the client requires all the volumes to be mirrored, they could even look into purchasing a dedicated comms link for the SnapMirror transfers. This would also prevent potential interface bottlenecks on the filer if no throttles are used. Once most of the baseline transfers (SnapMirror initializations) are complete, the incremental updates should be reasonably small, so they shouldn't have a huge impact on the appliance (unless the data churn is huge!).

Hope this helps.

Regards Martin

moncyvarghese

Hi gurus,

What I want to understand is how to fine-tune my SnapMirror schedules. How do I know whether SnapMirror transfers less data on an hourly basis as opposed to scheduling it every four hours? How do I calculate that? Is it based on snap delta?

Apart from that, the client already has an Oracle archive log volume that generates archive logs roughly every minute, almost 100 MB per hour. What do I do for that kind of volume? Will SnapMirror compression help? Will zipping the archive logs using a script help?

Please gurus, share your experience with the folks that are still newbies to the NetApp world.

regards

MAC

martin_fisher

A snap delta will tell you the size difference in KB between one snapshot and another (or between a snapshot and the active volume). If approx 100 MB is generated in an hour, then approx 400 MB could be generated in 4 hours; you need to gauge the data growth rate of the Oracle system. Snapmirroring the logs every hour will give you better recovery points, i.e. your DB could be recovered to within 1 or 2 hours (restore the DB and replay the logs) instead of only 4 hours.
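
For example, assuming the default hourly.N snapshot names (volume name invented), the "KB changed" column approximates what an update at that interval would transfer:

  # change between the two most recent hourly snapshots
  fas-src> snap delta oralogs hourly.1 hourly.0
  # change across a 4-hour window
  fas-src> snap delta oralogs hourly.4 hourly.0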

You could test the times of the SnapMirror transfers by performing one after an hour and one after 4 hours and seeing which takes longer. However, as before, this is also impacted by other activity, such as other transfers, load on the filer, or network traffic (the transfer may complete faster at 3am than at 11am).
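
One way to measure it (volume name invented) is to trigger the updates by hand and read the transfer times and sizes back from the SnapMirror log on the destination:

  fas-dst> snapmirror update fas-dst:oralogs_mirror
  fas-dst> rdfile /etc/log/snapmirror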

The data/logs could be compressed with a script, but then you would need to uncompress them at the destination to use them for DR, which would be a bit of a ballache.
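
On the SnapMirror compression question: if the release is new enough (asynchronous volume SnapMirror gained native network compression around Data ONTAP 7.3.2), the replication stream is compressed on the wire and decompressed automatically at the destination, so nothing needs unzipping for DR. A sketch from memory (names invented); as far as I recall compression requires the source to be referenced via a named connection in /etc/snapmirror.conf:

  conn_ora=multi(fas-src,fas-dst)
  conn_ora:oralogs fas-dst:oralogs_mirror compression=enable 0 * * *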

regards Martin

aborzenkov

"If approx 100 MB is generated in an hour, then approx 400 MB could be generated in 4 hours"


Not necessarily. If the same 100 MB is overwritten every hour, the 1-hour and 4-hour snapshot deltas will be exactly the same size. In general, longer intervals tend to have a "compression" effect.

Unfortunately this is not true for Oracle archive logs, because there nothing is ever overwritten; the amount to transfer will always be exactly the size of the archive logs written since the last run.
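
A quick worked example (numbers invented): if a 100 MB file is rewritten in place every hour, snap delta between hourly.1 and hourly.0 shows roughly 100 MB, and between hourly.4 and hourly.0 it still shows roughly 100 MB, because only the final state is compared with the older snapshot, no matter how many times those blocks changed in between. With archive logs, each hour appends a new ~100 MB of files, so the 4-hour delta is roughly 400 MB, the straight sum.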
