SME 6.0 backup Exchange 2010 passive database


We have several questions regarding SME 6.0 in an Exchange 2010 DAG environment:

1) Although it is possible to back up passive databases (TR-3824, p. 17) in the same way as active databases, what is NetApp's recommendation concerning the backup of passive databases?
This seems to cause an issue with the SnapInfo directory:

2) If the active database crashes and fails over to the passive copy (which then becomes the active database), what does SME do if it was in the middle of backing up that passive database?
  Is this DAG failover (passive to active) handled automatically by SME? Will SME database backups continue to run, or do the backup jobs need to be reconfigured?

Thank you in advance for your answers


Re: SME 6.0 backup Exchange 2010 passive database

For question #1, NetApp's recommendation is the same as Microsoft's, since from SME's point of view there is not much difference in terms of where the backup is created. People normally suggest running backups on the passive databases so that no extra load is put on the active copies.

In terms of the SnapInfo directory issue raised in the blog, it says "So if you're on the active and want to do a partial restore to a particular point in time for whatever reason you may not be able to go all the way up to that point…" I am not sure why you would restrict yourself to the active copy for running the restore. You can run the restore on a node where you have the backup and the log files backed up by SME, and restore the database to a point in time there. Then, if needed, you can fail the database back to the original node after the restore. So I do not see why that is an issue.

For question #2: yes, the SME scheduled job can handle the database state change if you use the SME GUI to schedule the job at the DAG level and include the -clusteraware switch in the PowerShell cmdlet. You do not need to reconfigure the backup jobs. When SME schedules a backup job, it creates the same job on all the DAG nodes, but only one job actually runs at run time: each job checks whether it is running on the active node of the cluster, and if it is not, it stops immediately.
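The cluster-aware behavior described above boils down to a guard at the start of every scheduled job. Here is a minimal sketch of that pattern in Python; the `is_active_node` check and the node names are hypothetical illustrations, not SME internals (SME performs the equivalent check against the cluster itself):

```python
def is_active_node(local_node: str, active_node: str) -> bool:
    """Hypothetical check: does this node own the active copy?

    SME queries the cluster for this; here the active node is
    passed in explicitly just for illustration.
    """
    return local_node.lower() == active_node.lower()


def run_scheduled_backup(local_node: str, active_node: str) -> str:
    """Mimic the identical job SME creates on every DAG node:
    only the job on the active node proceeds, the rest exit at once."""
    if not is_active_node(local_node, active_node):
        return "skipped"          # passive node: stop immediately
    return "backup started"       # active node: run the real backup


# The same job fires on all nodes, but only one does the work:
results = {node: run_scheduled_backup(node, "EX01")
           for node in ["EX01", "EX02", "EX03"]}
# → {"EX01": "backup started", "EX02": "skipped", "EX03": "skipped"}
```

Because every node carries an identical job, a failover needs no job reconfiguration: after the passive copy becomes active, the job on that node simply passes the guard on its next scheduled run.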