Data Backup and Recovery

SnapManager for Exchange DAG


I recently added a couple of extra Mailbox servers and created my DAG with replication running. I currently run SnapManager as a local backup of the two main Mailbox servers, and now want to change that so I can run DAG backups with SnapManager. I have tried to do it and failed miserably: I want to run multiple database copies, and when I do that the backup fails with invalid configuration errors.

Any help is much appreciated.





First hint - did you add the DAG instance to SME, so it appears as a manageable object alongside your mailbox servers?





Radek is correct - you should be able to add the cluster name into SME; however, this will only show the server that owns the cluster at that time.

It can be a tad confusing, as the Windows server that holds the cluster resource may not be the one hosting the active mailbox databases.
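If you want to see this for yourself, the Exchange Management Shell can show both sides of the picture. A minimal sketch, assuming a DAG named "DAG1" (substitute your own DAG name):

```powershell
# Which DAG member currently holds the cluster group (the Primary Active Manager)?
Get-DatabaseAvailabilityGroup DAG1 -Status | Format-List Name,PrimaryActiveManager

# Where are the active database copies actually mounted?
# The node owning the cluster group need not host any active copies.
Get-MailboxDatabaseCopyStatus * | Format-Table Name,Status,ActiveCopy
```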

What invalid configuration errors do you receive when the additional servers are being added to SME? Has the SME config wizard been run, and are all the DBs, transaction logs, etc. in the correct place?


The DAG object has been added to SME for each Mailbox server. Do I run the config wizard for the DAG on each server, or just one?


Config wizard should be run on each DAG member.


Each server that is a member of the DAG needs the SME config wizard run on it, with the DBs and transaction logs moved to separate LUNs. The SnapInfo folders normally sit with the transaction logs, if they are not on a separate LUN/drive.
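Before running the config wizard, it can save a failed run to check where each database and its logs actually live. A quick sketch, assuming a member server named "MBX01" (a placeholder - use your own server names):

```powershell
# Exchange Management Shell: list each database's EDB file and log folder
# so you can confirm DBs and transaction logs sit on separate LUNs,
# as SME expects.
Get-MailboxDatabase -Server MBX01 | Format-List Name,EdbFilePath,LogFolderPath
```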

Once the config wizard has been run on all nodes and completes with no errors, run the backup wizard and connect to the active server in the cluster. As you run through the backup wizard, it will prompt you and ask if you wish to create the backup job on all nodes of the DAG - click Yes. (This part is near the end.)

You can use the wizard to create an SME job that exists on all the DAG servers. The scheduled job will run from the Windows server that holds the Windows cluster resource, which is not necessarily the active Exchange Mailbox server.

The SME job is cluster aware: even though the jobs technically start on all the servers, because the switch "-ClusterAware" is used in the job syntax, the job only actually runs on the primary cluster resource owner. Therefore, if the primary cluster resource is moved to another server, the SME job would then run there.
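For reference, the scheduled task that the wizard creates invokes SME's PowerShell backup cmdlet. The exact command on your system may differ (check the job in Task Scheduler); this is only a sketch, with the DAG name, management group, and retention count as placeholder values:

```powershell
# SME scheduled-job command (sketch). With -ClusterAware, this exits
# quietly on any node that does not own the cluster group, so only one
# copy of the job actually performs the backup.
new-backup -Server 'DAG1' -ClusterAware -ManagementGroup 'Standard' -RetainBackups 8
```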


Hi - did you get this working in the end? Is the issue/question answered? Thanks.