ONTAP Discussions

Exchange 2010, SME, and Protection Manager

wmccormick

Hi everyone,

I have a DAG with four databases set up across three servers.  I have SnapDrive 6.2 (installed with Protection Manager integration enabled) and SME 6.0.  Protection Manager is 3.8.1.  LUNs are provisioned from a filer running 7.3.4P3 (soon to be 8.0.1).

In SME I have the DAG set up.  To create a dataset I had to create it against a server, so it lives on the mbox01 server.  This dataset has a remote backup policy and schedule and has been successfully baselined.  No local backups have been created in either the DAG or the server dataset within SME.

So my questions:

1) To get backups vaulted via SnapVault, do I need to create a local backup schedule in the DAG or the server?

2) If the remote vaulting is configured via the server, what happens when that server goes down for maintenance or due to a failure?

3) How do I restore to a different machine in the DAG from the remote vault?

Are there any documented best practices for this?  I'm finding the section in the SME manual on Dataset and SnapVault integration to be somewhat lacking in detail.  To top it all off, I'm not an Exchange guy either.

Any information would be appreciated.


Thanks!

Wayne

5 REPLIES

wmccormick

Looking at other threads around here, none specific to Exchange 2010, let me see if I'm getting close to the proper solution...

  1. I create backups in the DAG against the active databases.  I could just as easily pick the passive databases as well.  The backup will be configured to back up and truncate the log files.  I will use a schedule of three backups per day (8am, noon, 4pm), retained for three days, with automatic snapshot deletion.
  2. I create a dataset backup against each member server in the DAG, complete with its own policies and no local backups configured.  It will look for backup sets against the databases that it is configured to see on the local host (I would put all the databases that are in the DAG in this dataset).  In this way, no matter where the active database is running at any given time, I'll still get the backups associated with it.
  3. The vault will be scheduled to occur at 12:30am nightly.  It will look at all the databases on the member server and vault only those that have a backup.

Or do I need a schedule for the remote backup?  This thread indicates that I don't need one, and that remote backups occur after every local backup.  I just need to set the retention schedule.

wmccormick

I'm working through solving my own problem.

What I have now...

  • SME has a DAG and a member server configured.  The member server is configured with a dataset in Protection Manager.
  • Configure backups in the DAG against the active database (this is important if you want to vault it).  Select the option to vault the backup. (see attached)
  • Set up the backup schedule.

Done.

I still have questions though.  When the active database moves from one server to another, how does it get backed up?  Or does it?  I still need to test this.

balbeer

Hi,

Good to see that you are close to solving, or may have already solved, your own problem!

When the DAG fails over to a copy on a different server, the backups WILL NOT kick in! The reason is that the scheduled backups are tied to a DAG server; in your case, that was always the active node. If you want protection in the scenario where a standby DAG copy becomes active and backups should continue, then you need to write a script with logic as simple as: if the active node in the DAG cluster is node1, always run the backup on node1 (this is what you have); otherwise, if node1 is no longer running the active database and the standby copy on node2 has become active, run the SME backups on node2.
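For what it's worth, a rough sketch of that logic in PowerShell is below. It only covers the "find the active node" part; the database name is a placeholder, and the step that actually kicks off the SME backup on that node is left as a comment, since the exact SME 6.0 cmdlet and parameters depend on your install.

    # Hedged sketch: find which DAG node hosts the active copy of a database,
    # then run the SME backup there. Names are placeholders.
    Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010 -ErrorAction SilentlyContinue

    $database = "DB01"   # hypothetical database name

    # The copy in the "Mounted" state is the active one.
    $activeCopy = Get-MailboxDatabaseCopyStatus -Identity $database |
        Where-Object { $_.Status -eq "Mounted" }

    if ($activeCopy) {
        $activeServer = $activeCopy.MailboxServer
        Write-Host "Active copy of $database is on $activeServer; run the SME backup job there."
        # Trigger the SME backup on $activeServer here, e.g. the scheduled SME job
        # defined on that node or the SME PowerShell snap-in; the exact cmdlet and
        # parameters depend on your SME 6.0 installation, so they are not shown.
    }
    else {
        Write-Warning "Could not determine the active server for $database."
    }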

The above deals only with local snapshot backups, NOT the vault. For vault backups, you will have to re-initiate from the node2 controller, which means a new baseline! But I guess you can have a business process in here to make this decision!

By the way, the retention for vaults is set in the protection policy that you have defined in NMC (NetApp Management Console), and the vault backups are like "on-demand" backups, i.e.: once the local SME backup (say nightly in your case) is completed, the policy performs a SnapVault backup update, where the host running SME, via SnapDrive, makes a call to Protection Manager to perform an update of the vault from the unique snapshot that SME took on the local (primary) controller.

Cheers, balbeer         

rmharwood

Sorry, I've stumbled upon this thread having just asked a similar question regarding the vault/archive backup.

The fact that a re-baseline is required if the database changes node is crazy! That means an additional copy of the database is needed on the SnapVault secondary, and that wastes a lot of space. Does it make sense to run and keep all of the backups on the primary volumes and just not bother with archiving at all?

marnold

Here's another way, and a broad process, to handle big changes on the server side when a vaulting relationship is somewhere in the mix. It's all PowerShell capable, uses nothing on the storage side, and is 100% supported by NetApp and, more importantly, Microsoft. See what you think.

If you want to archive a DAG copy then the easiest thing is to run SME jobs against that particular node rather than against the Active or Passive Copy Pref 2 (or 3, etc.) setting. That lets the vault run in a consistent and stable manner from one location to the other. Remember that if you're not vaulting all the stores, you can have jobs that always run against the node for vaulting and others that run against the DAG parameters for non-vaulted stores. It's pretty flexible.

Ahh, but now I have an infrastructure change (nothing to do with the Exchange application; this is pure server infrastructure) and need to take a database away from one server, placing it on another. How do I handle that without causing re-baseline mayhem? Easy...

If the copy you're vaulting happens to be active you simply activate the other local copy. Then you have something passive to work with.

You then suspend seeding.

You then establish a new copy of the store on the new server but use the -seedingpostponed switch so that the database copy does not happen.

You then swing the LUNs from the old server to the new server.

You then remove the database copy from the server you just took the LUNs away from and you resume replication on the new server.

Exchange sees that the database is good, that a handful of logs are missing and takes care of it at the application level.

Finally, you do what you need to do in order to do SME backups and vaults from that server.

Nothing in terms of the back-end relationships changed.
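In case it helps anyone following the same procedure, here's a rough sketch of the Exchange-side cmdlets for that sequence. The database and server names (DB01, MBX01, MBX02, MBX04) are placeholders, and the LUN swing itself is done in SnapDrive so it isn't shown; treat this as an outline to adapt rather than a ready-made script.

    # Hedged sketch of the copy-relocation steps above (Exchange 2010 cmdlets).
    # Assumed names: database DB01, old server MBX01, new server MBX04.

    # 1. If the copy you vault from is currently active, activate another copy first.
    Move-ActiveMailboxDatabase -Identity "DB01" -ActivateOnServer "MBX02"

    # 2. Suspend the copy you are about to relocate.
    Suspend-MailboxDatabaseCopy -Identity "DB01\MBX01" -SuspendComment "Relocating copy to MBX04"

    # 3. Establish the copy on the new server without seeding it.
    Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "MBX04" -SeedingPostponed

    # (Swing the LUNs from MBX01 to MBX04 with SnapDrive at this point.)

    # 4. Remove the copy from the old server, then resume replication on the new one.
    Remove-MailboxDatabaseCopy -Identity "DB01\MBX01" -Confirm:$false
    Resume-MailboxDatabaseCopy -Identity "DB01\MBX04"

    # Exchange replays the handful of missing logs at the application level;
    # verify the copy is healthy before pointing SME backups and vaults at MBX04.
    Get-MailboxDatabaseCopyStatus -Identity "DB01\MBX04"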
