2011-09-14 01:29 PM - edited 2015-12-18 01:24 AM
For illustration purposes, I'm trying to snapshot database volumes on a 2 node DAG where each node has copies of all DBs.
So DagNode1 has DB1(active) and DB2, and DagNode2 has DB1 and DB2(active). All on NetApp disk.
Is there any reason not to snapshot DB1 and DB2 on both nodes, other than potentially wasted disk space? I thought I read that snapshotting the passive DB copy can break up-to-the-minute recovery of the active copy because of where the SnapInfo directory is located.
The other question I had was in terms of configuring the snapshot tasks. I see you can configure the jobs per node, or all via the DAG name. Does this make any real difference?
I was hoping to snapshot the active DB1 on DagNode1 and the active DB2 on DagNode2, and if one node failed, run both snapshots on the surviving node (where both DBs would then be active). It doesn't look like this is possible, but it might be irrelevant depending on the answers to my other questions.
2011-09-14 01:37 PM
Hi and welcome to the Communities!
Have you seen Qing's response in this thread?
I am not sure why you need to restrict yourself to the active copy for running the restore. You can do the restore on a node where you have the backup and the log files backed up by SME, and restore the database to a point in time there. Then, if needed, you can fail the database back to the original node after the restore.
2011-09-16 07:01 AM
I read through that post, and I think my environment is behaving differently than he describes. Even though my snaps are configured with -ClusterAware, they occur on both nodes, snapping all disks, rather than some on one node and some on the other. My concern is that because the snaps occur at slightly different times, the logs get truncated in a way that can cause data loss: the snap of DB1 on DagNode1 completes first and clears logs up to the point where it started, but then the snap of DB1 on DagNode2 clears logs up to a few seconds later. Isn't DagNode1 then missing a couple of log files?
Am I supposed to manually limit the cluster-aware backups to half the DBs on DagNode1 and half on DagNode2? Would it then perform all snaps on one server if the other server fails?
2011-09-21 12:10 PM
So I attempted to test this over the last two days with some scheduled Snaps and I don't seem to get any failover capability.
I set it up to snapshot DB1 on DagNode1 and DB2 on DagNode2, and when I shut down DagNode2, its snap fails while the one on DagNode1 still works.
Am I incorrect in my understanding that it should Snapshot the DB on a different node if the preferred one is unavailable?
2011-10-05 09:29 AM
Found the setting which handles this scenario.
So the command line looks something like:
new-backup -Server DAGNAME -ClusterAware -StorageGroup 'DB1\Server1','DB1\Server2','DB2\Server1','DB2\Server2' -ActiveDatabaseOnly
I've left out most of the other command-line switches to highlight the DAG and activation awareness. In this case, you list all of the DB copies but tell SME to snap only the active ones via -ActiveDatabaseOnly. You set up the same job on all nodes; the -ClusterAware switch prevents the job from running on any node other than the one that owns the Cluster Group.
So normally, if DB1 is active on Server1 and DB2 is active on Server2, the snapshot of DB1 runs on Server1 and the snapshot of DB2 runs on Server2; if both DBs are active on Server2, both snapshots occur on Server2.
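For anyone setting this up, a minimal sketch of what the identical per-node job might look like (the script name, the comment about loading the snap-in, and the idea of wrapping it in a scheduled script are my own placeholders, not SME requirements — only the new-backup switches come from the working command above):

```powershell
# dag-backup.ps1 -- schedule an identical copy of this on every DAG member.
# Load the SnapManager for Exchange snap-in first; the exact snap-in name
# depends on your SME version, so it is omitted here.

# List every copy of every database and let -ActiveDatabaseOnly pick the
# active ones. -ClusterAware makes the job a no-op on any node that does
# not own the Cluster Group, so only one node's task actually runs the backup.
new-backup -Server DAGNAME -ClusterAware `
    -StorageGroup 'DB1\Server1','DB1\Server2','DB2\Server1','DB2\Server2' `
    -ActiveDatabaseOnly
```

Because the job list is the same everywhere, whichever node owns the Cluster Group after a failover drives the backup, and -ActiveDatabaseOnly snaps whatever copies happen to be active at that moment.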