ONTAP Discussions

SyncMirror: plex failover

deepak1224

Is there any way to fail over between plexes in SyncMirror? I want to use SyncMirror in an HA environment.

As with SnapMirror, is manual intervention required to make a plex available if one of the plexes becomes unavailable? Or are both plexes available for writes at all times?


5 REPLIES

aborzenkov

Both plexes are written simultaneously; failure of one plex is handled transparently, so there is no need for manual intervention.
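For example, in 7-Mode you can see both plexes under the one aggregate from the console (aggr0 here is a placeholder name, and the exact output wording varies by release):

    # Show the RAID layout; a mirrored aggregate lists two plexes
    # (plex0 and plex1), each with its own RAID groups.
    aggr status aggr0 -r

    # The summary view of a healthy mirrored aggregate includes
    # "mirrored" among its state flags.
    aggr status aggr0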

deepak1224

Thanks a lot for the reply.

Is there any notion of failover or switchover with SyncMirror, like there is with SnapMirror?

I went through the commands and docs but didn't find any, so I just want to confirm.

aborzenkov

Not that I know of. At the user level you see just a single aggregate; whether it consists of one plex or two mirrored plexes is mostly irrelevant.

adamfox

Let me try to make the major distinction between SnapMirror and SyncMirror clear.

With SnapMirror, a different node typically owns each copy. It can be the same node (with a separate volume or qtree), but usually it's a different controller.

With SyncMirror, both plexes (copies of the data) are owned by the same controller, so failover is automatic. You can break off a plex manually if you like; it's kind of like RAID-1.

For HA, you can have an HA pair where the 2nd plex lives next to the other controller, but that 2nd plex is still owned by the primary controller; the partner controller only touches it in the case of a cluster failover, as it would all of the other disks.

So there's no real need for a manual failover per se, although you could do one by simply breaking the mirror. Even in that case, the same controller still runs the volumes in that aggregate.
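If you did want to break the mirror by hand, 7-Mode does it with aggr split, roughly like this (aggr0 and the new aggregate name are placeholders; double-check the syntax against the aggr man page for your release):

    # Split plex1 off into its own unmirrored aggregate; aggr0 keeps
    # serving data from plex0 on the same controller.
    aggr split aggr0/plex1 aggr0_split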

Hope this helps.

chris_algar
5,477 Views

By HA environment, do you mean MetroCluster or V-Series MetroCluster?

We run a lot of V-Series MetroCluster and have occasional plex failures due to short-lived timeouts or longer-duration building power-downs. In general you do not need to take any action; the plexes will re-mirror when both are available.

There have been occasions where a failure on the storage array, outside of Data ONTAP's scope, brings the consistency of one of the copies of the data into doubt. For example, a dual-controller failure on an array, where writes ACKed to the filer might not have made it to disk. In these cases we remove the offending plex with aggr split and force a full re-mirror with aggr mirror. If you don't want to re-mirror but just want to check consistency, you can use aggr verify and indicate which plex is authoritative if a difference is detected.
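As a rough sketch of that flow in 7-Mode (aggregate and plex names are placeholders; check the aggr man page before running any of this):

    # 1. Split the suspect plex off into a scratch aggregate, then
    #    take it offline and destroy it so its disks become spares.
    aggr split aggr0/plex1 aggr0_suspect
    aggr offline aggr0_suspect
    aggr destroy aggr0_suspect

    # 2. Re-mirror the surviving plex; SyncMirror does a full resync
    #    from the good copy.
    aggr mirror aggr0

    # Or, to just compare the plexes block-by-block without splitting:
    aggr verify start aggr0
    aggr verify status aggr0
    # (aggr verify also has an option to fix mismatches from a chosen
    # plex; see the man page for the exact flag and its semantics.)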

Chris
