
What happens if source fails during qtree snapmirror incremental update?

FCITDEPTAMATTHEYCOM

Hi,

Sorry if this is a basic question, but I'm not sure where else to look for the answer.

We have two FAS2040 systems in different locations. We are planning to SnapMirror from one location to the other for redundancy purposes. I was going to use qtree SnapMirror so I can mirror an individual VM if required. I've read that qtree SnapMirror only gives you one snapshot at the target location. My question is: what happens if the source FAS fails in the middle of a SnapMirror update? Will the destination be broken? Obviously this isn't the initial baseline transfer; this would be one of the incremental updates that keep the mirrors in sync. If so, would volume SnapMirror be better protected against this?

Many thanks

Graham


shaunjurr

Hi,

Basically, if you already have a SnapMirror relationship established (it's only minimally different if you don't) and an update fails, the next scheduled update (or a manual update) will simply pick up from the last restart checkpoint and finish mirroring the last snapshot.
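
For example (a rough 7-Mode sketch; the filer names and the /vol/vmvol/vm01 qtree path are made up), you can check the relationship and kick off a manual update from the destination filer:

    dstfiler> snapmirror status -l /vol/vmvol/vm01
    dstfiler> snapmirror update -S srcfiler:/vol/vmvol/vm01 /vol/vmvol/vm01

If the previous transfer was interrupted, the update restarts from the last checkpoint rather than transferring everything again.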

If you want multiple copies on the destination, you simply need to schedule normal snapshots on the destination (hourly, nightly, weekly).  This isn't necessarily ideal, because the two sides' snapshots won't necessarily contain the same information.  That works fine for normal user data (unstructured CIFS or NFS) but would be problematic for data that has to be "crash consistent".  The tool for this, at the moment at least, until QSM and SnapVault are merged, is SnapVault, because SnapVault creates a local snapshot on the destination when the transfer is completed (and can start SIS/deduplication jobs automatically after the transfer finishes as well).  You could probably script something that would transfer and then create a snapshot, but you'd have to create your own delete/retention schedule as well.
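
To make that concrete (sketch only; the volume and snapshot names here are invented): local snapshot schedules on a qtree SnapMirror destination volume, versus a SnapVault schedule on the secondary that does the transfer and then takes its own snapshot:

    dstfiler> snap sched mirrorvol 2 6 8
    secfiler> snapvault snap sched -x secvol sv_nightly 30@23

The first just keeps 2 weekly, 6 nightly and 8 hourly snapshots of whatever happens to be on the destination at the time; the second (with -x) pulls an update from the primary first, so the snapshot it creates matches a completed transfer.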

FCITDEPTAMATTHEYCOM

Hi, many thanks for your reply.

I need to know what happens when the source FAS2040 can never come back up. Will the destination qtree SnapMirror copy be OK if the source FAS is destroyed in the middle of a SnapMirror update? We have the various NetApp software to make sure systems like SAP are in a consistent state before snapshotting, so that part is OK.

The reason we have two locations is in the event of fire or water damage.

Kind Regards

Graham

shaunjurr

I guess that depends a bit on how you do your SnapMirror setup.  In a situation where you just want more or less identical copies in both locations, volume SnapMirror is probably the better idea.  Then you have block-identical copies on source and destination, including all snapshots.  You can always break the mirror and roll back to the last snapshot to start up your systems in the state of that last backup.  How much data you can afford to lose and how quickly you need to be running again largely determine how often you take snapshots, along with the load that puts on the systems.
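
If it ever comes to that, failing over a volume SnapMirror destination is basically just this (a sketch with an invented volume name; you may need a snapmirror abort first if a transfer was in flight):

    dstfiler> snapmirror quiesce mirrorvol
    dstfiler> snapmirror break mirrorvol

After the break, mirrorvol is writable and still holds all the snapshots it inherited from the source, so you can revert it to an older snapshot with SnapRestore if the latest state isn't usable.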

If you really need very high availability, then a MetroCluster is probably the best idea (there are no FAS2040 MetroClusters, though), but that's rarely necessary for VMware setups.  There are disaster recovery mechanisms (VMware Site Recovery Manager, for example) for making such migrations less painful, but these aren't free either.  The best thing is probably to test this on a test datastore and get to know what to expect and what you will need to script or document in procedures.

The destination filesystem will be OK if you have a snapshot there that is based on a consistent snapshot on the source.  That means volume SnapMirror, or SnapVault, or a combination of qtree SnapMirror and some scripting.
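
The qtree SnapMirror plus scripting variant could be as simple as something run from an admin host via rsh (a sketch with made-up names; you'd still have to handle waiting for the transfer and your own snapshot retention):

    rsh dstfiler snapmirror update /vol/mirrorvol/vm01
    rsh dstfiler snapmirror status /vol/mirrorvol/vm01
    rsh dstfiler snap create mirrorvol vm01_postxfer

i.e. trigger the update, poll status until the relationship shows Idle, then take a snapshot on the destination volume so you have a known-good copy of the completed transfer.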

FCITDEPTAMATTHEYCOM

Hi,

Why would volume SnapMirror be better in this case? I know the various pros and cons of both types of mirroring, but what I don't know is whether qtree SnapMirror has any safety mechanism: if the source FAS dies for whatever reason during an update of the mirror, is the data on the destination still the same as it was before the update started, or will it be a mishmash of both?

The main reasons I wanted to do qtree mirroring are...

I can update the mirrors of individual VMs instead of doing all of them. The alternative would be a volume per VM, which would need lots of NFS datastores in VMware, which is why I decided against it.

With qtree mirrors I can have one volume per FAS controller. Since both FAS boxes will be active, I would need two volumes per FAS controller if I used volume SnapMirror: one for live data and one for the mirrors. With qtree SnapMirror they can share the same volume and keep things simple.
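
So on each controller, /etc/snapmirror.conf would end up looking something like this (purely a sketch, the names are made up), with the mirrored qtrees landing in the same volume as that controller's own live qtrees:

    # on filerB, mirroring filerA's VM qtrees into filerB's existing volume
    filerA:/vol/vmvol/vm01  filerB:/vol/vmvol/vm01_mirror  -  0 * * *
    filerA:/vol/vmvol/vm02  filerB:/vol/vmvol/vm02_mirror  -  30 * * *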

I would love to have one volume per VM, but the number of NFS datastores it would require in VMware would get silly.

Regards

Graham
