Tech ONTAP Blogs

Fun with automation – ONTAP Consistency Groups

steiner
NetApp

 

There's a lot to this post. I'll cover what the heck Consistency Groups (CGs) are all about, how to automate CG operations via the REST API, how to convert existing volume SnapMirror relationships into a CG configuration without retransferring the whole data set, and finally how to do it all via the CLI.

 

Some of the content below is copied directly from https://community.netapp.com/t5/Tech-ONTAP-Blogs/Consistency-Groups-in-ONTAP/ba-p/438567. I did that in order to have all the key concepts in the same place.

 

Consistency Groups in ONTAP

 

There’s a good reason you should care about CGs – it’s about manageability.

 

If you have an important application like a database, it probably involves multiple LUNs or multiple filesystems. How do you want to manage this data? Do you want to manage 20 LUNs on an individual basis, or would you prefer just to manage the dataset as a single unit? 

 

Volumes vs LUNs

 

If you’re relatively new to NetApp, there’s a key concept worth emphasizing – volumes are not LUNs.

 

Other vendors use those two terms synonymously. We don’t. A Flexible Volume, also known as a FlexVol, or usually just a “volume,” is just a management container. It’s not a LUN. You put data, including NFS/SMB files, LUNs, and even S3 objects, inside of a volume. Yes, it does have attributes such as size, but that’s really just accounting. For example, if you create a 1TB volume, you’ve set an upper limit on whatever data you choose to put inside that volume, but you haven’t actually allocated space on the drives.

 

This sometimes leads to confusion. When we talk about creating 5 volumes, we don't mean 5 LUNs. Customers sometimes assume they must create one volume and then one LUN within that volume. You can certainly do that if you want, but there's no requirement for a 1:1 mapping of volumes to LUNs. The result of this confusion is that we sometimes see administrators and architects designing unnecessarily complicated storage layouts. A volume is not a LUN.

 

Okay then, what is a volume?

 

If you go back about eighteen years, an ONTAP volume mapped to specific drives in a storage controller, but that’s ancient history now.

 

Today, volumes are there mostly for your administrative convenience. For example, if you have a database with a set of 10 LUNs, and you want to limit the performance for the database using a specific quality of service (QoS) policy, you can place those 10 LUNs in a single volume and slap that QoS policy on the volume. No need to do math to figure out per-LUN QoS limits. No need to apply QoS policies to each LUN individually. You could choose to do that, but if you want the database to have a 100K IOPS QoS limit, why not just apply the QoS limit to the volume itself? Then you can create however many LUNs the workload requires.
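
To make that concrete, here's what volume-level QoS could look like via REST. This is a minimal sketch using the same doREST wrapper I'll introduce later in this post; the SVM, volume, and QoS policy names are hypothetical, and I'm assuming the policy already exists.

import doREST

# Look up the uuid of the volume that holds the database LUNs
vol = doREST.doREST('jfs_svm1', 'get', '/storage/volumes',
                    restargs='name=db_vol1&fields=uuid')
voluuid = vol.response['records'][0]['uuid']

# Apply the QoS policy to the volume itself; every LUN inside the
# volume now shares the single 100K IOPS limit
doREST.doREST('jfs_svm1', 'patch', '/storage/volumes/' + voluuid,
              json={'qos': {'policy': {'name': 'db_100k_iops'}}})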

 

Volume-level management

 

Volumes are also related to fundamental ONTAP operations, such as snapshots, cloning, and replication. You don't selectively decide which LUN to snapshot or replicate; you just place those LUNs into a single volume and create a snapshot of the volume, or you set a replication policy for the volume. You're managing volumes, irrespective of what data is in those volumes.

 

It also simplifies how you expand the storage footprint of an application. For example, if you add LUNs to that application in the future, just create the new LUNs within the same volume. They will automatically be included in the next replication update, the snapshot schedule will apply to all the LUNs, including the new ones, and the volume-level QoS policy will now apply to IO on all the LUNs, including the new ones.

 

You can selectively clone individual LUNs if you like, but most cloning workflows operate on datasets, not individual LUNs. If you have an LVM with 20 LUNs, wouldn’t you rather just clone them as a single unit than perform 20 individual cloning operations? Why not put the 20 LUNs in a single volume and then clone the whole volume in a single step?
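
In REST terms, that whole-volume clone is a single POST. Here's a minimal sketch, again using the doREST wrapper described later in this post, with hypothetical volume and SVM names; you could also supply clone.parent_snapshot to clone from a specific snapshot instead of the active filesystem.

import doREST

# Clone an entire volume, and every LUN within it, in one operation.
# ONTAP creates the new volume as a FlexClone of the parent.
json4rest = {'name': 'db_vol1_clone',
             'svm': {'name': 'jfs_svm1'},
             'clone': {'is_flexclone': True,
                       'parent_volume': {'name': 'db_vol1'}}}
doREST.doREST('jfs_svm1', 'post', '/storage/volumes', json=json4rest)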

 

Conceptually, this makes ONTAP more complicated, because you need to understand that volume abstraction layer, but if you look at real-world needs, volumes make life easier. ONTAP customers don't buy arrays for just a single LUN; they use them for multiple workloads with LUN counts going into the tens of thousands.

 

There’s also another important term for a “volume” that you don’t often hear from NetApp. The term is “consistency group,” and you need to understand it if you want maximum manageability of your data.

 

What’s a Consistency Group?

 

In the storage world, a consistency group (CG) refers to the management of multiple storage objects as a single unit. For example, if you have a database, you might provision 8 LUNs, configure them as a single logical volume, and create the database. (The term CG is most often used when discussing SAN architectures, but it can apply to files as well.)

 

What if you want to use array-level replication to protect that database? You can't just set up 8 individual LUN replication relationships. That won't work, because the replicated data won't be internally consistent across the LUNs. You need to ensure that all 8 replicas of the source LUNs are consistent with one another, or the database will be corrupt.

 

This is only one aspect of CG data management. CGs are implemented in ONTAP in multiple ways. This shouldn’t be surprising – an ONTAP system can do a lot of different things. The need to manage datasets in a consistent manner requires different approaches depending on the chosen NetApp storage system architecture and which ONTAP feature we’re talking about.

 

Consistency Groups – ONTAP Volumes

 

The most basic consistency group is a volume. A volume hosting multiple LUNs is intrinsically a consistency group. I can’t tell you how many times I’ve had to explain this important concept to customers as well as NetApp colleagues simply because we’ve historically never used the term “consistency group.”

 

Here’s why a volume is a consistency group:

 

If you have a dataset and you put the dataset components (LUNs or files) into a single ONTAP volume, you can then create snapshots and clones, perform restorations, and replicate the data in that volume as a single consistent unit. A volume is a consistency group. I wish we could update every reference to volumes across all the ONTAP documentation in order to explain this concept, because if you understand it, it dramatically simplifies storage management.

 

Now, there are times when you can't put the entire dataset in a single volume. For example, most databases use at least two volumes, one for datafiles and one for logs. You need to be able to restore the datafiles to an earlier point in time without affecting the logs. You might need some of that log data to roll the database forward to the desired point in time. Furthermore, the retention times for datafile backups might differ from those for log backups.

 

Native ONTAP Consistency Groups

 

ONTAP also allows you to configure advanced consistency groups within ONTAP itself. The results are similar to what you'd get by scripting consistency group operations yourself, except now you don't have to install extra software like SnapCenter or write a script.

 

For example, I might have an Oracle database with datafiles distributed across 4 volumes located on 4 different controllers. I often do that to guarantee my IO load is evenly distributed across all controllers in the cluster. I also have my logs in 3 different volumes, plus I have a volume for my Oracle binaries.

 

I can still create snapshots, create clones, and replicate that entire 4-controller configuration. All I have to do is define a consistency group. I'll be writing more about ONTAP consistency groups in the near future, but I'll start with an explanation of how to take existing flat volumes replicated with regular asynchronous SnapMirror and convert them into consistency group replication without having to perform a new baseline transfer.

 

SnapMirror -> CG SnapMirror conversion

 

Why might you do this? Well, let's say you have an existing 100TB database spread across 10 different volumes and you're protecting it with snapshots. You might also be replicating those snapshots to a remote site via SnapMirror. As long as you've created those snapshots correctly, you have recoverability at the remote site. The problem is you might have to perform some SnapRestore operations to make that data usable.

 

The point of CG SnapMirror is to make a replica of a multi-volume dataset where all the volumes are in lockstep with one another. That yields what I call "break the mirror and go!" recoverability. If you break the mirrors, the dataset is ready without a need for additional steps. It's essentially the same as recovering from a disaster using synchronous mirroring. That CG SnapMirror replica represents the state of your data at a single atomic point in time.
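
Here's what "break the mirror and go!" looks like in REST terms, as a sketch against the CG relationship this post ends up creating. The doREST wrapper and the svm:/cg/ path syntax are both explained below.

import doREST

# Find the CG relationship by its destination path (syntax: svm:/cg/<name>)
rel = doREST.doREST('jfs_svm2', 'get', '/snapmirror/relationships',
                    restargs='query_fields=destination.path' + \
                             '&query=jfs_svm2:/cg/jfs3' + \
                             '&fields=uuid')
cguuid = rel.response['records'][0]['uuid']

# Breaking the mirror makes every volume in the destination CG writable,
# all reflecting the same atomic point in time
doREST.doREST('jfs_svm2', 'patch',
              '/snapmirror/relationships/' + cguuid,
              json={'state': 'broken_off'})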

 

Critical note: when deleting existing SnapMirror relationships, be extremely careful with the API and CLI calls. If you use the wrong JSON with the API calls or the wrong arguments with the CLI, you will delete all common snapshots on the source and destination volumes. If this happens, you will have to perform a new baseline transfer of all data.

 

SnapMirror and the all-important common snapshot.

 

The foundation of SnapMirror is two volumes with the same snapshot. As long as you have two volumes with the exact same snapshot, you can incrementally update one of those volumes using the data in the other volume. The logic is basically this:

 

  • Create a new snapshot on the source.
  • Identify the changes between that new snapshot and the older common snapshot that exists in both the source and target volumes.
  • Ship the changes between those two snapshots to the target volume.

 

Once that's complete, the state of the target volume matches the content of that newly created snapshot at the source. There are a lot of additional capabilities regarding storing and transferring other snapshots, controlling retention policies, and protecting snapshots from deletion. The basic logic is the same, though: you just need two volumes with a common snapshot.

 

Initial configuration - volumes

 

Here are my current 5 volumes being replicated as 5 ordinary SnapMirror replicas:

 

rtp-a700s-c02::> snapmirror show -destination-path jfs_svm2:jfs3*
Source Path         Destination Path        Mirror State
------------------- ----------------------- --------------
jfs_svm1:jfs3_dbf1 jfs_svm2:jfs3_dbf1_mirr Snapmirrored
jfs_svm1:jfs3_dbf2 jfs_svm2:jfs3_dbf2_mirr Snapmirrored
jfs_svm1:jfs3_logs1 jfs_svm2:jfs3_logs1_mirr Snapmirrored
jfs_svm1:jfs3_logs2 jfs_svm2:jfs3_logs2_mirr Snapmirrored
jfs_svm1:jfs3_ocr jfs_svm2:jfs3_ocr_mirr Snapmirrored

 

Common snapshots

 

Here are the snapshots I have on the source:

 

rtp-a700s-c01::> snapshot show -vserver jfs_svm1 -volume jfs3*
Vserver  Volume      Snapshot                                 
-------- --------    -------------------------------------
jfs_svm1 jfs3_dbf1   snapmirror.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520140.2024-02-23_190259
         jfs3_dbf2   snapmirror.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520141.2024-02-23_190315
         jfs3_logs1  snapmirror.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520142.2024-02-23_190257
         jfs3_logs2  snapmirror.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520143.2024-02-23_190258
         jfs3_ocr    snapmirror.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520139.2024-02-23_190256

 

And here are the snapshots on my destination volumes:

 

rtp-a700s-c02::> snapshot show -vserver jfs_svm2 -volume jfs3*
Vserver  Volume           Snapshot                                
-------- --------         -------------------------------------
jfs_svm2 jfs3_dbf1_mirr   snapmirror.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520140.2024-02-23_190259
         jfs3_dbf2_mirr   snapmirror.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520141.2024-02-23_190315
         jfs3_logs1_mirr  snapmirror.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520142.2024-02-23_190257
         jfs3_logs2_mirr  snapmirror.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520143.2024-02-23_190258
         jfs3_ocr_mirr    snapmirror.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520139.2024-02-23_190256

 

See the common snapshot in each volume? As long as those snapshots exist, I can do virtually anything I want to these volumes and I’ll still be able to resynchronize the replication relationships without a total retransfer of everything.
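
If you want to check for a common snapshot programmatically instead of eyeballing CLI output, here's a minimal sketch that intersects the snapshot names of one source/destination pair. It uses the doREST wrapper introduced in the next section.

import doREST

def snapshot_names(svm, volname):
    # Look up the volume uuid, then list the names of its snapshots
    vol = doREST.doREST(svm, 'get', '/storage/volumes',
                        restargs='name=' + volname + '&fields=uuid')
    voluuid = vol.response['records'][0]['uuid']
    snaps = doREST.doREST(svm, 'get',
                          '/storage/volumes/' + voluuid + '/snapshots',
                          restargs='fields=name')
    return {record['name'] for record in snaps.response['records']}

# A resync remains possible as long as this intersection is not empty
common = snapshot_names('jfs_svm1', 'jfs3_dbf1') & \
         snapshot_names('jfs_svm2', 'jfs3_dbf1_mirr')
print(common)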

 

Do it with REST

 

The customer request was to automate the conversion process. The output below was generated with a personal toolbox of mine that issues REST API calls and prints the complete debug output. I normally script in Python.

 

The POC code used the following inputs:

 

  • Name of the snapmirror destination server
  • Pattern match for existing snapmirrored volumes
  • Name for the ONTAP Consistency Groups to be created

 

The basic steps are these:

 

  1. Enumerate replicated volumes on the target system using the pattern match
  2. Identify the name of the source volume and the source SVM hosting that volume
  3. Delete the snapmirror relationships
  4. Release the snapmirror destination at the source
  5. Define a new CG at the source
  6. Define a new CG at the destination
  7. Define a CG snapmirror relationship
  8. Resync the mirror

 

Caution: Step 4 is the critical step. I'll keep repeating this warning in this post. By default, releasing a snapmirror relationship will delete all common snapshots. You need to use additional, non-default CLI/REST arguments to stop that from happening. If you make an error, you'll lose your common snapshots.

 

In the following sections, I’ll walk you through my POC script and show you the REST conversation happening along the way.

 

The script

 

Here’s the first few lines:

 

#! /usr/bin/python3
import sys
sys.path.append(sys.path[0] + "/NTAPlib")
import doREST

svm1='jfs_svm1'
svm2='jfs_svm2'

 

The highlights are that I'm importing my doREST module and defining a couple of variables with the names of the SVMs I'm using. The SVM jfs_svm1 is the source of the SnapMirror relationship, and jfs_svm2 is the destination SVM.

 

A note about doREST. It's a wrapper for ONTAP APIs that is designed to package up the responses in a standard way. It also has a credential management system and hostname registry. I use this module to string together multiple calls and build workflows. It also makes asynchronous calls behave synchronously. For a call such as POST /snapmirror/relationships, which is asynchronous, the doREST module will read the job uuid and repeatedly poll ONTAP until the job is complete. It will then return the results. In the examples below, I'll include the input/output of that looping behavior. If you want to know more, visit my github repo.
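
If you'd rather not pull in my module, the core polling pattern is easy to reproduce. Here's a stripped-down sketch of that behavior using the requests library. To be clear, this is not the actual doREST code; it omits the credential management, hostname registry, and error handling the real module provides.

import time
import requests

def rest_call(host, method, api, auth, json=None):
    # Issue the call; ONTAP REST endpoints all live under /api
    r = requests.request(method, 'https://' + host + '/api' + api,
                         auth=auth, json=json, verify=False)
    body = r.json() if r.text else {}
    # A 202 means ONTAP accepted an asynchronous job; poll until it finishes
    if r.status_code == 202 and 'job' in body:
        joburl = 'https://' + host + '/api/cluster/jobs/' + \
                 body['job']['uuid'] + '?fields=state,message'
        while True:
            job = requests.get(joburl, auth=auth, verify=False).json()
            if job['state'] in ('success', 'failure'):
                return job
            time.sleep(2)
    return body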

 

You'll see I'm running it in debug mode where the API, JSON, and REST response are printed at the CLI. I've included that information to help you understand how to build your own REST workflows.

 

Enumerate the snapmirror relationships

 

If I'm going to convert a set of snapmirror relationships into a CG configuration, I'll obviously need to know which ones I'm converting.

 

api='/snapmirror/relationships'
restargs='fields=uuid,' + \
         'state,' + \
         'destination.path,' +  \
         'destination.svm.name,' +  \
         'destination.svm.uuid,' +  \
         'source.path,' + \
         'source.svm.name,' +  \
         'source.svm.uuid' + \
         '&query_fields=destination.path' + \
         '&query=jfs_svm2:jfs3*'

snapmirrors=doREST.doREST(svm2,'get',api,restargs=restargs,debug=2)

 

This code sets up the REST arguments that go with a GET /snapmirror/relationships. I’ve passed a query for a path of jfs_svm2:jfs3* which means the results will only contain the SnapMirror destinations I mentioned earlier in this post. It's a wildcard search.

 

Here’s the debug output that shows the REST conversation with ONTAP:

 

->doREST:REST:API: GET https://10.192.160.45/api/snapmirror/relationships?fields=uuid,state,destination.path,destination.svm.name,destination.svm.uuid,source.path,source.svm.name,source.svm.uuid&query_fields=destination.path&query=jfs_svm2:jfs3*
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "records": [
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "26b40c82-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "source": {
->doREST:REST:RESPONSE: "path": "jfs_svm1:jfs3_ocr",
->doREST:REST:RESPONSE: "svm": {
->doREST:REST:RESPONSE: "uuid": "ac509ea6-fa33-11ed-ae6e-00a098f7d731",
->doREST:REST:RESPONSE: "name": "jfs_svm1",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/svm/peers/2fc4ddfd-fb05-11ed-993a-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: "destination": {
->doREST:REST:RESPONSE: "path": "jfs_svm2:jfs3_ocr_mirr",
->doREST:REST:RESPONSE: "svm": {
->doREST:REST:RESPONSE: "uuid": "ca77cf7f-fa33-11ed-993a-00a098af9054",
->doREST:REST:RESPONSE: "name": "jfs_svm2",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/svm/svms/ca77cf7f-fa33-11ed-993a-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: "state": "snapmirrored",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/snapmirror/relationships/26b40c82-d27e-11ee-a514-00a098af9054/"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "2759306a-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "source": {
->doREST:REST:RESPONSE: "path": "jfs_svm1:jfs3_logs1",
->doREST:REST:RESPONSE: "svm": {
->doREST:REST:RESPONSE: "uuid": "ac509ea6-fa33-11ed-ae6e-00a098f7d731",
->doREST:REST:RESPONSE: "name": "jfs_svm1",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/svm/peers/2fc4ddfd-fb05-11ed-993a-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: "destination": {
->doREST:REST:RESPONSE: "path": "jfs_svm2:jfs3_logs1_mirr",
->doREST:REST:RESPONSE: "svm": {
->doREST:REST:RESPONSE: "uuid": "ca77cf7f-fa33-11ed-993a-00a098af9054",
->doREST:REST:RESPONSE: "name": "jfs_svm2",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/svm/svms/ca77cf7f-fa33-11ed-993a-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: "state": "snapmirrored",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/snapmirror/relationships/2759306a-d27e-11ee-a514-00a098af9054/"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "27fdd036-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "source": {
->doREST:REST:RESPONSE: "path": "jfs_svm1:jfs3_logs2",
->doREST:REST:RESPONSE: "svm": {
->doREST:REST:RESPONSE: "uuid": "ac509ea6-fa33-11ed-ae6e-00a098f7d731",
->doREST:REST:RESPONSE: "name": "jfs_svm1",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/svm/peers/2fc4ddfd-fb05-11ed-993a-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: "destination": {
->doREST:REST:RESPONSE: "path": "jfs_svm2:jfs3_logs2_mirr",
->doREST:REST:RESPONSE: "svm": {
->doREST:REST:RESPONSE: "uuid": "ca77cf7f-fa33-11ed-993a-00a098af9054",
->doREST:REST:RESPONSE: "name": "jfs_svm2",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/svm/svms/ca77cf7f-fa33-11ed-993a-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: "state": "snapmirrored",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/snapmirror/relationships/27fdd036-d27e-11ee-a514-00a098af9054/"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "28a265e8-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "source": {
->doREST:REST:RESPONSE: "path": "jfs_svm1:jfs3_dbf1",
->doREST:REST:RESPONSE: "svm": {
->doREST:REST:RESPONSE: "uuid": "ac509ea6-fa33-11ed-ae6e-00a098f7d731",
->doREST:REST:RESPONSE: "name": "jfs_svm1",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/svm/peers/2fc4ddfd-fb05-11ed-993a-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: "destination": {
->doREST:REST:RESPONSE: "path": "jfs_svm2:jfs3_dbf1_mirr",
->doREST:REST:RESPONSE: "svm": {
->doREST:REST:RESPONSE: "uuid": "ca77cf7f-fa33-11ed-993a-00a098af9054",
->doREST:REST:RESPONSE: "name": "jfs_svm2",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/svm/svms/ca77cf7f-fa33-11ed-993a-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: "state": "snapmirrored",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/snapmirror/relationships/28a265e8-d27e-11ee-a514-00a098af9054/"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "320db78d-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "source": {
->doREST:REST:RESPONSE: "path": "jfs_svm1:jfs3_dbf2",
->doREST:REST:RESPONSE: "svm": {
->doREST:REST:RESPONSE: "uuid": "ac509ea6-fa33-11ed-ae6e-00a098f7d731",
->doREST:REST:RESPONSE: "name": "jfs_svm1",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/svm/peers/2fc4ddfd-fb05-11ed-993a-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: "destination": {
->doREST:REST:RESPONSE: "path": "jfs_svm2:jfs3_dbf2_mirr",
->doREST:REST:RESPONSE: "svm": {
->doREST:REST:RESPONSE: "uuid": "ca77cf7f-fa33-11ed-993a-00a098af9054",
->doREST:REST:RESPONSE: "name": "jfs_svm2",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/svm/svms/ca77cf7f-fa33-11ed-993a-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: },
->doREST:REST:RESPONSE: "state": "snapmirrored",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/snapmirror/relationships/320db78d-d27e-11ee-a514-00a098af9054/"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: ],
->doREST:REST:RESPONSE: "num_records": 5,
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/snapmirror/relationships?fields=uuid,state,destination.path,destination.svm.name,destination.svm.uuid,source.path,source.svm.name,source.svm.uuid&query_fields=destination.path&query=jfs_svm2:jfs3*"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK

 

Highlights:

 

  • Each record contains the uuid of the snapmirror relationship
  • The source section identifies the source volume and SVM
  • The destination section identifies the destination volume and SVM

 

Delete the snapmirror relationships

 

for record in snapmirrors.response['records']:
   delete=doREST.doREST(svm2,'delete','/snapmirror/relationships/' + record['uuid'] + '/?destination_only=true',debug=2)

 

This block iterates over the records returned by the prior GET /snapmirror/relationships, extracts each uuid, and deletes all 5 of the relationships.

 

Caution: the destination_only=true argument is required to stop ONTAP from deleting the common snapshots. Do not overlook this parameter.

 

->doREST:REST:API: DELETE https://10.192.160.45/api/snapmirror/relationships/26b40c82-d27e-11ee-a514-00a098af9054/?destination_only=true
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "job": {
->doREST:REST:RESPONSE: "uuid": "d905b4e3-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/d905b4e3-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.45/api/cluster/jobs/d905b4e3-d27e-11ee-a514-00a098af9054?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "d905b4e3-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "state": "success",
->doREST:REST:RESPONSE: "message": "success",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/d905b4e3-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK
->doREST:REST:API: DELETE https://10.192.160.45/api/snapmirror/relationships/2759306a-d27e-11ee-a514-00a098af9054/?destination_only=true
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "job": {
->doREST:REST:RESPONSE: "uuid": "d9ad1f48-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/d9ad1f48-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.45/api/cluster/jobs/d9ad1f48-d27e-11ee-a514-00a098af9054?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "d9ad1f48-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "state": "success",
->doREST:REST:RESPONSE: "message": "success",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/d9ad1f48-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK
->doREST:REST:API: DELETE https://10.192.160.45/api/snapmirror/relationships/27fdd036-d27e-11ee-a514-00a098af9054/?destination_only=true
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "job": {
->doREST:REST:RESPONSE: "uuid": "da546656-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/da546656-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.45/api/cluster/jobs/da546656-d27e-11ee-a514-00a098af9054?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "da546656-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "state": "success",
->doREST:REST:RESPONSE: "message": "success",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/da546656-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK
->doREST:REST:API: DELETE https://10.192.160.45/api/snapmirror/relationships/28a265e8-d27e-11ee-a514-00a098af9054/?destination_only=true
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "job": {
->doREST:REST:RESPONSE: "uuid": "daf9c09a-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/daf9c09a-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.45/api/cluster/jobs/daf9c09a-d27e-11ee-a514-00a098af9054?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "daf9c09a-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "state": "success",
->doREST:REST:RESPONSE: "message": "success",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/daf9c09a-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK
->doREST:REST:API: DELETE https://10.192.160.45/api/snapmirror/relationships/320db78d-d27e-11ee-a514-00a098af9054/?destination_only=true
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "job": {
->doREST:REST:RESPONSE: "uuid": "dba0429b-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/dba0429b-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.45/api/cluster/jobs/dba0429b-d27e-11ee-a514-00a098af9054?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "dba0429b-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE: "state": "success",
->doREST:REST:RESPONSE: "message": "success",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/dba0429b-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK

 

You can see in the above output that the actual DELETE /snapmirror/relationships operation was asynchronous. The REST call returned a status of 202, which means the operation was accepted, but is not yet complete.

 

The doREST module then captured the uuid of the job and polled ONTAP until complete.

 

Release the snapmirror relationships

 

The next part of the script is almost identical to the prior snippet, except this time it’s doing a snapmirror release operation.

 

The relationship itself was deleted in the prior step. Asynchronous SnapMirror is a pull technology, so deleting the relationship at the destination was enough to halt further updates. That deletion operation was executed against the destination controller and included the argument destination_only=true.

 

The next deletion operation will target the source and will include source_info_only=true. We still need to de-register the destination from the source, which is what this step does.

 

Caution: the source_info_only=true argument is required to stop ONTAP from deleting the common snapshots. Do not overlook this parameter.

 

 

for record in snapmirrors.response['records']:
   delete=doREST.doREST(svm1,'delete','/snapmirror/relationships/' + record['uuid'] + '/?source_info_only=true',debug=2)

 

->doREST:REST:API: DELETE https://10.192.160.40/api/snapmirror/relationships/26b40c82-d27e-11ee-a514-00a098af9054/?source_info_only=true
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "job": {
->doREST:REST:RESPONSE: "uuid": "dc4fcade-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/dc4fcade-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.40/api/cluster/jobs/dc4fcade-d27e-11ee-a161-00a098f7d731?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "dc4fcade-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "state": "success",
->doREST:REST:RESPONSE: "message": "success",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/dc4fcade-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK
->doREST:REST:API: DELETE https://10.192.160.40/api/snapmirror/relationships/2759306a-d27e-11ee-a514-00a098af9054/?source_info_only=true
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "job": {
->doREST:REST:RESPONSE: "uuid": "dcfd165f-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/dcfd165f-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.40/api/cluster/jobs/dcfd165f-d27e-11ee-a161-00a098f7d731?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "dcfd165f-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "state": "success",
->doREST:REST:RESPONSE: "message": "success",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/dcfd165f-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK
->doREST:REST:API: DELETE https://10.192.160.40/api/snapmirror/relationships/27fdd036-d27e-11ee-a514-00a098af9054/?source_info_only=true
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "job": {
->doREST:REST:RESPONSE: "uuid": "ddac905c-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/ddac905c-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.40/api/cluster/jobs/ddac905c-d27e-11ee-a161-00a098f7d731?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "ddac905c-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "state": "success",
->doREST:REST:RESPONSE: "message": "success",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/ddac905c-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK
->doREST:REST:API: DELETE https://10.192.160.40/api/snapmirror/relationships/28a265e8-d27e-11ee-a514-00a098af9054/?source_info_only=true
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "job": {
->doREST:REST:RESPONSE: "uuid": "de9526a2-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/de9526a2-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.40/api/cluster/jobs/de9526a2-d27e-11ee-a161-00a098f7d731?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "de9526a2-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "state": "success",
->doREST:REST:RESPONSE: "message": "success",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/de9526a2-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK
->doREST:REST:API: DELETE https://10.192.160.40/api/snapmirror/relationships/320db78d-d27e-11ee-a514-00a098af9054/?source_info_only=true
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "job": {
->doREST:REST:RESPONSE: "uuid": "df43391f-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/df43391f-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.40/api/cluster/jobs/df43391f-d27e-11ee-a161-00a098f7d731?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "df43391f-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "state": "success",
->doREST:REST:RESPONSE: "message": "success",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/df43391f-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK

 

 

At this point, the original snapmirror relationships are completely deconfigured, but the volumes still contain a common snapshot, which is all that is required to perform a resync.

 

Create a CG at the source

 

Assuming it hasn't already been done before, we'll need to define the source volumes as a CG. The process starts by creating a mapping of source volumes to destination volumes using the information obtained when the original snapmirror data was collected.

 

mappings={}
for record in snapmirrors.response['records']:
    mappings[record['source']['path'].split(':')[1]] = record['destination']['path'].split(':')[1]

 

The mappings dictionary looks like this:

 

{'jfs3_ocr': 'jfs3_ocr_mirr', 'jfs3_logs1': 'jfs3_logs1_mirr', 'jfs3_logs2': 'jfs3_logs2_mirr', 'jfs3_dbf1': 'jfs3_dbf1_mirr', 'jfs3_dbf2': 'jfs3_dbf2_mirr'}

 

The next step is to create the consistency group using the keys from this dictionary, because the keys are the volumes at the source. Note that I'm naming the CG jfs3, which is the name of the host where this database resides.

 

vollist=[]
for srcvol in mappings.keys():
    vollist.append({'name':srcvol,'provisioning_options':{'action':'add'}})
api='/application/consistency-groups'
json4rest={'name':'jfs3', \
           'svm.name':'jfs_svm1', \
           'volumes': vollist}
cgcreate=doREST.doREST(svm1,'post',api,json=json4rest,debug=2)

 

->doREST:REST:API: POST https://10.192.160.40/api/application/consistency-groups
->doREST:REST:JSON: {'name': 'jfs3', 'svm.name': 'jfs_svm1', 'volumes': [{'name': 'jfs3_ocr', 'provisioning_options': {'action': 'add'}}, {'name': 'jfs3_logs1', 'provisioning_options': {'action': 'add'}}, {'name': 'jfs3_logs2', 'provisioning_options': {'action': 'add'}}, {'name': 'jfs3_dbf1', 'provisioning_options': {'action': 'add'}}, {'name': 'jfs3_dbf2', 'provisioning_options': {'action': 'add'}}]}
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "job": {
->doREST:REST:RESPONSE: "uuid": "dfe481c8-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/dfe481c8-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.40/api/cluster/jobs/dfe481c8-d27e-11ee-a161-00a098f7d731?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "dfe481c8-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "state": "running",
->doREST:REST:RESPONSE: "message": "Unclaimed",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/dfe481c8-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK
->doREST:REST:API: GET https://10.192.160.40/api/cluster/jobs/dfe481c8-d27e-11ee-a161-00a098f7d731?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "dfe481c8-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "state": "running",
->doREST:REST:RESPONSE: "message": "Unclaimed",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/dfe481c8-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK
->doREST:REST:API: GET https://10.192.160.40/api/cluster/jobs/dfe481c8-d27e-11ee-a161-00a098f7d731?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "dfe481c8-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "state": "running",
->doREST:REST:RESPONSE: "message": "Creating consistency group volume record - 3 of 5 complete.",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/dfe481c8-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK
->doREST:REST:API: GET https://10.192.160.40/api/cluster/jobs/dfe481c8-d27e-11ee-a161-00a098f7d731?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE: "uuid": "dfe481c8-d27e-11ee-a161-00a098f7d731",
->doREST:REST:RESPONSE: "state": "success",
->doREST:REST:RESPONSE: "message": "success",
->doREST:REST:RESPONSE: "_links": {
->doREST:REST:RESPONSE: "self": {
->doREST:REST:RESPONSE: "href": "/api/cluster/jobs/dfe481c8-d27e-11ee-a161-00a098f7d731"
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK

 

Create a CG at the destination

 

The next step is to create a CG at the destination:

 

The list of volumes is also taken from the mappings dictionary, except rather than using the keys, I'll use the values. Those are the snapmirror destination volumes discovered in the first step.

 

vollist=[]
for srcvol in mappings.keys():
    vollist.append({'name':mappings[srcvol],'provisioning_options':{'action':'add'}})
api='/application/consistency-groups'
json4rest={'name':'jfs3', \
           'svm.name':'jfs_svm2', \
           'volumes': vollist}

cgcreate=doREST.doREST(svm2,'post',api,json=json4rest,debug=2)

 

->doREST:REST:API: POST https://10.192.160.45/api/application/consistency-groups
->doREST:REST:JSON: {'name': 'jfs3', 'svm.name': 'jfs_svm2', 'volumes': [{'name': 'jfs3_ocr_mirr', 'provisioning_options': {'action': 'add'}}, {'name': 'jfs3_logs1_mirr', 'provisioning_options': {'action': 'add'}}, {'name': 'jfs3_logs2_mirr', 'provisioning_options': {'action': 'add'}}, {'name': 'jfs3_dbf1_mirr', 'provisioning_options': {'action': 'add'}}, {'name': 'jfs3_dbf2_mirr', 'provisioning_options': {'action': 'add'}}]}
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE:  "job": {
->doREST:REST:RESPONSE:   "uuid": "e25c2f6f-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE:   "_links": {
->doREST:REST:RESPONSE:    "self": {
->doREST:REST:RESPONSE:     "href": "/api/cluster/jobs/e25c2f6f-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE:    }
->doREST:REST:RESPONSE:   }
->doREST:REST:RESPONSE:  }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.45/api/cluster/jobs/e25c2f6f-d27e-11ee-a514-00a098af9054?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE:  "uuid": "e25c2f6f-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE:  "state": "success",
->doREST:REST:RESPONSE:  "message": "success",
->doREST:REST:RESPONSE:  "_links": {
->doREST:REST:RESPONSE:   "self": {
->doREST:REST:RESPONSE:    "href": "/api/cluster/jobs/e25c2f6f-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE:   }
->doREST:REST:RESPONSE:  }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK

 

Create the consistency group mirror

 

To define the CG mirror, I need to build the CG snapmirror map. Order matters. I need a list of source volumes and destination volumes, and then ONTAP will match element X of the first list to element X of the second list. That's how you control which volume in the source CG should be replicated to which volume in the destination CG.

 

for record in snapmirrors.response['records']:
    mappings[record['source']['path'].split(':')[1]] = record['destination']['path'].split(':')[1]

srclist=[]
dstlist=[]

for srcvol in mappings.keys():
    srclist.append({'name':srcvol})
    dstlist.append({'name':mappings[srcvol]})

 

Now I can create the mirror from the jfs3 CG on the source to the jfs3 CG on the destination:

 

api='/snapmirror/relationships'
json4rest={'source':{'path':'jfs_svm1:/cg/jfs3', \
                     'consistency_group_volumes' : srclist}, \
           'destination':{'path':'jfs_svm2:/cg/jfs3', \
                          'consistency_group_volumes' : dstlist}, \
           'policy':'Asynchronous'}

cgsnapmirror=doREST.doREST(svm2,'post',api,json=json4rest,debug=2)

 

->doREST:REST:API: POST https://10.192.160.45/api/snapmirror/relationships
->doREST:REST:JSON: {'source': {'path': 'jfs_svm1:/cg/jfs3', 'consistency_group_volumes': [{'name': 'jfs3_ocr'}, {'name': 'jfs3_logs1'}, {'name': 'jfs3_logs2'}, {'name': 'jfs3_dbf1'}, {'name': 'jfs3_dbf2'}]}, 'destination': {'path': 'jfs_svm2:/cg/jfs3', 'consistency_group_volumes': [{'name': 'jfs3_ocr_mirr'}, {'name': 'jfs3_logs1_mirr'}, {'name': 'jfs3_logs2_mirr'}, {'name': 'jfs3_dbf1_mirr'}, {'name': 'jfs3_dbf2_mirr'}]}, 'policy': 'Asynchronous'}
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE:  "job": {
->doREST:REST:RESPONSE:   "uuid": "e304e8d8-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE:   "_links": {
->doREST:REST:RESPONSE:    "self": {
->doREST:REST:RESPONSE:     "href": "/api/cluster/jobs/e304e8d8-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE:    }
->doREST:REST:RESPONSE:   }
->doREST:REST:RESPONSE:  }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.45/api/cluster/jobs/e304e8d8-d27e-11ee-a514-00a098af9054?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE:  "uuid": "e304e8d8-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE:  "state": "success",
->doREST:REST:RESPONSE:  "message": "success",
->doREST:REST:RESPONSE:  "_links": {
->doREST:REST:RESPONSE:   "self": {
->doREST:REST:RESPONSE:    "href": "/api/cluster/jobs/e304e8d8-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE:   }
->doREST:REST:RESPONSE:  }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK

 

Retrieve the UUID

 

My final setup step will be to resync the relationship as a CG replica using the previously existing common snapshots, but in order to do that I need the uuid of the CG snapmirror I created. I’ll reuse the same query as before. Strictly speaking, I don’t need all these fields for this workflow, but for the sake of consistency and futureproofing, I’ll gather all the core information about the snapmirror relationship in a single call.

 

Note that I've changed my query to jfs_svm2:/cg/jfs3. This is the syntax for addressing a CG SnapMirror:

 

 svm:/cg/[cg name]

 

api='/snapmirror/relationships'
restargs='fields=uuid,' + \
         'state,' + \
         'destination.path,' +  \
         'destination.svm.name,' +  \
         'destination.svm.uuid,' +  \
         'source.path,' + \
         'source.svm.name,' +  \
         'source.svm.uuid' + \
         '&query_fields=destination.path' + \
         '&query=jfs_svm2:/cg/jfs3'

cgsnapmirror=doREST.doREST(svm2,'get',api,restargs=restargs,debug=2)
cguuid=cgsnapmirror.response['records'][0]['uuid']

 

->doREST:REST:API: GET https://10.192.160.45/api/snapmirror/relationships?fields=uuid,state,destination.path,destination.svm.name,destination.svm.uuid,source.path,source.svm.name,source.svm.uuid&query_fields=destination.path&query=jfs_svm2:/cg/jfs3
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE:  "records": [
->doREST:REST:RESPONSE:   {
->doREST:REST:RESPONSE:    "uuid": "e304e0fe-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE:    "source": {
->doREST:REST:RESPONSE:     "path": "jfs_svm1:/cg/jfs3",
->doREST:REST:RESPONSE:     "svm": {
->doREST:REST:RESPONSE:      "uuid": "ac509ea6-fa33-11ed-ae6e-00a098f7d731",
->doREST:REST:RESPONSE:      "name": "jfs_svm1",
->doREST:REST:RESPONSE:      "_links": {
->doREST:REST:RESPONSE:       "self": {
->doREST:REST:RESPONSE:        "href": "/api/svm/peers/2fc4ddfd-fb05-11ed-993a-00a098af9054"
->doREST:REST:RESPONSE:       }
->doREST:REST:RESPONSE:      }
->doREST:REST:RESPONSE:     }
->doREST:REST:RESPONSE:    },
->doREST:REST:RESPONSE:    "destination": {
->doREST:REST:RESPONSE:     "path": "jfs_svm2:/cg/jfs3",
->doREST:REST:RESPONSE:     "svm": {
->doREST:REST:RESPONSE:      "uuid": "ca77cf7f-fa33-11ed-993a-00a098af9054",
->doREST:REST:RESPONSE:      "name": "jfs_svm2",
->doREST:REST:RESPONSE:      "_links": {
->doREST:REST:RESPONSE:       "self": {
->doREST:REST:RESPONSE:        "href": "/api/svm/svms/ca77cf7f-fa33-11ed-993a-00a098af9054"
->doREST:REST:RESPONSE:       }
->doREST:REST:RESPONSE:      }
->doREST:REST:RESPONSE:     }
->doREST:REST:RESPONSE:    },
->doREST:REST:RESPONSE:    "state": "snapmirrored",
->doREST:REST:RESPONSE:    "_links": {
->doREST:REST:RESPONSE:     "self": {
->doREST:REST:RESPONSE:      "href": "/api/snapmirror/relationships/e304e0fe-d27e-11ee-a514-00a098af9054/"
->doREST:REST:RESPONSE:     }
->doREST:REST:RESPONSE:    }
->doREST:REST:RESPONSE:   }
->doREST:REST:RESPONSE:  ],
->doREST:REST:RESPONSE:  "num_records": 1,
->doREST:REST:RESPONSE:  "_links": {
->doREST:REST:RESPONSE:   "self": {
->doREST:REST:RESPONSE:    "href": "/api/snapmirror/relationships?fields=uuid,state,destination.path,destination.svm.name,destination.svm.uuid,source.path,source.svm.name,source.svm.uuid&query_fields=destination.path&query=jfs_svm2:/cg/jfs3"
->doREST:REST:RESPONSE:   }
->doREST:REST:RESPONSE:  }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK

 

Resync

 

Now I'm ready to resync with a PATCH operation. I'll take the first record from the prior operation and extract the uuid. If I were doing this in production code, I'd validate the results to ensure that the query returned one and only one record, as shown in the sketch below. That ensures I really do have the uuid of the CG relationship I created.
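
Here's what that validation could look like; cgsnapmirror is the response object returned by the query in the previous section.

if cgsnapmirror.response.get('num_records') != 1:
    # Zero records means the CG relationship wasn't found; more than one
    # means the query matched something unexpected. Stop either way.
    raise RuntimeError('Expected exactly one relationship for ' +
                       'jfs_svm2:/cg/jfs3, found ' +
                       str(cgsnapmirror.response.get('num_records')))
cguuid = cgsnapmirror.response['records'][0]['uuid']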

 

api='/snapmirror/relationships/' + cguuid
json4rest={'state':'snapmirrored'}
cgresync=doREST.doREST(svm2,'patch',api,json=json4rest,debug=2)

 

->doREST:REST:API: PATCH https://10.192.160.45/api/snapmirror/relationships/e304e0fe-d27e-11ee-a514-00a098af9054
->doREST:REST:JSON: {'state': 'snapmirrored'}
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE:  "job": {
->doREST:REST:RESPONSE:   "uuid": "e3b577a8-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE:   "_links": {
->doREST:REST:RESPONSE:    "self": {
->doREST:REST:RESPONSE:     "href": "/api/cluster/jobs/e3b577a8-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE:    }
->doREST:REST:RESPONSE:   }
->doREST:REST:RESPONSE:  }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 202
->doREST:REASON: Accepted
->doREST:REST:API: GET https://10.192.160.45/api/cluster/jobs/e3b577a8-d27e-11ee-a514-00a098af9054?fields=state,message
->doREST:REST:JSON: None
->doREST:REST:RESPONSE: {
->doREST:REST:RESPONSE:  "uuid": "e3b577a8-d27e-11ee-a514-00a098af9054",
->doREST:REST:RESPONSE:  "state": "success",
->doREST:REST:RESPONSE:  "message": "success",
->doREST:REST:RESPONSE:  "_links": {
->doREST:REST:RESPONSE:   "self": {
->doREST:REST:RESPONSE:    "href": "/api/cluster/jobs/e3b577a8-d27e-11ee-a514-00a098af9054"
->doREST:REST:RESPONSE:   }
->doREST:REST:RESPONSE:  }
->doREST:REST:RESPONSE: }
->doREST:RESULT: 200
->doREST:REASON: OK

 

Done. I can now see a healthy CG snapmirror relationship.

 

rtp-a700s-c02::> snapmirror show -destination-path jfs_svm2:/cg/jfs3

Source Path: jfs_svm1:/cg/jfs3
Destination Path: jfs_svm2:/cg/jfs3
Relationship Type: XDP
Relationship Group Type: consistencygroup
SnapMirror Schedule: -
SnapMirror Policy Type: mirror-vault
SnapMirror Policy: Asynchronous
Tries Limit: -
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Relationship Status: Idle
File Restore File Count: -
File Restore File List: -
Transfer Snapshot: -
Snapshot Progress: -
Total Progress: -
Percent Complete for Current Status: -
Network Compression Ratio: -
Snapshot Checkpoint: -
Newest Snapshot: snapmirrorCG.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520139.2024-02-23_190812
Newest Snapshot Timestamp: 02/23 19:09:12
Exported Snapshot: snapmirrorCG.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520139.2024-02-23_190812
Exported Snapshot Timestamp: 02/23 19:09:12
Healthy: true
Unhealthy Reason: -
Destination Volume Node: -
Relationship ID: e304e0fe-d27e-11ee-a514-00a098af9054
Current Operation ID: -
Transfer Type: -
Transfer Error: -
Current Throttle: -
Current Transfer Priority: -
Last Transfer Type: resync
Last Transfer Error: -
Last Transfer Size: 99.81KB
Last Transfer Network Compression Ratio: 1:1
Last Transfer Duration: 0:1:5
Last Transfer From: jfs_svm1:/cg/jfs3
Last Transfer End Timestamp: 02/23 19:09:17
Progress Last Updated: -
Relationship Capability: 8.2 and above
Lag Time: 3:24:1
Identity Preserve Vserver DR: -
Volume MSIDs Preserved: -
Is Auto Expand Enabled: true
Backoff Level: -
Number of Successful Updates: 0
Number of Failed Updates: 0
Number of Successful Resyncs: 1
Number of Failed Resyncs: 0
Number of Successful Breaks: 0
Number of Failed Breaks: 0
Total Transfer Bytes: 102208
Total Transfer Time in Seconds: 65
FabricLink Source Role: -
FabricLink Source Bucket: -
FabricLink Peer Role: -
FabricLink Peer Bucket: -
FabricLink Topology: -
FabricLink Pull Byte Count: -
FabricLink Push Byte Count: -
FabricLink Pending Work Count: -
FabricLink Status: -

 

I would still need to ensure I have the correct snapmirror schedules and policies, but those are essentially the same procedures used for regular volume-based asynchronous SnapMirror. The primary difference is that you reference the paths, where necessary, using the svm:/cg/[cg name] syntax. Start here https://docs.netapp.com/us-en/ontap/data-protection/create-replication-job-schedule-task.html for those details.
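
For example, assigning a different policy to the CG relationship is just another PATCH against the relationship uuid. Here's a sketch; I'm using MirrorAndVault because that's the policy you'll see on the CG relationship in the CLI example below, but any CG-compatible policy on the destination SVM would work.

import doREST

# cguuid is the CG relationship uuid retrieved in the "Retrieve the UUID" step
doREST.doREST('jfs_svm2', 'patch',
              '/snapmirror/relationships/' + cguuid,
              json={'policy': {'name': 'MirrorAndVault'}})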

 

CLI procedure

 

If you're using ONTAP 9.14.1 or higher, you can do everything via the CLI or System Manager too.

 

Delete the existing snapmirror relationships

 

rtp-a700s-c02::> snapmirror delete -destination-path jfs_svm2:jfs3_ocr_mirr
Operation succeeded: snapmirror delete for the relationship with destination "jfs_svm2:jfs3_ocr_mirr".

rtp-a700s-c02::> snapmirror delete -destination-path jfs_svm2:jfs3_dbf1_mirr
Operation succeeded: snapmirror delete for the relationship with destination "jfs_svm2:jfs3_dbf1_mirr".

rtp-a700s-c02::> snapmirror delete -destination-path jfs_svm2:jfs3_dbf2_mirr
Operation succeeded: snapmirror delete for the relationship with destination "jfs_svm2:jfs3_dbf2_mirr".

rtp-a700s-c02::> snapmirror delete -destination-path jfs_svm2:jfs3_logs1_mirr
Operation succeeded: snapmirror delete for the relationship with destination "jfs_svm2:jfs3_logs1_mirr".

rtp-a700s-c02::> snapmirror delete -destination-path jfs_svm2:jfs3_logs2_mirr
Operation succeeded: snapmirror delete for the relationship with destination "jfs_svm2:jfs3_logs2_mirr".

 

Release the snapmirror destinations

 

Don’t forget the "-relationship-info-only true"!

 

rtp-a700s-c01::> snapmirror release -destination-path jfs_svm2:jfs3_ocr_mirr -relationship-info-only true
[Job 4984] Job succeeded: SnapMirror Release Succeeded

rtp-a700s-c01::> snapmirror release -destination-path jfs_svm2:jfs3_dbf1_mirr -relationship-info-only true
[Job 4985] Job succeeded: SnapMirror Release Succeeded

rtp-a700s-c01::> snapmirror release -destination-path jfs_svm2:jfs3_dbf2_mirr -relationship-info-only true
[Job 4986] Job succeeded: SnapMirror Release Succeeded

rtp-a700s-c01::> snapmirror release -destination-path jfs_svm2:jfs3_logs1_mirr -relationship-info-only true
[Job 4987] Job succeeded: SnapMirror Release Succeeded

rtp-a700s-c01::> snapmirror release -destination-path jfs_svm2:jfs3_logs2_mirr -relationship-info-only true
[Job 4988] Job succeeded: SnapMirror Release Succeeded                                                                                                                

 

Create a CG at the source

 

rtp-a700s-c01::> consistency-group create -vserver jfs_svm1 -consistency-group jfs3 -volumes jfs3_ocr,jfs3_dbf1,jfs3_dbf2,jfs3_logs1,jfs3_logs2
  (vserver consistency-group create)
[Job 4989] Job succeeded: Success

 

Create a CG at the destination

 

rtp-a700s-c02::> consistency-group create -vserver jfs_svm2 -consistency-group jfs3 -volumes jfs3_ocr_mirr,jfs3_dbf1_mirr,jfs3_dbf2_mirr,jfs3_logs1_mirr,jfs3_logs2_mirr
  (vserver consistency-group create)
[Job 5355] Job succeeded: Success

 

Create the CG snapmirror relationships

 

rtp-a700s-c02::> snapmirror create -source-path jfs_svm1:/cg/jfs3 -destination-path jfs_svm2:/cg/jfs3 -cg-item-mappings jfs3_ocr:@jfs3_ocr_mirr,jfs3_dbf1:@jfs3_dbf1_mirr,jfs3_dbf2:@jfs3_dbf2_mirr,jfs3_logs1:@jfs3_logs1_mirr,jfs3_logs2:@jfs3_logs2_mirr

Operation succeeded: snapmirror create for the relationship with destination "jfs_svm2:/cg/jfs3".

 

Perform the resync operation

 

rtp-a700s-c02::> snapmirror resync -destination-path jfs_svm2:/cg/jfs3

Operation is queued: snapmirror resync to destination "jfs_svm2:/cg/jfs3".

 

Done!

 

rtp-a700s-c02::> snapmirror show -destination-path jfs_svm2:/cg/jfs3

Source Path: jfs_svm1:/cg/jfs3
Destination Path: jfs_svm2:/cg/jfs3
Relationship Type: XDP
Relationship Group Type: consistencygroup
SnapMirror Schedule: -
SnapMirror Policy Type: mirror-vault
SnapMirror Policy: MirrorAndVault
Tries Limit: -
Throttle (KB/sec): unlimited
Mirror State: Snapmirrored
Relationship Status: Idle
File Restore File Count: -
File Restore File List: -
Transfer Snapshot: -
Snapshot Progress: -
Total Progress: -
Percent Complete for Current Status: -
Network Compression Ratio: -
Snapshot Checkpoint: -
Newest Snapshot: snapmirrorCG.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520144.2024-02-26_005106
Newest Snapshot Timestamp: 02/26 00:52:06
Exported Snapshot: snapmirrorCG.ca77cf7f-fa33-11ed-993a-00a098af9054_2161520144.2024-02-26_005106
Exported Snapshot Timestamp: 02/26 00:52:06
Healthy: true
Unhealthy Reason: -
Destination Volume Node: -
Relationship ID: 15f75947-d441-11ee-a514-00a098af9054
Current Operation ID: -
Transfer Type: -
Transfer Error: -
Current Throttle: -
Current Transfer Priority: -
Last Transfer Type: resync
Last Transfer Error: -
Last Transfer Size: 663.3KB
Last Transfer Network Compression Ratio: 1:1
Last Transfer Duration: 0:1:5
Last Transfer From: jfs_svm1:/cg/jfs3
Last Transfer End Timestamp: 02/26 00:52:11
Progress Last Updated: -
Relationship Capability: 8.2 and above
Lag Time: 0:0:21
Identity Preserve Vserver DR: -
Volume MSIDs Preserved: -
Is Auto Expand Enabled: true
Backoff Level: -
Number of Successful Updates: 0
Number of Failed Updates: 0
Number of Successful Resyncs: 1
Number of Failed Resyncs: 0
Number of Successful Breaks: 0
Number of Failed Breaks: 0
Total Transfer Bytes: 679208
Total Transfer Time in Seconds: 65
FabricLink Source Role: -
FabricLink Source Bucket: -
FabricLink Peer Role: -
FabricLink Peer Bucket: -
FabricLink Topology: -
FabricLink Pull Byte Count: -
FabricLink Push Byte Count: -
FabricLink Pending Work Count: -
FabricLink Status: -

 
