Tech ONTAP Blogs
In today's fast-paced digital world, keeping your data safe and always available is more critical than ever. As enterprises increasingly depend on Kubernetes to deploy and manage their applications, having a robust disaster recovery plan is essential. Enter NetApp® Trident™ with asynchronous SnapMirror® volume replication to keep your data secure and your mind at ease.
With the release of v25.06, Trident now supports NVMe/TCP back ends for volume replication using SnapMirror, offering faster and more efficient data transfer capabilities. This blog takes you through the exciting journey of setting up asynchronous SnapMirror volume replication using Trident. So buckle up and let’s dive in! 🌊
NetApp SnapMirror technology is a game changer for disaster recovery, enabling efficient data replication between NetApp ONTAP® clusters. With Trident, you can establish mirror relationships between PersistentVolumeClaims (PVCs) on different ONTAP clusters, so that your data is always protected and available: your PVC data is continuously replicated to a second cluster and is ready to be promoted to read/write if the primary fails.
Before we get started, make sure that you have the prerequisites in place. In particular, create a SnapMirror replication schedule on the ONTAP cluster, for example a five-minute cron schedule:
job schedule cron create -name five_minute -minute 0,5,10,15,20,25,30,35,40,45,50,55
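The comma-separated minute list for the five_minute schedule above can be generated rather than typed by hand. Here is a small sketch; the `cron_minutes` helper is hypothetical, not part of any NetApp tooling:

```python
def cron_minutes(interval: int) -> str:
    """Build the -minute argument for an ONTAP cron schedule that
    fires every `interval` minutes. The interval must evenly divide 60."""
    if interval <= 0 or 60 % interval != 0:
        raise ValueError("interval must evenly divide 60")
    return ",".join(str(m) for m in range(0, 60, interval))

print(cron_minutes(5))
```

For example, `cron_minutes(5)` produces the exact minute list used in the command above.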
# Kubernetes secret required for creating Trident backend from TBC
[root@scs000646264 artifacts]# cat primary-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: primary-tbc-secret
type: Opaque
stringData:
  username: <username>
  password: <password>
[root@scs000646264 artifacts]# kubectl create -f primary-secret.yaml -n trident
secret/primary-tbc-secret created
# Kubernetes CR TridentBackendConfig (TBC)
[root@scs000646264 artifacts]# cat primary-tbc.yaml
---
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: primary-backend-tbc
spec:
  version: 1
  storageDriverName: ontap-san
  sanType: nvme
  managementLIF: 8.8.8.8
  svm: svm0
  credentials:
    name: primary-tbc-secret
[root@scs000646264 artifacts]# kubectl create -f primary-tbc.yaml -n trident
tridentbackendconfig.trident.netapp.io/primary-backend-tbc created
[root@scs000646264 artifacts]# kubectl get tbc -n trident
NAME BACKEND NAME BACKEND UUID PHASE STATUS
primary-backend-tbc primary-backend-tbc a9b1a3a7-66a8-4d3b-984a-b87d851387c7 Bound Success
# Or, Trident backend json
[root@scs000646264 artifacts]# cat primary-backend.json
{
  "version": 1,
  "storageDriverName": "ontap-san",
  "managementLIF": "8.8.8.8",
  "backendName": "primary-backend",
  "svm": "svm0",
  "username": "<username>",
  "password": "<password>",
  "sanType": "nvme"
}
[root@scs000646264 artifacts]# tridentctl create b -f primary-backend.json -n trident
+--------------------+----------------+--------------------------------------+--------+------------+---------+
| NAME | STORAGE DRIVER | UUID | STATE | USER-STATE | VOLUMES |
+--------------------+----------------+--------------------------------------+--------+------------+---------+
| primary-backend | ontap-san | 6458337e-a27e-4cde-8707-0f6218214356 | online | normal | 0 |
+--------------------+----------------+--------------------------------------+--------+------------+---------+
[root@scs000646264 artifacts]# cat primary-sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: primary-sc
provisioner: csi.trident.netapp.io
parameters:
  backendType: ontap-san
  storagePools: primary-backend-tbc:.*
allowVolumeExpansion: true
[root@scs000646264 artifacts]# kubectl create -f primary-sc.yaml
storageclass.storage.k8s.io/primary-sc created
[root@scs000646264 artifacts]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
primary-sc csi.trident.netapp.io Delete Immediate true 2s
[root@scs000646264 artifacts]# cat primary-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: primary-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Mi
  storageClassName: primary-sc
[root@scs000646264 artifacts]# kubectl create -f primary-pvc.yaml
persistentvolumeclaim/primary-pvc created
[root@scs000646264 artifacts]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
primary-pvc Bound pvc-bd77f21b-4522-41e9-bfa1-fca7cf8af672 20Mi RWO primary-sc <unset> 4s
[root@scs000646264 artifacts]# cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: source-pod
spec:
  containers:
    - image: nginx:alpine
      name: nginx
      volumeMounts:
        - mountPath: /pv/pvc
          name: local-storage
      command: ["/bin/ash", "-c"]
      args:
        - |
          while true; do
            echo `date +%Y-%m-%d.%H:%M:%S` >> /pv/pvc/data.txt;
            fsync /pv/pvc;
            fsync /pv/pvc/data.txt;
            tail -n 1 /pv/pvc/data.txt;
            sleep 20;
          done
  nodeSelector:
    kubernetes.io/arch: amd64
    kubernetes.io/os: linux
  volumes:
    - name: local-storage
      persistentVolumeClaim:
        claimName: primary-pvc
[root@scs000646264 artifacts]# kubectl create -f pod.yaml
pod/source-pod created
[root@scs000646264 artifacts]# kubectl get po
NAME READY STATUS RESTARTS AGE
source-pod 1/1 Running 0 15s
[root@scs000646264 artifacts]# kubectl logs source-pod
2025-05-21.07:30:12
2025-05-21.07:30:32
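Those timestamps are handy later: after a failover, comparing the last stamp that reached the destination volume with the time of the outage gives a rough measure of data loss. A hypothetical helper for parsing the pod's log format (the function names are illustrative, not part of any tooling):

```python
from datetime import datetime

# Format written by source-pod's `date +%Y-%m-%d.%H:%M:%S` command
STAMP_FMT = "%Y-%m-%d.%H:%M:%S"

def parse_stamp(line: str) -> datetime:
    """Parse one line of /pv/pvc/data.txt into a datetime."""
    return datetime.strptime(line.strip(), STAMP_FMT)

def seconds_between(earlier: str, later: str) -> float:
    """Gap between two log lines, e.g. to estimate data loss after failover."""
    return (parse_stamp(later) - parse_stamp(earlier)).total_seconds()
```

With the two log lines above, `seconds_between` confirms the 20-second write interval configured in the pod spec.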
[root@scs000646264 artifacts]# cat source-tmr.yaml
kind: TridentMirrorRelationship
apiVersion: trident.netapp.io/v1
metadata:
  name: source-tmr
spec:
  state: promoted
  volumeMappings:
    - localPVCName: primary-pvc
[root@scs000646264 artifacts]# kubectl create -f source-tmr.yaml
tridentmirrorrelationship.trident.netapp.io/source-tmr created
[root@scs000646264 artifacts]# kubectl get tmr
NAME DESIRED STATE LOCAL PVC ACTUAL STATE MESSAGE
source-tmr promoted primary-pvc promoted
[root@scs000646264 artifacts]# kubectl get tmr source-tmr -o yaml
apiVersion: trident.netapp.io/v1
kind: TridentMirrorRelationship
metadata:
  creationTimestamp: "2025-05-21T07:26:15Z"
  finalizers:
    - trident.netapp.io
  generation: 2
  name: source-tmr
  namespace: default
  resourceVersion: "4573176"
  uid: 7d0cb8e2-434a-4c1b-b3a8-8d181f744f82
spec:
  replicationPolicy: ""
  replicationSchedule: ""
  state: promoted
  volumeMappings:
    - localPVCName: primary-pvc
      promotedSnapshotHandle: ""
      remoteVolumeHandle: ""
status:
  conditions:
    - lastTransitionTime: "2025-05-21T07:26:15Z"
      localPVCName: primary-pvc
      localVolumeHandle: svm0:trident_pvc_bd77f21b_4522_41e9_bfa1_fca7cf8af672
      message: ""
      observedGeneration: 2
      remoteVolumeHandle: ""
      replicationPolicy: ""
      replicationSchedule: ""
      state: promoted
# Kubernetes secret required for creating Trident backend from TBC
[root@scs000646264 artifacts]# cat secondary-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: secondary-tbc-secret
type: Opaque
stringData:
  username: <username>
  password: <password>
[root@scs000646264 artifacts]# kubectl create -f secondary-secret.yaml -n trident
secret/secondary-tbc-secret created
[root@scs000646264 artifacts]# cat secondary-tbc.yaml
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: secondary-backend-tbc
spec:
  version: 1
  storageDriverName: ontap-san
  sanType: nvme
  managementLIF: 7.7.7.7
  svm: svm1
  replicationPolicy: MirrorAllSnapshots
  replicationSchedule: five_minute
  credentials:
    name: secondary-tbc-secret
[root@scs000646264 artifacts]# kubectl create -f secondary-tbc.yaml -n trident
tridentbackendconfig.trident.netapp.io/secondary-backend-tbc created
[root@scs000646264 artifacts]# kubectl get tbc -n trident
NAME BACKEND NAME BACKEND UUID PHASE STATUS
primary-backend-tbc primary-backend-tbc a9b1a3a7-66a8-4d3b-984a-b87d851387c7 Bound Success
secondary-backend-tbc secondary-backend-tbc c4fbde1a-cf6c-4ee4-8474-488454c926d1 Bound Success
# Or, Trident backend json
[root@scs000646264 artifacts]# cat secondary-backend.json
{
  "version": 1,
  "storageDriverName": "ontap-san",
  "managementLIF": "7.7.7.7",
  "backendName": "secondary-backend",
  "svm": "svm1",
  "username": "<username>",
  "password": "<password>",
  "sanType": "nvme",
  "replicationPolicy": "MirrorAllSnapshots",
  "replicationSchedule": "five_minute"
}
[root@scs000646264 artifacts]# tridentctl create b -f secondary-backend.json -n trident
+--------------------+----------------+--------------------------------------+--------+------------+---------+
| NAME | STORAGE DRIVER | UUID | STATE | USER-STATE | VOLUMES |
+--------------------+----------------+--------------------------------------+--------+------------+---------+
| secondary-backend | ontap-san | 2345337e-e67e-4bdm-8707-1d5644326548 | online | normal | 0 |
+--------------------+----------------+--------------------------------------+--------+------------+---------+
[root@scs000646264 artifacts]# cat secondary_sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: secondary-sc
provisioner: csi.trident.netapp.io
parameters:
  backendType: ontap-san
  storagePools: secondary-backend-tbc:.*
allowVolumeExpansion: true
[root@scs000646264 artifacts]# kubectl create -f secondary_sc.yaml
storageclass.storage.k8s.io/secondary-sc created
[root@scs000646264 artifacts]# kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
primary-sc csi.trident.netapp.io Delete Immediate true 11s
secondary-sc csi.trident.netapp.io Delete Immediate true 3s
[root@scs000646264 artifacts]# cat secondary-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  annotations:
    trident.netapp.io/mirrorRelationship: dest-tmr
  name: secondary-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Mi
  storageClassName: secondary-sc
[root@scs000646264 artifacts]# kubectl create -f secondary-pvc.yaml
persistentvolumeclaim/secondary-pvc created
[root@scs000646264 artifacts]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
primary-pvc Bound pvc-bd77f21b-4522-41e9-bfa1-fca7cf8af672 20Mi RWO primary-sc <unset> 43s
secondary-pvc Pending secondary-sc <unset> 4s
[root@scs000646264 artifacts]# cat dest-tmr.yaml
kind: TridentMirrorRelationship
apiVersion: trident.netapp.io/v1
metadata:
  name: dest-tmr
spec:
  state: established
  volumeMappings:
    - localPVCName: secondary-pvc
      remoteVolumeHandle: "svm0:trident_pvc_bd77f21b_4522_41e9_bfa1_fca7cf8af672"
[root@scs000646264 artifacts]# kubectl create -f dest-tmr.yaml
tridentmirrorrelationship.trident.netapp.io/dest-tmr created
[root@scs000646264 artifacts]# kubectl get tmr
NAME DESIRED STATE LOCAL PVC ACTUAL STATE MESSAGE
source-tmr promoted primary-pvc promoted
dest-tmr established secondary-pvc established
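The remoteVolumeHandle in dest-tmr is simply `<source SVM>:<ONTAP volume name>`. With Trident's default `trident` storage prefix, the ONTAP volume name is the bound PV name with dashes converted to underscores, prefixed with `trident_`. A hypothetical helper that derives the handle from the source PV name (the function is illustrative, not a Trident API):

```python
def remote_volume_handle(svm: str, pv_name: str) -> str:
    """Derive a TMR remoteVolumeHandle ("svm:volume") from the source SVM
    and the bound PV name, assuming Trident's default 'trident' prefix."""
    return f"{svm}:trident_{pv_name.replace('-', '_')}"
```

Given the PV `pvc-bd77f21b-4522-41e9-bfa1-fca7cf8af672` bound to primary-pvc, this reproduces the handle shown in the source TMR's status and used in dest-tmr above.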
Verify the SnapMirror relationship and volume type (DP) on the destination cluster:
stiA300-2491746692774::> snapmirror show
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm0:trident_pvc_bd77f21b_4522_41e9_bfa1_fca7cf8af672
XDP svm1:trident_pvc_32d31b4d_ec91_41cd_a95e_9e74bd6b2d9b
Snapmirrored
Idle - true -
stiA300-2491746692774::> volume show -vserver svm1
Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm1 svm0_root stiA300_250_aggr1
online RW 1GB 972.4MB 0%
svm1 trident_pvc_32d31b4d_ec91_41cd_a95e_9e74bd6b2d9b
stiA300_249_aggr1
online DP 22MB 20.42MB 7%
2 entries were displayed.
[root@scs000646264 artifacts]# cat snapshot-class.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-snapclass
driver: csi.trident.netapp.io
deletionPolicy: Delete
[root@scs000646264 artifacts]# kubectl create -f snapshot-class.yaml
volumesnapshotclass.snapshot.storage.k8s.io/csi-snapclass created
[root@scs000646264 artifacts]# kubectl get volumesnapshotclasses
NAME DRIVER DELETIONPOLICY AGE
csi-snapclass csi.trident.netapp.io Delete 5s
stiA300-2491746692774::> snapshot show
---Blocks---
Vserver Volume Snapshot Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm1 trident_pvc_32d31b4d_ec91_41cd_a95e_9e74bd6b2d9b
snapmirror.33ffbac2-300c-11f0-8dc5-00a098f48d3e_2155208310.2025-05-21_032847
216KB 1% 40%
snapmirror.33ffbac2-300c-11f0-8dc5-00a098f48d3e_2155208310.2025-05-21_033000
136KB 1% 29%
2 entries were displayed.
[root@scs000646264 artifacts]# cat snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: "snap1"
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: primary-pvc
[root@scs000646264 artifacts]# kubectl create -f snapshot.yaml
volumesnapshot.snapshot.storage.k8s.io/snap1 created
[root@scs000646264 artifacts]# kubectl get volumesnapshots
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
snap1 true primary-pvc 20Mi csi-snapclass snapcontent-7a3b0380-6d13-4cb9-88a3-df9528be73a9 5s 6s
stiA300-2491746692774::> snapshot show
---Blocks---
Vserver Volume Snapshot Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm1 trident_pvc_32d31b4d_ec91_41cd_a95e_9e74bd6b2d9b
snapmirror.33ffbac2-300c-11f0-8dc5-00a098f48d3e_2155208310.2025-05-21_033000
216KB 1% 19%
snapshot-7a3b0380-6d13-4cb9-88a3-df9528be73a9
268KB 1% 22%
snapmirror.33ffbac2-300c-11f0-8dc5-00a098f48d3e_2155208310.2025-05-21_033740
180KB 1% 16%
3 entries were displayed.
[root@scs000646264 artifacts]# cat snapshot-1.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: "snap1-new"
spec:
  volumeSnapshotClassName: csi-snapclass
  source:
    persistentVolumeClaimName: primary-pvc
[root@scs000646264 artifacts]# kubectl create -f snapshot-1.yaml
volumesnapshot.snapshot.storage.k8s.io/snap1-new created
[root@scs000646264 artifacts]# kubectl get vs
NAME READYTOUSE SOURCEPVC SOURCESNAPSHOTCONTENT RESTORESIZE SNAPSHOTCLASS SNAPSHOTCONTENT CREATIONTIME AGE
snap1 true primary-pvc 20Mi csi-snapclass snapcontent-7a3b0380-6d13-4cb9-88a3-df9528be73a9 6m33s 6m34s
snap1-new true primary-pvc 20Mi csi-snapclass snapcontent-56cf40dc-f968-4aeb-9ffd-ea1f7bcf0942 4s 5s
stiA300-2491746692774::> snapshot show
---Blocks---
Vserver Volume Snapshot Size Total% Used%
-------- -------- ------------------------------------- -------- ------ -----
svm1 trident_pvc_32d31b4d_ec91_41cd_a95e_9e74bd6b2d9b
snapshot-7a3b0380-6d13-4cb9-88a3-df9528be73a9
268KB 1% 20%
snapmirror.33ffbac2-300c-11f0-8dc5-00a098f48d3e_2155208310.2025-05-21_033740
276KB 1% 21%
snapshot-56cf40dc-f968-4aeb-9ffd-ea1f7bcf0942
264KB 1% 20%
snapmirror.33ffbac2-300c-11f0-8dc5-00a098f48d3e_2155208310.2025-05-21_034000
272KB 1% 21%
4 entries were displayed.
stiA300-2491746692774::> volume show
Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
stiA300-249
vol0 aggr0_stiA300_249
online RW 348.6GB 291.9GB 11%
stiA300-250
vol0 aggr0_stiA300_250
online RW 348.6GB 291.7GB 11%
svm1 svm0_root stiA300_250_aggr1
online RW 1GB 972.4MB 0%
svm1 trident_pvc_32d31b4d_ec91_41cd_a95e_9e74bd6b2d9b
stiA300_249_aggr1
online DP 22MB 19.93MB 9%
vs0 root_vs0 stiA300_249_aggr1
online RW 1GB 972.1MB 0%
5 entries were displayed.
The TMR custom resource uses three desired states to manage the replication relationship: established, promoted, and reestablished. To fail over to the secondary cluster, change the destination TMR's desired state to promoted:
kind: TridentMirrorRelationship
apiVersion: trident.netapp.io/v1
metadata:
  name: dest-tmr
spec:
  state: promoted
  volumeMappings:
    - localPVCName: secondary-pvc
      remoteVolumeHandle: "svm0:trident_pvc_bd77f21b_4522_41e9_bfa1_fca7cf8af672"
[root@scs000646264 artifacts]# kubectl apply -f dest-tmr.yaml
tridentmirrorrelationship.trident.netapp.io/dest-tmr configured
[root@scs000646264 artifacts]# kubectl get tmr
NAME DESIRED STATE LOCAL PVC ACTUAL STATE MESSAGE
source-tmr promoted primary-pvc promoted
dest-tmr promoted secondary-pvc promoted
Once the destination TMR is promoted, the destination volume becomes the RW volume with no mirror relationship currently in effect:
stiA300-2491746692774::> snapmirror show
This table is currently empty.
stiA300-2491746692774::> volume show
Vserver Volume Aggregate State Type Size Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
stiA300-249
vol0 aggr0_stiA300_249
online RW 348.6GB 291.9GB 11%
stiA300-250
vol0 aggr0_stiA300_250
online RW 348.6GB 291.7GB 11%
svm1 svm0_root stiA300_250_aggr1
online RW 1GB 972.4MB 0%
svm1 trident_pvc_32d31b4d_ec91_41cd_a95e_9e74bd6b2d9b
stiA300_249_aggr1
online RW 22MB 19.93MB 9%
vs0 root_vs0 stiA300_249_aggr1
online RW 1GB 972.1MB 0%
5 entries were displayed.
When the destination TMR is in the promoted state, you can mount the destination PVC in your application pods:
[root@scs000646264 artifacts]# cat test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: nginx:alpine
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - mountPath: /mnt/test-path
          name: test-volume
          readOnly: false
  restartPolicy: Never
  volumes:
    - name: test-volume
      persistentVolumeClaim:
        claimName: secondary-pvc
        readOnly: false
[root@scs000646264 artifacts]# kubectl create -f test-pod.yaml
[root@scs000646264 artifacts]# kubectl get po
NAME READY STATUS RESTARTS AGE
source-pod 1/1 Running 0 12m
test-pod 1/1 Running 0 8s
To resume mirroring after a failover, set the destination TMR's desired state to reestablished. Note that this resynchronizes the destination volume with the source and discards any changes made on the destination since the promotion:
[root@scs000646264 artifacts]# cat dest-tmr.yaml
kind: TridentMirrorRelationship
apiVersion: trident.netapp.io/v1
metadata:
  name: dest-tmr
spec:
  state: reestablished
  volumeMappings:
    - localPVCName: secondary-pvc
      remoteVolumeHandle: "svm0:trident_pvc_bd77f21b_4522_41e9_bfa1_fca7cf8af672"
[root@scs000646264 artifacts]# kubectl apply -f dest-tmr.yaml
tridentmirrorrelationship.trident.netapp.io/dest-tmr configured
stiA300-2491746692774::> snapmirror show
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm0:trident_pvc_bd77f21b_4522_41e9_bfa1_fca7cf8af672
XDP svm1:trident_pvc_32d31b4d_ec91_41cd_a95e_9e74bd6b2d9b
Snapmirrored
Idle - true -
With NetApp Trident and SnapMirror, you’ve built a robust disaster recovery pipeline for your Kubernetes applications. Every file—secrets, back ends, storage classes, PVCs, TMRs, Snapshot copies, and pods—plays a critical role in making sure that your data is replicated, protected, and ready for any scenario. Follow the steps in this guide and your clusters will be prepared to handle disasters with confidence.
Additionally, check out Trident protect, which can manage SnapMirror replication and failover for you, along with protection of the complete application. For more detailed information on using SnapMirror replication with Trident protect, see the NetApp Trident protect documentation.
Want to dive deeper? Check out the NetApp Trident documentation for more insights. Happy replicating!