Tech ONTAP Blogs
NetApp Backup and Recovery for Kubernetes is now generally available, building on last year’s preview and delivering enterprise-grade data protection for both containerized applications and virtual machines on Kubernetes, including Red Hat OpenShift and OpenShift Virtualization. Besides support for managing K8s clusters that already have Trident protect installed, the GA release now also officially supports CLI- and custom resource (CR)-based operations on K8s clusters that are protected with NetApp Backup and Recovery for Kubernetes. This makes it substantially easier to automate application protection and restore operations via GitOps and other automation solutions.
In this blog post, we’ll look at how some of the application protection and restore tasks can be run via the Trident protect CLI or by creating custom resources.
The K8s cluster sks5037 we want to use with Backup and Recovery has NetApp Trident 25.10 installed.
$ kubectl get all -n trident
NAME READY STATUS RESTARTS AGE
pod/trident-controller-6f844947f4-7vjbl 6/6 Running 0 31d
pod/trident-node-linux-28lmk 2/2 Running 5 (31d ago) 31d
pod/trident-node-linux-tlkpr 2/2 Running 4 (31d ago) 31d
pod/trident-node-linux-wq2ln 2/2 Running 8 (31d ago) 41d
pod/trident-node-linux-x25wv 2/2 Running 4 (31d ago) 31d
pod/trident-operator-7b4897b9cb-76jx7 1/1 Running 0 31d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/trident-csi ClusterIP 10.110.197.56 <none> 34571/TCP,9220/TCP 41d
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/trident-node-linux 4 4 4 4 4 <none> 41d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/trident-controller 1/1 1 1 41d
deployment.apps/trident-operator 1/1 1 1 41d
NAME DESIRED CURRENT READY AGE
replicaset.apps/trident-controller-6f844947f4 1 1 1 41d
replicaset.apps/trident-operator-7b4897b9cb 1 1 1 41d
Persistent storage is provisioned by Trident’s CSI provisioner via the storage class ontap-vsim2-nas, backed by ONTAP storage:
$ k get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ontap-vsim2-nas (default) csi.trident.netapp.io Delete Immediate false 19h
$ tridentctl get backends
+-----------------+----------------+--------------------------------------+--------+------------+---------+
| NAME | STORAGE DRIVER | UUID | STATE | USER-STATE | VOLUMES |
+-----------------+----------------+--------------------------------------+--------+------------+---------+
| ontap-nas-vsim2 | ontap-nas | ed3ae3e5-8ee1-49fb-a21a-d5a53b17f73d | online | normal | 4 |
+-----------------+----------------+--------------------------------------+--------+------------+---------+
As data-rich sample applications, we installed two simple NGINX deployments, each with one persistent volume, in the namespaces web1 and web2, respectively.
$ kubectl get all,pvc -n web1
NAME READY STATUS RESTARTS AGE
pod/web-7994c6f99b-6m825 1/1 Running 0 5d2h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web 1/1 1 1 5d2h
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-7994c6f99b 1 1 1 5d2h
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/nginxdata Bound pvc-1c78b34a-ac70-4037-be41-426bb44fb6b9 2Gi RWO ontap-vsim2-nas <unset> 5d2h
$ kubectl get all,pvc -n web2
NAME READY STATUS RESTARTS AGE
pod/web-7994c6f99b-j4n8t 1/1 Running 0 5d2h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web 1/1 1 1 5d2h
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-7994c6f99b 1 1 1 5d2h
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/nginxdata Bound pvc-ed7ca8be-f9f2-4828-b93b-4f9bae50c133 2Gi RWO ontap-vsim2-nas <unset> 5d2h
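For reference, a minimal sketch of what such a sample app manifest might look like — the PVC name, size, and storage class match the output above, while the container image and mount path are assumptions:

```yaml
# Hypothetical sketch of the sample app in namespace web1; the image tag
# and mount path are assumptions, the PVC settings match the output above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginxdata
  namespace: web1
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ontap-vsim2-nas
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: web1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:stable             # image is an assumption
          volumeMounts:
            - name: nginxdata
              mountPath: /usr/share/nginx/html   # mount path is an assumption
      volumes:
        - name: nginxdata
          persistentVolumeClaim:
            claimName: nginxdata
```

The same manifest with the namespace changed to web2 produces the second sample app.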
The cluster sks5037 is already added to the Backup and Recovery Console and is listed in the Inventory’s list of clusters:
As a first step, we define the applications in Backup and Recovery, so that it can discover the K8s resources we want it to protect. We can do this both from the Console UI and the CLI. Let’s define the application web1 representing resources in the namespace web1 using the UI:
We don’t protect the application yet, so its protection status is Unprotected.
With the Trident protect CLI we check the application state and the application CR:
$ tridentctl-protect get app -A
+-----------+------+------------+-------+-------+
| NAMESPACE | NAME | NAMESPACES | STATE | AGE |
+-----------+------+------------+-------+-------+
| web1 | web1 | web1 | Ready | 6m33s |
+-----------+------+------------+-------+-------+
$ k -n web1 get application web1 -o yaml
apiVersion: protect.trident.netapp.io/v1
kind: Application
metadata:
annotations:
protect.trident.netapp.io/correlationid: 25e4bbb1-a2af-421a-8931-911408ffbb95
creationTimestamp: "2026-02-10T16:05:14Z"
finalizers:
- protect.trident.netapp.io/finalizer
generation: 1
name: web1
namespace: web1
resourceVersion: "15720492"
uid: 46ed2f16-bfdc-4db1-b50e-eb92094d3f20
spec:
includedNamespaces:
- namespace: web1
resourceFilter: {}
status:
conditions:
- lastTransitionTime: "2026-02-10T16:05:14Z"
message: ""
reason: Ready
status: "True"
type: Ready
protectionHealthState: None
protectionState: None
Let’s create the application web2 (corresponding to the web2 namespace) with the Trident protect CLI now and check its application state and CR definition:
$ tridentctl-protect create app web2 --namespaces web2 -n web2
Application "web2" created.
$ tridentctl-protect get app -A
+-----------+------+------------+-------+--------+
| NAMESPACE | NAME | NAMESPACES | STATE | AGE |
+-----------+------+------------+-------+--------+
| web1 | web1 | web1 | Ready | 13m34s |
| web2 | web2 | web2 | Ready | 18s |
+-----------+------+------------+-------+--------+
$ k -n web2 get application web2 -o yaml
apiVersion: protect.trident.netapp.io/v1
kind: Application
metadata:
annotations:
protect.trident.netapp.io/correlationid: d5c4b821-bede-4005-bec5-769a93f666e7
creationTimestamp: "2026-02-10T16:18:30Z"
finalizers:
- protect.trident.netapp.io/finalizer
generation: 1
name: web2
namespace: web2
resourceVersion: "15726256"
uid: 7193b647-04fd-4c4d-ab9b-da230fddb951
spec:
includedNamespaces:
- namespace: web2
resourceFilter: {}
status:
conditions:
- lastTransitionTime: "2026-02-10T16:18:30Z"
message: ""
reason: Ready
status: "True"
type: Ready
protectionHealthState: None
protectionState: None
The web2 application shows up in the Console’s application inventory. Note that its protection state will remain empty in the UI until we start to protect it.
Now we’re all set to start protecting our application with one of the Backup and Recovery protection policies.
Let’s now protect our sample applications with protection policies already configured in the Console:
To protect an application, find it in the application inventory and select the associated Actions menu, then select Protect.
For web1, we select the pu-web1-321 protection policy from the dropdown list of existing policies:
After clicking Done, Backup and Recovery will immediately start protecting the application by starting an initial backup.
With the Trident protect CLI, we can see that the appVault CR bucket-azure-r1zexovs6t-sd8ys9wjpk representing the object storage backup target was created by Backup and Recovery on the K8s cluster.
$ tridentctl-protect get appvault
+------------------------------------+----------+-----------+-------+---------+--------+
| NAME | PROVIDER | STATE | ERROR | MESSAGE | AGE |
+------------------------------------+----------+-----------+-------+---------+--------+
| bucket-azure-r1zexovs6t-sd8ys9wjpk | Azure | Available | | | 2m |
+------------------------------------+----------+-----------+-------+---------+--------+
The web1 application now has two protection schedules assigned to it: one for the daily backup runs and one for the hourly runs, as defined in the protection policy.
$ tridentctl-protect get schedules -n web1
+-----------------------------------------+------+--------------------------------------+---------+-------+-------+-------+
| NAME | APP | SCHEDULE | ENABLED | STATE | ERROR | AGE |
+-----------------------------------------+------+--------------------------------------+---------+-------+-------+-------+
| web1-schedule-20260210172834-l8l3v8xzu9 | web1 | DTSTART:20260210T172834Z | true | | | 9m43s |
| | | RRULE:FREQ=DAILY;BYHOUR=0;BYMINUTE=0 | | | | |
| web1-schedule-20260210172834-rdj3cbm6h5 | web1 | DTSTART:20260210T172834Z | true | | | 9m43s |
| | | RRULE:FREQ=HOURLY;BYMINUTE=0 | | | | |
+-----------------------------------------+------+--------------------------------------+---------+-------+-------+-------+
$ k -n web1 get schedules web1-schedule-20260210172834-l8l3v8xzu9 -o yaml | yq '.spec'
appVaultTargetsRef: avt-n9tua4ecb3-hbap09yh5h
applicationRef: web1
backupReclaimPolicy: Retain
backupRetention: "4"
dataMover: CBS
dayOfMonth: ""
dayOfWeek: ""
enabled: true
granularity: Custom
hour: ""
minute: ""
recurrenceRule: |-
DTSTART:20260210T172834Z
RRULE:FREQ=DAILY;BYHOUR=0;BYMINUTE=0
replicateSnapshotReclaimPolicy: Retain
replicationRetention: "4"
runImmediately: false
snapshotReclaimPolicy: Delete
snapshotRetention: "3"
$ k -n web1 get schedules web1-schedule-20260210172834-rdj3cbm6h5 -o yaml | yq '.spec'
appVaultTargetsRef: avt-n9tua4ecb3-hbap09yh5h
applicationRef: web1
backupReclaimPolicy: Retain
backupRetention: "5"
dataMover: CBS
dayOfMonth: ""
dayOfWeek: ""
enabled: true
granularity: Custom
hour: ""
minute: ""
recurrenceRule: |-
DTSTART:20260210T172834Z
RRULE:FREQ=HOURLY;BYMINUTE=0
replicateSnapshotReclaimPolicy: Retain
replicationRetention: "5"
runImmediately: true
snapshotReclaimPolicy: Delete
snapshotRetention: "3"
After a few minutes, the initial backup and snapshot have finished successfully and are represented by their CRs on the cluster:
$ tridentctl-protect get backup -n web1
+-----------------------------+------+----------------+-----------+-------+--------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+-----------------------------+------+----------------+-----------+-------+--------+
| custom-47de4-20260210172835 | web1 | Retain | Completed | | 14m21s |
+-----------------------------+------+----------------+-----------+-------+--------+
$ tridentctl-protect get snapshots -n web1
+---------------------------------------------+------+----------------+-----------+-------+--------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+---------------------------------------------+------+----------------+-----------+-------+--------+
| backup-070de30d-0637-40a9-931a-3f0a990e857e | web1 | Delete | Completed | | 14m28s |
+---------------------------------------------+------+----------------+-----------+-------+--------+
As we protected the application web1 with a 3-2-1 protection policy, we also have a replicated snapshot now:
$ k -n web1 get replicatesnapshots
NAME RECLAIM POLICY STATE ERROR AGE
replicate-snapshot-4e372d60-3157-4a56-a384-29c8df7f04fd Retain Completed 14m57s
For the web2 application, we follow the same steps in the Console and protect it with the pu-web2-object protection policy, which is of the disk-to-object-storage protection type.
For web2, Backup and Recovery likewise creates the two protection schedule CRs. As this protection policy doesn’t produce replicated snapshots, replicationRetention is set to “0”:
$ tridentctl-protect get schedules -n web2
+-----------------------------------------+------+--------------------------------------+---------+-------+-------+------+
| NAME | APP | SCHEDULE | ENABLED | STATE | ERROR | AGE |
+-----------------------------------------+------+--------------------------------------+---------+-------+-------+------+
| web2-schedule-20260210174933-ffgj6yd3sk | web2 | DTSTART:20260210T174933Z | true | | | 7m6s |
| | | RRULE:FREQ=DAILY;BYHOUR=0;BYMINUTE=0 | | | | |
| web2-schedule-20260210174933-lxq3hyiai0 | web2 | DTSTART:20260210T174933Z | true | | | 7m7s |
| | | RRULE:FREQ=HOURLY;BYMINUTE=0 | | | | |
+-----------------------------------------+------+--------------------------------------+---------+-------+-------+------+
$ k -n web2 get schedule web2-schedule-20260210174933-ffgj6yd3sk -o yaml | yq '.spec'
appVaultRef: bucket-azure-r1zexovs6t-sd8ys9wjpk
applicationRef: web2
backupReclaimPolicy: Retain
backupRetention: "4"
dataMover: CBS
dayOfMonth: ""
dayOfWeek: ""
enabled: true
granularity: Custom
hour: ""
minute: ""
recurrenceRule: |-
DTSTART:20260210T174933Z
RRULE:FREQ=DAILY;BYHOUR=0;BYMINUTE=0
replicateSnapshotReclaimPolicy: Retain
replicationRetention: "0"
runImmediately: false
snapshotReclaimPolicy: Delete
snapshotRetention: "3"
$ k -n web2 get schedule web2-schedule-20260210174933-lxq3hyiai0 -o yaml | yq '.spec'
appVaultRef: bucket-azure-r1zexovs6t-sd8ys9wjpk
applicationRef: web2
backupReclaimPolicy: Retain
backupRetention: "5"
dataMover: CBS
dayOfMonth: ""
dayOfWeek: ""
enabled: true
granularity: Custom
hour: ""
minute: ""
recurrenceRule: |-
DTSTART:20260210T174933Z
RRULE:FREQ=HOURLY;BYMINUTE=0
replicateSnapshotReclaimPolicy: Retain
replicationRetention: "0"
runImmediately: true
snapshotReclaimPolicy: Delete
snapshotRetention: "3"
After the initial protection run finishes, we can confirm the existence of backup and snapshot CRs in the web2 namespace:
$ tridentctl-protect get backups -n web2
+-----------------------------+------+----------------+-----------+-------+-------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+-----------------------------+------+----------------+-----------+-------+-------+
| custom-30d4f-20260210174933 | web2 | Retain | Completed | | 4m48s |
+-----------------------------+------+----------------+-----------+-------+-------+
$ tridentctl-protect get snapshots -n web2
+-----------------------------+------+----------------+-----------+-------+-------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+-----------------------------+------+----------------+-----------+-------+-------+
| custom-30d4f-20260210174933 | web2 | Delete | Completed | | 4m33s |
+-----------------------------+------+----------------+-----------+-------+-------+
As expected, no replicated snapshots were created:
$ k -n web2 get replicatesnapshots
No resources found in web2 namespace.
For automation purposes, we don’t want to go to the Console UI and manually assign a protection policy to every newly created application. Therefore, Backup and Recovery lets you control the protection of an application with annotations in the application CR (YAML manifest).
The Trident protect CLI can also add these annotations during application creation using the --annotation flag. In both cases, the protection policy used to protect the application must already exist in Backup and Recovery.
Using a different cluster, kevin-ocp06, we quickly demonstrate below how to protect an Alpine-based application in the namespace pu-alpine with an already defined protection policy pu-alpine-object of type disk-to-object.
To create the application, assign the protection policy and start protecting the application, we need to add the protect.trident.netapp.io/protection-policy-name and protect.trident.netapp.io/protection-command annotations to the application CR during the application creation:
$ tridentctl-protect create application pu-alpine --namespaces pu-alpine --annotation protect.trident.netapp.io/protection-policy-name=pu-alpine-object --annotation protect.trident.netapp.io/protection-command=protect -n pu-alpine
Application "pu-alpine" created.
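The same result can also be achieved without the CLI, for example in a GitOps flow, by applying the equivalent Application CR with the two annotations directly. A minimal sketch, derived from the resulting CR:

```yaml
# Equivalent Application CR carrying the protection annotations;
# applying this manifest has the same effect as the CLI command above.
apiVersion: protect.trident.netapp.io/v1
kind: Application
metadata:
  name: pu-alpine
  namespace: pu-alpine
  annotations:
    protect.trident.netapp.io/protection-policy-name: pu-alpine-object
    protect.trident.netapp.io/protection-command: protect
spec:
  includedNamespaces:
    - namespace: pu-alpine
```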
The command creates the application pu-alpine with the correct annotations and protection starts immediately:
$ k -n pu-alpine describe application pu-alpine
Name: pu-alpine
Namespace: pu-alpine
Labels: <none>
Annotations: protect.trident.netapp.io/correlationid: ee9d7f5b-6046-47e1-a29f-4444f43422d6
protect.trident.netapp.io/protection-command: protect
protect.trident.netapp.io/protection-policy-name: pu-alpine-object
API Version: protect.trident.netapp.io/v1
Kind: Application
Metadata:
Creation Timestamp: 2026-03-04T13:48:43Z
Finalizers:
protect.trident.netapp.io/finalizer
Generation: 1
Resource Version: 115970735
UID: 5d89d9f4-61d5-4ecc-a58a-80c6608e792c
Spec:
Included Namespaces:
Namespace: pu-alpine
Resource Filter:
Status:
Conditions:
Last Transition Time: 2026-03-04T13:48:43Z
Message:
Reason: Ready
Status: True
Type: Ready
Protection Health State: Unhealthy
Protection State: Partial
Protection State Details:
Scheduled backup unavailable
Resource Count: 67
Storage Capacity Bytes: 2147483648
Storage Used Capacity Bytes: 434176
Events: <none>
This is also reflected in the Backup and Recovery UI:
Backup and Recovery also created the appVault CR needed for the assigned protection policy, as well as the two protection schedules:
$ tridentctl-protect get appvault
+------------------------------------+----------+-----------+-------+---------+-------+
| NAME | PROVIDER | STATE | ERROR | MESSAGE | AGE |
+------------------------------------+----------+-----------+-------+---------+-------+
| bucket-azure-iwmkw429i7-ucbt66982p | Azure | Available | | | 21s |
| bucket-azure-u4obnam0po-cle64agkh9 | Azure | Available | | | 8d19h |
+------------------------------------+----------+-----------+-------+---------+-------+
$ tridentctl-protect get schedules -A
+-----------+----------------------------------------------+-----------+--------------------------------------+-----+---------+-------+-------+-----+
| NAMESPACE | NAME | APP | SCHEDULE | FBR | ENABLED | STATE | ERROR | AGE |
+-----------+----------------------------------------------+-----------+--------------------------------------+-----+---------+-------+-------+-----+
| pu-alpine | pu-alpine-schedule-20260304134847-c47wbadfrb | pu-alpine | DTSTART:20260304T134847Z | | true | | | 16s |
| | | | RRULE:FREQ=HOURLY;BYMINUTE=0 | | | | | |
| pu-alpine | pu-alpine-schedule-20260304134847-w2pua8ik0r | pu-alpine | DTSTART:20260304T134847Z | | true | | | 16s |
| | | | RRULE:FREQ=DAILY;BYHOUR=0;BYMINUTE=0 | | | | | |
+-----------+----------------------------------------------+-----------+--------------------------------------+-----+---------+-------+-------+-----+
Lastly, we confirm the successful creation of the first application snapshot and the ongoing backup:
$ tridentctl-protect get snapshots -n pu-alpine
+-----------------------------+-----------+----------------+-----------+-------+-------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+-----------------------------+-----------+----------------+-----------+-------+-------+
| custom-d9a79-20260304134847 | pu-alpine | Delete | Completed | | 6m10s |
+-----------------------------+-----------+----------------+-----------+-------+-------+
$ tridentctl-protect get backups -n pu-alpine
+-----------------------------+-----------+----------------+---------+-------+------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+-----------------------------+-----------+----------------+---------+-------+------+
| custom-d9a79-20260304134847 | pu-alpine | Retain | Running | | 6m1s |
+-----------------------------+-----------+----------------+---------+-------+------+
Specifying the protection annotations in the application CR or via the Trident protect CLI as described above gives you an easy way to create and protect applications with the CLI or your automation tool of choice.
With our sample applications now being protected by their respective protection policies, creating regular backups and snapshots, let’s quickly see how we can create additional on-demand snapshots or backups using the CLI.
To create an on-demand backup with the CLI, we need to select CBS (Cloud Backup Services) as the data mover in the tridentctl-protect create backup command:
$ tridentctl-protect create backup --app web1 --appvault bucket-azure-r1zexovs6t-sd8ys9wjpk --data-mover CBS -n web1
Backup "web1-78yegm" created.
The on-demand backup is now listed alongside the scheduled backups:
$ tridentctl-protect get backup -n web1
+-----------------------------+------+----------------+-----------+-------+-------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+-----------------------------+------+----------------+-----------+-------+-------+
| custom-47de4-20260211130034 | web1 | Retain | Completed | | 4h9m |
| custom-47de4-20260211140034 | web1 | Retain | Completed | | 3h9m |
| custom-47de4-20260211150034 | web1 | Retain | Completed | | 2h9m |
| custom-47de4-20260211160034 | web1 | Retain | Completed | | 1h9m |
| custom-47de4-20260211170034 | web1 | Retain | Completed | | 9m |
| custom-d6042-20260211000034 | web1 | Retain | Completed | | 17h9m |
| web1-78yegm | web1 | Retain | Completed | | 1m57s |
+-----------------------------+------+----------------+-----------+-------+-------+
In the Console, we find it in the list of the application’s restore points, with its UID in the backup name:
$ k -n web1 get backup web1-78yegm -o yaml --context sks5037 | yq '.metadata.uid'
375136ee-aa26-443f-8b2e-d19415dee867
Note that Backup and Recovery does not support the Kopia or Restic data movers.
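The same on-demand backup can also be triggered by applying a Backup CR instead of using the CLI. A hedged sketch — the metadata.name is arbitrary, and the spec field names are assumptions mirroring the CLI flags above and the schedule spec shown earlier:

```yaml
# Sketch of an on-demand Backup CR; the name is arbitrary and the
# spec fields mirror the CLI flags used above.
apiVersion: protect.trident.netapp.io/v1
kind: Backup
metadata:
  name: web1-ondemand-backup
  namespace: web1
spec:
  applicationRef: web1
  appVaultRef: bucket-azure-r1zexovs6t-sd8ys9wjpk
  dataMover: CBS
  reclaimPolicy: Retain
```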
To create on-demand snapshots and replicated snapshots, use the tridentctl-protect create snapshot and tridentctl-protect create replicatesnapshot commands or, as usual, create the CR manifests and apply them.
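As an illustration, an on-demand Snapshot CR for web1 might look like the following sketch; the metadata.name is arbitrary and the field names should be treated as assumptions to be checked against the Trident protect Snapshot CRD:

```yaml
# Sketch of an on-demand Snapshot CR; name and fields are assumptions.
apiVersion: protect.trident.netapp.io/v1
kind: Snapshot
metadata:
  name: web1-ondemand-snapshot
  namespace: web1
spec:
  applicationRef: web1
  appVaultRef: bucket-azure-r1zexovs6t-sd8ys9wjpk
  reclaimPolicy: Delete
```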
As with on-demand protection operations, we can also initiate restore operations of a protected application from the Trident protect CLI.
As an example, let’s do a restore from one of the backups of web1:
$ tridentctl-protect get backup -n web1
+-----------------------------+------+----------------+-----------+-------+--------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+-----------------------------+------+----------------+-----------+-------+--------+
| custom-47de4-20260213080034 | web1 | Retain | Completed | | 4h36m |
| custom-47de4-20260213090034 | web1 | Retain | Completed | | 3h36m |
| custom-47de4-20260213100034 | web1 | Retain | Completed | | 2h36m |
| custom-47de4-20260213110034 | web1 | Retain | Completed | | 1h36m |
| custom-47de4-20260213120034 | web1 | Retain | Completed | | 36m33s |
| custom-d6042-20260211000034 | web1 | Retain | Completed | | 2d12h |
| custom-d6042-20260212000034 | web1 | Retain | Completed | | 1d12h |
| custom-d6042-20260213000034 | web1 | Retain | Completed | | 12h36m |
| web1-78yegm | web1 | Retain | Completed | | 1d19h |
+-----------------------------+------+----------------+-----------+-------+--------+
Now we run a restore from backup custom-d6042-20260213000034 into a new namespace web1-restore3:
$ tridentctl-protect create backuprestore --backup web1/custom-d6042-20260213000034 --namespace-mapping web1:web1-restore3 --destination-app-name web1-restore3 -n web1-restore3
BackupRestore "web1-9f4kz4" created.
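Under the hood, the CLI command creates a BackupRestore CR. A hedged sketch of such a manifest — the appArchivePath below is a placeholder that must be replaced with the backup’s actual archive path inside the appVault, which the CLI normally looks up for you:

```yaml
# Sketch of a BackupRestore CR; field names are assumptions, and the
# archive path is a placeholder, not a real value.
apiVersion: protect.trident.netapp.io/v1
kind: BackupRestore
metadata:
  name: web1-restore3
  namespace: web1-restore3
spec:
  appArchivePath: <path-of-backup-in-appvault>   # placeholder
  appVaultRef: bucket-azure-r1zexovs6t-sd8ys9wjpk
  namespaceMapping:
    - source: web1
      destination: web1-restore3
```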
The restore finishes quickly, and the restored NGINX application is running in namespace web1-restore3:
$ kubectl get all,pvc -n web1-restore3
NAME READY STATUS RESTARTS AGE
pod/web-7994c6f99b-nf7lz 1/1 Running 0 4m1s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/web 1/1 1 1 4m1s
NAME DESIRED CURRENT READY AGE
replicaset.apps/web-7994c6f99b 1 1 1 4m2s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/nginxdata Bound pvc-8e2cc47e-ea06-4d7e-9c30-9b8365e1e26e 2Gi RWO ontap-vsim2-nas <unset> 4m5s
The restored application web1-restore3 is listed in the application list both with the CLI and in the Console.
$ tridentctl-protect get apps -A
+---------------+---------------+---------------+-------+-------+
| NAMESPACE | NAME | NAMESPACES | STATE | AGE |
+---------------+---------------+---------------+-------+-------+
| web1-restore1 | web1-restore1 | web1-restore1 | Ready | 1d20h |
| web1-restore2 | web1-restore2 | web1-restore2 | Ready | 22h2m |
| web1-restore3 | web1-restore3 | web1-restore3 | Ready | 5m50s |
| web1 | web1 | web1 | Ready | 2d19h |
+---------------+---------------+---------------+-------+-------+
To restore from a snapshot or replicated snapshot into new namespaces, use the tridentctl-protect create snapshotrestore or tridentctl-protect create replicatesnapshotrestore commands. If you want to restore a protected application into the same namespace from a backup or snapshot, use the tridentctl-protect create backupinplacerestore or tridentctl-protect create snapshotinplacerestore commands.
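As a sketch, a SnapshotRestore CR corresponding to the first of these commands could look like the manifest below; the kind and field names are assumptions modeled on the BackupRestore example above, and the archive path is again a placeholder that the CLI would normally resolve for you:

```yaml
# Sketch of a SnapshotRestore CR; kind, fields, and path are assumptions.
apiVersion: protect.trident.netapp.io/v1
kind: SnapshotRestore
metadata:
  name: web1-snaprestore
  namespace: web1-snaprestore
spec:
  appArchivePath: <path-of-snapshot-in-appvault>   # placeholder
  appVaultRef: bucket-azure-r1zexovs6t-sd8ys9wjpk
  namespaceMapping:
    - source: web1
      destination: web1-snaprestore
```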
NetApp Backup and Recovery for Kubernetes brings enterprise-grade data protection to containerized applications and virtual machines on Kubernetes, including Red Hat OpenShift and OpenShift Virtualization. The general availability of this solution introduces support for managing existing Trident protect clusters and enhances automation capabilities with CLI- and custom resource (CR)-based operations.
By integrating your Kubernetes clusters with NetApp Backup and Recovery, you can leverage advanced data protection features, streamline backup and restore processes, and automate application protection workflows. The flexibility provided by the CLI and CR-based operations allows for seamless integration with GitOps and other automation solutions, ensuring that your data protection strategies are both efficient and reliable.
To get started with NetApp Backup and Recovery for Kubernetes, log in to the NetApp Console, navigate to Protection --> Backup and Recovery, and sign up for a free trial. Then discover your K8s clusters, safeguard your critical applications and data, and take your Kubernetes data protection to the next level!