Tech ONTAP Blogs
Last week, NetApp previewed its new Backup and Recovery solution for Kubernetes workloads, offering seamless, unified protection for containerized applications on NetApp storage. This integration is a game changer for anyone managing modern applications in their organization.
Key Highlights:
Preview features include:
Stay tuned for more features being added throughout the preview period!
In this blog post, we’ll guide you through setting up protection policies and protecting and restoring your K8s applications with NetApp Backup and Recovery.
Ready to elevate your data protection game? Let’s dive in!
For the following demonstrations, we use an RKE2 cluster named sks3725 with the latest version of the NetApp Trident CSI provisioner installed, and a NetApp ONTAP 9 simulator providing the backend storage.
After installing the RKE2 snapshot controller and configuring the Trident backend, including the corresponding storage class and volume snapshot class, we have this storage configuration on our cluster:
$ tridentctl -n trident get backends
+-----------------+----------------+--------------------------------------+--------+------------+---------+
| NAME | STORAGE DRIVER | UUID | STATE | USER-STATE | VOLUMES |
+-----------------+----------------+--------------------------------------+--------+------------+---------+
| ontap-nas-vsim1 | ontap-nas | 1654b6a5-b588-4d53-859d-20303c6d2f60 | online | normal | 0 |
+-----------------+----------------+--------------------------------------+--------+------------+---------+
$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
ontap-vsim1-nas (default) csi.trident.netapp.io Delete Immediate false 18s
$ kubectl get volumesnapshotclass
NAME DRIVER DELETIONPOLICY AGE
trident-snapshotclass csi.trident.netapp.io Delete 57s
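For reference, the storage class and volume snapshot class shown above can be defined with manifests roughly like the following sketch. The names match our environment; the `backendType` parameter is an assumption based on the `ontap-nas` driver of our backend, so adapt the values to your setup:

```yaml
# StorageClass backed by the Trident CSI provisioner
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ontap-vsim1-nas
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.trident.netapp.io
parameters:
  backendType: ontap-nas   # assumption: matches the ontap-nas backend driver
reclaimPolicy: Delete
volumeBindingMode: Immediate
allowVolumeExpansion: false
---
# VolumeSnapshotClass so Trident can take CSI volume snapshots
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: trident-snapshotclass
driver: csi.trident.netapp.io
deletionPolicy: Delete
```

Applying both manifests with `kubectl apply -f` reproduces the `kubectl get sc` and `kubectl get volumesnapshotclass` output shown above.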
For the backup and restore tests, we installed a simple Minio® application:
$ kubectl get all,pvc -n minio
NAME READY STATUS RESTARTS AGE
pod/minio-b6b84f46c-sr2qm 1/1 Running 0 114s
pod/minio-console-cb99559c-zgq76 1/1 Running 0 114s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/minio ClusterIP 10.98.237.51 <none> 9000/TCP 115s
service/minio-console ClusterIP 10.104.0.87 <none> 9090/TCP 115s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/minio 1/1 1 1 115s
deployment.apps/minio-console 1/1 1 1 115s
NAME DESIRED CURRENT READY AGE
replicaset.apps/minio-b6b84f46c 1 1 1 115s
replicaset.apps/minio-console-cb99559c 1 1 1 115s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/minio Bound pvc-b0fed8f1-c5ed-4000-b938-ba29393864f0 8Gi RWO ontap-vsim1-nas <unset> 116s
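A minimal sketch of such a test application is shown below. These are not the exact manifests we deployed (the console deployment and credentials are omitted, and the image tag is a placeholder), but they illustrate how the PVC binds to the Trident-backed storage class:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: minio
  namespace: minio
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: ontap-vsim1-nas   # the Trident-backed default storage class
  resources:
    requests:
      storage: 8Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
  namespace: minio
spec:
  replicas: 1
  selector:
    matchLabels: { app: minio }
  template:
    metadata:
      labels: { app: minio }
    spec:
      containers:
        - name: minio
          image: quay.io/minio/minio   # placeholder image tag
          args: ["server", "/data"]
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: minio
```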
Our NetApp Backup and Recovery account and environment are already prepared per the NetApp Backup and Recovery requirements: our user has the required permissions, a NetApp Backup and Recovery connector is installed and configured in our on-premises test environment, and the working environment has been created and associated with an Azure storage account.
The typical workflow to protect your K8s applications starts by adding your K8s cluster to NetApp Backup and Recovery, which then lets you add applications and protect the resources hosted on the cluster.
If you are discovering Kubernetes workloads for the first time, in NetApp Backup and Recovery, select Discover and Manage under the Kubernetes workload type. If you have already discovered Kubernetes workloads, in NetApp Backup and Recovery, select Inventory > Workloads and then select Discover resources.
In the next screen, select the Kubernetes workload type.
Enter the cluster name to add to NetApp Backup and Recovery and choose a NetApp Backup and Recovery connector to use with the K8s cluster:
NetApp Backup and Recovery now generates command-line instructions for adding the K8s cluster to NetApp Backup and Recovery.
Follow the command line instructions on your K8s cluster:
$ kubectl create namespace trident-protect
namespace/trident-protect created
$ kubectl -n trident-protect create secret generic tp-proxy-api-token --from-literal apiToken=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJhY2NvdW50X2lkIjoiN2YyOTU1YWItZDEwOS00NTk3LThkMzItMzM2M2NmOTVjYTllIiwiYWdlbnRfaWQiOiJjSFBTTFRWRzdXM1FKVnN0czJFSTRXazBUZmx5U2ZiVyIsImlhdCI6MTc1MjY4MTQ1M30.7PcaZ4VwnWCAhA137wV5lgH9cLHDTI8Ngzq9fut3vGY
secret/tp-proxy-api-token created
$ helm repo add --force-update netapp-trident-protect https://netapp.github.io/trident-protect-helm-chart
"netapp-trident-protect" has been added to your repositories
$ helm install trident-protect netapp-trident-protect/trident-protect-connector --version 100.2507.0 --namespace trident-protect --set clusterName=sks3725 --set trident-protect.cbs.accountID=7f2955ab-d109-4597-8d32-3363cf95ca9e --set trident-protect.cbs.connectorID=cHPSLTVG7W3QJVsts2EI4Wk0TflySfbW --set trident-protect.cbs.proxySecretName=tp-proxy-api-token --set trident-protect.cbs.proxyHost=http://10.192.65.83/tpproxy
NAME: trident-protect
LAST DEPLOYED: Wed Jul 16 18:10:16 2025
NAMESPACE: trident-protect
STATUS: deployed
REVISION: 1
TEST SUITE: None
These steps install Trident protect and the Trident protect connector on the K8s cluster in the trident-protect namespace, ensuring that NetApp Backup and Recovery can interact with the K8s cluster.
$ kubectl get all -n trident-protect
NAME READY STATUS RESTARTS AGE
pod/trident-protect-connector-566f7c87cb-twc4b 1/1 Running 0 43s
pod/trident-protect-controller-manager-5cbf8b5f57-wpc4p 2/2 Running 1 (38s ago) 43s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/tp-webhook-service ClusterIP 10.100.92.100 <none> 443/TCP 44s
service/trident-protect-controller-manager-metrics-service ClusterIP 10.109.117.217 <none> 8443/TCP 44s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/trident-protect-connector 1/1 1 1 44s
deployment.apps/trident-protect-controller-manager 1/1 1 1 44s
NAME DESIRED CURRENT READY AGE
replicaset.apps/trident-protect-connector-566f7c87cb 1 1 1 44s
replicaset.apps/trident-protect-controller-manager-5cbf8b5f57 1 1 1 44s
After you complete the steps, select Discover. The cluster is added to the inventory.
Select View in the associated Kubernetes workload to see the list of applications, clusters, and namespaces for that workload. We see our newly added cluster sks3725 in the list of managed clusters:
With our test cluster sks3725 added to NetApp Backup and Recovery, we can now add our sample Minio application to NetApp Backup and Recovery, making NetApp Backup and Recovery aware of the running application on the Kubernetes cluster. In NetApp Backup and Recovery, select Inventory, then select the Applications tab and click on Create application.
Now enter a name for the application, select the K8s cluster it’s running on, and choose the namespaces that contain the app’s resources. Optionally, you can also include resources based on label and GroupVersionKind (GVK) filters and add cluster-scoped resources to the NetApp Backup and Recovery application definition. In our case, we simply add the minio namespace to the application definition.
We then select Search to get the list of application resources NetApp Backup and Recovery discovers and check the list for completeness.
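Under the hood, this application definition is represented on the cluster by a Trident protect Application custom resource. A hand-written equivalent would look roughly like the following sketch, assuming the `protect.trident.netapp.io/v1` API group used by Trident protect (names are illustrative):

```yaml
apiVersion: protect.trident.netapp.io/v1
kind: Application
metadata:
  name: minio
  namespace: minio
spec:
  # Namespaces whose resources belong to the application
  includedNamespaces:
    - namespace: minio
      # Optionally narrow the selection with a label selector:
      # labelSelector:
      #   matchLabels:
      #     app: minio
```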
In the next window, we could add pre- and post-scripts (execution hooks that run before and after snapshots/backups) and assign a protection policy to the application. As we don’t need pre- and post-scripts and want to create a protection policy separately, we skip both steps and create the application.
The minio application is created and appears in the list of applications in the Applications tab of the Kubernetes inventory with the protection status “Unprotected” and application status “Ready”.
Next, let’s create a new protection policy. We start from the list of applications in the Backup & recovery Inventory, find our still unprotected minio application in the list of applications and select Protect from the associated Actions menu.
From there, we can create a new protection policy (or could add an existing one).
In the first step, we select Kubernetes as the workload type and set the policy name as pu-minio-11.
Then we select the backup architecture. Because we also want to protect our application against, for example, a failure of the K8s cluster, the underlying ONTAP storage, or the whole data center, we select “Disk to object storage” as the data flow model, which protects the application data with local snapshots plus copies of the data in offsite object storage.
Now we define the protection schedules, sticking with the default schedules of hourly snapshots and one daily snapshot, keeping the last three snapshots of each cycle.
Next, we select the object storage location to store the Kubernetes application resources during each snapshot cycle, selecting an already configured Azure backup target.
Then we select the object storage location to store the actual application backup data, in our case we use the same target as for the application resources.
In the last step, we can adjust the timing of the backup schedules if needed and click Create to create the protection policy.
To protect our sample application, we go back to the list of applications in the Backup & recovery Inventory, select our minio application and select Protect from the associated Actions menu.
Now we search for the newly created protection policy pu-minio-11 in the list of existing policies and select it.
As we don’t want to add any pre- or postscript, we select Done and the policy is attached to the application.
NetApp Backup and Recovery now immediately starts a baseline backup, a full backup of the application. Any future incremental backups are created based on the schedule that you define in the protection policy associated with the application. We can track the progress of the backup by selecting Track progress.
This leads to the Monitoring tab, where we see the details of the protection job, which succeeds after a short while.
Checking on the K8s cluster with the Trident protect CLI, we see the baseline backup and snapshot.
$ tridentctl-protect get backup -n minio
+-------------------------------------+------------------+----------------+-----------+-------+--------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+-------------------------------------+------------------+----------------+-----------+-------+--------+
| baseline-transfer-backup-9hviqxzflt | minio-bjo9nkv54p | Retain | Completed | | 16m26s |
+-------------------------------------+------------------+----------------+-----------+-------+--------+
$ tridentctl-protect get snapshot -n minio
+---------------------------------------------+------------------+----------------+-----------+-------+--------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+---------------------------------------------+------------------+----------------+-----------+-------+--------+
| backup-e97a09a2-7b1f-4296-b718-b8c3f648a29f | minio-bjo9nkv54p | Delete | Completed | | 16m35s |
+---------------------------------------------+------------------+----------------+-----------+-------+--------+
We can also manually create a backup of the sample application if needed, for example to make sure the most recent data is protected. Again, from the list of applications in the Inventory, select the minio application and select Backup now from the Actions menu.
Start the on-demand backup by selecting Back up.
The on-demand backup starts, and we can again track its progress.
Once the backup finishes, we can see it on the cluster listed as an ad-hoc backup.
$ tridentctl-protect get backup -n minio
+-------------------------------------+------------------+----------------+-----------+-------+-------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+-------------------------------------+------------------+----------------+-----------+-------+-------+
| ad-hoc-backup-drpz2so916 | minio-bjo9nkv54p | Retain | Completed | | 2m8s |
| baseline-transfer-backup-9hviqxzflt | minio-bjo9nkv54p | Retain | Completed | | 39m8s |
+-------------------------------------+------------------+----------------+-----------+-------+-------+
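On clusters managed directly with Trident protect, an equivalent on-demand backup could also be triggered by creating a Backup custom resource. The sketch below is an assumption based on the Trident protect custom resources; the `appVaultRef` value is a placeholder for the AppVault that points at your object storage target:

```yaml
apiVersion: protect.trident.netapp.io/v1
kind: Backup
metadata:
  name: manual-backup-1
  namespace: minio
spec:
  applicationRef: minio-bjo9nkv54p   # app name as shown by tridentctl-protect
  appVaultRef: azure-appvault        # placeholder: your configured AppVault
```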
From View and restore in the Actions menu of our application, we can check the details of the application protection, and the list of available restore points.
We see three restore points, as in the meantime the schedule kicked in and created the first scheduled backup.
This can also be seen with the CLI, showing the three available backups and snapshots.
$ tridentctl-protect get backup -n minio
+-------------------------------------+------------------+----------------+-----------+-------+--------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+-------------------------------------+------------------+----------------+-----------+-------+--------+
| ad-hoc-backup-drpz2so916 | minio-bjo9nkv54p | Retain | Completed | | 27m27s |
| baseline-transfer-backup-9hviqxzflt | minio-bjo9nkv54p | Retain | Completed | | 1h4m |
| custom-eb7d6-20250724090012 | minio-bjo9nkv54p | Retain | Completed | | 12m28s |
+-------------------------------------+------------------+----------------+-----------+-------+--------+
$ tridentctl-protect get snapshot -n minio
+---------------------------------------------+------------------+----------------+-----------+-------+--------+
| NAME | APP | RECLAIM POLICY | STATE | ERROR | AGE |
+---------------------------------------------+------------------+----------------+-----------+-------+--------+
| backup-e2f70988-8b2e-4d6f-9ee0-332f186a8c72 | minio-bjo9nkv54p | Delete | Completed | | 27m30s |
| backup-e97a09a2-7b1f-4296-b718-b8c3f648a29f | minio-bjo9nkv54p | Delete | Completed | | 1h4m |
| custom-eb7d6-20250724090012 | minio-bjo9nkv54p | Delete | Completed | | 12m31s |
+---------------------------------------------+------------------+----------------+-----------+-------+--------+
Now we’re all set to test a restore of the minio sample application. Out of the many possible restore (and failure) scenarios, let’s see in more detail how we can recover our application to another cluster in case of a failure of the original cluster hosting the application.
For a realistic test scenario, let’s first simulate a cluster failure by stopping the Trident protect connector on the cluster. We scale down the trident-protect-connector deployment to zero and wait for the associated pod to stop.
$ kubectl -n trident-protect scale deployment.apps/trident-protect-connector --replicas=0
deployment.apps/trident-protect-connector scaled
$ kubectl get all -n trident-protect
NAME READY STATUS RESTARTS AGE
pod/autosupportbundle-5cdc2a2f-4317-4eed-9c59-6f98d3bf0727-k58xh 1/1 Running 0 9h
pod/trident-protect-controller-manager-78f47d8788-fnlqs 2/2 Running 0 25h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/tp-webhook-service ClusterIP 10.102.57.222 <none> 443/TCP 25h
service/trident-protect-controller-manager-metrics-service ClusterIP 10.103.73.197 <none> 8443/TCP 25h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/trident-protect-connector 0/0 0 0 25h
deployment.apps/trident-protect-controller-manager 1/1 1 1 25h
NAME DESIRED CURRENT READY AGE
replicaset.apps/trident-protect-connector-677554db48 0 0 0 25h
replicaset.apps/trident-protect-controller-manager-78f47d8788 1 1 1 25h
NAME STATUS COMPLETIONS DURATION AGE
job.batch/autosupportbundle-5cdc2a2f-4317-4eed-9c59-6f98d3bf0727 Running 0/1 9h 9h
This causes NetApp Backup and Recovery to lose communication with the K8s cluster. After a few minutes, our cluster goes into the “Removed” state in NetApp Backup and Recovery.
Luckily, our application is already protected, and we can easily initiate a restore to another cluster. We again select View and restore from the Actions menu of the minio application in the list of applications.
From the list of available restore points, we select the restore point we want to restore from and select Restore from its associated Actions menu.
As the local snapshots are no longer available due to the cluster failure, we choose to restore from the object store and select the destination cluster from the list of managed clusters.
We choose minio-dr as destination namespace on the destination cluster and select Next.
As we want to restore the complete application, we select Restore all resources and then Next.
Finally, we choose to restore to the default storage class on the destination cluster and start the restore by selecting Restore.
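For reference, the same restore expressed as a Trident protect BackupRestore custom resource on the destination cluster might look roughly like the following sketch. This is an assumption based on the Trident protect custom resources; the `appVaultRef` and `appArchivePath` values are placeholders that identify the backup in object storage:

```yaml
apiVersion: protect.trident.netapp.io/v1
kind: BackupRestore
metadata:
  name: minio-restore
  namespace: minio-dr
spec:
  appVaultRef: azure-appvault            # placeholder: AppVault with the backup
  appArchivePath: minio-backup-archive   # placeholder: path to the backup archive
  # Map the source namespace to the new destination namespace
  namespaceMapping:
    - source: minio
      destination: minio-dr
```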
We can also track the progress of the restore job, which finishes after some minutes.
Finally, we confirm that the minio application is up and running on the destination cluster in the minio-dr namespace.
$ kubectl get all,pvc -n minio-dr --context sks3794-admin@sks3794
NAME READY STATUS RESTARTS AGE
pod/minio-b6b84f46c-v7cxw 1/1 Running 0 4h42m
pod/minio-console-cb99559c-td8f4 1/1 Running 0 4h42m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/minio ClusterIP 10.103.102.117 <none> 9000/TCP 4h42m
service/minio-console ClusterIP 10.108.79.13 <none> 9090/TCP 4h42m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/minio 1/1 1 1 4h42m
deployment.apps/minio-console 1/1 1 1 4h42m
NAME DESIRED CURRENT READY AGE
replicaset.apps/minio-b6b84f46c 1 1 0 4h42m
replicaset.apps/minio-console-cb99559c 1 1 1 4h42m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/minio Bound pvc-d2bbdd4f-9fe1-4fa3-a89d-8c65264b400e 8Gi RWO ontap-vsim4-nas <unset> 4h42m
NetApp has introduced a preview of its NetApp Backup and Recovery solution for Kubernetes workloads, enhancing the protection of containerized applications. This new solution offers a unified management console, leverages NetApp SnapMirror for efficient backups and restores, and integrates seamlessly with NetApp Trident software for streamlined operations. Key features include centralized management, structured incremental backups, and the ability to restore applications across different clusters and namespaces.
This blog post provides a detailed guide on setting up protection policies, protecting, and restoring Kubernetes applications using NetApp Backup and Recovery. It includes step-by-step instructions for configuring the test environment, discovering Kubernetes clusters, creating applications, and implementing protection policies. Additionally, it covers manual and on-demand backups, as well as restoring applications to different clusters in case of failures.
Ready to elevate your data protection strategy? Dive into the comprehensive guide and start leveraging NetApp Backup and Recovery for your Kubernetes workloads today! Ensure your applications are protected against any failures with seamless, efficient, and unified backup solutions. Get started now and experience the future of data protection with NetApp Backup and Recovery!
Complete this form to gain access to the preview program. Upon completing the form, you'll gain access to the service and can start exploring its features and providing feedback within 48-72 hours.