Tech ONTAP Blogs

Automate Cluster Discovery in NetApp Backup and Recovery for Kubernetes

PatricU
NetApp

NetApp Backup and Recovery for Kubernetes offers enterprise-grade data protection for both containerized applications and virtual machines on Kubernetes, including Red Hat OpenShift and OpenShift Virtualization. In a recent blog post, we showed how the CLI- and custom resource (CR) based operations in Backup and Recovery for Kubernetes help automate application protection and restore operations. In this blog, we demonstrate how to add Kubernetes clusters to Backup and Recovery from the command line, further automating the protection of your clusters and applications.

Scenario and prerequisites

In our walk-through, we’ll add two OpenShift clusters, pu-ocp1 and pu-ocp2, to NetApp Backup and Recovery using the command line and the NetApp Console API. NetApp Trident is already installed and configured on both OpenShift clusters. A Console agent (pu-agent-tmelab-v801) was already created in the NetApp Console; it enables communication between Backup and Recovery, the clusters, and the ONTAP storage system that provides persistent storage to both clusters. The ONTAP system rtp-a800-c01 was already added to the NetApp Console:

[Screenshot: ONTAP system rtp-a800-c01 added to the NetApp Console]

For the next steps, we need the ID and address of the Console agent that connects our environment to the NetApp Console. We can find both in the Console’s Agent overview; opening the agent’s local UI shows its address.

[Screenshot: Agent overview showing the agent ID and local UI address]

We also need the account/organization ID of our NetApp Console account, which we find under Administration -> Identity and Access -> Organization:

[Screenshot: Organization page showing the organization ID]

Let’s store all three values in environment variables for later use:

$ export AGENT_ID="Z6KvCmKutZN9NtkLlUYmaWjq7KGUrFnJclients"
$ export AGENT_ADDRESS="https://10.192.162.113"
$ export ACCOUNT_ID="01ff254f-ea5d-4ebb-849d-ed592b5b2c5e"

Create a Service Account

To communicate with Backup and Recovery’s API, we first need to create a Service account with the necessary permissions. In the Console UI, go to Administration -> Identity and Access -> Members, select the Service accounts tab, and select Add service account. We set the Service account name (pu-discover-clusters-tme in our example) and add the roles “Data Service:Backup and recovery super admin” and “Platform:Organization admin” to the Service account.

[Screenshot: Add service account dialog with the assigned roles]

The Service account creation returns the credentials for the Service account, which we also save in environment variables.

[Screenshot: Service account credentials returned on creation]

$ export SA_CLIENT_ID="<REDACTED>"
$ export SA_CLIENT_SECRET="<REDACTED>"

Now we have all the information available to create a Bearer token to authenticate our further requests against the Backup and Recovery REST API. The token is obtained by making an API call to the Auth0 authentication service with the necessary credentials and parameters:

$ curl --no-progress-meter -X POST https://netapp-cloud-account.auth0.com/oauth/token \
-H "Content-Type: application/x-www-form-urlencoded" \
-d client_id=${SA_CLIENT_ID} \
-d client_secret=${SA_CLIENT_SECRET} \
-d grant_type=client_credentials \
-d audience=https://api.cloud.netapp.com | jq
{
  "access_token": "<REDACTED>",
  "expires_in": 86400,
  "token_type": "Bearer"
}

We store the Bearer token in an environment variable, too:

$ export BEARER_TOKEN="<REDACTED>"

Keep in mind that the Bearer token has a limited lifetime: the "expires_in" field in the API response gives its validity in seconds (86400, i.e. 24 hours). To refresh the token, call this API again.
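Since automated pipelines can run longer than the token lifetime, it can be convenient to wrap the token request in a small helper that also records the expiry time. The sketch below is not part of Backup and Recovery; get_token and token_needs_refresh are hypothetical helper names, and the flow assumes SA_CLIENT_ID and SA_CLIENT_SECRET are already exported as shown above:

```shell
# Hypothetical helper: fetch a fresh Bearer token and remember its expiry.
get_token() {
  local resp
  resp=$(curl --no-progress-meter -X POST https://netapp-cloud-account.auth0.com/oauth/token \
    -H "Content-Type: application/x-www-form-urlencoded" \
    -d client_id=${SA_CLIENT_ID} \
    -d client_secret=${SA_CLIENT_SECRET} \
    -d grant_type=client_credentials \
    -d audience=https://api.cloud.netapp.com)
  BEARER_TOKEN=$(echo "$resp" | jq -r .access_token)
  # Expiry as a Unix timestamp: now + expires_in seconds.
  TOKEN_EXPIRES_AT=$(( $(date +%s) + $(echo "$resp" | jq -r .expires_in) ))
  export BEARER_TOKEN TOKEN_EXPIRES_AT
}

# Hypothetical helper: true (exit 0) when fewer than 5 minutes of
# token validity remain, i.e. when a refresh is due.
token_needs_refresh() {
  [ $(( ${TOKEN_EXPIRES_AT:-0} - $(date +%s) )) -lt 300 ]
}
```

A script can then call token_needs_refresh before each API request and re-run get_token whenever it returns true.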


Let’s validate our token and the Agent with a simple GET listing the discovered K8s clusters before proceeding:

$ curl -sS --location "https://api.bluexp.netapp.com/backup-recovery/organizations/${ACCOUNT_ID}/v1/workloads/k8s/clusters" \
-H "x-agent-id: ${AGENT_ID}" \
-H "x-account-id: ${ACCOUNT_ID}" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${BEARER_TOKEN}" | jq .
{
  "items": [],
  "totalCount": 0
}

Discover clusters

To add a K8s cluster to Backup and Recovery, we need to create an OCCM credential and store it in a K8s secret.

OCCM (OnCommand Cloud Manager) credentials are used to authenticate and authorize access to various NetApp services and resources in cloud environments. These credentials are essential for managing and automating tasks within NetApp's cloud management solutions.


The following curl command creates the OCCM credential tpc-deploy-script-pu-ocp1 for the pu-ocp1 cluster:

$ curl --no-progress-meter -X POST "https://api.bluexp.netapp.com/account/${ACCOUNT_ID}/providers/cloudmanager_occmauth/api/v0.1/services" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${BEARER_TOKEN}" \
-H "X-Account-Id: ${ACCOUNT_ID}" \
-H "x-agent-id: ${AGENT_ID}" \
-d '{
    "name": "tpc-deploy-script-pu-ocp1"
  }' | jq .
{
  "id": "388daae4-f149-4683-a0e7-d2f98f8f589b",
  "name": "tpc-deploy-script-pu-ocp1",
  "clientId": "<REDACTED>",
  "clientSecret": "<REDACTED>"
}

We store the clientId and clientSecret values in environment variables:

$ export CLIENT_ID="<REDACTED>"
$ export CLIENT_SECRET="<REDACTED>"

After creating the trident-protect namespace on the K8s cluster, we store the OCCM credential in the secret occmauthcreds in the trident-protect namespace:

$ kubectl create ns trident-protect
namespace/trident-protect created
$ kubectl create secret generic occmauthcreds \
--namespace=trident-protect \
--from-literal=client_id=${CLIENT_ID} \
--from-literal=client_secret=${CLIENT_SECRET}
secret/occmauthcreds created

Now everything’s in place to install the Trident protect component of Backup and Recovery on the cluster pu-ocp1. The following commands add the Trident protect Helm repository and then install Trident protect and the Trident protect connector, connecting the cluster to Backup and Recovery under the cluster name pu-ocp1:

$ helm repo add --force-update netapp-trident-protect https://netapp.github.io/trident-protect-helm-chart
"netapp-trident-protect" has been added to your repositories
$ helm upgrade --install trident-protect \
netapp-trident-protect/trident-protect-console \
--version 100.2602.1-console \
--namespace trident-protect \
--set clusterName=pu-ocp1 \
--set trident-protect.cbs.accountID=${ACCOUNT_ID} \
--set trident-protect.cbs.agentID=${AGENT_ID} \
--set trident-protect.cbs.proxySecretName=occmauthcreds \
--set trident-protect.cbs.proxyHostIP=${AGENT_ADDRESS}
Release "trident-protect" does not exist. Installing it now.
NAME: trident-protect
LAST DEPLOYED: Wed Mar 18 09:56:10 2026
NAMESPACE: trident-protect
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete
TEST SUITE: None

After a short while, the Trident protect pods will be up and running on the cluster:

$ kubectl get all -n trident-protect
NAME                                                      READY   STATUS    RESTARTS   AGE
pod/trident-protect-connector-7cbc6f84f4-nvkg9            1/1     Running   0          114s
pod/trident-protect-controller-manager-56fd6f7b5d-lvxl6   1/1     Running   0          114s
NAME                                                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/tp-webhook-service                                   ClusterIP   172.30.191.119   <none>        443/TCP    115s
service/trident-protect-controller-manager-metrics-service   ClusterIP   172.30.45.71     <none>        8443/TCP   115s
NAME                                                 READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/trident-protect-connector            1/1     1            1           115s
deployment.apps/trident-protect-controller-manager   1/1     1            1           115s
NAME                                                            DESIRED   CURRENT   READY   AGE
replicaset.apps/trident-protect-connector-7cbc6f84f4            1         1         1       114s
replicaset.apps/trident-protect-controller-manager-56fd6f7b5d   1         1         1       114s

In the Backup and Recovery Inventory, the cluster pu-ocp1 is now listed in the Connected state:

[Screenshot: Backup and Recovery Inventory listing cluster pu-ocp1 in the Connected state]

Listing the discovered clusters again with the curl command from above, we can also confirm that pu-ocp1 is connected to Backup and Recovery:

$ curl -sS --location "https://api.bluexp.netapp.com/backup-recovery/organizations/${ACCOUNT_ID}/v1/workloads/k8s/clusters" \
-H "x-agent-id: ${AGENT_ID}" \
-H "x-account-id: ${ACCOUNT_ID}" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${BEARER_TOKEN}" | jq '.items[] | {id, name, state}'
{
  "id": "9c8c42a8-87f9-4a24-beb5-c0025c92b782",
  "name": "pu-ocp1",
  "state": "connected"
}
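In an automation pipeline, you typically want to block until a newly added cluster actually reaches the connected state before protecting applications on it. The following sketch polls the same clusters endpoint; cluster_state and wait_for_cluster are hypothetical helper names, and the loop assumes ACCOUNT_ID, AGENT_ID, and BEARER_TOKEN are exported as above:

```shell
# Hypothetical helper: extract the state of one named cluster from the
# JSON returned by the clusters endpoint (read on stdin).
cluster_state() {
  jq -r --arg name "$1" '.items[] | select(.name == $name) | .state'
}

# Hypothetical helper: poll until the cluster reports "connected",
# checking every 10 seconds for up to 5 minutes.
wait_for_cluster() {
  local cluster=$1 state i
  for i in $(seq 1 30); do
    state=$(curl -sS "https://api.bluexp.netapp.com/backup-recovery/organizations/${ACCOUNT_ID}/v1/workloads/k8s/clusters" \
      -H "x-agent-id: ${AGENT_ID}" \
      -H "x-account-id: ${ACCOUNT_ID}" \
      -H "Authorization: Bearer ${BEARER_TOKEN}" | cluster_state "$cluster")
    [ "$state" = "connected" ] && return 0
    sleep 10
  done
  return 1
}
```

A deployment script can then run `wait_for_cluster pu-ocp1` and fail fast if the cluster never connects.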

To discover our second cluster pu-ocp2, we follow the same steps. While it’s technically possible to reuse the same OCCM credential/secret for the second cluster, it’s a security best practice to create dedicated credentials for each cluster:

$ curl --no-progress-meter -X POST "https://api.bluexp.netapp.com/account/${ACCOUNT_ID}/providers/cloudmanager_occmauth/api/v0.1/services" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${BEARER_TOKEN}" \
-H "X-Account-Id: ${ACCOUNT_ID}" \
-H "x-agent-id: ${AGENT_ID}" \
-d '{
    "name": "tpc-deploy-script-pu-ocp2"
}' | jq .
{
  "id": "600a88ad-3a4d-4478-92d9-2c6edb9d6569",
  "name": "tpc-deploy-script-pu-ocp2",
  "clientId": "<REDACTED>",
  "clientSecret": "<REDACTED>"
}

We save the new clientId and clientSecret values:

$ export CLIENT_ID="<REDACTED>"
$ export CLIENT_SECRET="<REDACTED>"

After switching the K8s context to the new cluster pu-ocp2, we create the trident-protect namespace and the occmauthcreds secret:

$ kubectl config use-context pu-ocp2
Switched to context "pu-ocp2".

$ kubectl create ns trident-protect
namespace/trident-protect created

$ kubectl create secret generic occmauthcreds \
--namespace=trident-protect \
--from-literal=client_id=${CLIENT_ID} \
--from-literal=client_secret=${CLIENT_SECRET}
secret/occmauthcreds created

Now we can use the same commands as for the first cluster to install Trident protect on our second cluster, specifying the clusterName as pu-ocp2:

$ helm repo add --force-update netapp-trident-protect https://netapp.github.io/trident-protect-helm-chart
"netapp-trident-protect" has been added to your repositories
$ helm upgrade --install trident-protect \
netapp-trident-protect/trident-protect-console \
--version 100.2602.1-console \
--namespace trident-protect \
--set clusterName=pu-ocp2 \
--set trident-protect.cbs.accountID=${ACCOUNT_ID} \
--set trident-protect.cbs.agentID=${AGENT_ID} \
--set trident-protect.cbs.proxySecretName=occmauthcreds \
--set trident-protect.cbs.proxyHostIP=${AGENT_ADDRESS}
Release "trident-protect" does not exist. Installing it now.
NAME: trident-protect
LAST DEPLOYED: Wed Mar 18 15:45:07 2026
NAMESPACE: trident-protect
STATUS: deployed
REVISION: 1
DESCRIPTION: Install complete

Once the installation finishes and the Trident protect pods are running, we see both clusters listed as connected in the Inventory:

$ curl -sS --location "https://api.bluexp.netapp.com/backup-recovery/organizations/${ACCOUNT_ID}/v1/workloads/k8s/clusters" \
-H "x-agent-id: ${AGENT_ID}" \
-H "x-account-id: ${ACCOUNT_ID}" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer ${BEARER_TOKEN}" | jq '.items[] | {id, name, state}'
{
  "id": "9c8c42a8-87f9-4a24-beb5-c0025c92b782",
  "name": "pu-ocp1",
  "state": "connected"
}
{
  "id": "70da163a-083a-4070-96d1-13ce5bf6098c",
  "name": "pu-ocp2",
  "state": "connected"
}

[Screenshot: Backup and Recovery Inventory listing both clusters in the Connected state]

The steps shown here can easily be integrated into your cluster deployment processes, allowing you to add newly deployed clusters to Backup and Recovery automatically, without using the Console UI.
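To sketch what such an integration could look like, the commands above can be collected into a single shell function that is run once per new cluster. discover_cluster and occm_cred_name are hypothetical names, the function assumes ACCOUNT_ID, AGENT_ID, AGENT_ADDRESS, and BEARER_TOKEN are exported, and the current kubeconfig context must already point at the target cluster:

```shell
# Hypothetical helper: naming convention for per-cluster OCCM credentials,
# following the tpc-deploy-script-<cluster> pattern used in this blog.
occm_cred_name() { echo "tpc-deploy-script-$1"; }

# Hypothetical wrapper: create a dedicated OCCM credential, store it in
# the occmauthcreds secret, and install Trident protect for one cluster.
discover_cluster() {
  local cluster=$1 cred
  cred=$(curl --no-progress-meter -X POST "https://api.bluexp.netapp.com/account/${ACCOUNT_ID}/providers/cloudmanager_occmauth/api/v0.1/services" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${BEARER_TOKEN}" \
    -H "X-Account-Id: ${ACCOUNT_ID}" \
    -H "x-agent-id: ${AGENT_ID}" \
    -d "{\"name\": \"$(occm_cred_name "$cluster")\"}")

  # Idempotent namespace creation; the secret is created once per cluster.
  kubectl create namespace trident-protect --dry-run=client -o yaml | kubectl apply -f -
  kubectl create secret generic occmauthcreds \
    --namespace=trident-protect \
    --from-literal=client_id="$(echo "$cred" | jq -r .clientId)" \
    --from-literal=client_secret="$(echo "$cred" | jq -r .clientSecret)"

  helm upgrade --install trident-protect \
    netapp-trident-protect/trident-protect-console \
    --version 100.2602.1-console \
    --namespace trident-protect \
    --set clusterName="$cluster" \
    --set trident-protect.cbs.accountID=${ACCOUNT_ID} \
    --set trident-protect.cbs.agentID=${AGENT_ID} \
    --set trident-protect.cbs.proxySecretName=occmauthcreds \
    --set trident-protect.cbs.proxyHostIP=${AGENT_ADDRESS}
}
```

A deployment pipeline could then simply call discover_cluster with the new cluster’s name once Trident is configured on it.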

Conclusion and call to action

In this blog post, we showed how to automate the discovery of Kubernetes clusters in NetApp Backup and Recovery entirely from the command line: gathering the agent and organization IDs, creating a Service account and a Bearer token, creating dedicated OCCM credentials, and installing Trident protect and its connector with Helm. Following these steps, we connected two OpenShift clusters to Backup and Recovery without ever opening the Console UI, enabling seamless integration and protection of newly deployed clusters.


Embrace the power of NetApp Backup and Recovery for Kubernetes to safeguard your critical applications and data. Log in to the NetApp Console, navigate to Protection -> Backup and Recovery, sign up for a free trial, discover your K8s clusters, and take their protection to the next level!

