Tech ONTAP Blogs

Google Cloud NetApp Volumes with Trident protect

DianePatton
NetApp

In today's cloud-centric world, efficient and reliable data management is paramount for businesses of all sizes. Google Cloud NetApp Volumes has robust built-in data protection, including backup policies, snapshot schedules, and replication, which you can apply to volumes through the UI, API, or CLI, or through automation. However, at times you may need to integrate data protection directly within your Linux Kubernetes cluster. For example, you might want your DevOps group to complete the tasks, you might need to back up applications together with the volumes, or you might want to self-manage backups and restorations directly from the cluster to your own self-managed bucket in Google Cloud.

[Image: Google Cloud NetApp Volumes with Trident protect]

 

Google Cloud NetApp Volumes, along with NetApp® Trident and Trident protect, offers a solution to address these needs. This blog covers how to install, configure, and use Trident protect with Google Cloud NetApp Volumes, offering application and data protection schedules for backups, snapshots, and restoration within Kubernetes in your Google Cloud environment. We use Kubernetes custom resources (CRs) as examples, but the Trident protect CLI can be used as well. You can find more information on the Trident protect CLI in the documentation.  

 

What is Google Cloud NetApp Volumes? 

Google Cloud NetApp Volumes is a service that combines the capabilities of NetApp storage and data management with Google Cloud’s infrastructure and services. It allows users to deploy high-performance, scalable storage in the cloud for a wide range of workloads, including databases, analytics, and enterprise applications. Google Cloud NetApp Volumes provides features like cloning, encryption, and data protection to help organizations manage their data effectively in the cloud environment. It also enables users to seamlessly move and manage their data between on-premises environments and Google Cloud, offering a flexible and efficient storage solution for various cloud-based applications. 

 

What is Trident protect? 

Trident protect enhances data management in Kubernetes clusters by providing advanced data protection features not included with Trident or other CSI-compliant provisioners. Although Trident can create manual snapshots, Trident protect lets you create a snapshot schedule. In addition, Trident protect allows you to create both volume and application backups in your self-managed bucket in Google Cloud Storage. You can even filter the exact resources you want to back up. You can then restore that backup to either the same or a new Kubernetes cluster.

 

With Trident protect and NetApp Volumes, you can: 

 

  • Create snapshots. Create and manage consistent point-in-time snapshots of your applications and volumes for quick recovery from accidental deletions or other outages. 
  • Perform backups. Back up the application and the volume together, allowing for easy restoration of the entire service in the same or a new Kubernetes cluster.  
  • Restore. Restore applications and persistent volumes (PVs) from snapshots or backups. 
  • Automate. Automate data protection tasks based on predefined policies created with a CR manifest file, reducing manual effort and potential errors. 

 

Installation and configuration

We’ll show you how to complete a basic installation and configuration of Trident protect. Our example uses a Google Kubernetes Engine (GKE) cluster, but other Kubernetes distributions also work with NetApp Volumes. The Kubernetes cluster must be running with Linux worker nodes. (For more detailed information, see the Trident protect documentation.) 

 

 

Prerequisites 

  • kubectl access to the Kubernetes cluster. 
  • Permissions to create service accounts and buckets in Google Cloud.  
  • An installed application with a PV (created by Trident) that you wish to protect in a nondefault namespace. 
  • A snapshot class running on the cluster. You also may need to install a volume snapshot controller if it doesn’t come automatically installed with your Kubernetes distribution. GKE has a snapshot controller installed by default. 
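If you need to create a snapshot class yourself, a minimal VolumeSnapshotClass for the Trident CSI driver might look like the following sketch. (The class name here is arbitrary; the driver name assumes a standard Trident installation.)

```yaml
# Minimal example snapshot class; the name is an arbitrary placeholder.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: trident-snapshotclass
driver: csi.trident.netapp.io
deletionPolicy: Delete
```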

Step 1: Install Trident protect   

After Trident is up and running with a configured back end, install Trident protect as shown below. Replace cluster-1-protect with the name of the cluster on which you're installing Trident protect:

 

~$helm install trident-protect netapp-trident-protect/trident-protect --set clusterName=cluster-1-protect --version 100.2506.0 --create-namespace --namespace trident-protect 
NAME: trident-protect 
LAST DEPLOYED: Fri Jul 18 09:56:57 2025 
NAMESPACE: trident-protect 
STATUS: deployed 
REVISION: 1 
TEST SUITE: None 

 

If you need to customize the installation, refer to Customize Trident protect installation. 

Step 2: Create a bucket 

You’ll need a bucket in Google Cloud to store the backups and the metadata from the backups and snapshots. (The snapshots are stored on Google Cloud NetApp Volumes itself).  

 

Create a bucket in Google Cloud Storage using the steps in the Create a bucket documentation. If you want to protect and restore clusters across regions, the bucket must be multi-region. An example is shown below.
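As a hedged CLI alternative to the console steps, a multi-region bucket can be created with gcloud. BUCKET_NAME and PROJECT_ID below are placeholders, not values from this blog:

```shell
# Create a multi-region bucket in the US; adjust the location as needed.
gcloud storage buckets create gs://BUCKET_NAME \
    --project=PROJECT_ID \
    --location=US \
    --uniform-bucket-level-access
```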

 

[Screenshot: example multi-region bucket in the Google Cloud console]

 

Step 3: Create a service account in Google Cloud and download a key 

Create a new service account in Google Cloud following the steps in Create service accounts, and assign the Storage Admin role. An example is shown below. 
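If you prefer the CLI, these console steps can be sketched with gcloud. The service account name trident-protect-sa and PROJECT_ID are placeholder values:

```shell
# Create the service account (placeholder names; adjust for your project).
gcloud iam service-accounts create trident-protect-sa \
    --display-name="Trident protect" --project=PROJECT_ID

# Grant it the Storage Admin role on the project.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:trident-protect-sa@PROJECT_ID.iam.gserviceaccount.com" \
    --role="roles/storage.admin"
```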

 

[Screenshot: creating the service account in the Google Cloud console]

 

Assign your new service account Storage Admin rights as shown here:  

 

[Screenshot: assigning the Storage Admin role to the service account]

 

You should see the proper role assigned under IAM. 

 

Create a key for the service account and download the key. Go to IAM & Admin > Service Accounts, select your new service account, and then select Keys. Create a new key, which will automatically download. Keep this key in a safe place. It will be used to create a secret to allow Trident protect access to the bucket. 
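The same key can also be created from the CLI; the service account address below is a placeholder:

```shell
# Create and download a JSON key for the service account.
gcloud iam service-accounts keys create trident-protect-key.json \
    --iam-account=SA_NAME@PROJECT_ID.iam.gserviceaccount.com
```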

 

Step 4: Create an AppVault CR and apply it 

You need to connect Trident protect to the bucket by creating a Kubernetes CR called AppVault. This is where the backups and snapshot metadata are stored.  

 

The backups are created by using a data mover; you can choose either Kopia (the default) or restic. The Kopia or restic pods run only during backups and are removed when the backup is completed.

 

Optionally, you can also create your own custom password for the data movers. If you do, follow the steps in the Data Mover repository passwords documentation: create a secret containing your Base64-encoded password, and the data mover will be configured with it automatically on startup. The secret must be referenced in the AppVault CR to enable the custom password; otherwise, a default password is used.

 

A secret is also needed to access the bucket. Create a secret from the key you downloaded in step 3. Replace trikey with the name you want for your secret, and replace trident-protect-key.json with the name of your key. 

 

~$kubectl create secret generic trikey --from-file=credentials=trident-protect-key.json -n trident-protect

 

Create a manifest file for the AppVault that references your key, and also your data mover secret, if you created one. (In this example, we didn’t create a data mover password.) Make the appropriate changes for your environment. 

 

appvault.yaml

apiVersion: protect.trident.netapp.io/v1 
kind: AppVault 
metadata: 
  name: gcp-trident-protect-src-bucket 
  namespace: trident-protect 
spec: 
#  dataMoverPasswordSecretRef: my-optional-data-mover-secret 
  providerType: GCP 
  providerConfig: 
    gcp: 
      bucketName: trident_protect_patd 
      projectID: cvs-pm-host-1p 
  providerCredentials: 
    credentials: 
      valueFromSecret: 
        key: credentials 
        name: trikey  

 

Apply the manifest file, and check that it’s active and accessible. 

 

~$kubectl create -f appvault.yaml  
appvault.protect.trident.netapp.io/gcp-trident-protect-src-bucket created 
~$kubectl get appvault -n trident-protect 
NAME                             STATE       ERROR   MESSAGE   AGE 
gcp-trident-protect-src-bucket   Available                     10s 

 

You’re now ready to start protecting applications and their data. 

 

Set up data protection 

 

Trident protect uses CRs to define and protect Kubernetes resources.    

 

Step 1: Define the application 

Create a CR that defines the application to protect. The application can be anything in a specific namespace (or multiple namespaces), specific objects in the namespace (such as persistent volume claims [PVCs] and PVs), or objects identified by labels. You can also add non-namespaced resources such as storage classes to the application CR. (For more information, see the documentation.)  
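For instance, a label-based definition might look like the following sketch. The app: ghost label is a hypothetical example; check the Trident protect documentation for the exact selector syntax supported by your version.

```yaml
apiVersion: protect.trident.netapp.io/v1
kind: Application
metadata:
  name: ghost-labeled
  namespace: blog
spec:
  includedNamespaces:
    - namespace: blog
      labelSelector:
        matchLabels:
          app: ghost
```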

 

In the following example, we choose to define an application as all objects in the namespace called blog. If a PVC is included, the PV it’s bound to is also included, even though PVs aren’t namespaced.  

 

app.yaml

apiVersion: protect.trident.netapp.io/v1 
kind: Application 
metadata:
  annotations: 
    protect.trident.netapp.io/skip-vm-freeze: "false" 
  name: ghost 
  namespace: blog  
spec: 
  includedNamespaces: 
    - namespace: blog 

Starting with Trident protect 25.06, you can easily back up just PVs and PVCs by using resource filters instead of labels. The following example shows how to create a Trident protect application that consists of only the PVs and PVCs located in the blog namespace.

 

volumeonlyapp.yaml

apiVersion: protect.trident.netapp.io/v1 
kind: Application 
metadata: 
  name: ghost-volumeonly 
  namespace: blog 
spec: 
  includedNamespaces: 
  - namespace: blog 
  resourceFilter: 
    resourceMatchers: 
    - kind: PersistentVolumeClaim 
      version: v1 
    - kind: PersistentVolume 
      version: v1 
    - kind: VolumeSnapshotClass 
      version: v1 
    resourceSelectionCriteria: Include 

Step 2: Create a manual snapshot or backup 

 

You can now create additional CRs to take snapshots or backups. A manual snapshot CR looks like the following manifest. Change the values to your own names.

 

snapshot.yaml

apiVersion: protect.trident.netapp.io/v1 
kind: Snapshot 
metadata: 
  namespace: blog  
  name: ghost-snapshot1 
spec: 
  applicationRef: ghost  
  appVaultRef: gcp-trident-protect-src-bucket 
  reclaimPolicy: Delete 

 

You can then apply or create the manifest file. 

 

~$kubectl create -f snapshot.yaml -n blog
snapshot.protect.trident.netapp.io/ghost-snapshot1 created 
~$kubectl get snapshot.protect.trident.netapp.io/ghost-snapshot1 -n blog 
NAME              APP     RECLAIM POLICY   STATE     ERROR   AGE 
ghost-snapshot1   ghost   Delete           Running           17s 
~$kubectl get snapshot.protect.trident.netapp.io/ghost-snapshot1 -n blog 
NAME              APP     RECLAIM POLICY   STATE       ERROR   AGE 
ghost-snapshot1   ghost   Delete           Completed           79s 

 

And you can see the snapshot on Google Cloud NetApp Volumes:

 

[Screenshot: snapshot listed in the Google Cloud NetApp Volumes UI]

 

Similarly, you can use the Backup CR to create a full or incremental backup. Change the values to your own: 

 

backup.yaml

apiVersion: protect.trident.netapp.io/v1 
kind: Backup 
metadata: 
  namespace: blog 
  name: blog-backup
  annotations: 
    protect.trident.netapp.io/full-backup: "true" 
spec: 
  applicationRef: ghost 
  appVaultRef: gcp-trident-protect-src-bucket 
  dataMover: Kopia 

 

You’ll see the backup running and then completed. 

 

~$kubectl create -f backup.yaml -n blog
backup.protect.trident.netapp.io/blog-backup created 
~$kubectl get backup.protect.trident.netapp.io/blog-backup -n blog 
NAME          APP     RECLAIM POLICY   STATE     ERROR   AGE 
blog-backup   ghost   Retain           Running           15s 
~$kubectl get backup.protect.trident.netapp.io/blog-backup -n blog 
NAME          APP     RECLAIM POLICY   STATE       ERROR   AGE 
blog-backup   ghost   Retain           Completed           4m49s 

 

You should also be able to see the backup in the bucket. In addition to the backup data, you’ll see the metadata for snapshots. The actual backup is in the kopia (or restic) folder, whereas the metadata is in the backups folder. Every time a full backup is created, you’ll see a new kopia (or restic) folder. Incremental backups don’t create new kopia or restic folders. 

 

[Screenshot: backup and metadata folders in the Google Cloud Storage bucket]

 

Step 3: Set up scheduled snapshots or backups  

 

As with manual backups and snapshots, you can set up backup and/or snapshot schedules by using a manifest file.  

 

An example of setting up a backup schedule is shown here. In the example, we create an incremental backup every day at 1 minute past midnight, with a full backup every Friday. We keep 15 backups and 15 snapshots. The dayOfMonth or dayOfWeek value can be used only if we choose monthly or weekly backups. (For more information, see the documentation.) Be sure to change the values to match your own environment.

 

The following example sets up the backup schedule described above: 

 

backupschedule.yaml

--- 
apiVersion: protect.trident.netapp.io/v1 
kind: Schedule 
metadata: 
  namespace: blog  
  name: ghost-schedule 
  annotations: 
    protect.trident.netapp.io/full-backup-rule: "Friday" 
spec: 
  dataMover: Kopia 
  applicationRef: ghost 
  appVaultRef: gcp-trident-protect-src-bucket 
  backupRetention: "15" 
  snapshotRetention: "15" 
  granularity: Daily 
#  dayOfMonth: "3" 
#  dayOfWeek: "0" 
  hour: "0" 
  minute: "1" 

 

And then create it: 

 

~$kubectl create -f backupschedule.yaml -n blog
schedule.protect.trident.netapp.io/ghost-schedule created 
~$kubectl describe schedule.protect.trident.netapp.io/ghost-schedule -n blog 
Name:         ghost-schedule 
Namespace:    blog 
Labels:       <none> 
Annotations:  protect.trident.netapp.io/full-backup-rule: Friday 
API Version:  protect.trident.netapp.io/v1 
Kind:         Schedule 
Metadata: 
  Creation Timestamp:  2025-07-18T15:52:21Z 
  Generation:          2 
  Owner References: 
    API Version:     protect.trident.netapp.io/v1 
    Kind:            Application 
    Name:            ghost 
    UID:             ecc6aac6-1388-416a-9f92-f40fe6cd5b73 
  Resource Version:  1752853941486047002 
  UID:               48e40996-fdbd-4fe0-9b25-629d440a134f 
Spec: 
  App Vault Ref:          gcp-trident-protect-src-bucket 
  Application Ref:        ghost 
  Backup Retention:       15 
  Data Mover:             Kopia 
  Day Of Month:            
  Day Of Week:             
  Enabled:                true 
  Granularity:            Daily 
  Hour:                   0 
  Minute:                 1 
  Recurrence Rule:         
  Replication Retention:  0 
  Snapshot Retention:     15 
Events:                   <none> 

 

Backups are stored in your bucket, up to the maximum specified in your schedule.  

 

Restore the application and volume 

What’s the point of backing up if you can’t restore? Trident protect can restore your application with its data to the same namespace or a different namespace on the same cluster. It can also restore to a new cluster. We’ll show you how to restore to a new cluster. (For information about restoring to the same cluster, see the documentation.)  

 

To restore the application, you need the following prerequisites: 

 

  • A storage class with the same name on the destination cluster. (Alternatively, you could back up the storage class and restore it, but we didn’t do that in the example.) 
  • kubectl access to the Kubernetes cluster restoring the volume.
  • Trident protect installed and running according to step 1 of the “Installation and configuration” section, earlier in this blog.  
  • If you intend to take snapshots or backups, a snapshot class running on the cluster. You might also need to install a volume snapshot controller if it doesn’t come automatically installed with your Kubernetes distribution. GKE has a snapshot controller installed by default. 
  • A namespace to which the application can be restored. 
  • An AppVault configured with access to the same bucket where the backups are stored, as shown in step 4 of the “Installation and configuration” section. 

From the restoration cluster, if needed, list the backups in the bucket and the path of the backup you want to restore. This can be done only with the tridentctl-protect CLI plug-in. (For more information on downloading and using the tridentctl-protect CLI plug-in, see the documentation.)

 

~$tridentctl-protect get appvaultcontent gcp-trident-protect-src-bucket --show-resources backup --show-paths  -n trident-protect 
+---------+-------+--------+-------------+-----------+---------------------------+-----------------------------------------------------------------------------------------------------+ 
| CLUSTER |  APP  |  TYPE  |    NAME     | NAMESPACE |         TIMESTAMP         |                                                PATH                                                 | 
+---------+-------+--------+-------------+-----------+---------------------------+-----------------------------------------------------------------------------------------------------+ 
|         | ghost | backup | blog-backup | blog      | 2025-07-18 15:48:43 (UTC) | ghost_ecc6aac6-1388-416a-9f92-f40fe6cd5b73/backups/blog-backup_287afff7-b3fb-4fc4-8d19-46cf8e8f3b9d | 
+---------+-------+--------+-------------+-----------+---------------------------+-----------------------------------------------------------------------------------------------------+ 

 

Create the manifest file to restore the application. It uses a CR called BackupRestore. Be sure to change the values to your bucket name and the path to the backup version you want to restore. Also specify the namespace the source was in and the namespace to which you want the application restored.

 

restorebackup.yaml

apiVersion: protect.trident.netapp.io/v1 
kind: BackupRestore 
metadata: 
  name: myrestore 
  namespace: blog 
spec: 
  appVaultRef: gcp-trident-protect-src-bucket 
  appArchivePath: "ghost_ecc6aac6-1388-416a-9f92-f40fe6cd5b73/backups/blog-backup_287afff7-b3fb-4fc4-8d19-46cf8e8f3b9d"  
  namespaceMapping: [{"source": "blog", "destination": "blog"}] 

 

Apply the restore manifest, and you’ll have the application and its volume on the new cluster. You’ll be up and running in no time! 

 

~$kubectl create -f restorebackup.yaml -n blog
backuprestore.protect.trident.netapp.io/myrestore created 
~$kubectl get backuprestore.protect.trident.netapp.io/myrestore  -n blog 
NAME        STATE     ERROR   AGE 
myrestore   Running           13s 
~$kubectl get backuprestore.protect.trident.netapp.io/myrestore  -n blog 
NAME        STATE       ERROR   AGE 
myrestore   Completed           115s 
~$kubectl get pods -n blog 
NAME                    READY   STATUS    RESTARTS   AGE 
blog-86694c69bc-lp6fh   1/1     Running   0          72s 
~$kubectl get pvc -n blog 
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        VOLUMEATTRIBUTESCLASS   AGE 
blog-content   Bound    pvc-929a277b-4f63-40da-881e-2d1a1916aa55   10Gi       RWX            gcnv-flex-k8s-nfs   <unset>                 90s 

 

Your volume will have the same contents as the original volume. 

 

Conclusion 

 

Google Cloud NetApp Volumes with Trident protect offers a compelling solution for businesses seeking to modernize their data management and protection strategies on their Kubernetes clusters in Google Cloud. By combining high-performance file storage and advanced data protection features with Kubernetes, this integrated offering provides the reliability, flexibility, and control necessary to thrive in today's dynamic IT landscape. Embrace this powerful duo to secure your data and unlock the full potential of your Google Cloud environment.
