Tech ONTAP Blogs

NetApp Trident with Google Cloud NetApp Volumes

DianePatton
NetApp

Great news! NetApp® Trident™ software is now generally available (GA) with Google Cloud NetApp Volumes. This means you can use the same Trident Container Storage Interface (CSI) provisioner you know and love to provision and manage Google Cloud NetApp Volumes persistent volumes on all your Kubernetes clusters in Google Cloud. They can be Google Kubernetes Engine (GKE) standard clusters, OpenShift Dedicated clusters, and/or self-managed Kubernetes clusters. So, you can have the performance and reliability of Google Cloud NetApp Volumes with many Kubernetes distributions. The choice is entirely up to you.

 


 

This blog post describes NetApp Trident for Google Cloud NetApp Volumes with NFSv3 and NFSv4.1 and outlines how to configure it to support the needs of your persistent Kubernetes applications. As an example, it uses a standard Google Kubernetes Engine (GKE) cluster with backends at the NetApp Volumes Standard and Premium service levels, although the Extreme and Flex service levels are also supported. For more information on using Flex storage pools, see Seamless Kubernetes Storage with Google Cloud NetApp Volumes Flex and NetApp Trident.

 

What is NetApp Trident?

 

NetApp Trident is an open-source CSI provisioner that’s maintained and supported by NetApp. It automates the provisioning and management of Google Cloud NetApp Volumes resources for Kubernetes, simplifying the setup and teardown of persistent volumes and associated storage. It can even take snapshots of persistent volumes. (You can find more information about Trident at Learn about Trident.)
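
For example, once Trident is installed and a PVC exists, a point-in-time snapshot can be requested with the standard Kubernetes snapshot resources. Here’s a minimal sketch, assuming the volume snapshot CRDs and snapshot controller are available on the cluster; the class name, snapshot name, and PVC name are placeholders:

snapshot.yaml

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: trident-snapclass          # placeholder name
driver: csi.trident.netapp.io      # Trident's CSI driver
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-pvc-snapshot            # placeholder name
spec:
  volumeSnapshotClassName: trident-snapclass
  source:
    persistentVolumeClaimName: my-pvc   # an existing Trident-provisioned PVC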

 

How do I deploy NetApp Trident for Google Cloud NetApp Volumes?

 

You can get all of this up and running in just three easy steps:

  • Install Trident using Helm
  • Create the Trident backend for Google Cloud NetApp Volumes
  • Create storage classes

Step 1: Install Trident using Helm

Before installing Trident version 24.10 or higher, read the prerequisites. Be sure that your Kubernetes cluster is in a Virtual Private Cloud (VPC) that is peered with the Google Cloud NetApp Volumes VPC.

 

There are two ways to install Trident: using the operator or using tridentctl. For the operator method, you can use a Helm chart or install manually. The tridentctl command-line tool can be downloaded and operates much like kubectl. In this blog, we’ll cover installing with the Helm chart. (For information about the other installation methods, see Manually deploy the Trident Operator and Install using tridentctl.)
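
For reference, the tridentctl route is a single command once you’ve downloaded and unpacked the Trident installer bundle for your platform; a sketch:

$ ./tridentctl install -n trident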

 

First, be sure that Helm is installed on the device you use to access the Kubernetes cluster, and that your kubeconfig points to the cluster where you want to install Trident. Add Trident’s Helm repo using the command below:

 

$ helm repo add netapp-trident https://netapp.github.io/trident-helm-chart
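
If the repo was added some time ago, it’s worth refreshing your local chart index; you can also list the available chart versions (both are standard Helm commands):

$ helm repo update
$ helm search repo netapp-trident/trident-operator --versions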

 

After the repo is added, use Helm to install Trident on your Kubernetes cluster. The command below creates a Kubernetes namespace called trident and installs Trident into that namespace.

 

$ helm install trident netapp-trident/trident-operator --version 100.2410.0 --create-namespace --namespace trident
NAME: trident
LAST DEPLOYED: Wed Nov  6 14:34:30 2024
NAMESPACE: trident
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing trident-operator, which will deploy and manage NetApp's Trident CSI
storage provisioner for Kubernetes.

Your release is named 'trident' and is installed into the 'trident' namespace.
Please note that there must be only one instance of Trident (and trident-operator) in a Kubernetes cluster.

To configure Trident to manage storage resources, you will need a copy of tridentctl, which is
available in pre-packaged Trident releases.  You may find all Trident releases and source code
online at https://github.com/NetApp/trident.

To learn more about the release, try:

  $ helm status trident
  $ helm get all trident

 

After a few moments, Trident should be installed on your cluster in the trident namespace.

 

$ kubectl get pods -n trident
NAME                                  READY   STATUS    RESTARTS        AGE
trident-controller-58cb765d9c-7z745   6/6     Running   0               7m22s
trident-node-linux-gsnwg              2/2     Running   1 (6m41s ago)   7m22s
trident-node-linux-j6qxr              2/2     Running   1 (6m48s ago)   7m22s
trident-node-linux-kpxxp              2/2     Running   0               7m22s
trident-operator-76578bb8f6-cj6vh     1/1     Running   0               14m
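
You can also confirm the installed Trident version through the tridentversions custom resource that the installation creates:

$ kubectl get tridentversions -n trident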

 

Now connect Trident to Google Cloud NetApp Volumes.

 

Step 2: Create the Trident backend

Ensure that your Kubernetes cluster is in the VPC peered with Google Cloud NetApp Volumes and that you have network connectivity.

 

Create secret and backend YAML files that give Trident access to Google Cloud NetApp Volumes for your persistent storage needs. There are many ways to provision volumes to meet your application needs, from using different service levels to limiting volume size.

 

As an example, let's set up Trident to create volumes for our Kubernetes applications in the Google Cloud NetApp Volumes Standard and Premium storage pools, although Extreme and Flex pools could be used as well. The pools must already be configured in Google Cloud NetApp Volumes before the backend is created, as shown below. Create a Storage Pool gives more information on creating storage pools in Google Cloud NetApp Volumes.

 

[Screenshot: the Standard and Premium storage pools in Google Cloud NetApp Volumes]
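
If the pools don’t exist yet, they can also be created with gcloud. Here’s a rough sketch with placeholder pool, capacity, and network values; the flags may vary by SDK release, so check gcloud netapp storage-pools create --help:

$ gcloud netapp storage-pools create k8s-pool-premium \
    --location=europe-west6 \
    --service-level=PREMIUM \
    --capacity=2048 \
    --network=name=my-vpc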

Next we create a secret that gives NetApp Trident access to Google Cloud NetApp Volumes. The secret includes the Google credentials that allow Trident to create volumes. The information needed for the secret is obtained by creating a Google Cloud service account with the Google Cloud NetApp Volumes Admin role and downloading a key for it. A sample secret is shown below.
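
If you still need to create the service account and key, the gcloud steps look roughly like this; a sketch with placeholder names, assuming roles/netapp.admin is the Google Cloud NetApp Volumes Admin role (verify the role name in your project):

$ gcloud iam service-accounts create gcnvaccount --project=xxxx-sandbox
$ gcloud projects add-iam-policy-binding xxxx-sandbox \
    --member=serviceAccount:gcnvaccount@xxxx-sandbox.iam.gserviceaccount.com \
    --role=roles/netapp.admin
$ gcloud iam service-accounts keys create key.json \
    --iam-account=gcnvaccount@xxxx-sandbox.iam.gserviceaccount.com

The downloaded key.json contains the private_key_id and private_key used in the secret below, as well as the client_email and client_id used later in the backend file.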

 

secret.yaml

---
apiVersion: v1
kind: Secret
metadata:
  name: tbc-gcnv-secret
type: Opaque
stringData:
  private_key_id: 123456789abcdef123456789abcdef123456789a
  private_key: |
    -----BEGIN PRIVATE KEY-----
    znHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m
    znHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m
    znHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m
    znHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m
    -----END PRIVATE KEY-----
---

 

 

Next, we will create the secret in the trident namespace (or whichever namespace Trident is running in).

 

$ kubectl create -f secret.yaml -n trident
secret/tbc-gcnv-secret created

 

The backend YAML file is used to create the backend. We can use as many service levels as we have storage pools configured in the region. This backend file directs Trident to create volumes in the specified storage pool for each service level, but specifying a pool is optional; if no storage pool is specified, volumes are created in any storage pool with the correct characteristics. A sample backend file using the Standard and Premium service levels is shown below, although all four service levels are supported.

 

gcnv.yaml

apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: tbc-gcnv
spec:
  version: 1
  storageDriverName: google-cloud-netapp-volumes
  backendName: volumes-for-kubernetes
  projectNumber: '111122223333'
  location: europe-west6
  apiKey:
    type: service_account
    project_id: xxxx-sandbox
    client_email: gcnvaccount@xxxx-sandbox.iam.gserviceaccount.com
    client_id: '111122223333444455556'
    auth_uri: https://accounts.google.com/o/oauth2/auth
    token_uri: https://oauth2.googleapis.com/token
    auth_provider_x509_cert_url: https://www.googleapis.com/oauth2/v1/certs
    client_x509_cert_url: https://www.googleapis.com/robot/v1/metadata/x509/gcnvaccount%40xxxx-sandbox.iam.gserviceaccount.com
  credentials:
    name: tbc-gcnv-secret
  storage:
    - labels:
        performance: premium
      serviceLevel: premium
      storagePools:
        - k8s-pool-premium
    - labels:
        performance: standard
      serviceLevel: standard
      storagePools:
        - k8s-pool-standard

 

Now use the backend YAML file to create the backend.

 

$ kubectl create -f gcnv.yaml -n trident
tridentbackendconfig.trident.netapp.io/tbc-gcnv created

 

And let's check to be sure the backend is bound.

 

$ kubectl get tridentbackendconfig tbc-gcnv -n trident
NAME       BACKEND NAME             BACKEND UUID                           PHASE   STATUS
tbc-gcnv   volumes-for-kubernetes   e092c1da-ce28-4825-975b-a8a2531862fd   Bound   Success

 

The backend will bind only if the storage pools at the specified service levels have already been created.
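
If the backend doesn’t bind, describing the resource usually surfaces the reason (for example, a missing storage pool or a credential problem):

$ kubectl describe tridentbackendconfig tbc-gcnv -n trident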

 

Step 3: Create storage classes

Create at least one storage class. In the samples below, we create two storage classes, one for each performance level; note that each storage class’s selector matches the labels defined in the backend file. (For more information about creating storage classes, see Create a storage class.)

 

scstandard.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcnv-standard-k8s
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.trident.netapp.io
parameters:
  backendType: "google-cloud-netapp-volumes"
  selector: "performance=standard"
allowVolumeExpansion: true

 

scpremium.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcnv-premium-k8s
provisioner: csi.trident.netapp.io
parameters:
  backendType: "google-cloud-netapp-volumes"
  selector: "performance=premium"
allowVolumeExpansion: true

 

We will create the sample storage classes.

$ kubectl create -f scstandard.yaml
storageclass.storage.k8s.io/gcnv-standard-k8s created

$ kubectl create -f scpremium.yaml
storageclass.storage.k8s.io/gcnv-premium-k8s created

 

Let's check to be sure the storage classes are available for us.

$ kubectl get sc
NAME                          PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
gcnv-premium-k8s              csi.trident.netapp.io   Delete          Immediate              true                   109m
gcnv-standard-k8s (default)   csi.trident.netapp.io   Delete          Immediate              true                   109m
premium-rwo                   pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   29h
standard                      kubernetes.io/gce-pd    Delete          Immediate              true                   29h
standard-rwo                  pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   29h

 

Now what?

 

Now you’re ready to use your cluster to run stateful Kubernetes applications. Let’s try it out by creating two sample persistent volume claims (PVCs) and seeing what happens.

 

Let’s start with the two basic PVCs shown below. One PVC maps to the Standard service level, and one maps to the Premium service level. We use the ReadWriteOnce (RWO) access mode, but the ReadWriteMany (RWX), ReadOnlyMany (ROX), and ReadWriteOncePod (RWOP) access modes are also supported.

 

pvcstandard.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: standard-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: gcnv-standard-k8s

 

pvcpremium.yaml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: premium-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: gcnv-premium-k8s

 

$ kubectl create -f pvcstandard.yaml
persistentvolumeclaim/standard-pvc created

$ kubectl create -f pvcpremium.yaml
persistentvolumeclaim/premium-pvc created

 

After the PVCs come up, they will be bound to PVs created by Trident.

 

$ kubectl get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
premium-pvc    Bound    pvc-787a51b6-1877-40bd-bc9f-37f8e41b412d   100Gi      RWO            gcnv-premium-k8s    9m3s
standard-pvc   Bound    pvc-b6744d06-2b8a-461e-a92c-a09294c956fb   100Gi      RWO            gcnv-standard-k8s   11m
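
Because both storage classes set allowVolumeExpansion: true, a bound PVC can be grown in place. As a sketch, patching the claim’s storage request causes Trident to resize the underlying volume:

$ kubectl patch pvc standard-pvc -p '{"spec":{"resources":{"requests":{"storage":"200Gi"}}}}'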

 

The volumes also appear in Google Cloud NetApp Volumes.

 

[Screenshot: the two new volumes in the Google Cloud NetApp Volumes console]

 

Now all we need to do is attach our applications to the PVCs, and we’ll have high-performance, reliable storage for our stateful Kubernetes applications.
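
For instance, a minimal pod that mounts the standard-pvc claim might look like this (a sketch; the pod name, image, and mount path are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: app-with-gcnv
spec:
  containers:
    - name: app
      image: nginx                 # placeholder application image
      volumeMounts:
        - name: data
          mountPath: /data         # where the NetApp volume appears in the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: standard-pvc    # the PVC created earlier

When the pod starts, Trident’s node driver mounts the NFS volume into the container automatically.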

 

That’s all! You can now get started with Trident for Google Cloud NetApp Volumes for all your applications that need high-performance storage. It’s super easy, and NetApp is even planning additional features, like Google Cloud workload identity, zone awareness, and auto configuration. Let’s go!

 
