Tech ONTAP Blogs
Ding! Ding! Great news! NetApp® Astra™ Trident™ software is now in technical preview with Google Cloud NetApp Volumes (NetApp Volumes). This means you can use the same Astra Trident Container Storage Interface (CSI) provisioner you know and love to provision and manage Google Cloud NetApp Volumes persistent volumes on all your Kubernetes clusters in Google Cloud. These can be Google Kubernetes Engine (GKE) standard clusters, self-managed Kubernetes clusters, or both. So you can have the performance and reliability of Google Cloud NetApp Volumes with many Kubernetes distributions - it's entirely up to you.
This blog post introduces Astra Trident for Google Cloud NetApp Volumes with NFSv3 and NFSv4.1 and describes how to configure it to support the needs of your persistent Kubernetes applications. It uses a standard GKE cluster as an example, with Standard and Premium storage pools, although Extreme and Flex are also supported. For more information on using Flex storage pools, see Seamless Kubernetes Storage with Google Cloud NetApp Volumes Flex and Astra Trident.
If you would like to see a video on how to install Trident with Google Cloud NetApp Volumes, please see How To: Install Astra Trident Tech Preview with Google Cloud NetApp Volumes.
NetApp Astra Trident is an open-source CSI provisioner that’s maintained and supported by NetApp. It automates the provisioning and management of Google Cloud NetApp Volumes resources for Kubernetes, simplifying the setup and teardown of persistent volumes and associated storage. (You can find more information about Astra Trident at Learn about Astra Trident.)
You can get all of this up and running in just three easy steps: install Trident, connect it to NetApp Volumes with a back end, and create storage classes.
Before downloading and installing Trident version 24.06 or higher, read the prerequisites. Be sure that your Kubernetes cluster is located in a Virtual Private Cloud (VPC) that is peered to the NetApp Volumes VPC.
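If you use the gcloud CLI, one quick way to verify that peering (the network name below is a placeholder) is to list the peerings on your cluster's VPC:
$ gcloud compute networks peerings list --network=<your-vpc-name>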
You can use two methods to install Trident: using an operator or using tridentctl. For the operator method, you can use a Helm chart or install manually. The tridentctl application can be downloaded, and it operates similarly to kubectl. In this blog, we'll cover installing manually with the operator. (For information about the other installation methods, see Deploy Trident operator using Helm and Install using tridentctl.)
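As a sketch of the Helm route (the repository URL and chart name follow the Trident documentation; the chart version shown assumes the 24.06 numbering scheme):
$ helm repo add netapp-trident https://netapp.github.io/trident-helm-chart
$ helm install trident netapp-trident/trident-operator --version 100.2406.0 --create-namespace --namespace trident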
First, download a copy of Trident to a local computer that has kubectl installed and has kubectl access to your Kubernetes cluster. Be sure to extract the archive after download and change into the resulting directory.
$ wget https://github.com/NetApp/trident/releases/download/v24.06.0/trident-installer-24.06.0.tar.gz
--2024-06-03 16:57:53--  https://github.com/NetApp/trident/releases/download/v24.06.0/trident-installer-24.06.0.tar.gz
Resolving github.com (github.com)... 140.82.112.4
Connecting to github.com (github.com)|140.82.112.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
…
$ tar -xf trident-installer-24.06.0.tar.gz
$ cd trident-installer
Next, install the custom resource definition (CRD) for the Trident orchestrator custom resource (CR). The YAML file for the CRD is included in the bundle you just downloaded.
$ kubectl create -f deploy/crds/trident.netapp.io_tridentorchestrators_crd_post1.16.yaml
customresourcedefinition.apiextensions.k8s.io/tridentorchestrators.trident.netapp.io created
Next, create the trident namespace and deploy the operator along with the service account and role-based access control (RBAC) for the operator.
$ kubectl create ns trident
namespace/trident created
$ kubectl create -f deploy/bundle_post_1_25.yaml
serviceaccount/trident-operator created
clusterrole.rbac.authorization.k8s.io/trident-operator created
clusterrolebinding.rbac.authorization.k8s.io/trident-operator created
deployment.apps/trident-operator created
You should now see the operator appear in your cluster.
$ kubectl get pods -n trident
NAME READY STATUS RESTARTS AGE
trident-operator-76578bb8f6-cj6vh 1/1 Running 0 15s
Deploy the Trident orchestrator CR. This resource deploys several pods: a controller pod and a node pod on each worker node.
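The bundled sample CR is minimal; in the 24.06 bundle it looks similar to this (contents can vary by release):
apiVersion: trident.netapp.io/v1
kind: TridentOrchestrator
metadata:
  name: trident
spec:
  debug: true
  namespace: trident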
$ kubectl apply -f deploy/crds/tridentorchestrator_cr.yaml
tridentorchestrator.trident.netapp.io/trident created
$ kubectl get pods -n trident
NAME READY STATUS RESTARTS AGE
trident-controller-58cb765d9c-7z745 6/6 Running 0 7m22s
trident-node-linux-gsnwg 2/2 Running 1 (6m41s ago) 7m22s
trident-node-linux-j6qxr 2/2 Running 1 (6m48s ago) 7m22s
trident-node-linux-kpxxp 2/2 Running 0 7m22s
trident-operator-76578bb8f6-cj6vh 1/1 Running 0 14m
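If you want to confirm the installed version, you can query Trident directly (tridentctl, included in the installer bundle, has an equivalent version command):
$ kubectl get tridentversions -n trident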
Trident is up and running. Let’s now connect Trident to NetApp Volumes.
Ensure that your Kubernetes cluster is in the VPC peered to NetApp Volumes and that you have network connectivity.
Create a YAML file that allows Astra Trident to access NetApp Volumes for your persistent storage needs. There are many ways to provision volumes to meet your application needs, from choosing different service levels to limiting volume size.
As an example, we will create volumes for our Kubernetes applications within the NetApp Volumes Standard and Premium storage pools, although Extreme and Flex could be used as well. Be sure that the pools you’ll use are already configured in NetApp Volumes before you create the back end.
Create a secret to allow Astra Trident access to your NetApp Volumes. Be sure to download credentials for a Google service account that has the Google Cloud NetApp Volumes Admin role. The secret includes your Google project information as well as the credentials used to create the volumes.
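If you still need to create that key, here is a sketch using the gcloud CLI (the service account e-mail matches the example back end below, and roles/netapp.admin is assumed to be the NetApp Volumes Admin role name):
$ gcloud projects add-iam-policy-binding xxxx-sandbox --member="serviceAccount:gcnvaccount@xxxx-sandbox.iam.gserviceaccount.com" --role="roles/netapp.admin"
$ gcloud iam service-accounts keys create gcnv-key.json --iam-account=gcnvaccount@xxxx-sandbox.iam.gserviceaccount.com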
secret.yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: tbc-gcnv-secret
type: Opaque
stringData:
  private_key_id: 123456789abcdef123456789abcdef123456789a
  private_key: |
    -----BEGIN PRIVATE KEY-----
    znHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m
    znHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m
    znHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m
    znHczZsrrtHisIsAbOguSaPIKeyAZNchRAGzlzZE4jK3bl/qp8B4Kws8zX5ojY9m
    ...
    -----END PRIVATE KEY-----
Deploy the secret in the Trident namespace (or wherever you have Trident running).
$ kubectl create -f secret.yaml -n trident
secret/tbc-gcnv-secret created
A back-end YAML file is used to create the back end.
The following example back-end file can create and access volumes in both a Standard and a Premium NetApp Volumes storage pool; Flex and Extreme could be swapped in or added if you intend to use those as well. Configure the same service levels as the storage pools you have configured in that region for your Kubernetes applications. This example creates volumes in any configured storage pool with the matching characteristics, but you can get more granular by specifying individual storage pool names: add a storagePools list to the back-end file, either at the top level or under an entry in the virtual pools section. As shown, the back-end file includes your project number, location, back-end name, and so on.
gcnv.yaml
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: tbc-gcnv
spec:
  version: 1
  storageDriverName: google-cloud-netapp-volumes
  backendName: volumes-for-kubernetes
  projectNumber: '111122223333'
  location: europe-west6
  apiKey:
    type: service_account
    project_id: xxxx-sandbox
    client_email: gcnvaccount@xxxx-sandbox.iam.gserviceaccount.com
    client_id: '111122223333444455556'
    auth_uri: https://accounts.google.com/o/oauth2/auth
    token_uri: https://oauth2.googleapis.com/token
    auth_provider_x509_cert_url: https://www.googleapis.com/oauth2/v1/certs
    client_x509_cert_url: https://www.googleapis.com/robot/v1/metadata/x509/gcnvaccount%40xxxx-sandbox.iam.gserviceaccount.com
  credentials:
    name: tbc-gcnv-secret
  storage:
    - labels:
        performance: premium
      serviceLevel: premium
    - labels:
        performance: standard
      serviceLevel: standard
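As a sketch, to pin a service level to specific pools, you could add a storagePools list under a virtual pool entry (the pool name here is hypothetical):
  storage:
    - labels:
        performance: premium
      serviceLevel: premium
      storagePools:
        - my-premium-pool # hypothetical pool name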
For more information about using Trident with the Flex service level, see Seamless Kubernetes Storage with Google Cloud NetApp Volumes Flex and Astra Trident.
We can then install the back end using kubectl.
$ kubectl create -f gcnv.yaml -n trident
tridentbackendconfig.trident.netapp.io/tbc-gcnv created
And check to be sure the back end is bound.
$ kubectl get tridentbackendconfig tbc-gcnv -n trident
NAME BACKEND NAME BACKEND UUID PHASE STATUS
tbc-gcnv volumes-for-kubernetes e092c1da-ce28-4825-975b-a8a2531862fd Bound Success
In this case, the back end binds only if the storage pools for the service levels listed in the back-end file have already been created.
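You can list the pools that exist in your region with the gcloud CLI, for example:
$ gcloud netapp storage-pools list --location=europe-west6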
You can also create back ends by using tridentctl, which comes bundled with your download. If you are using a Mac, a version of tridentctl for macOS is located in the extras/macos/bin directory of the installer bundle.
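For example, to inspect the back ends you've created with tridentctl:
$ ./tridentctl get backend -n trident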
Create at least one storage class. In the sample below, we created two storage classes, one for each performance level. (For more information about creating storage classes, see Create a storage class.)
scstandard.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcnv-standard-k8s
provisioner: csi.trident.netapp.io
parameters:
  backendType: "google-cloud-netapp-volumes"
  selector: "performance=standard"
allowVolumeExpansion: true
scpremium.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcnv-premium-k8s
provisioner: csi.trident.netapp.io
parameters:
  backendType: "google-cloud-netapp-volumes"
  selector: "performance=premium"
allowVolumeExpansion: true
$ kubectl create -f scstandard.yaml
storageclass.storage.k8s.io/gcnv-standard-k8s created
$ kubectl create -f scpremium.yaml
storageclass.storage.k8s.io/gcnv-premium-k8s created
We can check the storage classes to be sure they are available.
$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
gcnv-premium-k8s csi.trident.netapp.io Delete Immediate true 109m
gcnv-standard-k8s csi.trident.netapp.io Delete Immediate true 109m
premium-rwo pd.csi.storage.gke.io Delete WaitForFirstConsumer true 29h
standard kubernetes.io/gce-pd Delete Immediate true 29h
standard-rwo (default) pd.csi.storage.gke.io Delete WaitForFirstConsumer true 29h
Now you're ready to use your cluster to run stateful Kubernetes applications. Let's check that out by creating a persistent volume claim (PVC) and seeing what happens. Make sure you have network reachability from your cluster to Google Cloud NetApp Volumes.
Let's start with two basic PVCs: one that maps to the Standard service level and one that maps to the Premium service level. Of course, the ReadWriteMany (RWX), ReadOnlyMany (ROX), and ReadWriteOncePod (RWOP) access modes are also supported. For validation, we will create these in the default Kubernetes namespace, although any namespace could be used.
pvcstandard.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: standard-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: gcnv-standard-k8s
pvcpremium.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: premium-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: gcnv-premium-k8s
$ kubectl create -f pvcstandard.yaml
persistentvolumeclaim/standard-pvc created
$ kubectl create -f pvcpremium.yaml
persistentvolumeclaim/premium-pvc created
After the PVCs come up, we can see them bound to persistent volumes (PVs) that Astra Trident created for us.
$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
premium-pvc Bound pvc-787a51b6-1877-40bd-bc9f-37f8e41b412d 100Gi RWO gcnv-premium-k8s 9m3s
standard-pvc Bound pvc-b6744d06-2b8a-461e-a92c-a09294c956fb 100Gi RWO gcnv-standard-k8s 11m
We can also see the volumes in NetApp Volumes.
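For example, with the gcloud CLI:
$ gcloud netapp volumes list --location=europe-west6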
Now all we need to do is attach our application to the PVC, and we'll have high-performance, reliable storage for our stateful Kubernetes applications.
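As a minimal sketch (the pod name and image are illustrative), a pod can mount the standard PVC like this:
apiVersion: v1
kind: Pod
metadata:
  name: app-using-gcnv # illustrative name
spec:
  containers:
    - name: web
      image: nginx # illustrative image
      volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: standard-pvc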
That's all! You can now get started with Astra Trident for Google Cloud NetApp Volumes for all your applications that need high-performance storage. It's super easy. General availability of Astra Trident with Google Cloud NetApp Volumes is planned by the end of the year, and we're even planning additional features, like Google Cloud workload identity, zone awareness, and automatic configuration. Let's go!