Tech ONTAP Blogs
Kubernetes has revolutionized the way we deploy and manage applications on premises and in the cloud, offering unprecedented levels of flexibility and scalability. As organizations diversify their infrastructure across hybrid and multicloud environments, a versatile and resilient storage solution has never been more important.
The ideal Kubernetes storage solution must be inherently scalable, supporting the dynamic nature of containerized applications without introducing bottlenecks or single points of failure. It must offer simplicity in provisioning and management, allowing admins to define storage resources and developers to dynamically provision volumes. Resilience is another key feature; the storage system must ensure data durability and offer robust disaster recovery capabilities to withstand the potential challenges of application and infrastructure failures.
Google Cloud NetApp Volumes (NetApp Volumes) is a fully managed file storage service in Google Cloud, built on NetApp® ONTAP® technology. In addition to the existing Standard, Premium, and Extreme NetApp Volumes service levels, the recently released Flex service level offers software-defined ONTAP storage that’s fine-tuned for Kubernetes workloads and operated by Google Cloud. Combined with NetApp Astra™ Trident™, a Container Storage Interface (CSI) compliant dynamic storage orchestrator for Kubernetes, NetApp Volumes Flex is an ideal storage solution for your Kubernetes workloads.
This blog first delves into why NetApp Volumes Flex and Trident work so well for Kubernetes. It then steps through creating a NetApp Volumes Flex storage pool, configuring a Trident storage backend to use the pool (available as a tech preview in June 2024, with general availability slated for later this year), and deploying a sample Kubernetes application that uses Trident’s automated volume provisioning.
When looking for persistent storage solutions for Kubernetes workloads, it's important to consider several attributes to make sure that the storage will meet the needs of your applications and the Kubernetes platform itself. Here are some important attributes to consider, along with additional information about how NetApp Volumes Flex combined with Trident meets and exceeds these requirements.
Google Cloud NetApp Volumes with the Flex service level and NetApp Astra Trident provide robust persistent storage for any Kubernetes workload. Next, let’s see how easy it is to set up a NetApp Volumes Flex storage pool.
To create our Google Cloud NetApp Volumes storage pool, we use the GCP console; however, as mentioned earlier, the gcloud CLI, REST API, Terraform, and Pulumi are all supported options.
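As a sketch of the CLI route, the equivalent pool could be created with a command like the one below. The flag names and values (capacity, network name) are assumptions to verify against `gcloud netapp storage-pools create --help` for your environment; the command is composed into a variable here so that you can review it before running it.

```shell
# Illustrative gcloud equivalent of the console steps (flags and values are
# assumptions -- verify with `gcloud netapp storage-pools create --help`).
CREATE_POOL_CMD='gcloud netapp storage-pools create gcnv-flex-asiaeast1 \
  --location=asia-east1 \
  --service-level=flex \
  --capacity=2048 \
  --network=name=my-vpc'

# Print the command for review; run it with: eval "$CREATE_POOL_CMD"
echo "$CREATE_POOL_CMD"
```

When you’re satisfied with the parameters, execute it with `eval "$CREATE_POOL_CMD"`.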
In the GCP console, navigate to the NetApp Volumes storage pools page and click Create Storage Pool.
At the top of the Create Storage Pool page, fill out the following fields:
Scrolling down on the same page, fill out the following fields:
Scrolling further down on the Create Storage Pool page, choose the remaining options and then click Create.
You are redirected to the main Storage Pools page, where you can monitor the deployment of the Flex storage pool.
After about 10 minutes, the Flex storage pool should go into a Ready state.
We’re now ready to move on to the next step and create our Trident backend configuration.
If you’re following along in the remaining sections, make sure that you meet the following prerequisites:
- The gcloud CLI installed and authenticated to your GCP project
- A service account key (JSON file) with permissions to manage NetApp Volumes resources
- A Kubernetes cluster (such as GKE) with Astra Trident installed, and the kubectl, tridentctl, and helm tools available on your workstation
To create the Trident backend, you need to set a handful of unique values to variables. Run the following commands on your workstation:
PROJECT_NUMBER=$(gcloud projects describe --format='value(projectNumber)' \
$(gcloud config get-value project))
LOCATION="asia-east1"
SA_JSON_PATH="/path/to/gcp-sa-key.json"
With the unique values set as local variables, we can run a command to create the Trident backend configuration file.
cat <<EOF > gcnv-backend.json
{
  "version": 1,
  "storageDriverName": "google-cloud-netapp-volumes",
  "serviceLevel": "flex",
  "projectNumber": "$PROJECT_NUMBER",
  "location": "$LOCATION",
  "apiKey": $(cat $SA_JSON_PATH)
}
EOF
Feel free to inspect the resulting file; the key items are the “google-cloud-netapp-volumes” storage driver and the “flex” service level. All other values depend on your GCP project and the location of the previously configured storage pool.
Note: Depending on your use case, you can also define multiple service levels within a single backend config, with each service level mapping to its own Kubernetes storage class. This configuration provides greater end-user flexibility while keeping management overhead low.
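As an illustration of that note, a multi-service-level backend might look like the sketch below. The `storage` virtual-pool section and its `labels` follow Trident’s virtual pool convention, but treat the exact keys as assumptions to verify against the Trident documentation for your release; the angle-bracket placeholders must be filled in as in the single-pool example above.

```shell
# Sketch of a backend exposing two service levels as virtual pools.
# The "storage"/"labels" keys follow Trident's virtual-pool convention
# (an assumption to verify for your Trident release); angle-bracket
# placeholders are illustrative, not real values.
cat <<'EOF' > gcnv-backend-multi.json
{
  "version": 1,
  "storageDriverName": "google-cloud-netapp-volumes",
  "projectNumber": "<project-number>",
  "location": "asia-east1",
  "apiKey": "<paste the service-account key JSON here, as in the example above>",
  "storage": [
    { "serviceLevel": "flex",    "labels": { "tier": "flex" } },
    { "serviceLevel": "premium", "labels": { "tier": "premium" } }
  ]
}
EOF
```

A storage class can then target one virtual pool with a `selector` parameter (for example, `selector: "tier=flex"`), giving each service level its own class from a single backend.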
Finally, create the Trident backend with the following command.
tridentctl create backend -n trident -f gcnv-backend.json
If done correctly, your console should output a table confirming the successful creation of the backend.
+--------------------------------+-----------------------------+--------------------------------------+--------+------------+---------+
| NAME                           | STORAGE DRIVER              | UUID                                 | STATE  | USER-STATE | VOLUMES |
+--------------------------------+-----------------------------+--------------------------------------+--------+------------+---------+
| googlecloudnetappvolumes_f8e90 | google-cloud-netapp-volumes | 369d9dc4-5868-476c-97a3-672668178927 | online | normal     |       0 |
+--------------------------------+-----------------------------+--------------------------------------+--------+------------+---------+
Now that the storage backend is created, we’re ready to create a Kubernetes storage class.
Kubernetes storage classes allow administrators to abstract away the complexity of storage, so that developers can make simple choices when provisioning applications. In this section we’ll create a Kubernetes storage class that uses our NetApp Volumes storage pool and corresponding Trident backend, and set it as the default storage class.
First, let’s view our current storage classes.
$ kubectl get sc
NAME                     PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
premium-rwo              pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   31h
standard                 kubernetes.io/gce-pd    Delete          Immediate              true                   31h
standard-rwo (default)   pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   31h
This GKE cluster has three storage classes, of which standard-rwo is set as the default (yours may be different). Let’s patch the default storage class to remove it as the default.
DEFAULT_SC=$(kubectl get sc | grep \(default\) | awk '{print $1}')
kubectl patch storageclass $DEFAULT_SC -p \
'{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
We’re now ready to create our new NetApp Volumes Flex storage class, which we’ll set as our default storage class.
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: netapp-gcnv-flex
  annotations:
    storageclass.kubernetes.io/is-default-class: 'true'
allowVolumeExpansion: true
parameters:
  backendType: google-cloud-netapp-volumes
provisioner: csi.trident.netapp.io
reclaimPolicy: Delete
volumeBindingMode: Immediate
EOF
Finally, let’s view our current list of storage classes.
$ kubectl get sc
NAME                         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
netapp-gcnv-flex (default)   csi.trident.netapp.io   Delete          Immediate              true                   95s
premium-rwo                  pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   2d
standard                     kubernetes.io/gce-pd    Delete          Immediate              true                   2d
standard-rwo                 pd.csi.storage.gke.io   Delete          WaitForFirstConsumer   true                   2d
Our new storage class, netapp-gcnv-flex, is now our default storage class. We’re now ready to deploy a sample application that will use this default.
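To confirm the default class works end to end before deploying a full application, you can create a quick throwaway claim; the manifest below deliberately omits storageClassName so that it binds via the new default (the claim name and size are illustrative).

```shell
# Write a throwaway PVC manifest; omitting storageClassName means the
# cluster's default class (netapp-gcnv-flex) is used. Name and size
# are illustrative.
cat <<'EOF' > demo-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-flex-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
EOF
```

Apply it with `kubectl apply -f demo-pvc.yaml`, confirm that it reaches a Bound state with `kubectl get pvc demo-flex-pvc`, and delete it when done.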
Now that we’ve created our NetApp Volumes storage pool, Trident backend, and corresponding Kubernetes storage class, we’re ready to deploy a Kubernetes application. We’ll use the reliable WordPress application, because the default helm command can be easily modified to use ReadWriteMany persistent volume claims.
Back in the terminal, run the following command to deploy the application.
helm install wordpress -n wordpress --create-namespace bitnami/wordpress \
--set replicaCount=3 --set persistence.accessModes={ReadWriteMany}
This command installs the default WordPress app with two modifications: the web tier runs three replicas (replicaCount=3), and its persistent volume claim requests ReadWriteMany access so that all replicas can share the same volume.
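If you prefer a values file over --set flags, the same two overrides can be captured like this (same chart keys as the command above):

```shell
# Same overrides as the --set flags above, expressed as a Helm values file.
cat <<'EOF' > wordpress-values.yaml
replicaCount: 3
persistence:
  accessModes:
    - ReadWriteMany
EOF
```

Then install with `helm install wordpress -n wordpress --create-namespace bitnami/wordpress -f wordpress-values.yaml`.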
After several minutes, run the following command to view the status of the WordPress app.
$ kubectl -n wordpress get all,pvc
NAME                             READY   STATUS    RESTARTS        AGE
pod/wordpress-6cdd969f48-k4kzf   1/1     Running   0               3m45s
pod/wordpress-6cdd969f48-tdj8q   1/1     Running   1 (79s ago)     3m44s
pod/wordpress-6cdd969f48-x7gk2   1/1     Running   1 (2m37s ago)   3m44s
pod/wordpress-mariadb-0          1/1     Running   0               3m45s

NAME                        TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                      AGE
service/wordpress           LoadBalancer   172.17.231.218   104.155.197.6   80:30864/TCP,443:30231/TCP   3m47s
service/wordpress-mariadb   ClusterIP      172.17.195.240   <none>          3306/TCP                     3m47s

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/wordpress   3/3     3            3           3m47s

NAME                                   DESIRED   CURRENT   READY   AGE
replicaset.apps/wordpress-6cdd969f48   3         3         3       3m46s

NAME                                 READY   AGE
statefulset.apps/wordpress-mariadb   1/1     3m46s

NAME                                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
persistentvolumeclaim/data-wordpress-mariadb-0   Bound    pvc-6a99d25b-80a5-45f0-a802-655cc41890bb   8Gi        RWO            netapp-gcnv-flex   3m47s
persistentvolumeclaim/wordpress                  Bound    pvc-45b0bc1c-fc75-4a4c-b240-da7b1bfc608d   10Gi       RWX            netapp-gcnv-flex   3m49s
If the steps to this point have been completed successfully, you should see the volumes in a Bound state and the pods in a Ready state. Note the small 8GiB (RWO) and 10GiB (RWX) volumes; the ability to provision volumes this small is a key capability of the Flex service level.
Back in the GCP Console, select Volumes in the left pane to view the dynamically provisioned volumes. Note that the volumes belong to the storage pool we previously created, gcnv-flex-asiaeast1.
Click a volume name to view additional details about the volume, and then click the Snapshots tab header. As stated earlier, Snapshot copies are space-efficient copies of the volume, and they can be used to either revert the existing volume to a previous point in time or to restore to a new copy.
Finally, click the Replication tab header. As stated earlier, a volume can be asynchronously replicated to another storage pool in a different region. The destination volume will be in a read-only state while replication is active, but it can be transitioned to read-write by stopping the replication. You can then resume or reverse the replication.
As we've explored throughout this blog post, the integration of the Google Cloud NetApp Volumes Flex service level with NetApp Astra Trident presents a compelling storage solution for Kubernetes environments. We've seen how NetApp Volumes Flex is architected to meet the scalability, simplicity, and resilience demands of modern containerized applications, while Trident's role as a dynamic orchestrator simplifies the management of persistent storage.
By walking through the setup of a NetApp Volumes Flex storage pool, configuring Trident to leverage this pool, and deploying a sample application to demonstrate Trident's automated volume provisioning, we've highlighted the practical steps necessary to get started with this powerful combination.
Whether you're just starting out with Kubernetes or looking to optimize your existing infrastructure, the combination of NetApp Volumes Flex and Trident offers a path to a more streamlined, resilient, and scalable application future. With storage no longer a hurdle, your teams can focus on what truly matters—delivering value and innovation through great software.