Tech ONTAP Blogs
Imagine your team running a customer‑facing application on Kubernetes, backed by a critical database. As usage grows, so does the data, but storage capacity doesn’t automatically keep up. Traditionally, the team must constantly monitor utilization, watch for alerts, and manually expand volumes before the application is affected. Each intervention carries risk and pulls engineers away from more strategic work. What if you could automate this entire process, allowing storage to grow dynamically as your workloads expand and evolve?
This isn't a futuristic dream; it's a present-day reality, thanks to the powerful combination of the NetApp® Trident™ Automatic volume expansion feature and Google Cloud NetApp Volumes (GCNV). Together they deliver a "set it and forget it" storage experience for volumes, eliminating manual intervention and ensuring your applications don't run out of space.
Monitoring storage capacity often starts with alerts: a volume is about to exhaust available space. These alerts are essential, but they’re inherently reactive. They surface the problem only after capacity pressure is already building, typically when teams have limited time to respond and workloads are at greater risk of disruption.
In many environments, resolving these alerts is still a largely manual effort. Engineers investigate which workloads are consuming space, determine whether growth is expected or anomalous, and then take corrective action—resizing volumes. Each step requires context, coordination, and careful execution, especially in production systems where mistakes can lead to downtime or data loss.
Over time, this alert‑and‑fix cycle becomes operationally expensive. Teams spend significant effort responding to the same types of capacity warnings, often treating symptoms rather than addressing underlying inefficiencies. Without more automated, proactive capacity management, storage monitoring remains a constant source of interruption rather than a reliably controlled part of the platform.
The Trident Automatic volume expansion feature completely changes the dynamic of storage management for Kubernetes stateful workloads. Instead of reacting to capacity alerts, you can proactively tell Trident how to handle growth. All you need to do is specify at what threshold and how much you want the volume to grow. Once this custom resource policy is applied to a storage class, Trident takes over. It monitors every persistent volume created with that storage class, and, when a volume hits the defined threshold, it automatically and seamlessly increases its size without any manual effort. This simple automation lowers operational overhead and gives you the confidence to scale your operations without compromising reliability.
In the example diagram below, 7Gi of data occupies a 10Gi volume. Because the threshold is set at 70%, the volume automatically expands by 10%. As we write more data (10Gi in total), Trident keeps increasing the volume size by 10% until usage drops back below the 70% threshold, so utilization ultimately stabilizes between roughly 60% and 70%. Both the threshold and the expansion amount are configurable.
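To make the math concrete, here is a small shell sketch (illustration only, not Trident code) that iterates the same rule: grow by 10% whenever utilization exceeds 70%, starting from 10Gi of data written to a 10Gi volume. Sizes are in MiB so plain integer arithmetic works:

```shell
#!/bin/sh
# Toy simulation of the autogrow rule described above; not Trident code.
size=10240   # 10Gi volume, in MiB
used=10240   # 10Gi of data written
steps=0
# Grow the volume by 10% while utilization exceeds the 70% threshold.
while [ $((used * 100)) -gt $((size * 70)) ]; do
  size=$((size + size / 10))
  steps=$((steps + 1))
done
echo "final size: ${size} MiB after ${steps} expansions"
echo "utilization: $((used * 100 / size))%"
```

Under these defaults the simulation stops at 14,991 MiB (~14.6Gi, 68% used) after four expansions, which lines up with the 15Gi the volume reaches later in this walkthrough.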
Configuring Automatic volume expansion in Trident mainly involves adding an Autogrow custom resource (CR) and a new storage class with Autogrow enabled. All other steps are the same as a standard Trident deployment. Any GA GCNV service level can be used.
Before you begin, ensure you have enough space in your GCNV storage pool for volume expansion.
Create a storage pool on GCNV. This can be done via the UI (shown below), CLI (gcloud commands) or API.
We use Helm to install Trident, but there are several installation methods; choose the one that works best for you. To use Helm, run the following command from a workstation that has kubectl and helm installed and a kubeconfig that points to your Kubernetes cluster. In our case, we are using GKE with cloud identity, but any Kubernetes distribution may be used. For more information on installing via Helm, please see the Trident documentation.
~$ helm install trident netapp-trident/trident-operator --version 100.2602.0 --create-namespace --namespace trident --set cloudProvider="GCP" --set cloudIdentity="'iam.gke.io/gcp-service-account: trident-gke-cloud-identity@cvs-pm-host-1p.iam.gserviceaccount.com'"
After Trident is running, add the Google Cloud NetApp Volumes backend and confirm it is Bound.
~$ kubectl create -f gcnv_backend1p_cloud_identity.yaml -n trident
tridentbackendconfig.trident.netapp.io/tbc-gcnv-flex-nfs-central1a created
~$ kubectl get tbc -n trident
NAME                          BACKEND NAME             BACKEND UUID                           PHASE   STATUS
tbc-gcnv-flex-nfs-central1a   volumes-for-kubernetes   47f29230-6bde-4c78-8196-31046e6abba9   Bound   Success
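The backend file itself isn't shown above. For orientation, a minimal TridentBackendConfig for GCNV with cloud identity might look roughly like the sketch below; the project number, location, and pool name are placeholders, and you should consult the Trident backend documentation for the exact schema in your version:

```yaml
# Hypothetical sketch only -- adjust every value to your environment.
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: tbc-gcnv-flex-nfs-central1a
spec:
  version: 1
  storageDriverName: google-cloud-netapp-volumes
  backendName: volumes-for-kubernetes
  projectNumber: "123456789012"   # placeholder
  location: us-central1           # placeholder
  serviceLevel: flex
  nasType: nfs
  storagePools:
    - volumes-for-kubernetes      # placeholder pool name
  # With cloud identity (workload identity), no apiKey section is needed.
```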
Create a TridentAutogrowPolicy custom resource (CR). Specify the trigger threshold (70% in this example), how much the volume should grow when the threshold is reached, and the maximum size for the volume. Note that this CR is cluster-scoped, not namespaced. Our example is below.
autogrow.yaml
apiVersion: trident.netapp.io/v1
kind: TridentAutogrowPolicy
metadata:
  name: grow-volumes
spec:
  usedThreshold: "70%"
  growthAmount: "10%"
  maxSize: "500Gi"
Apply the policy to your cluster.
~$ kubectl apply -f autogrow.yaml
Create a Trident storage class. The autogrow policy needs to be included in the annotations as shown below.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcnv-flex-k8s-nfs-autogrow
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
    trident.netapp.io/autogrowPolicy: "grow-volumes"
provisioner: csi.trident.netapp.io
parameters:
  backendType: "google-cloud-netapp-volumes"
  selector: "performance=flex"
  trident.netapp.io/nasType: "nfs"
allowVolumeExpansion: true
Create a PVC manifest that uses the autogrow storage class.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: flex-pvc-rwx1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: gcnv-flex-k8s-nfs-autogrow
Apply the PVC manifest. This invokes Trident to create a PV and an underlying volume on Google Cloud NetApp Volumes.
~$ kubectl create -f pvcsampleflexrwx1.yaml
persistentvolumeclaim/flex-pvc-rwx1 created
~$ kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                 VOLUMEATTRIBUTESCLASS   AGE
flex-pvc-rwx1   Bound    pvc-c7ad94f5-1614-463b-9441-2d9fa100e9f1   10Gi       RWX            gcnv-flex-k8s-nfs-autogrow   <unset>                 3m39s
We can also check it on GCNV, where the volume shows as 10Gi and currently empty.
That’s it! Your volume is now set to autogrow. Let’s start filling up the volume with data, and watch it grow.
Create a busybox pod and attach it to the PVC you just created.
This is the manifest for our busybox pod. It mounts the newly created 10Gi volume at /mnt/storage.
busybox.yaml
---
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  volumes:
    - name: busybox-data
      persistentVolumeClaim:
        claimName: flex-pvc-rwx1
  containers:
    - image: busybox:latest
      command:
        - sleep
        - "7200"
      imagePullPolicy: IfNotPresent
      name: busybox
      volumeMounts:
        - name: busybox-data
          mountPath: /mnt/storage
  restartPolicy: Always
Deploy the pod.
~$ kubectl create -f busybox.yaml
pod/busybox created
~$ kubectl get pods
NAME      READY   STATUS    RESTARTS   AGE
busybox   1/1     Running   0          5m
Attach to the busybox pod and check the volume size. We can see it is 10G, with 256k used for metadata.
~$ kubectl exec -it busybox -- sh
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  95.8G      6.3G     89.5G   7% /
tmpfs                    64.0M         0     64.0M   0% /dev
10.165.129.3:/pvc-c7ad94f5-1614-463b-9441-2d9fa100e9f1
                         10.0G    256.0K     10.0G   0% /mnt/storage
/dev/root                95.8G      6.3G     89.5G   7% /etc/hosts
/dev/root                95.8G      6.3G     89.5G   7% /dev/termination-log
/dev/root                95.8G      6.3G     89.5G   7% /etc/hostname
/dev/root                95.8G      6.3G     89.5G   7% /etc/resolv.conf
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     2.7G     12.0K      2.7G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                     1.9G         0      1.9G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/interrupts
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/latency_stats
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                     1.9G         0      1.9G   0% /proc/scsi
tmpfs                     1.9G         0      1.9G   0% /sys/firmware
Next, let's write data to the volume. We write 10,000 MiB (~10G) to the volume using the Linux dd command.
/ # dd if=/dev/urandom of=/mnt/storage/random.txt bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (9.8GB) copied, 157.503480 seconds, 63.5MB/s
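The byte count dd reports can look surprising next to the "10G" we asked for. A quick arithmetic sketch (illustration only, not part of the procedure) shows that 10,000 blocks of 1 MiB is about 9.8 GiB, matching dd's summary line:

```shell
#!/bin/sh
# Arithmetic only: what dd's "10000 x 1MiB" works out to.
bytes=$((10000 * 1024 * 1024))
echo "total bytes: ${bytes}"
# Tenths of a GiB, rounded, so 9.77 GiB prints as 9.8.
tenths=$(( (bytes * 10 + 536870912) / 1073741824 ))
echo "approx size: $((tenths / 10)).$((tenths % 10)) GiB"
```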
Let's take another look at that filesystem:
/ # df -h
Filesystem                Size      Used Available Use% Mounted on
overlay                  95.8G      6.3G     89.5G   7% /
tmpfs                    64.0M         0     64.0M   0% /dev
10.165.129.3:/pvc-c7ad94f5-1614-463b-9441-2d9fa100e9f1
                         15.0G     10.0G      5.0G  67% /mnt/storage
/dev/root                95.8G      6.3G     89.5G   7% /etc/hosts
/dev/root                95.8G      6.3G     89.5G   7% /dev/termination-log
/dev/root                95.8G      6.3G     89.5G   7% /etc/hostname
/dev/root                95.8G      6.3G     89.5G   7% /etc/resolv.conf
shm                      64.0M         0     64.0M   0% /dev/shm
tmpfs                     2.7G     12.0K      2.7G   0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs                     1.9G         0      1.9G   0% /proc/acpi
tmpfs                    64.0M         0     64.0M   0% /proc/interrupts
tmpfs                    64.0M         0     64.0M   0% /proc/kcore
tmpfs                    64.0M         0     64.0M   0% /proc/keys
tmpfs                    64.0M         0     64.0M   0% /proc/latency_stats
tmpfs                    64.0M         0     64.0M   0% /proc/timer_list
tmpfs                     1.9G         0      1.9G   0% /proc/scsi
tmpfs                     1.9G         0      1.9G   0% /sys/firmware
We can see that the volume grew to 15G and now holds 10G of data.
~$ kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                 VOLUMEATTRIBUTESCLASS   AGE
flex-pvc-rwx1   Bound    pvc-c7ad94f5-1614-463b-9441-2d9fa100e9f1   15Gi       RWX            gcnv-flex-k8s-nfs-autogrow   <unset>                 8m45s
And looking at the backend:
The volume grew to 15G, settling at 67% utilization, just under the 70% trigger threshold.
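That settling point is simple arithmetic; a one-line sketch (illustration only):

```shell
#!/bin/sh
# 10G of data on the expanded 15G volume, in whole percent (truncated).
util=$((10 * 100 / 15))
echo "utilization: ${util}%"   # 66.7%, which df rounds up to 67%
```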
Stop reacting to storage capacity issues and start automating your Kubernetes storage environment. By combining NetApp Trident's Autogrow feature with the powerful Google Cloud NetApp Volumes service, you can finally achieve a true "set it and forget it" experience for volumes, letting your team focus less on infrastructure management and more on delivering value and innovation. Learn more about Google Cloud NetApp Volumes and Trident with Google Cloud NetApp Volumes!