Tech ONTAP Blogs
In the dynamic world of Kubernetes, where pods scale and storage demands fluctuate, efficient resource management isn't just a best practice; it's a necessity. Enter NetApp Trident™, the open-source storage orchestrator that simplifies persistent storage for containerized applications. As a Container Storage Interface (CSI) driver, Trident handles volume provisioning, snapshots, and more across diverse backends such as ONTAP, Amazon FSx for NetApp ONTAP (FSxN), Google Cloud NetApp Volumes (GCNV), and Azure NetApp Files (ANF). But even a robust tool like Trident can falter if it's starved for CPU or memory. That's where Kubernetes resource requests and limits come into play, ensuring your storage operations run smoothly without overwhelming your cluster.
In this post, we'll break down the fundamentals of resource requests and limits and explore how to configure them for the Trident controller and node pod containers. Whether you're deploying Trident via Helm or customizing it with the TridentOrchestrator CRD, tuning these settings will boost performance, prevent evictions, and optimize costs. Let's dive in.
At its core, Kubernetes uses resource requests and limits to allocate and cap CPU and memory for containers/pods. These settings live in your pod specs under the resources field and play a dual role: guiding scheduling decisions and enforcing runtime boundaries.
Here's how they differ by resource type:
| Resources | Requests Behavior | Limits Behavior |
|---|---|---|
| CPU | Guarantees a minimum CPU share and informs scheduling decisions. | Throttles usage above the cap; no termination, but performance dips. |
| Memory | Reserves memory for scheduling; pods can use more if free memory exists. | Enforces a hard cap; the container is OOM-killed when it exceeds its limit under memory pressure. |
Configuring them is straightforward. Here is an example in a Deployment spec:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app-container
        image: nginx:1.21
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
This ensures each pod requests 0.25 CPU and 64Mi memory but won't exceed 0.5 CPU or 128Mi memory. Without limits, your container could theoretically hog the entire node—risky in shared clusters.
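If you want to confirm what was actually applied, you can read the resources back from a running pod. This is a quick sanity check, not part of the example above; the manifest file name is hypothetical, and the label selector comes from the Deployment shown earlier:

# Apply the example Deployment (file name is illustrative)
kubectl apply -f my-app.yaml
# Print the requests and limits of the first matching pod's container
kubectl get pods -l app=my-app \
  -o jsonpath='{.items[0].spec.containers[0].resources}'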
Trident deploys as a controller pod (one instance managing storage operations) and node pods (one per worker node for mounting volumes). These pods include sidecar containers such as the CSI provisioner, attacher, resizer, and snapshotter, which handle dynamic provisioning, attachment, resizing, and snapshots.
Poor resource tuning here can lead to:
- Pod evictions or OOM kills of Trident containers under memory pressure.
- CPU throttling that slows volume provisioning, attachment, and mount operations.
- Scheduling failures when requests are set higher than your nodes can accommodate.
By setting requests and limits, you guarantee Trident's core functions (e.g. volume provisioning, volume snapshots) while protecting against rogue consumption—crucial for stateful apps relying on persistent volumes.
Trident applies sensible default resource request values for both controller and node pod containers, while intentionally omitting default CPU and memory limits.
Trident’s actual resource consumption varies significantly depending on the size and scale of the worker nodes and the number of persistent volumes. Setting restrictive default limits could easily degrade performance or cause pod evictions in typical production deployments. As a result, Trident ships without default limits out of the box, giving operators maximum flexibility.
When running in resource-constrained or multi-tenant environments where strict containment is required, administrators can (and should) add explicit CPU and memory limits to the Trident controller and node pods to match their specific policies and workload characteristics.
Controller Pod Defaults
| Container | CPU Request | Memory Request | CPU Limit | Memory Limit |
|---|---|---|---|---|
| trident-main | 10m | 80Mi | None | None |
| csi-provisioner | 2m | 20Mi | None | None |
| csi-attacher | 2m | 20Mi | None | None |
| csi-resizer | 3m | 20Mi | None | None |
| csi-snapshotter | 2m | 20Mi | None | None |
| trident-autosupport | 1m | 30Mi | None | None |
Node Pod Defaults (Linux)
| Container | CPU Request | Memory Request | CPU Limit | Memory Limit |
|---|---|---|---|---|
| trident-main | 10m | 60Mi | None | None |
| node-driver-registrar | 1m | 10Mi | None | None |
Node Pod Defaults (Windows)
| Container | CPU Request | Memory Request | CPU Limit | Memory Limit |
|---|---|---|---|---|
| trident-main | 10m | 60Mi | None | None |
| node-driver-registrar | 6m | 40Mi | None | None |
| liveness-probe | 2m | 40Mi | None | None |
To configure Kubernetes resource requests and limits for individual containers in the Trident controller and node pods, you can specify CPU and memory constraints in two ways: in the Helm chart's values file when installing or upgrading with Helm, or in the spec.resources section of the TridentOrchestrator (TORC) custom resource when installing with the Trident operator.
Both methods support granular, container-level configuration for all Trident containers, including trident-main, the CSI sidecars (provisioner, attacher, resizer, snapshotter, registrar), and trident-autosupport, across both Linux and Windows node platforms. This enables better resource planning and prevents resource contention.
Note: Always preserve exact container names and YAML indentation to avoid parse errors.
Example 1: Helm chart values.yaml (resources section)

resources:
  controller:
    trident-main:
      requests:
        cpu: 50m
        memory: 128Mi
      limits:
        cpu: 100m
        memory: 256Mi
    # sidecars
    csi-provisioner:
      requests:
        cpu: 20m
        memory: 50Mi
      limits:
        cpu: 40m
        memory: 100Mi
    csi-attacher:
      requests:
        cpu: 20m
        memory: 25Mi
      limits:
        cpu: 80m
        memory: 70Mi
    csi-resizer:
      requests:
        cpu: 40m
        memory: 30Mi
      limits:
        cpu: 90m
        memory: 75Mi
    csi-snapshotter:
      requests:
        cpu: 30m
        memory: 22Mi
      limits:
        cpu: 85m
        memory: 68Mi
    trident-autosupport:
      requests:
        cpu: 20m
        memory: 35Mi
      limits:
        cpu: 60m
        memory: 130Mi
  node:
    linux:
      trident-main:
        requests:
          cpu: 75m
          memory: 100Mi
        limits:
          cpu: 150m
          memory: 200Mi
      # sidecars
      node-driver-registrar:
        requests:
          cpu: 20m
          memory: 15Mi
        limits:
          cpu: 55m
          memory: 35Mi
    windows:
      trident-main:
        requests:
          cpu: 8m
          memory: 45Mi
        limits:
          cpu: 180m
          memory: 140Mi
      # sidecars
      node-driver-registrar:
        requests:
          cpu: 30m
          memory: 45Mi
        limits:
          cpu: 90m
          memory: 135Mi
      liveness-probe:
        requests:
          cpu: 20m
          memory: 45Mi
        limits:
          cpu: 55m
          memory: 70Mi
  operator:
    requests:
      cpu: 20m
      memory: 60Mi
    limits:
      cpu: 40m
      memory: 120Mi

Example 2: TridentOrchestrator (TORC) custom resource

apiVersion: trident.netapp.io/v1
kind: TridentOrchestrator
metadata:
  name: trident
spec:
  debug: true
  namespace: trident
  windows: true
  resources:
    controller:
      trident-main:
        requests:
          cpu: 10m
          memory: 80Mi
        limits:
          cpu: 200m
          memory: 256Mi
      # sidecars
      csi-provisioner:
        requests:
          cpu: 20m
          memory: 20Mi
        limits:
          cpu: 100m
          memory: 64Mi
      csi-attacher:
        requests:
          cpu: 50m
          memory: 20Mi
        limits:
          cpu: 100m
          memory: 64Mi
      csi-resizer:
        requests:
          cpu: 30m
          memory: 20Mi
        limits:
          cpu: 100m
          memory: 64Mi
      csi-snapshotter:
        requests:
          cpu: 25m
          memory: 20Mi
        limits:
          cpu: 100m
          memory: 64Mi
      trident-autosupport:
        requests:
          cpu: 10m
          memory: 30Mi
        limits:
          cpu: 50m
          memory: 128Mi
    node:
      linux:
        trident-main:
          requests:
            cpu: 10m
            memory: 60Mi
          limits:
            cpu: 200m
            memory: 256Mi
        # sidecars
        node-driver-registrar:
          requests:
            cpu: 10m
            memory: 10Mi
          limits:
            cpu: 50m
            memory: 32Mi
      windows:
        trident-main:
          requests:
            cpu: 60m
            memory: 40Mi
          limits:
            cpu: 200m
            memory: 128Mi
        # sidecars
        node-driver-registrar:
          requests:
            cpu: 40m
            memory: 40Mi
          limits:
            cpu: 100m
            memory: 128Mi
        liveness-probe:
          requests:
            cpu: 20m
            memory: 40Mi
          limits:
            cpu: 50m
            memory: 64Mi
Post-deployment, you can verify resource values with kubectl describe torc trident -n trident and check status.currentInstallationParams.resources.
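For a quicker check, a jsonpath query can pull just that field. This sketch assumes an operator-based install with the TridentOrchestrator named trident (the -n flag mirrors the command above; the TORC resource itself is not namespaced):

kubectl describe torc trident -n trident
# Or print only the effective resources from the status:
kubectl get torc trident -n trident \
  -o jsonpath='{.status.currentInstallationParams.resources}'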
Over time, you might need to adjust the CPU and memory request or limit values for the controller or node pod containers. The method for updating these resources depends on whether you originally installed Trident using the Trident operator (TridentOrchestrator CR) or Helm.
If Trident was installed using the Trident operator, i.e. via the TridentOrchestrator (TORC) CRD, you can update resource values by running kubectl edit and modifying the spec.resources section with new values, or by using kubectl patch, as sketched below.
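A minimal sketch of both approaches, assuming a TridentOrchestrator named trident; the limit values in the patch are purely illustrative:

# Edit interactively and modify spec.resources:
kubectl edit torc trident
# Or apply a merge patch with only the fields you want to change (values are illustrative):
kubectl patch torc trident --type merge -p \
  '{"spec":{"resources":{"controller":{"trident-main":{"limits":{"cpu":"200m","memory":"256Mi"}}}}}}'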
For installations done through Helm, use the helm upgrade command with updated CPU and memory values in your Helm chart's values file, or set them directly with the --set flag (see the sketch below).
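A sketch of the Helm path; the release name (trident), chart reference (netapp-trident/trident-operator), values file name, and the numbers below are assumptions to adapt to your own install:

# Upgrade with an updated values file:
helm upgrade trident netapp-trident/trident-operator -n trident -f custom-values.yaml
# Or override individual values inline (--reuse-values keeps previously set overrides):
helm upgrade trident netapp-trident/trident-operator -n trident --reuse-values \
  --set resources.controller.trident-main.limits.cpu=200m \
  --set resources.controller.trident-main.limits.memory=256Mi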
Note: Keep in mind that changing resource requests or limits will trigger a rolling restart of the Trident controller and node pods.
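To watch that restart complete, something like the following works; the workload names are typical for recent Trident releases but may differ by version:

# Watch the Trident pods roll after the resource change
kubectl get pods -n trident -w
# Or wait on the rollouts explicitly (names may vary by Trident version):
kubectl rollout status deployment/trident-controller -n trident
kubectl rollout status daemonset/trident-node-linux -n trident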
Drawing from Kubernetes wisdom, here's how to get it right for Trident:
Do's
- Start with Trident's defaults, then measure actual usage before tightening anything.
- Add explicit limits in resource-constrained or multi-tenant clusters where strict containment is required.
- Size limits generously; overly restrictive limits can throttle provisioning or trigger OOM kills and evictions.
- Preserve exact container names and YAML indentation when overriding values.
- Remember that changing requests or limits triggers a rolling restart of the controller and node pods.
Resource requests and limits are your Kubernetes Swiss Army knife for Trident, turning potential chaos into controlled efficiency. By understanding the basics, leveraging defaults, and customizing thoughtfully, you'll ensure seamless storage orchestration even under load.
Ready to optimize? Grab your Trident Helm chart or CRD, tweak those resources, and deploy. Your cluster—and your apps—will thank you.
For more on Kubernetes resources, check the K8s official docs. Trident setup details are in NetApp Trident's guides.