Tech ONTAP Blogs
Exciting news! We’re thrilled to announce SMB volume support for Google Cloud NetApp Volumes in Windows containers with the release of NetApp® Trident™ 25.02 software. Trident SMB support allows you to effortlessly provision and manage SMB volumes with Google Cloud NetApp Volumes.
What does this mean for you? Well, it’s a game changer for Windows worker nodes. Previously, we only supported NFS volumes with Linux, but now, the same Trident functionality is extended to SMB volumes with Windows, and you can even run both on the same Kubernetes cluster. This opens a world of possibilities for running your stateful Windows and Linux applications on Kubernetes with exceptional performance.
Although the Trident installation, configuration, and user experience are similar for SMB and NFS, there are a few configuration items you need to add for Windows SMB support. This blog covers how to install, configure, and use SMB volumes for Windows Kubernetes worker nodes with Trident.
We’ll show you how to set it up using Trident cloud identity for Google Kubernetes Engine (GKE), but you can also install and configure Trident for SMB with any self-managed Kubernetes distribution that supports Windows worker nodes by using secrets instead.
After you create a Kubernetes cluster, add at least one Windows-based node pool. In our setup, we added a single Windows node pool to the GKE cluster; the first three nodes in the following example are the Windows nodes.
~$kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-342177-5lh4 Ready <none> 25m v1.31.5-gke.1169000
gke-342177-7x6m Ready <none> 25m v1.31.5-gke.1169000
gke-342177-j2pb Ready <none> 25m v1.31.5-gke.1169000
gke-cluster-1-patd-default-pool-cc7f43ee-3rpq Ready <none> 25m v1.31.5-gke.1169000
gke-cluster-1-patd-default-pool-cc7f43ee-vzjx Ready <none> 25m v1.31.5-gke.1169000
gke-cluster-1-patd-default-pool-cc7f43ee-xqkp Ready <none> 25m v1.31.5-gke.1169000
Install Trident according to the documentation by first downloading the package and then installing the TridentOrchestrator custom resource definition (CRD) that’s supplied in the package.
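For reference, the CRD file and operator bundle used below ship in the deploy directory of the installer package. Fetching and unpacking it might look like the following sketch; the download URL and version string are assumptions, so check the Trident releases page for the exact 25.02 package name.
~$wget https://github.com/NetApp/trident/releases/download/v25.02.0/trident-installer-25.02.0.tar.gz
~$tar -xf trident-installer-25.02.0.tar.gz
~$cd trident-installer/deploy/crds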
~$kubectl create -f trident.netapp.io_tridentorchestrators_crd_post1.16.yaml
customresourcedefinition.apiextensions.k8s.io/tridentorchestrators.trident.netapp.io created
Be sure to create a namespace to install Trident. In our example, we create a new namespace called trident.
~$kubectl create ns trident
namespace/trident created
Deploy the operator.
~$kubectl create -f ../bundle_post_1_25.yaml -n trident
serviceaccount/trident-operator created
clusterrole.rbac.authorization.k8s.io/trident-operator created
clusterrolebinding.rbac.authorization.k8s.io/trident-operator created
deployment.apps/trident-operator created
~$kubectl get pods -n trident
NAME READY STATUS RESTARTS AGE
trident-operator-64458cb68f-92thm 1/1 Running 0 15s
Before installing the orchestrator custom resource, be sure to enable Windows in the TridentOrchestrator manifest, as shown in the following YAML file. In this example, we use cloud identity for authentication. If you're also using cloud identity, be sure to use your own cloud identity service account.
tridentorchestrator_cloudidentity_windows.yaml
apiVersion: trident.netapp.io/v1
kind: TridentOrchestrator
metadata:
  name: trident
spec:
  debug: true
  namespace: trident
  imagePullPolicy: IfNotPresent
  windows: true
  cloudProvider: "GCP"
  cloudIdentity: 'iam.gke.io/gcp-service-account: trident-gke-cloud-identity@cvs-pm-host-1p.iam.gserviceaccount.com'
Then create the orchestrator custom resource.
~$kubectl create -f tridentorchestrator_cloudidentity_windows.yaml -n trident
tridentorchestrator.trident.netapp.io/trident created
The Trident pods will come up. You’ll see which pods are on Windows worker nodes and which are on the Linux worker nodes.
~$kubectl get pods -n trident
NAME READY STATUS RESTARTS AGE
trident-controller-775fcd7d9b-grszs 6/6 Running 0 2m13s
trident-node-linux-66fmk 2/2 Running 0 2m12s
trident-node-linux-82cjt 2/2 Running 0 2m11s
trident-node-linux-rzb4b 2/2 Running 1 (92s ago) 2m11s
trident-node-windows-4q226 3/3 Running 0 2m10s
trident-node-windows-7jmgh 3/3 Running 0 2m10s
trident-node-windows-w9hzf 3/3 Running 0 2m11s
trident-operator-64458cb68f-92thm 1/1 Running 0 4m14s
The region where your storage pool and Kubernetes cluster reside must have a NetApp Volumes Active Directory policy. See Google Cloud NetApp Volumes Active Directory integration for how to set up an Active Directory policy. Your Active Directory must be reachable, online, and configured appropriately.
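If you prefer the command line to the console for this step, the Active Directory policy can also be created with gcloud. The following is only a sketch with placeholder DNS and NetBIOS values, and the flag names should be verified against the current gcloud netapp active-directories reference.
~$gcloud netapp active-directories create us-central1-ad --location=us-central1 --domain=cvs.internal.demo --dns=10.0.0.2 --net-bios-prefix=gcnv --username=patd --password=supersecret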
Create a storage pool of your desired size in the same region as your Kubernetes cluster and Active Directory policy. The following example uses the us-central1 region with the Flex service level. We could have created either a regional redundancy pool or a zonal redundancy pool; we created a zonal redundancy storage pool in zone a.
Be sure to attach the Active Directory policy to the storage pool. A region may have more than one Active Directory policy, so make sure to select the correct one. The following example uses an Active Directory policy named us-central1-ad; change it to the name of your policy.
The storage pool will come up.
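If you'd rather confirm the pool from the CLI than the console, you can describe or list pools with gcloud; this is a sketch, so verify the flags against the current gcloud netapp storage-pools reference.
~$gcloud netapp storage-pools describe flex-pool-kubernetes --location=us-central1
~$gcloud netapp storage-pools list --location=us-central1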
Next, attach Trident to the NetApp Volumes storage pool so that Trident can provision volumes there. If you're using cloud identity, be sure to run the appropriate gcloud commands from the Google CLI, as identified in Deploying cloud identity with Trident, GKE, and Google Cloud NetApp Volumes. Otherwise, you'll need to add the secret and apiKey information to the backend, along with the location and any storage pools, as needed. The following example also uses cloud identity for authentication. Be sure to use your own project number, location, and service level; naming a specific storage pool is optional.
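As a point of reference, the cloud identity setup referred to above amounts to granting the Trident controller's Kubernetes service account permission to impersonate the Google service account through Workload Identity. A rough sketch, using the project and service account from this example (the Kubernetes service account name trident-controller is an assumption; confirm it in your Trident installation):
~$gcloud iam service-accounts add-iam-policy-binding trident-gke-cloud-identity@cvs-pm-host-1p.iam.gserviceaccount.com --role roles/iam.workloadIdentityUser --member "serviceAccount:cvs-pm-host-1p.svc.id.goog[trident/trident-controller]"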
gcnv_backend1p_windows_ci.yaml
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: tbc-gcnv-flex-windows
spec:
  version: 1
  storageDriverName: google-cloud-netapp-volumes
  backendName: flexsmbvolumes-for-kubernetes
  projectNumber: 'xxx'
  nasType: smb
  location: us-central1-a
  storage:
    - labels:
        performance: flex
      serviceLevel: flex
      storagePools:
        - flex-pool-kubernetes
Create the backend, and it will bind to the NetApp Volumes storage pool.
~$kubectl create -f gcnv_backend1p_windows_ci.yaml -n trident
tridentbackendconfig.trident.netapp.io/tbc-gcnv-flex-windows created
~$kubectl get tbc -n trident
NAME BACKEND NAME BACKEND UUID PHASE STATUS
tbc-gcnv-flex-windows flexsmbvolumes-for-kubernetes eb40831b-4f56-4837-a7e1-b8f86afc95bc Bound Success
To create volumes, Trident needs to authenticate to the Active Directory server. Create a Kubernetes secret that contains an appropriate username and password configured on the Active Directory server. In the following example, we call the secret smbcreds, use the domain and user cvs.internal.demo\patd and password supersecret, and place it in the trident namespace. Be sure to change these values for your environment. You can use any name and place it in any namespace, as long as that namespace is called out in the storage class.
$ kubectl create secret generic smbcreds --from-literal=username='cvs.internal.demo\patd' --from-literal=password='supersecret' -n trident
Next, create a manifest file for a storage class. The following example shows a sample manifest. Be sure to use the correct name for the credentials secret and identify the namespace where it’s located.
scflexsmb.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcnv-flex-k8s
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.trident.netapp.io
parameters:
  backendType: "google-cloud-netapp-volumes"
  selector: "performance=flex"
  trident.netapp.io/nasType: "smb"
  csi.storage.k8s.io/node-stage-secret-name: "smbcreds"
  csi.storage.k8s.io/node-stage-secret-namespace: "trident"
allowVolumeExpansion: true
Create the storage class.
~$kubectl create -f scflexsmb.yaml
storageclass.storage.k8s.io/gcnv-flex-k8s created
Now you’re ready to create a Persistent Volume Claim (PVC), which will generate an SMB persistent volume (PV) and a corresponding volume on Google Cloud NetApp Volumes. Be sure to use the new storage class you just created.
Create a manifest file for the PVC. An example is shown here; change the storage class name to the one you created above.
pvcsamplesmb.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: smb-pvc-rwx
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: gcnv-flex-k8s
Create the PVC and watch the PV and volume come up automatically.
~$kubectl create -f pvcsamplesmb.yaml
persistentvolumeclaim/smb-pvc-rwx created
~$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
smb-pvc-rwx Pending gcnv-flex-k8s <unset> 6s
~$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
smb-pvc-rwx Bound pvc-7f9874b9-927a-4cb2-b83b-bf7f00669611 10Gi RWX gcnv-flex-k8s <unset> 4m29s
On the Google Cloud NetApp Volumes console, an SMB volume has been created.
You can attach this PVC to any pod or deployment by just referencing the PVC in your application manifest file.
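For example, a minimal Windows pod that mounts the PVC could look like the following sketch; the pod name, image, and mount path are just examples, and the node selector pins the pod to your Windows nodes.
podsmbsample.yaml
kind: Pod
apiVersion: v1
metadata:
  name: smb-sample-pod
spec:
  # Pin the pod to Windows worker nodes.
  nodeSelector:
    kubernetes.io/os: windows
  containers:
    - name: app
      image: mcr.microsoft.com/windows/servercore:ltsc2022
      command: ["powershell", "-Command", "Start-Sleep -Seconds 86400"]
      volumeMounts:
        - name: smb-volume
          mountPath: 'C:\data'
  volumes:
    # Reference the SMB PVC created earlier.
    - name: smb-volume
      persistentVolumeClaim:
        claimName: smb-pvc-rwx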
Now that Trident is creating SMB volumes, you can also use other standard Trident capabilities on them, such as expanding, importing, and cloning volumes and creating snapshots, to name a few.
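For example, cloning the SMB volume is just another PVC that names the existing claim as its data source; a minimal sketch (the clone name and file name are examples):
pvcsmbclone.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: smb-pvc-clone
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: gcnv-flex-k8s
  resources:
    requests:
      # The clone must request at least the size of the source volume.
      storage: 10Gi
  # Clone the existing SMB PVC created earlier.
  dataSource:
    kind: PersistentVolumeClaim
    name: smb-pvc-rwx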
Trident also supports SMB and NFS volumes on the same cluster. So, if you have a cluster with a Linux node pool and a Windows node pool, you can simultaneously run applications that use NFS on the Linux node pool and Windows applications that use SMB. Be sure to enable Windows with Trident, and Trident will install on both the Linux nodes and the Windows nodes. In this scenario, you’ll need two backends (one for NFS and one for SMB) and two storage classes (one for NFS and one for SMB). Point each storage class at the correct backend and storage pool for its protocol, and you’re all set.
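For example, an NFS counterpart to the SMB storage class shown earlier could look like the following sketch. The class name matches the gcnv-flex-k8s-nfs class used in the output below; the selector value is an assumption and should match a label you define on your NFS backend, and no node-stage secret is needed for NFS.
scflexnfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gcnv-flex-k8s-nfs
provisioner: csi.trident.netapp.io
parameters:
  backendType: "google-cloud-netapp-volumes"
  # Match this selector to the labels on your NFS backend.
  selector: "performance=flex-nfs"
  trident.netapp.io/nasType: "nfs"
allowVolumeExpansion: true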
In our example, we have an SMB PVC and NFS PVC on the same Kubernetes cluster using the same NetApp Volumes storage pool.
$kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
flex-pvc-rwx1 Bound pvc-52c9704f-a5b7-40a5-a257-d47ed3767c33 10Gi RWX gcnv-flex-k8s-nfs <unset> 5h48m
smb-pvc-rwx Bound pvc-675797e5-123f-4e5d-b3c0-29abffcfbaf6 10Gi RWX gcnv-flex-k8s-smb <unset> 2m9s
Now you can run stateful Kubernetes applications with SMB volumes on Kubernetes and achieve the enterprise-grade performance of NetApp storage in Google Cloud for your Windows applications. You can even run applications in SMB and NFS at the same time on the same cluster. Test it out and let us know how it goes!