Tech ONTAP Blogs

Kubernetes on vSphere (Part 1)

Thoppay
NetApp

As a vSphere administrator, you've been managing datastores for a while to handle the storage demands of virtual machines. You are comfortable using our ONTAP Tools to provision and monitor datastores from ONTAP systems. VMFS (iSCSI, FC, NVMe-oF) and NFS datastores provide shared data storage for multiple VMs, while vVol datastores enable you to define granular policies per virtual hard disk.

Due to the growing demands of cloud-native applications, Kubernetes clusters (of any flavor) are being deployed in vSphere environments. Since you love ONTAP features, we will explore the top two storage integration options. In part 1, we will explore how to use vSphere datastores along with ONTAP to address the availability and data protection demands of stateful applications. In part 2, we will look at Astra and how it helps handle additional use cases without the need to manage another storage operating system. Based on your use cases, you can consume the Container Storage Interface that meets your demands.

 

Overview:

 

As you deploy Kubernetes (with a recent version) on vSphere, the VMware vSphere Container Storage Plug-in gets deployed by default. You can verify this by listing the registered CSI drivers, as shown below. In Kubernetes, pods consume volumes through persistent volume claims (PVCs), which are typically associated with a storage class (SC). With the vSphere Container Storage Interface (CSI), the storage class is associated with a VM Storage Policy. Any vSphere datastore (including the ones deployed with ONTAP Tools) can have tags associated with it, and a policy can be defined based on the tag assignment.
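For example, a minimal check (the namespace below assumes a standard deployment of the plug-in):

# The vSphere CSI driver should appear among the registered drivers
kubectl get csidrivers

# Inspect the driver pods
kubectl get pods -n vmware-system-csi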

 

[Diagram: storage class to VM storage policy mapping through datastore tags]

 

Each persistent volume is mapped to a virtual hard disk (VMDK) file on a datastore and consumed as a local disk through a SCSI controller on the Kubernetes worker nodes. In vSphere, a First Class Disk (FCD) allows lifecycle management of virtual hard disks independent of a VM (an AWS EBS-like feature). Additional metadata for each persistent volume (VMDK), including the associated Kubernetes tags, namespace, and PVC, is stored and can be viewed in vCenter, as shown in the screenshot below.

[Screenshot: persistent volume metadata displayed in vCenter]

 

This Cloud Native Storage (CNS) restricts the datastore from being shared with other vCenter Servers (for example, an NFS datastore should not be managed by multiple vCenter Servers). Also, Storage vMotion between different datastores on the same vCenter needs to be handled by the CNS Manager tool. Ensure the disk.EnableUUID option is enabled on worker nodes so that VMDKs have consistent UUIDs, allowing the disks to be mounted properly.
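For instance, the option can be enabled with govc (a sketch; the VM inventory path is hypothetical, and the VM should be powered off when changing the setting):

govc vm.change -vm /dc1/vm/k8s-worker-1 -e disk.enableUUID=TRUE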

 

[Screenshot: disk.EnableUUID setting on a worker node VM]

 

Dynamic Provisioning:

 

vSphere CSI handles the dynamic provisioning of block persistent volumes on vSphere datastores by utilizing the VM storage policy.

[Diagram: dynamic provisioning with vSphere CSI and VM storage policies]

 

In Kubernetes, define a storage class that points to a VM storage policy. If a storage policy is not specified, the vSphere CSI driver picks a shared datastore with high free capacity that is accessible from all worker nodes.

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: a300
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "A300"  # Optional parameter
  # csi.storage.k8s.io/fstype: "ext4"  # Optional parameter
---
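As a minimal usage example, a PVC that consumes this storage class (names are illustrative):

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-demo
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: a300
---

Applying it with kubectl creates an FCD on a policy-compatible datastore and binds the claim.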

Details of the provisioned volume can be viewed from vCenter.

[Screenshot: provisioned volume details in vCenter]

 

The persistent volume (PV) contains the volume ID (volumeHandle), which can be used to retrieve the FCD information using PowerCLI or the govc (Go-based vSphere CLI) tool.
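For example (the PV name and datastore name are illustrative; the FCD ID reuses the one from the static-provisioning example later in this post):

# Read the volumeHandle (FCD ID) from the PV spec
kubectl get pv <pv-name> -o jsonpath='{.spec.csi.volumeHandle}'

# Look up the FCD with govc
govc disk.ls -ds A300_DS01 7acddac4-b1e4-4409-aec3-1166f21b8b96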

 

[Screenshots: retrieving FCD details with PowerCLI and govc]

 

Static Provisioning:

 

Static provisioning can be used to migrate data from an existing virtual machine hard disk (VMDK) or to reuse cloud-native application data (a PV with the Retain reclaim policy). The virtual machine hard disk needs to be registered as a First Class Disk using the API (PowerCLI or govc can be used).

[Screenshot: registering a virtual hard disk as a First Class Disk]

 

(Note: With PowerCLI, the hard disk object retrieved from the datastore didn't work; it must be retrieved from the VM object.)
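A govc sketch of the registration (the datastore and VMDK path are hypothetical):

# Register an existing VMDK as a First Class Disk named mongo-1
govc disk.register -ds A300_DS01 mongo-vm/mongo-data.vmdk mongo-1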

 

[Screenshots: PowerCLI steps to register the hard disk as an FCD]

 

The FCD needs to be defined as a persistent volume and bound to a specific PVC with a claimRef to prevent it from being claimed by other PVC requests.

---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-1
  annotations:
    pv.kubernetes.io/provisioned-by: csi.vsphere.vmware.com
spec:
  capacity:
    storage: 11Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: a300
  claimRef:
    namespace: mongo
    name: pvc-mongo-1
  csi:
    driver: csi.vsphere.vmware.com
    fsType: ext4  # Change fsType to xfs or ntfs based on the requirement.
    volumeAttributes:
      type: "vSphere CNS Block Volume"
    volumeHandle: 7acddac4-b1e4-4409-aec3-1166f21b8b96  # First Class Disk (Improved Virtual Disk) ID
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-mongo-1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 11Gi
  storageClassName: a300
  volumeName: mongo-1
---
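After applying both objects, verify that the PV binds to the claim (file names are illustrative; note that the PVC must be created in the mongo namespace to match the claimRef):

kubectl apply -f mongo-pv.yaml
kubectl apply -f mongo-pvc.yaml -n mongo
kubectl get pvc pvc-mongo-1 -n mongo   # STATUS should show Bound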

The metadata syncer is responsible for sending the metadata that vCenter displays for those persistent volumes (VMDKs). In PowerCLI, the CNSVolume cmdlets can also be used to manipulate the metadata.

 

High Availability options:

 

Most of the existing designs for providing high availability of vSphere datastores are still valid for Kubernetes workloads on vSphere. When you consume topology-aware provisioning, ONTAP systems can provide shared storage options across multiple availability zones.
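As a sketch, topology-aware provisioning can restrict a storage class to one zone (this assumes the driver is deployed with topology enabled; the zone value is illustrative, and older driver releases use the failure-domain.beta.kubernetes.io keys instead):

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: a300-zone-a
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "A300"
allowedTopologies:
  - matchLabelExpressions:
      - key: topology.csi.vmware.com/k8s-zone
        values:
          - zone-a
---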

 

The MetroCluster solution provides zero RPO at the zone level for both file and block protocols; SVM-DR provides a regional-level DR solution; and SMBC provides ONTAP volume-level zero RPO and zero RTO (from the storage perspective) using block protocols.

[Diagram: ONTAP replication options (MetroCluster, SVM-DR, SMBC)]

Even though ONTAP can cache files across regions with FlexCache, you don't want to use it within the same vCenter Server, as it presents duplicate volume IDs. And in multi-vCenter scenarios, VMware CSI doesn't support sharing datastores.

 

Data Protection options:

 

Because the persistent storage of cloud-native applications is stored as FCDs when using vSphere CSI, any existing backup product that you currently use to protect FCDs can be utilized to recover the VMDKs.

 

SnapCenter Plug-in for VMware vSphere (4.8 or above) can also be used. Ensure the Advanced option "Include datastores with independent disks" is checked.

[Screenshot: SnapCenter advanced option "Include datastores with independent disks"]

 

 

To recover, mount the backup datastore, replace the existing file, and re-register the disk (if required).

[Screenshot: mounting the backup datastore for recovery]

 

If application-centric management is preferred, consider Astra Control, which backs up not only the persistent data but also the Kubernetes metadata needed to successfully recover the application. To utilize the vSphere CSI snapshot capability, vSphere needs to be at least version 7.0 Update 3.
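For reference, vSphere CSI snapshots use the standard Kubernetes snapshot objects (a sketch; it assumes the external-snapshotter CRDs and controller are installed, and the names are illustrative):

---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: vsphere-snapclass
driver: csi.vsphere.vmware.com
deletionPolicy: Delete
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mongo-snap-1
  namespace: mongo
spec:
  volumeSnapshotClassName: vsphere-snapclass
  source:
    persistentVolumeClaimName: pvc-mongo-1
---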

 

Scale considerations:

 

To control capacity utilization, the storage admin can define policies to limit the size at the SVM level (requires ONTAP 9.13.1 or above) or at the volume level. The vSphere admin can use a single ONTAP volume or multiple ONTAP volumes to create the datastore.
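As an illustration of the SVM-level limit from the ONTAP CLI (a sketch; the SVM name is hypothetical, and the exact parameter syntax should be verified against the ONTAP 9.13.1 documentation):

vserver modify -vserver svm_vmware -storage-limit 50TB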

 

For additional capacity needs, new datastores can be assigned the same tags, making them eligible as VM storage policy compatible datastores.

 

PVs (persistent volumes) are consumed as local disks on worker nodes. The number of PVs that can be attached to a node is limited by its SCSI controllers. Each pvscsi controller can host 15 disks; with four pvscsi controllers, the number of PVs is limited to 59 (one slot is taken by the OS disk).

 

A single vCenter supports 10,000 PVs for VMFS and NFS datastores and 840 PVs for vVol datastores. A single ONTAP cluster supports up to 30,000 volumes. For other ONTAP limits, check the NetApp Hardware Universe.

[Screenshot: scale limits overview]

 

Monitoring options:

 

Existing monitoring tools for vSphere can be utilized when using vSphere CSI, including the NetApp Harvest tool along with BlueXP Observability (Cloud Insights), Aria Operations, and others. VMware CSI provides metrics that can be consumed by Prometheus or other Kubernetes monitoring tools. BlueXP Observability and Aria Operations are also Kubernetes-aware.

 

Cloud Migration considerations:

 

Even though Kubernetes is platform agnostic, keeping a consistent operational experience with vSphere CSI requires one of the following options:

  • VMware Cloud on AWS
  • Azure VMware Solution
  • Google Cloud VMware Engine
  • Other VMware Cloud service provider environments.

As ONTAP is available in many major cloud providers, data can be easily replicated from one environment to another.

 

Windows support:

 

Support for Windows-based Kubernetes nodes is included with vSphere CSI 3.0.
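A sketch of a storage class for Windows worker nodes (names are illustrative):

---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: a300-windows
provisioner: csi.vsphere.vmware.com
parameters:
  storagepolicyname: "A300"
  csi.storage.k8s.io/fstype: "ntfs"
---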

 

VMware vSphere datastores have enabled you to use multivendor storage systems and place workloads based on VM storage policies. VMware CSI allows you to continue leveraging those benefits with Kubernetes workloads too. Most protocols (FC, NVMe-oF, iSCSI, NFS) supported by VMware datastores can be leveraged to integrate with ONTAP systems. Our VASA (vSphere APIs for Storage Awareness) provider is available as part of ONTAP Tools, and a single instance can manage multiple ONTAP systems.

 

vSphere CSI with ONTAP supports only the ReadWriteOnce access mode with any datastore type. If the ReadWriteMany access mode is required, consider using the Astra Trident CSI driver. We will explore Astra Trident in more detail in the next part of this blog.

 
