Private Cloud with VMware Cloud Foundation
Customers often notice a gap between the public cloud experience and their VMware-based private cloud. In this blog, let me explore how VMware Cloud Foundation (VCF) closes that gap, based on announcements made at VMware Explore.
For example, in AWS, a customer begins by creating a VPC in a region and deploying workloads to specific availability zones. Data protection policies are applied to the workloads rather than to the underlying infrastructure, and tags are often used for management. Workloads communicate with each other using Layer 3 and above protocols. Inter-VPC communication (even across regions) is handled by VPN or a transit gateway.
If you are new to VCF, I recommend reading my previous blog – VMware Cloud Foundation deployment options with NetApp. Let me quickly summarize it along with the recent changes in VCF 5.2.x.
SDDC Manager orchestrates the lifecycle of the management domain (which hosts the VMs that provide private cloud infrastructure services) and the VI workload domains. VCF can scale from 4 hosts to 1,000 hosts. From VCF 5.2 onwards, an existing vSphere environment can be converted into a management domain or imported as a VI workload domain in an existing VCF environment. That means you now have the option to run a VCF environment without vSAN for the management domain, including stretched clusters created with SnapMirror active sync or MetroCluster.
The datastore that is deployed by SDDC Manager as part of domain creation is known as principal storage, and any datastore that is later created with vCenter on that domain is called supplemental storage. With ONTAP, here is the list of supported options for the principal datastore.
| Storage type | Management Domain – Default Cluster | Management Domain – Additional Cluster | VI Workload Domain – Default Cluster | VI Workload Domain – Additional Cluster |
| --- | --- | --- | --- | --- |
| VMFS on FC | Yes (Import Tool) | Yes (SDDC API) | Yes | Yes |
| VMFS on iSCSI | NA | NA | NA | NA |
| VMFS on NVMe-oF | NA | NA | NA | NA |
| NFS v3 | Yes | Yes (SDDC API) | Yes | Yes |
| NFS v4.1 | NA | NA | NA | NA |
| vVol on FC | NA | NA* | No | Yes |
| vVol on iSCSI | NA | NA* | No | Yes |
| vVol on NVMe-oF | NA | NA | NA | NA |

NA – Not supported by VCF.
* – Pending validation.
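For context on how an NFS v3 principal datastore is expressed when a cluster or VI workload domain is created through the SDDC Manager API, here is a heavily abbreviated sketch of just the datastore portion of the creation spec. The field names are assumptions based on the VCF 5.x API reference, and the datastore name, NFS LIF address, and export path are placeholders; the host, network, and NSX sections of the spec are omitted entirely.

```python
# Minimal sketch of the NFS principal storage portion of an SDDC Manager
# domain/cluster creation spec (used with POST /v1/domains or POST /v1/clusters).
# Field names are assumptions from the VCF 5.x API reference; values are placeholders.
nfs_principal_storage = {
    "datastoreSpec": {
        "nfsDatastoreSpecs": [
            {
                "datastoreName": "vcf-wld01-nfs01",        # example datastore name
                "nasVolume": {
                    "serverName": ["192.168.10.50"],       # ONTAP NFS LIF (example)
                    "path": "/vcf_wld01_nfs01",            # ONTAP volume junction path (example)
                    "readOnly": False,
                },
            }
        ]
    }
}
```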
VCF can be consumed in the enterprise datacenter, in the public cloud, at the edge, or through hosted service providers. The VCF license can be applied or transferred to any of those environments. Check with Broadcom for more info.
Region:
In VCF, every instance is a region. It can be consolidated (management and workloads on the same vCenter) or standard, with one management domain and up to 24 workload domains. Each domain can have up to 64 clusters, and each cluster can have a maximum of 96 hosts. The clusters can be at a remote location. For updated configuration details, refer to the VMware ConfigMax portal.
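To see how a region maps to domains and clusters in practice, here is a minimal sketch that lists the domains of a VCF instance through the SDDC Manager public API. The endpoint paths (/v1/tokens, /v1/domains), response field names, hostname, and credentials are assumptions based on the VCF 5.x API reference; verify them against your release.

```python
# Minimal sketch, assuming the SDDC Manager public API endpoints
# POST /v1/tokens and GET /v1/domains as documented for VCF 5.x.
# Hostname and credentials are placeholders.
import requests

SDDC_MANAGER = "https://sddc-manager.example.com"   # assumption: SDDC Manager FQDN

def get_token(username: str, password: str) -> str:
    """Request an access token from SDDC Manager."""
    r = requests.post(f"{SDDC_MANAGER}/v1/tokens",
                      json={"username": username, "password": password},
                      verify=False)
    r.raise_for_status()
    return r.json()["accessToken"]

def list_domains(token: str) -> None:
    """Print each management/workload domain and its clusters."""
    r = requests.get(f"{SDDC_MANAGER}/v1/domains",
                     headers={"Authorization": f"Bearer {token}"},
                     verify=False)
    r.raise_for_status()
    for domain in r.json().get("elements", []):
        print(domain["name"], domain.get("type"))
        for cluster in domain.get("clusters", []):
            print("  cluster:", cluster.get("id"))

if __name__ == "__main__":
    token = get_token("administrator@vsphere.local", "********")
    list_domains(token)
```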
Typically, a single instance of ONTAP tools 10.x can manage the datastores for the region. Deploy vSphere, vCenter, and ONTAP tools, then create and protect the datastore. Later, convert to VCF to have a stretched cluster for the management domain with VCF 5.2. The ONTAP tools plugin can be deployed to a workload domain vCenter by registering that vCenter with ONTAP tools.
For VM data protection, the SnapCenter Plug-in for VMware vSphere can be deployed on the management domain for every VCF domain instance, or it can be co-located in the VCF domain.
Project:
Multitenancy in NSX (the networking component of VCF) is managed with projects. A project is like an AWS account: you associate users with it, set quotas on it, and it optionally contains one or more VPCs. A project corresponds to a Tier-0 gateway or VRF in NSX. To create a project, an NSX edge cluster must be available.
Logs and performance metrics are isolated based on the tags assigned to projects.
In VCF, NSX can be shared across VCF domains or dedicated to a single VCF domain. In the shared NSX model, a project can span the VCF domains.
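As an illustration, here is a minimal sketch of creating a project through the NSX Policy API. The endpoint (/policy/api/v1/orgs/default/projects/...), the Tier-0 and edge cluster paths, and the field names are assumptions based on NSX 4.x documentation, so treat it as a starting point rather than a definitive call.

```python
# Minimal sketch, assuming the NSX 4.x Policy API for projects:
# PUT /policy/api/v1/orgs/default/projects/<project-id>.
# The Tier-0 path, edge cluster path, and field names are placeholders.
import requests

NSX_MANAGER = "https://nsx.example.com"     # assumption: NSX Manager FQDN
AUTH = ("admin", "********")                # assumption: local NSX admin credentials

project_spec = {
    "display_name": "dev-tenant",
    # Tier-0 gateway (or VRF) that backs this project - path is an example only
    "tier_0s": ["/infra/tier-0s/vcf-t0"],
    "site_infos": [{
        # An NSX edge cluster must exist before the project can be created
        "edge_cluster_paths": [
            "/infra/sites/default/enforcement-points/default/edge-clusters/edge-cluster-01"
        ]
    }],
}

resp = requests.put(f"{NSX_MANAGER}/policy/api/v1/orgs/default/projects/dev-tenant",
                    json=project_spec, auth=AUTH, verify=False)
resp.raise_for_status()
print("Project created:", resp.json().get("path"))
```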
VPC:
A VPC represents a self-contained private network within an NSX project that application developers or DevOps engineers can use to host their applications and consume networking and security objects through a self-service model.
NSX VPCs add a further layer of multitenancy within a project. They provide a simplified consumption model for networking and security services, aligned with the experience you would have in a public cloud environment.
Note: NSX VPCs can be created only in projects.
Each VPC has its own Tier-1 gateway. VPC users can create subnets in one of three modes – Public, Private, or Isolated. Public mode enables the NAT service for external access. Private mode allows communication only within the VPC. Workloads on an isolated subnet can communicate with each other, but cannot communicate with workloads on private or public subnets within the same NSX VPC.
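To make the three subnet modes concrete, here is a hedged sketch of creating one subnet per mode inside a VPC. The subnet endpoint path, the access_mode values, and the ipv4_subnet_size field are assumptions based on NSX 4.x VPC documentation, and the project and VPC names are placeholders; confirm the spellings against the API reference for your version.

```python
# Minimal sketch, assuming the NSX VPC subnet endpoint
# /policy/api/v1/orgs/default/projects/<project>/vpcs/<vpc>/subnets/<subnet>
# and an "access_mode" field accepting Public/Private/Isolated.
import requests

NSX_MANAGER = "https://nsx.example.com"   # assumption: NSX Manager FQDN
AUTH = ("admin", "********")
BASE = f"{NSX_MANAGER}/policy/api/v1/orgs/default/projects/dev-tenant/vpcs/app-vpc"

def create_subnet(name: str, access_mode: str, subnet_size: int = 64) -> None:
    """Create (or update) a VPC subnet in the given access mode."""
    body = {
        "display_name": name,
        "access_mode": access_mode,          # "Public", "Private", or "Isolated"
        "ipv4_subnet_size": subnet_size,     # carved from the VPC IP block
    }
    r = requests.put(f"{BASE}/subnets/{name}", json=body, auth=AUTH, verify=False)
    r.raise_for_status()

# One subnet per mode: NAT-ed external access, VPC-internal only, fully isolated
create_subnet("web", "Public")
create_subnet("app", "Private")
create_subnet("scratch", "Isolated")
```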
Workloads in a VPC can be identified by their subnet, which is visible in vCenter as an NSX distributed virtual switch port group. To limit visibility into other VPC resources, the port groups can be secured with resource permissions.
For storage, the VMs are placed based on VM storage policies, which can be controlled with tags and permissions on the datastores.
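As a rough illustration of the tagging side of that placement model, here is a sketch that creates a tag category and tag and attaches the tag to a datastore, so a tag-based VM storage policy can reference it. The REST paths (/api/session, /api/cis/tagging/...), request bodies, datastore managed object ID, and names are assumptions based on the vSphere Automation REST API for vSphere 7/8; verify them against your vCenter build.

```python
# Minimal sketch, assuming the vSphere Automation REST tagging endpoints
# (/api/session, /api/cis/tagging/category, /api/cis/tagging/tag,
# /api/cis/tagging/tag-association). All IDs and names are placeholders.
import requests

VCENTER = "https://vcenter.example.com"   # assumption: workload domain vCenter

s = requests.Session()
s.verify = False
# Create an API session; the returned string goes in the vmware-api-session-id header
token = s.post(f"{VCENTER}/api/session",
               auth=("administrator@vsphere.local", "********")).json()
s.headers["vmware-api-session-id"] = token

# Tag category and tag that a tag-based VM storage policy can reference
cat_id = s.post(f"{VCENTER}/api/cis/tagging/category",
                json={"name": "StorageTier", "description": "Datastore tiering",
                      "cardinality": "SINGLE",
                      "associable_types": ["Datastore"]}).json()
tag_id = s.post(f"{VCENTER}/api/cis/tagging/tag",
                json={"name": "gold", "category_id": cat_id,
                      "description": "Gold tier ONTAP datastore"}).json()

# Attach the tag to a datastore (managed object ID shown is an example only)
s.post(f"{VCENTER}/api/cis/tagging/tag-association/{tag_id}",
       params={"action": "attach"},
       json={"object_id": {"id": "datastore-101", "type": "Datastore"}}).raise_for_status()
```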
We can expect more streamlined operations for VPCs with VCF 9.
Availability Zone:
Availability zones protect against failures of groups of hosts. In VCF, an availability zone maps to a vSphere cluster. The hosts of the cluster can reside in a single availability zone or be stretched across two availability zones with vSphere Metro Storage Cluster (often with ONTAP MetroCluster or SnapMirror active sync).
For more details on ONTAP integration with VCF, follow VMware Cloud Foundation on our solutions page.