
Using VPNs to provide Active Directory access to Google Cloud NetApp Volumes

okrause
NetApp

Google Cloud NetApp Volumes is a powerful, native Google Cloud service that provides NFS and SMB file shares. SMB shares support the SMB 2.1 and SMB 3.x protocols with fine-grained, file-level access control based on the NTFS permission model. This builds on the strong, Kerberos-based user authentication provided by Microsoft Active Directory (AD). In other words: NetApp Volumes needs to join an Active Directory domain for user authentication.

 

Networking between NetApp Volumes and Active Directory

To join a domain, NetApp Volumes needs to be able to reach AD domain controllers (DCs), which offer services like DNS, Kerberos, LDAP, and NetLogon. "Reach" here means "be able to open TCP/UDP connections to the required ports on a domain controller".

 

To establish this connectivity, you need to:

  1.  Open the required ports on the domain controllers' firewall for the CIDR range used by NetApp Volumes (see the firewall example after this list). Here is how you find the CIDR range:
    1. List VPCs peered to NetApp Volumes
      gcloud --project $project compute networks list --filter="peerings[].name=sn-netapp-prod" --format="table(name, peerings.name)"
    2. Find the name of the NetApp Volumes psaRanges in these VPCs
      gcloud --project $project services vpc-peerings list --network=<vpc> --service netapp.servicenetworking.goog --format="value(reservedPeeringRanges)"
    3. List the CIDR for a given psaRange
      gcloud --project $project compute addresses list --filter="name=<psaRange>"
  2. Make sure network traffic is routed between your NetApp Volumes and your domain controller.
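
For step 1, a minimal sketch of such a firewall rule using gcloud might look like the following. The rule name, target tag, and VPC are placeholders, and the port list is only a typical set of Active Directory ports; check the NetApp Volumes documentation for the authoritative list:

# Hypothetical example: allow the NetApp Volumes CIDR range (found in step 1.3)
# to reach the domain controllers. Adjust the name, network, CIDR and ports
# to your environment.
gcloud --project $project compute firewall-rules create allow-netapp-volumes-to-ad \
  --network=<vpc> \
  --direction=INGRESS \
  --source-ranges=172.19.144.0/20 \
  --target-tags=ad-domain-controller \
  --allow=tcp:53,udp:53,tcp:88,udp:88,udp:123,tcp:135,tcp:389,udp:389,tcp:445,tcp:464,udp:464,tcp:636,tcp:3268,tcp:3269,tcp:49152-65535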

Step 2 in particular can be tricky. NetApp Volumes uses VPC peering to connect to your network (the user-VPC). If the domain controller is on that user-VPC, all is well. But what if it is located in a different VPC? Maybe in a different project, e.g. a hub infrastructure project, while NetApp Volumes sits in a spoke? Or maybe the interconnect to your on-premises domain controllers lands in such a "remote-VPC"?

 

Using VPC peering to a remote-VPC won't work, since that would result in two VPC peering "hops" (NetApp Volumes <-> user-VPC <-> remote-VPC). That would require transitive peering, which Google's networking model blocks.

 

Using VPNs to connect networks

Using VPN technology instead of VPC peering is a commonly used alternative. VPN routing isn't subject to the "no transitive routing" rule of Google's VPC peering model, and it gives you fine-grained control over route advertisement.
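
Once the VPN is in place, you can check which routes your VPC actually advertises to the NetApp Volumes peering. This is a sketch; sn-netapp-prod is the peering name found earlier, and the VPC name is a placeholder. Keep in mind that routes learned via VPN are custom routes, so you may need to enable custom route export on the peering before NetApp Volumes can see them:

# Routes the user-VPC advertises to NetApp Volumes over the peering
gcloud --project $project compute networks peerings list-routes sn-netapp-prod \
  --network=<vpc> \
  --region=northamerica-northeast1 \
  --direction=OUTGOING

# If the VPN routes are missing, export custom routes on the peering
gcloud --project $project compute networks peerings update sn-netapp-prod \
  --network=<vpc> \
  --export-custom-routes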

 

I am going to use a problem I had to solve as an example of the approach.

 

My problem: I had to provide AD services to a new project with NetApp Volumes. I could deploy a demo AD server on a small GCE VM, but who wants to set up and manage yet another AD server VM? It's kinda painful. Why not use the existing AD I already have in my main demo project?

 

So let's build a VPN between the existing VPC of my demo project (let's call it "LEFT") and the new VPC of my new project (let's call it "RIGHT").

 

Google offers classic VPNs, which use a single tunnel with static or dynamic routing, and HA VPNs, which can use multiple tunnels for high availability, Cloud Routers, and dynamic routing through BGP.

For my purpose a classic VPN is fine and more cost-efficient. For a production environment you might want an HA VPN to meet your availability goals.

 

I am a big proponent of infrastructure as code, since it eliminates manual errors, produces reproducible infrastructure, and cleans up all the resources if I decide to stop using the tunnel. So let's use Terraform.

 

Google offers a Terraform module that simplifies VPN lifecycle management. It can build classic or HA VPNs for you. My code uses a classic VPN with one tunnel between project LEFT (which owns the Active Directory VM) and project RIGHT (which owns the NetApp Volumes that need access to AD):

 

 

### variables
variable "left_project_id" {
  type        = string
  description = "The ID of the project (LEFT) that hosts the Active Directory server."
}

variable "left_network" {
  type        = string
  default     = "default"
  description = "The name of the existing VPC in the LEFT project."
}

variable "right_project_id" {
  type        = string
  description = "The ID of the project (RIGHT) that hosts NetApp Volumes."
}

variable "right_network" {
  type        = string
  default     = "default"
  description = "The name of the existing user-VPC in the RIGHT project."
}
### infrastructure
locals {
  region        = "northamerica-northeast1"
  shared_secret = random_id.secret.b64_url
}

# Random shared secret used by both VPN tunnel endpoints
resource "random_id" "secret" {
  byte_length = 8
}

module "vpn-gw-left" {
  source  = "terraform-google-modules/vpn/google"
  version = "~> 4.0"

  project_id         = var.left_project_id
  network            = var.left_network
  region             = local.region
  gateway_name       = "vpn-gw-left"
  tunnel_name_prefix = "vpn-tn-left"
  tunnel_count       = 1
  shared_secret      = local.shared_secret
  peer_ips           = [module.vpn-gw-right.gateway_ip]

  route_priority = 1000
  remote_subnet  = [
    "172.19.144.0/20",  # NetApp Volumes psaRange in right project which needs access to AD in this project
    "10.162.0.0/20",    # NA-NE1 subnet with test VM in right project
    ]
}

module "vpn-gw-right" {
  source  = "terraform-google-modules/vpn/google"
  version = "~> 4.0"

  project_id         = var.right_project_id
  network            = var.right_network
  region             = local.region
  gateway_name       = "vpn-gw-right"
  tunnel_name_prefix = "vpn-tn-right"
  tunnel_count       = 1
  shared_secret      = local.shared_secret
  peer_ips           = [module.vpn-gw-left.gateway_ip]

  route_priority = 1000
  remote_subnet  = [
    "10.70.0.0/24",     # subnet of AD server in left project which we want to make accessible for NetApp Volumes
    ]
}

 

 

You need to adjust the remote_subnet parameters to reflect the CIDR ranges used in your environment.
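
Deployment is the usual Terraform workflow. A minimal sketch, assuming the code above sits in the current directory and your credentials can create VPN resources in both projects (the project IDs are placeholders):

terraform init
terraform apply \
  -var="left_project_id=<left-project-id>" \
  -var="right_project_id=<right-project-id>"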

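Once the tunnels are up, a quick way to confirm that routing works is to test domain controller port reachability from the test VM in the RIGHT project. This is just a sketch; 10.70.0.10 is a placeholder for the domain controller's IP in the LEFT project:

# Run on the test VM in the 10.162.0.0/20 subnet of the right project
nc -vz 10.70.0.10 88    # Kerberos
nc -vz 10.70.0.10 389   # LDAP
nc -vz 10.70.0.10 445   # SMB
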
After deploying my code, NetApp Volumes in project RIGHT was able to use Active Directory in project LEFT to create SMB volumes. Problem solved. Next.

 

 

 
