Tech ONTAP Blogs

Google Cloud NetApp Volumes - Terraform integration

okrause
NetApp

Terraform integration is here

 

We are proud to announce that the Terraform Provider for Google Cloud Platform now supports Google Cloud NetApp Volumes resources. It lets you automate the provisioning and management of NetApp Volumes resources using the powerful and widely used Terraform ecosystem. Beginning with version 5.13.0 of the provider, you can integrate NetApp Volumes automation into your Terraform build pipelines.

 

You can find the available resources by going to the Terraform Google Provider documentation and applying a “netapp” filter. You will find multiple resources whose names start with google_netapp_*. This blog walks you through the steps of using these resources.

 

On-board the NetApp Volumes service

Once you complete the basic configuration steps (1 through 5) to set up NetApp Volumes, you can start automating from step 6, “Configure the network”.

 

NetApp Volumes uses Private Service Access (PSA) to connect the service to your VPC network. This peering works for all regions in your project. When using a Shared VPC service project, the peering needs to be set up in the host project that owns the VPC network. Let’s use the Terraform Google Provider (google) to set up the networking:

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = ">= 5.15.0"
    }
  }
}

locals {
  region = "us-east4"
}

# Let's define our project and the region we are working in
provider "google" {
  project = "test-project"
  region  = local.region
}

# Let's use a pre-existing VPC instead of creating a new one
data "google_compute_network" "my-vpc" {
  name = "ok-test-vpc"
}

# Reserve compute address CIDR for NetApp Volumes to use
resource "google_compute_global_address" "private_ip_alloc" {
  name          = "${data.google_compute_network.my-vpc.name}-ip-range"
  purpose       = "VPC_PEERING"
  address_type  = "INTERNAL"
  prefix_length = 24
  network       = data.google_compute_network.my-vpc.id
}

# You may need this CIDR to open a firewall on your Active Directory domain controllers
output "netapp-volumes-cidr" {
  value = "${google_compute_global_address.private_ip_alloc.address}/${google_compute_global_address.private_ip_alloc.prefix_length}"
}

# Create the PSA peering
resource "google_service_networking_connection" "default" {
  network                 = data.google_compute_network.my-vpc.id
  service                 = "netapp.servicenetworking.goog"
  reserved_peering_ranges = [google_compute_global_address.private_ip_alloc.name]
}

# Modify the PSA Connection to allow import/export of custom routes
resource "google_compute_network_peering_routes_config" "route_updates" {
  peering = google_service_networking_connection.default.peering
  network = data.google_compute_network.my-vpc.name

  import_custom_routes = true
  export_custom_routes = true
}

Create a storage pool

With the networking now in place, let's create our first storage pool on Google Cloud NetApp Volumes:

resource "google_netapp_storage_pool" "my-tf-pool" {
  name          = "my-tf-pool"
  location      = local.region
  service_level = "PREMIUM"
  capacity_gib  = 2048
  network       = data.google_compute_network.my-vpc.id
}

Create an NFS volume

NetApp Volumes supports NFSv3 and NFSv4.1. With the pool in place, let’s create an NFSv3 volume in the storage pool:

resource "google_netapp_volume" "my-nfsv3-volume" {
  location         = local.region
  name             = "my-nfsv3-volume"
  capacity_gib     = 1024 # Size can be up to space available in pool
  share_name       = "my-nfsv3-volume"
  storage_pool     = google_netapp_storage_pool.my-tf-pool.name
  protocols        = ["NFSV3"]
  unix_permissions = "0777"
  export_policy {
    # Order of rules matters! Go from most specific to most generic
    rules {
      access_type     = "READ_WRITE"
      allowed_clients = "10.10.10.17"
      has_root_access = true
      nfsv3           = true
    }
    rules {
      access_type     = "READ_ONLY"
      allowed_clients = "10.10.0.0/16"
      has_root_access = false
      nfsv3           = true
    }
  }
}

output "mountpath" {
    value = google_netapp_volume.my-nfsv3-volume.mount_options[0].export_full
}

The output now contains the path you can use to mount the volume on your GCE VM Linux client. Your Linux client needs to be connected to your VPC, and its IP address needs to match one of the allowed_clients entries in the volume’s export policy:

$ sudo mount $(terraform output -raw mountpath) /mnt
$ df -h

You may want to experiment with resizing your volume or pool by changing the capacity_gib parameter and re-applying the configuration with Terraform. Watch how growing or shrinking is reflected in your client’s df output within seconds!
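One convenient way to script such resizes is to put the size behind an input variable (a sketch; the variable name volume_size_gib is illustrative, and the volume’s capacity_gib argument would then reference it as var.volume_size_gib):

```hcl
# Illustrative input variable for resizing the volume between applies
variable "volume_size_gib" {
  description = "Volume size in GiB; can be changed up or down"
  type        = number
  default     = 1024
}
```

With that in place, terraform apply -var="volume_size_gib=2048" grows the volume, and a smaller value shrinks it again.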

 

Create an Active Directory policy

To provision SMB volumes, NetApp Volumes needs to join an Active Directory domain. Let’s tell the service how to connect to your domain by creating an Active Directory policy.

variable "ad_username" {
}
variable "ad_password" {
  sensitive = true # Handle this as a secret; hides the value in plan output
}

resource "google_netapp_active_directory" "my-ad" {
  name            = "my-ad-${local.region}"
  location        = local.region
  domain          = "cvsdemo.internal"
  dns             = "10.70.0.2"
  net_bios_prefix = "smbserver"
  username        = var.ad_username
  password        = var.ad_password
}
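The two variables can be supplied without ever writing the password into a .tf or .tfvars file, because Terraform reads any TF_VAR_<name> environment variable as an input variable. A minimal sketch (both values are placeholders; in practice, fetch the password from a secret manager):

```shell
# Terraform maps TF_VAR_ad_username -> var.ad_username, and so on
export TF_VAR_ad_username='join-admin'        # placeholder account name
export TF_VAR_ad_password='changeme-secret'   # placeholder; use a secret store
```

With the variables exported, a plain terraform apply picks them up without prompting.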

The specified DNS server and one or more domain controllers need to be reachable on your VPC (no additional peering hop), and firewall rules need to allow traffic from NetApp Volumes. Here is an example firewall resource that creates a firewall tag you can attach to your domain controllers:

resource "google_compute_firewall" "netappvolumes2ad" {
  name        = "netappvolumes2ad"
  network     = data.google_compute_network.my-vpc.id
  description = "Attach netappvolumes2ad tag to your Active Directory domain controllers to allow NetApp Volumes to contact them. "

  source_ranges = ["${google_compute_global_address.private_ip_alloc.address}/${google_compute_global_address.private_ip_alloc.prefix_length}"]
  direction     = "INGRESS"
  allow {
    protocol = "icmp"
  }
  allow {
    protocol = "tcp"
    ports    = ["9389", "88", "636", "53", "464", "445", "389", "3269", "3268"]
  }
  allow {
    protocol = "udp"
    ports    = ["88", "53", "464", "445", "389", "123", ]
  }

  target_tags = ["netappvolumes2ad"]
}

Next, let’s attach the policy to our existing pool by updating the pool definition. Add the active_directory argument as shown below:

resource "google_netapp_storage_pool" "my-tf-pool" {
  name             = "my-tf-pool"
  location         = local.region
  service_level    = "PREMIUM"
  capacity_gib     = 2048
  network          = data.google_compute_network.my-vpc.id
  active_directory = google_netapp_active_directory.my-ad.id
}

Please note that NetApp Volumes does not perform any Active Directory validation when the AD policy is created.

 

The Active Directory join and its validation happen when the first SMB volume is created. That is when any problem - wrong join credentials, too many VPC peering hops, or missing firewall rules - will surface.

 

Create an SMB volume

With the instructions for connecting to Active Directory in place, we can now easily create an SMB volume:

resource "google_netapp_volume" "my-smb-volume" {
  location     = local.region
  name         = "my-smb-volume"
  capacity_gib = 1024 # Size can be up to space available in pool
  share_name   = "my-smb-volume"
  storage_pool = google_netapp_storage_pool.my-tf-pool.name
  protocols    = ["SMB"]
}
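As with the NFS volume, you can surface the share’s path as an output. A sketch, assuming mount_options exposes the SMB path through the same export_full attribute it uses for NFS (verify the attribute against the provider documentation):

```hcl
output "smb-mountpath" {
  value = google_netapp_volume.my-smb-volume.mount_options[0].export_full
}
```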

Cleanup

If you want to destroy your resources, note that deleting the peering might fail at first. After you delete all your NetApp Volumes resources, it takes about 6 hours for the service to lazily clean up backend resources; only then can the peering be removed.
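A two-phase teardown sketch that respects this delay (the resource addresses match the examples in this post):

```shell
# Phase 1: destroy only the NetApp Volumes resources
terraform destroy \
  -target=google_netapp_volume.my-nfsv3-volume \
  -target=google_netapp_volume.my-smb-volume \
  -target=google_netapp_storage_pool.my-tf-pool \
  -target=google_netapp_active_directory.my-ad

# Phase 2: after the backend cleanup (about 6 hours), remove the
# peering, the reserved address range, and everything else
terraform destroy
```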

 

Start your build pipelines

As you can see, provisioning NetApp Volumes resources is straightforward.

To learn more about Google Cloud NetApp Volumes, visit its overview page. Happy terraforming.

 

This blog evolved into a series. Here are links to all the posts:

 

Public