Tech ONTAP Blogs

Google Cloud NetApp Volumes - Import resources transitioned from Cloud Volumes Service

okrause
NetApp

Google Cloud NetApp Volumes is a fully managed, cloud-based file storage service that provides advanced data management capabilities and highly scalable performance.

The service is built from multiple kinds of resources. Until recently, they had to be created using the Cloud Console, gcloud, or the API. Since January 2024, most of the service's resources can also be managed with Terraform.
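
The NetApp Volumes resources are part of the hashicorp/google Terraform provider. A minimal sketch of the provider requirement (the version constraint is my assumption; check the provider changelog for the exact release that added the netapp resources):

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      # Assumed minimum; the netapp resources were added during the 5.x series.
      version = ">= 5.12.0"
    }
  }
}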

But what if you already built multiple volumes manually and want to manage them with Terraform instead? Or you transitioned your volumes from Cloud Volumes Service (CVS) over to NetApp Volumes? Either way, you end up with multiple NetApp Volumes resources that you need to bring under Terraform management. How can this be done?

Terraform import

To bring existing resources under Terraform management, Terraform offers a feature called Terraform import. Google provides a dedicated page that explains how to Import your Google Cloud resources into Terraform state.

It talks about three approaches:

  1. Import resources one at a time (see the example after this list)
  2. Import resources in bulk with a configuration-driven import block
  3. Import resources created after doing a bulk export
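
For approach one, a resource block must already exist in your configuration; a single terraform import command then maps the existing resource ID to that resource address. A minimal sketch, using one of my volumes as an example:

# a stub must already exist in your .tf files:
#   resource "google_netapp_volume" "okdata" { ... }
$ terraform import google_netapp_volume.okdata \
    projects/cvs-pm-host-1p/locations/northamerica-northeast1/volumes/okdata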

Approach one is the most basic and means a lot of manual work if you have to import dozens of resources. The second is similar, but can import multiple resources at once. The third approach is likely overkill for our purpose, since it exports all resources of a project, which can be far more than the few (dozen) NetApp Volumes resources you want to import.
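
For reference, the bulk export behind approach three is driven by gcloud; a sketch of the command (treat the flags as assumptions and check the current gcloud documentation):

$ gcloud beta resource-config bulk-export \
    --project=cvs-pm-host-1p \
    --resource-format=terraform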

So what is the best approach to get this done quickly? The configuration-driven bulk import is pretty neat. Let's run through the workflow by importing an example volume.

Here is the current list of volumes in my demo project:

$ gcloud netapp volumes list --format='table(NAME,storage_pool)'         
NAME                                                                      STORAGE_POOL
projects/cvs-pm-host-1p/locations/asia-east1-a/volumes/vol1               ardalan-pool
projects/cvs-pm-host-1p/locations/asia-east1-a/volumes/summit             ardalan-pool
projects/cvs-pm-host-1p/locations/asia-southeast1/volumes/gcvevol         asiase1-gcve
projects/cvs-pm-host-1p/locations/northamerica-northeast1/volumes/okdata  montreal-premium

Let’s import the volume called okdata. Create an empty folder and generate an import.tf file that defines the import block:

$ mkdir import-test
$ cd import-test
$ cat << EOF > import.tf
import {
  id = "projects/cvs-pm-host-1p/locations/northamerica-northeast1/volumes/okdata"
  to = google_netapp_volume.okdata
}
EOF

Next, let Terraform call the API to read the existing volume and generate a TF file describing it:

$ terraform init
...
$ terraform plan -generate-config-out=generated_resources.tf
...
Plan: 1 to import, 0 to add, 0 to change, 0 to destroy.

By now, the file generated_resources.tf contains a Terraform definition of my existing volume.
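
As a rough sketch, the generated block might look something like this (the attribute values here are illustrative, not read from the real volume):

resource "google_netapp_volume" "okdata" {
  name         = "okdata"
  location     = "northamerica-northeast1"
  storage_pool = "montreal-premium"
  capacity_gib = 1024        # illustrative value
  share_name   = "okdata"    # illustrative value
  protocols    = ["NFSV3"]   # illustrative value
}

Let’s complete the import by updating my state: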

$ terraform apply                                           
google_netapp_volume.okdata: Preparing import... [id=projects/cvs-pm-host-1p/locations/northamerica-northeast1/volumes/okdata]
google_netapp_volume.okdata: Refreshing state... [id=projects/cvs-pm-host-1p/locations/northamerica-northeast1/volumes/okdata]
Terraform will perform the following actions:
  # google_netapp_volume.okdata will be imported
...
Plan: 1 to import, 0 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.
  Enter a value: yes
google_netapp_volume.okdata: Importing... [id=projects/cvs-pm-host-1p/locations/northamerica-northeast1/volumes/okdata]
google_netapp_volume.okdata: Import complete [id=projects/cvs-pm-host-1p/locations/northamerica-northeast1/volumes/okdata]
Apply complete! Resources: 1 imported, 0 added, 0 changed, 0 destroyed.

By now, I have a definition of my volume in generated_resources.tf and my local state is in sync with the resource. I can now start managing my volume through Terraform. Mission accomplished.
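
From here on, changes go through the regular Terraform workflow. For example, growing the volume is just an edit to the generated file followed by an apply (the sizes are the illustrative values from the sketch above):

# in generated_resources.tf, change for example:
#   capacity_gib = 1024   ->   capacity_gib = 2048
$ terraform plan     # should report: 1 to change
$ terraform apply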

This goes out to the lazy ones

If you are like me, you dislike manually creating all the import blocks for your existing resources. Why invest 15 minutes of manual work if you can spend 2 hours writing a script that simplifies it?

The idea is to use the gcloud command to read the existing resource names and auto-generate import blocks to be fed into terraform plan.

This is the very basic script I came up with:

#!/usr/bin/env bash

# $1 = shortname, to be used for TF resource name, e.g. pool
# $2 = gcloud command name, e.g. storage-pools
# $3 = TF provider resource name, e.g. google_netapp_storage_pool
import_resource () {
    resources=$(gcloud netapp "$2" list --format='get(NAME)')
    i=0
    for r in $resources; do
        cat << EOF
import {
  id = "$r"
  to = $3.$1$i
}

EOF
        i=$((i+1))
    done
}

# List of resource types to import. Remove lines for unwanted resource types
import_resource "pool" "storage-pools" "google_netapp_storage_pool"
import_resource "volume" "volumes" "google_netapp_volume"
import_resource "activedirectory" "active-directories" "google_netapp_active_directory"
import_resource "kms" "kms-configs" "google_netapp_kmsconfig"

# redirect output in a tf file and run
# terraform plan -generate-config-out=generated_resources.tf

It generates the import blocks for Storage Pools, Volumes, Active Directory policies and CMEK policies. I run it with:

$ ./create-import-templates.sh > import.tf
# Verify that import.tf contains what you are expecting, then run:  
$ terraform plan -generate-config-out=generated_resources.tf

The generated_resources.tf file now has the definitions of my resources. Terraform may complain about missing parameters it cannot read from the API, like the password parameter of Active Directory policies. I need to add them manually.
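
For an imported Active Directory policy, for example, I would feed the password back in through a variable rather than hard-coding it (a sketch; the variable name is my choice, and the attributes generated by the import are elided):

variable "ad_password" {
  type      = string
  sensitive = true
}

resource "google_netapp_active_directory" "activedirectory0" {
  # ... attributes generated by the import ...
  password = var.ad_password
}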

Since I like to parameterise my TF files a lot, I would likely make some modifications to the file before running terraform apply. Examples would be replacing the storage pool resource names in the volume definitions with references to the Terraform resource definition of the pool, or doing a similar edit to network resource names by adding a google_compute_network data source. Those two edits could look something like this (a sketch; the network name is hypothetical and the generated attributes are elided):
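
# Hypothetical example: look up the existing VPC instead of hard-coding its name
data "google_compute_network" "vpc" {
  name = "my-vpc"
}

resource "google_netapp_storage_pool" "pool0" {
  # ... attributes generated by the import ...
  network = data.google_compute_network.vpc.id
}

resource "google_netapp_volume" "volume0" {
  # ... attributes generated by the import ...
  storage_pool = google_netapp_storage_pool.pool0.name
}

You can do whatever works for you. Happy importing.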
