"This article was orignally published on Nov 7, 2018"
In part two of this series, we look at an example scenario in which a volume is migrated between backends that live on the same cluster. If you haven’t been through part 1 yet, I recommend doing so for a general overview of Cinder volume migration. This post examines the configuration needed to perform an intracluster migration of Cinder volumes and the procedure to follow.
I have used an OpenStack Queens deployment with an ONTAP cluster functioning as the storage backend. Here are my backend stanzas in cinder.conf:
[DEFAULT]
debug = True
enabled_backends = ontap-nfs,ontap-iscsi
.
.
[ontap-nfs]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = ontap-nfs
netapp_server_hostname = 192.168.0.101
netapp_server_port = 80
netapp_storage_protocol = nfs
netapp_storage_family = ontap_cluster
netapp_login = admin
netapp_password = <cluster_admin_password>
netapp_vserver = openstack
nfs_shares_config = /etc/cinder/nfs_shares
nas_secure_file_permissions = false
nas_secure_file_operations = false
nfs_mount_options = lookupcache=pos
backend_host = cluster1
[ontap-iscsi]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_login = admin
netapp_password = <cluster_admin_password>
volume_backend_name = ontap-iscsi
netapp_server_hostname = 192.168.0.101
netapp_server_port = 80
netapp_transport_type = http
netapp_vserver = openstack_iscsi
netapp_storage_protocol = iscsi
netapp_storage_family = ontap_cluster
backend_host = cluster1
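The nfs_shares_config file referenced in the ontap-nfs stanza lists the NFS exports that the driver exposes as pools. Based on the pool list shown next, mine contains the two exports below (a reconstruction; the original post does not show this file):
192.168.0.131:/cinder_1
192.168.0.131:/cinder_2
After editing cinder.conf, restart the cinder-volume service so that the new stanzas take effect. The service name varies by distribution; on an RDO/CentOS-style deployment it would be:
[root@openstack ~]# systemctl restart openstack-cinder-volume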
There are two backends defined: ontap-nfs, which serves volumes over NFS from the openstack SVM, and ontap-iscsi, which serves volumes over iSCSI from the openstack_iscsi SVM.
The list of pools available for volume creation is shown below.
[root@openstack cinder-volume]# cinder get-pools
+----------+--------------------------------------------+
| Property | Value |
+----------+--------------------------------------------+
| name | cluster1@ontap-iscsi#iscsi_flexvol |
+----------+--------------------------------------------+
+----------+--------------------------------------------+
| Property | Value |
+----------+--------------------------------------------+
| name | cluster1@ontap-nfs#192.168.0.131:/cinder_1 |
+----------+--------------------------------------------+
+----------+--------------------------------------------+
| Property | Value |
+----------+--------------------------------------------+
| name | cluster1@ontap-nfs#192.168.0.131:/cinder_2 |
+----------+--------------------------------------------+
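Passing --detail to the same command reports each pool’s capabilities and capacity, which is useful when deciding on a migration destination:
[root@openstack ~]# cinder get-pools --detail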
In addition, the following Cinder volume types have also been created:
[root@openstack cinder-volume]# cinder extra-specs-list
+--------------------------------------+-------------+----------------------------------------+
| ID | Name | extra_specs |
+--------------------------------------+-------------+----------------------------------------+
| 90d40cef-b69c-4afa-b150-881d06a075e7 | ontap-nfs | {'volume_backend_name': 'ontap-nfs'} |
| b651f04e-3f38-427e-99c7-06134e708ba3 | ontap-iscsi | {'volume_backend_name': 'ontap-iscsi'} |
+--------------------------------------+-------------+----------------------------------------+
As you can see, two volume types have been created, each of which places its volumes on a specific backend.
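For reference, volume types like these can be created with the type-create and type-key commands (the original post does not show this step):
[root@openstack ~]# cinder type-create ontap-nfs
[root@openstack ~]# cinder type-key ontap-nfs set volume_backend_name=ontap-nfs
[root@openstack ~]# cinder type-create ontap-iscsi
[root@openstack ~]# cinder type-key ontap-iscsi set volume_backend_name=ontap-iscsi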
Let us now create a 1 GB volume named v1 of the ontap-nfs volume type.
[root@openstack ~]# cinder create --volume-type ontap-nfs --name v1 1
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2018-10-25T18:50:45.000000 |
| description | None |
| encrypted | False |
| id | 3b4ef5ab-ba10-41f2-8279-6433fe8cf9ef |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | v1 |
| os-vol-host-attr:host | None |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 6ee6fda1f58946b1ac1389900d103752 |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| updated_at | 2018-10-25T18:50:45.000000 |
| user_id | 6cdfc17b1e0047abb531e7efb301ab47 |
| volume_type | ontap-nfs |
+--------------------------------+--------------------------------------+
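The create call returns while the volume is still in the creating state. A convenience one-liner (my addition, not from the original post) to watch it transition to available:
[root@openstack ~]# watch -n 2 'cinder show v1 | grep -w status'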
Once the volume has been created and is in the ‘available’ status, let’s take a closer look at v1’s details.
[root@openstack cinder-volume]# cinder show v1
+--------------------------------+--------------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------------+
| attached_servers | [] |
| attachment_ids | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2018-10-25T18:50:45.000000 |
| description | None |
| encrypted | False |
| id | 3b4ef5ab-ba10-41f2-8279-6433fe8cf9ef |
| metadata | |
| migration_status | None |
| multiattach | False |
| name | v1 |
| os-vol-host-attr:host | cluster1@ontap-nfs#192.168.0.131:/cinder_1 |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 6ee6fda1f58946b1ac1389900d103752 |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2018-10-25T18:50:46.000000 |
| user_id | 6cdfc17b1e0047abb531e7efb301ab47 |
| volume_type | ontap-nfs |
+--------------------------------+--------------------------------------------+
It can be seen that the volume v1 maps to the backend pool associated with the ontap-nfs backend [cluster1@ontap-nfs#192.168.0.131:/cinder_1].
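As an aside, the NFS driver backs each Cinder volume with a file named volume-<id> on the export, so you can spot it from the Cinder node (assuming the default nfs_mount_point_base of /var/lib/cinder/mnt):
[root@openstack ~]# ls /var/lib/cinder/mnt/*/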
Let us assume that, well after the volume was created, the OpenStack admin wants a higher degree of performance for the volume, which can be offered by the disks backing the cinder_2 FlexVol. Problem? Not really. This is where Cinder migration comes into the picture. The existing v1 volume can be migrated to the cluster1@ontap-nfs#192.168.0.131:/cinder_2 pool.
The volume migration is initiated by issuing the command:
[root@openstack ~]# cinder migrate v1 cluster1@ontap-nfs#192.168.0.131:/cinder_2
Let’s break this down and understand the arguments that are used: v1 is the volume to migrate, and cluster1@ontap-nfs#192.168.0.131:/cinder_2 is the destination in Cinder’s host@backend#pool notation, where cluster1 is the backend_host set in cinder.conf, ontap-nfs is the backend name, and 192.168.0.131:/cinder_2 is the destination pool.
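The client also accepts a couple of optional flags worth knowing about: --force-host-copy forces the generic, host-assisted copy mechanism instead of any driver-assisted optimization, and --lock-volume prevents the volume from being deleted or retyped while the migration is in flight. For example:
[root@openstack ~]# cinder migrate --force-host-copy True --lock-volume True v1 cluster1@ontap-nfs#192.168.0.131:/cinder_2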
The sequence of steps triggered by this command looks something like this. First, a new volume with a temporary ID is created on the destination pool and the data is copied over; while the copy is in progress, cinder list shows both volumes:
+--------------------------------------+-----------+------+------+-------------+---------+-------------+
| ID | Status | Name | Size | Volume Type |Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+---------+-------------+
| 3b4ef5ab-ba10-41f2-8279-6433fe8cf9ef | available | v1 | 1 | ontap-nfs |false | |
| a6210964-8828-4d3e-a1cd-d7175a65f0ba | available | v1 | 1 | ontap-nfs |false | |
+--------------------------------------+-----------+------+------+-------------+---------+-------------+
Once the copy completes, the data on the source pool is deleted:
+--------------------------------------+-----------+------+------+-------------+---------+-------------+
| ID | Status | Name | Size | Volume Type |Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+---------+-------------+
| 3b4ef5ab-ba10-41f2-8279-6433fe8cf9ef | deleting | v1 | 1 | ontap-nfs |false | |
| a6210964-8828-4d3e-a1cd-d7175a65f0ba | available | v1 | 1 | ontap-nfs |false | |
+--------------------------------------+-----------+------+------+-------------+---------+-------------+
Finally, the new volume assumes the original volume’s ID, and a single v1 remains:
+--------------------------------------+-----------+------+------+-------------+---------+-------------+
| ID | Status | Name | Size | Volume Type |Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+---------+-------------+
| 3b4ef5ab-ba10-41f2-8279-6433fe8cf9ef | available | v1 | 1 | ontap-nfs |false | |
+--------------------------------------+-----------+------+------+-------------+---------+-------------+
To confirm the migration has been successfully completed, let’s take a look at v1’s details.
[root@openstack cinder-volume]# cinder show v1
+--------------------------------+---------------------------------------------+
| Property | Value |
+--------------------------------+---------------------------------------------+
| attached_servers | [] |
| attachment_ids | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2018-10-25T18:50:45.000000 |
| description | None |
| encrypted | False |
| id | 3b4ef5ab-ba10-41f2-8279-6433fe8cf9ef |
| metadata | |
| migration_status | success |
| multiattach | False |
| name | v1 |
| os-vol-host-attr:host | cluster1@ontap-nfs#192.168.0.131:/cinder_2 |
| os-vol-mig-status-attr:migstat | success |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 6ee6fda1f58946b1ac1389900d103752 |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2018-10-25T18:54:37.000000 |
| user_id | 6cdfc17b1e0047abb531e7efb301ab47 |
| volume_type | ontap-nfs |
+--------------------------------+---------------------------------------------+
It can be seen from above that the ‘host’ attribute for ‘v1’ now reflects the cinder_2 backend pool. The ‘migration_status’ indicates that the migration was successful.
But what if the volume is attached? And what if the requirement is to migrate an attached volume from an NFS backend to an iSCSI backend? That is also possible! The accompanying video demonstrates the migration of a volume that was created on an NFS backend, attached to a compute instance, and then migrated to an iSCSI backend. Since this requires a conversion of the Cinder volume’s type, the migration was initiated with cinder retype, as seen in the video.
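For a volume of type ontap-nfs, such a retype-driven migration would look something like this; --migration-policy on-demand tells Cinder it may migrate the volume if the current backend cannot satisfy the new type (a sketch based on the types above, not a command taken from the video):
[root@openstack ~]# cinder retype --migration-policy on-demand v1 ontap-iscsi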
Stay tuned for part 3, our upcoming post on intercluster migration, where migration across clusters will be discussed in detail.