Tech ONTAP Blogs
"This article was orignally published on Nov 7, 2018"
The system under consideration for this blog post is an OpenStack Queens deployment with two ONTAP clusters serving as backends. This is the cinder.conf file in use:
[DEFAULT]
debug = True
.
.
enabled_backends = ontap-iscsi,ontap-nfs,c2-nfs

[ontap-nfs]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = ontap-nfs
netapp_server_hostname = 192.168.0.101
netapp_server_port = 80
netapp_storage_protocol = nfs
netapp_storage_family = ontap_cluster
netapp_login = admin
netapp_password = <cluster_admin_password>
netapp_vserver = openstack
nfs_shares_config = /etc/cinder/nfs_shares
nas_secure_file_permissions = false
nas_secure_file_operations = false
nfs_mount_options = lookupcache=pos
backend_host = cluster1

[ontap-iscsi]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
netapp_login = admin
netapp_password = <cluster_admin_password>
volume_backend_name = ontap-iscsi
netapp_server_hostname = 192.168.0.101
netapp_server_port = 80
netapp_transport_type = http
netapp_vserver = openstack_iscsi
netapp_storage_protocol = iscsi
netapp_storage_family = ontap_cluster
backend_host = cluster1

[c2-nfs]
volume_driver = cinder.volume.drivers.netapp.common.NetAppDriver
volume_backend_name = c2-nfs
netapp_server_hostname = 192.168.0.102
netapp_server_port = 80
netapp_storage_protocol = nfs
netapp_storage_family = ontap_cluster
netapp_login = admin
netapp_password = <cluster_admin_password>
netapp_vserver = c2_openstack
nfs_shares_config = /etc/cinder/c2_nfs_shares
nas_secure_file_permissions = false
nas_secure_file_operations = false
nfs_mount_options = lookupcache=pos
backend_host = cluster2
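For reference, the two nfs_shares_config files named above are simply plain-text lists of the NFS exports that each backend may use as pools. Reconstructed from the pool listing shown below, they would contain:

[root@openstack ~]# cat /etc/cinder/nfs_shares
192.168.0.131:/cinder_1
192.168.0.131:/cinder_2
[root@openstack ~]# cat /etc/cinder/c2_nfs_shares
192.168.0.133:/cinder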
Three backends are defined, which together expose four Cinder pools (the ontap-nfs backend serves two FlexVols):
[root@openstack ~]# cinder get-pools
+----------+------------------------------------+
| Property | Value                              |
+----------+------------------------------------+
| name     | cluster1@ontap-iscsi#iscsi_flexvol |
+----------+------------------------------------+
+----------+--------------------------------------------+
| Property | Value                                      |
+----------+--------------------------------------------+
| name     | cluster1@ontap-nfs#192.168.0.131:/cinder_1 |
+----------+--------------------------------------------+
+----------+--------------------------------------------+
| Property | Value                                      |
+----------+--------------------------------------------+
| name     | cluster1@ontap-nfs#192.168.0.131:/cinder_2 |
+----------+--------------------------------------------+
+----------+---------------------------------------+
| Property | Value                                 |
+----------+---------------------------------------+
| name     | cluster2@c2-nfs#192.168.0.133:/cinder |
+----------+---------------------------------------+
In addition, the following Cinder volume types have been created:
[root@openstack ~]# cinder extra-specs-list
+--------------------------------------+-------------+----------------------------------------+
| ID                                   | Name        | extra_specs                            |
+--------------------------------------+-------------+----------------------------------------+
| 90d40cef-b69c-4afa-b150-881d06a075e7 | ontap-nfs   | {'volume_backend_name': 'ontap-nfs'}   |
| b651f04e-3f38-427e-99c7-06134e708ba3 | ontap-iscsi | {'volume_backend_name': 'ontap-iscsi'} |
| 59e178af-290c-4e3d-ac0b-4ef9ddabb109 | c2-nfs      | {'volume_backend_name': 'c2-nfs'}      |
+--------------------------------------+-------------+----------------------------------------+
As you can see, there are three volume types, each of which pins its volumes to a specific backend through the volume_backend_name extra spec.
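For reference, volume types like these can be created with the cinder client; the following sketch uses the names from the listing above (the IDs are generated automatically at creation time):

[root@openstack ~]# cinder type-create c2-nfs
[root@openstack ~]# cinder type-key c2-nfs set volume_backend_name=c2-nfs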
Let us now create a volume of the ontap-nfs volume type, named v1 and 1 GB in size.
[root@openstack ~]# cinder create --volume-type ontap-nfs --name v1 1
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2018-10-29T19:19:49.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 2665cdf9-95bd-4126-a507-a8e1445b739a |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | v1                                   |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 6ee6fda1f58946b1ac1389900d103752     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | 2018-10-29T19:19:49.000000           |
| user_id                        | 6cdfc17b1e0047abb531e7efb301ab47     |
| volume_type                    | ontap-nfs                            |
+--------------------------------+--------------------------------------+
Once the volume has been created and is in the ‘available’ status, let’s take a closer look at v1’s details.
[root@openstack ~]# cinder show v1
+--------------------------------+--------------------------------------------+
| Property                       | Value                                      |
+--------------------------------+--------------------------------------------+
| attached_servers               | []                                         |
| attachment_ids                 | []                                         |
| availability_zone              | nova                                       |
| bootable                       | false                                      |
| consistencygroup_id            | None                                       |
| created_at                     | 2018-10-29T19:19:49.000000                 |
| description                    | None                                       |
| encrypted                      | False                                      |
| id                             | 2665cdf9-95bd-4126-a507-a8e1445b739a       |
| metadata                       |                                            |
| migration_status               | None                                       |
| multiattach                    | False                                      |
| name                           | v1                                         |
| os-vol-host-attr:host          | cluster1@ontap-nfs#192.168.0.131:/cinder_1 |
| os-vol-mig-status-attr:migstat | None                                       |
| os-vol-mig-status-attr:name_id | None                                       |
| os-vol-tenant-attr:tenant_id   | 6ee6fda1f58946b1ac1389900d103752           |
| replication_status             | None                                       |
| size                           | 1                                          |
| snapshot_id                    | None                                       |
| source_volid                   | None                                       |
| status                         | available                                  |
| updated_at                     | 2018-10-29T19:19:50.000000                 |
| user_id                        | 6cdfc17b1e0047abb531e7efb301ab47           |
| volume_type                    | ontap-nfs                                  |
+--------------------------------+--------------------------------------------+
The volume v1 maps to the pool cluster1@ontap-nfs#192.168.0.131:/cinder_1, which belongs to the ontap-nfs backend (Cinder reports pools in the host@backend#pool format).
In part two of this series, we saw the migration of a volume from the cinder_1 FlexVol on cluster1 to the cinder_2 FlexVol on the same cluster. Now we are migrating a volume to a backend that lives on a different cluster. We have already seen that the c2-nfs backend is present on another ONTAP cluster (named cluster2). The objective is to migrate the v1 volume from a backend on cluster1 to a backend on cluster2. Since the c2-nfs backend is associated with a different volume type (creatively named c2-nfs), a retype of the v1 volume must be initiated. Essentially, this instructs Cinder to create a volume of type c2-nfs and copy the data from the source volume into it.
The volume migration is initiated by issuing the command:
[root@openstack ~]# cinder retype --migration-policy on-demand v1 c2-nfs
The command is composed of the following arguments:
- --migration-policy on-demand: allows Cinder to migrate the volume's data when the retype requires moving it to a different backend (the default policy, never, would fail the retype instead)
- v1: the volume to be retyped
- c2-nfs: the volume type to convert the volume to
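Note that a retype is only needed here because the destination backend is tied to a different volume type. When the source and destination pools share the same volume type, as in the intra-cluster move of part two, a plain migration achieves the data movement without changing the type; a hypothetical example targeting the cinder_2 pool would be:

[root@openstack ~]# cinder migrate v1 cluster1@ontap-nfs#192.168.0.131:/cinder_2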
The sequence of steps triggered by this command looks something like this: Cinder first creates a new volume of the c2-nfs type on the destination backend and copies the data across; the source volume is then deleted; finally, the new volume takes over the original volume's ID, so existing references to v1 remain valid. Running cinder list at each stage captures the sequence:
[root@openstack ~]# cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 79a4065b-106f-4afb-a54b-19920ae82f3d | available | v1   | 1    | c2-nfs      | false    |             |
| 2665cdf9-95bd-4126-a507-a8e1445b739a | available | v1   | 1    | ontap-nfs   | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

[root@openstack ~]# cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 2665cdf9-95bd-4126-a507-a8e1445b739a | deleting  | v1   | 1    | ontap-nfs   | false    |             |
| 79a4065b-106f-4afb-a54b-19920ae82f3d | available | v1   | 1    | c2-nfs      | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+

[root@openstack ~]# cinder list
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| ID                                   | Status    | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
| 2665cdf9-95bd-4126-a507-a8e1445b739a | available | v1   | 1    | c2-nfs      | false    |             |
+--------------------------------------+-----------+------+------+-------------+----------+-------------+
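These listings were captured by re-running cinder list during the retype; assuming the standard watch utility is available on the controller, the transition can also be followed live:

[root@openstack ~]# watch -n 2 cinder list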
To confirm the migration has been successfully completed, let’s look at v1’s details.
[root@openstack ~]# cinder show v1
+--------------------------------+---------------------------------------+
| Property                       | Value                                 |
+--------------------------------+---------------------------------------+
| attached_servers               | []                                    |
| attachment_ids                 | []                                    |
| availability_zone              | nova                                  |
| bootable                       | false                                 |
| consistencygroup_id            | None                                  |
| created_at                     | 2018-10-29T19:19:49.000000            |
| description                    | None                                  |
| encrypted                      | False                                 |
| id                             | 2665cdf9-95bd-4126-a507-a8e1445b739a  |
| metadata                       |                                       |
| migration_status               | success                               |
| multiattach                    | False                                 |
| name                           | v1                                    |
| os-vol-host-attr:host          | cluster2@c2-nfs#192.168.0.133:/cinder |
| os-vol-mig-status-attr:migstat | success                               |
| os-vol-mig-status-attr:name_id | None                                  |
| os-vol-tenant-attr:tenant_id   | 6ee6fda1f58946b1ac1389900d103752      |
| replication_status             | None                                  |
| size                           | 1                                     |
| snapshot_id                    | None                                  |
| source_volid                   | None                                  |
| status                         | available                             |
| updated_at                     | 2018-10-29T20:11:18.000000            |
| user_id                        | 6cdfc17b1e0047abb531e7efb301ab47      |
| volume_type                    | c2-nfs                                |
+--------------------------------+---------------------------------------+
As seen above, the 'host' attribute for v1 now reflects the backend present on cluster2, and 'migration_status' indicates that the migration was successful. Note that the volume has kept its original ID (2665cdf9-...), even though its data now lives on a different cluster.
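As an additional sanity check outside of OpenStack, the backing file can be inspected on the destination export, since NFS-based Cinder backends store each volume as a file named volume-<id> on the share. A sketch, using an arbitrary /mnt/verify mount point:

[root@openstack ~]# mkdir -p /mnt/verify
[root@openstack ~]# mount -t nfs 192.168.0.133:/cinder /mnt/verify
[root@openstack ~]# ls -lh /mnt/verify/volume-2665cdf9-95bd-4126-a507-a8e1445b739a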
It is interesting to observe that the volume does not necessarily have to be in the "available" state before attempting to migrate across clusters. This video shows the migration of a Cinder volume that is attached to a compute instance, with an active workload writing data to the volume throughout the migration process. The volume is moved from an Element cluster to an ONTAP cluster.
NetApp is committed to fostering open source solutions. You can learn more about our contributions by joining our Discord channel.