OpenStack Discussions

Cinder manage command

docana

Hi,

 

I am trying to migrate volumes from one cloud to another and I would like to try the cinder manage command. Has anyone used this method before? From my understanding, you would need to transfer the volume to the corresponding tenant after it has been managed by the new cloud.
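
For reference, the transfer step I have in mind would look roughly like this (placeholder IDs; transfer-accept is run with the destination tenant's credentials):

$ cinder transfer-create <managed-volume-id>
$ cinder transfer-accept <transfer-id> <auth-key>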

Following this https://netapp.github.io/openstack-deploy-ops-guide/mitaka/content/cinder.examples.cinder_cli.html#cinder.examples.cinder_cg_cli, I copy the volume file from the old NFS share mounted at /mnt to the new share managed by Cinder in the new cloud (Mitaka), and finally get the following error after executing the cinder manage command:

$ cp /mnt/volume-6d43ccd2-9beb-44c2-bbc6-2c7814e74675 /var/lib/cinder/mnt/778040e8f6b2e949f5735fa6e2009346

$ df -hP /var/lib/cinder/mnt/778040e8f6b2e949f5735fa6e2009346
Filesystem                                     Size  Used Avail Use% Mounted on
nfshost:/vol_data01                              2T  172G    2T  10% /var/lib/cinder/mnt/778040e8f6b2e949f5735fa6e2009346

$ cinder get-pools
+----------+----------------------------------------------+
| Property | Value                                        |
+----------+----------------------------------------------+
| name     | hostgroup@tripleo_netapp#nfshost:/vol_data01 |
+----------+----------------------------------------------+

cinder manage --id-type source-name --name test3 --description "First cinder manage test." \
hostgroup@tripleo_netapp#nfshost:/vol_data01 10.0.0.1:/vol_data01/volume-6d43ccd2-9beb-44c2-bbc6-2c7814e74675
+--------------------------------+------------------------------------------------------------------------+
|            Property            |                                 Value                                  |
+--------------------------------+------------------------------------------------------------------------+
|          attachments           |                                   []                                   |
|       availability_zone        |                                  nova                                  |
|            bootable            |                                 false                                  |
|      consistencygroup_id       |                                  None                                  |
|           created_at           |                       2017-02-07T15:02:40.000000                       |
|          description           |                       First cinder manage test.                        |
|           encrypted            |                                 False                                  |
|               id               |                  6c1e887f-6dfb-4825-a56a-f1d329dbeac8                  |
|            metadata            |                                   {}                                   |
|        migration_status        |                                  None                                  |
|          multiattach           |                                 False                                  |
|              name              |                                 test3                                  |
|     os-vol-host-attr:host      |               hostgroup@tripleo_netapp#nfshost:/vol_data01             |
| os-vol-mig-status-attr:migstat |                                  None                                  |
| os-vol-mig-status-attr:name_id |                                  None                                  |
|  os-vol-tenant-attr:tenant_id  |                    b851335000f044ee83c6aa0fcf424e0e                    |
|       replication_status       |                                  None                                  |
|              size              |                                   0                                    |
|          snapshot_id           |                                  None                                  |
|          source_volid          |                                  None                                  |
|             status             |                                 error                                  |
|           updated_at           |                       2017-02-07T15:02:40.000000                       |
|            user_id             |                    8c5c4ba6b24641e28a231091fad8ebe9                    |
|          volume_type           |                                  None                                  |
+--------------------------------+------------------------------------------------------------------------+

2017-02-07 15:02:40.635 2292 ERROR oslo_messaging.rpc.dispatcher ManageExistingInvalidReference: Manage existing
volume failed due to invalid backend reference {u'source-name': u'10.0.0.1:/vol_data01/volume-6d43ccd2-9beb-44c2-bbc6-2c7814e74675'}:
Volume not found on configured storage backend.

# mount -o ro 10.0.0.1:/vol_data01 /mnt

# ls -lahtr /mnt/volume-6d43ccd2-9beb-44c2-bbc6-2c7814e74675
-rw-r--r--. 1 cinder cinder 10G Feb  3 16:07 /mnt/volume-6d43ccd2-9beb-44c2-bbc6-2c7814e74675

So the file is there, but cinder complains about the volume not being there. I have checked permissions and SELinux contexts. Are there any special permissions to be set on the vfiler side?
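
For completeness, this is roughly how I checked (the second command is just an extra sanity check, reading the file as the cinder user):

$ ls -lZ /mnt/volume-6d43ccd2-9beb-44c2-bbc6-2c7814e74675
$ sudo -u cinder head -c1 /mnt/volume-6d43ccd2-9beb-44c2-bbc6-2c7814e74675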

 

Cheers,

David

5 REPLIES

Bishoy

Since you already have the path mounted, try dropping the IP from the path, just for the sake of testing.

 

Also, for the approach you are currently using: if the source volume has an NFS extra spec, it is worth adding --volume-type.
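
Something like this (the type name is just a placeholder for whatever NFS volume type you have defined):

$ cinder manage --id-type source-name --name test3 --volume-type <your-nfs-type> \
  hostgroup@tripleo_netapp#nfshost:/vol_data01 10.0.0.1:/vol_data01/volume-6d43ccd2-9beb-44c2-bbc6-2c7814e74675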

 

Best Regards,

Bishoy

SumitK

I think you need to check your mounts (as Bishoy pointed out). The mount point /var/lib/cinder/mnt/778040e8f6b2e949f5735fa6e2009346 should correspond to 10.0.0.1:/vol_data01/, and Cinder should have mounted it. I don't see that in your pools list, though.
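
A quick way to see what Cinder actually has mounted:

$ mount | grep /var/lib/cinder/mnt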


I think the result of your cinder get-pools should be "hostgroup@tripleo_netapp#10.0.0.1:/vol_data01" instead of "hostgroup@tripleo_netapp#nfshost:/vol_data01". 
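
If I remember right, the NFS driver names each mount point after the md5 hash of the share string from shares.conf, so you can check which share a mount point belongs to:

$ echo -n "10.0.0.1:/vol_data01" | md5sum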

 

In case you missed it, there is an NFS example for Cinder Manage on this page (search for the text "In this section we import a Data ONTAP NFS file by specifying its path"): https://netapp.github.io/openstack-deploy-ops-guide/mitaka/content/cinder.examples.cinder_cli.html#cinder.examples.cinder_cg_cli

docana

Hi guys, thanks for your responses.

 

I tried dropping the IP from the path as @Bishoy suggested:

 

cinder manage --id-type source-name --name test3 \
--description "First cinder manage test." \
"hostgroup@tripleo_netapp#nfshost:/vol_data01" \
/vol_data01/volume-6d43ccd2-9beb-44c2-bbc6-2c7814e74675

And this is the error in the cinder volume.log logfile (just the final lines):

 

2017-02-09 11:49:06.615 32661 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/netapp/dataontap/nfs_base.py", line 837, in _convert_vol_ref_share_name_to_share_ip
2017-02-09 11:49:06.615 32661 ERROR oslo_messaging.rpc.dispatcher     vol_ref_share_ip = na_utils.resolve_hostname(share_split[0])
2017-02-09 11:49:06.615 32661 ERROR oslo_messaging.rpc.dispatcher   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/netapp/utils.py", line 132, in resolve_hostname
2017-02-09 11:49:06.615 32661 ERROR oslo_messaging.rpc.dispatcher     res = socket.getaddrinfo(hostname, None)[0]
2017-02-09 11:49:06.615 32661 ERROR oslo_messaging.rpc.dispatcher gaierror: [Errno -2] Name or service not known

So, it seems the identifier parameter needs a hostname/IP from which to fetch the data. The mount point /var/lib/cinder/mnt/778040e8f6b2e949f5735fa6e2009346 is indeed for 10.0.0.1:/vol_data01/, and Cinder has it already mounted. You don't see it in my pools list because I used the IP instead of the hostname (nfshost), but either of those results in the same error.
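
You can reproduce the driver's lookup outside Cinder with the same call it makes in utils.py (here with the hostname, just as an illustration):

$ python -c "import socket; print socket.getaddrinfo('nfshost', None)[0]"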

 

I was already using the examples @SumitK provided, and they look quite simple: the host parameter should be the output of get-pools, and the identifier parameter should be the full NFS path to the unmanaged volume file. But for some reason I'm getting the error "Volume not found on configured storage backend".

 

I have seen that Newton offers a cinder manageable-list command (http://docs.openstack.org/cli-reference/cinder.html), which would probably be helpful for troubleshooting, but unfortunately we are running Mitaka.
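
From my reading of the docs it would be something like this (untested here, since it needs Newton):

$ cinder manageable-list hostgroup@tripleo_netapp#nfshost:/vol_data01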

SumitK

Apologies for the delayed response. Can you please check your shares.conf and make sure that whatever you have specified in there matches your mount and manage commands? shares.conf is the file that is usually passed as the value of "nfs_shares_config" in your cinder.conf file. For the purpose of testing, please use the IP address instead of the hostname and see if that works for you.
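
For reference, a minimal version of that wiring (file paths are the usual defaults; adjust to your deployment):

# in the backend section of /etc/cinder/cinder.conf:
[tripleo_netapp]
nfs_shares_config = /etc/cinder/shares.conf

# /etc/cinder/shares.conf, one share per line:
10.0.0.1:/vol_data01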

docana (ACCEPTED SOLUTION)

Hi @SumitK, thanks again for your response. We had a round-robin DNS configuration for the NFS server hostname, and that seems to upset the cinder manage command, although we have not had any noticeable problems with the cinder volume service. With this setup we intended to use the two LIFs that we have configured for the nfshost SVM in a sort of dynamic way.

 

$ host nfshost
nfshost.openstack has address 10.0.0.1
nfshost.openstack has address 10.0.0.2

 

When I changed the DNS to resolve to just one IP address, the same command worked perfectly.

 

$ host nfshost
nfshost.openstack has address 10.0.0.1

cinder manage --id-type source-name --name test3 --description "First cinder manage test." \
hostgroup@tripleo_netapp#nfshost:/vol_data01 nfshost:/vol_data01/volume-6d43ccd2-9beb-44c2-bbc6-2c7814e74675

 

Would you then balance the load between LIFs, which live on different controllers, by using the shares.conf file like this?

 

10.0.0.1:/vol_data01
10.0.0.2:/vol_data02
10.0.0.1:/vol_data03
10.0.0.2:/vol_data04
...