OpenStack Discussions

Copy offload workflow unsuccessful (Return code 13)

takeshik
5,908 Views

I tried the copy offload feature with the "cinder create --image-id <image id>" command.

The Glance image store (source) and the Cinder volume (destination) are on different NFS exports.

# glance image-list

+--------------------------------------+-------------------+-------------+------------------+-----------+--------+

| ID                                   | Name              | Disk Format | Container Format | Size      | Status |

+--------------------------------------+-------------------+-------------+------------------+-----------+--------+

| 0aa78555-affe-4293-9c59-e2e85ece71ee | CentOS_65         | qcow2       | bare             | 344457216 | active |

| 3358307d-36c2-4a11-97fa-9e76e536841a | cirros-032-x86_64 | qcow2       | bare             | 13167616  | active |

| bf158476-ac0a-4bfc-a679-2a52afe7d2b7 | fedora-19-x86_64  | qcow2       | bare             | 239534080 | active |

| 30186104-64ae-404e-b833-9890d3f6685d | fedora-20-x86_64  | qcow2       | bare             | 210829312 | active |

+--------------------------------------+-------------------+-------------+------------------+-----------+--------+

Command executed:

cinder create --image-id 30186104-64ae-404e-b833-9890d3f6685d 5 --display-name Glance2Volume000

volume.log

2014-08-19 15:03:39.808 11638 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'5548c424732047ca99b29a6fd0aca7df', 'tenant': u'63bba5d3297c4b38af6699e7a06a0468', 'user_identity': u'5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -'}

2014-08-19 15:03:40.082 11638 INFO cinder.volume.flows.manager.create_volume [req-be800531-2aa7-47aa-b4b2-517315084da9 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Volume 118812eb-0c05-4e0e-92a2-05160301cacf: being created using CreateVolumeFromSpecTask._create_from_image with specification: {'status': u'creating', 'image_location': (u'file:///glance/images/30186104-64ae-404e-b833-9890d3f6685d', [{u'url': u'file:///glance/images/30186104-64ae-404e-b833-9890d3f6685d', u'metadata': {u'mount_point': u'/glance/images', u'type': u'nfs', u'share_location': u'nfs://10.130.208.55/images'}}]), 'volume_size': 5, 'volume_name': u'volume-118812eb-0c05-4e0e-92a2-05160301cacf', 'image_id': u'30186104-64ae-404e-b833-9890d3f6685d', 'image_service': <cinder.image.glance.GlanceImageService object at 0x415d250>, 'image_meta': {'status': u'active', 'name': u'fedora-20-x86_64', 'deleted': None, 'container_format': u'bare', 'created_at': datetime.datetime(2014, 8, 6, 1, 3, 43, tzinfo=<iso8601.iso8601.Utc object at 0x2fed650>), 'disk_format': u'qcow2', 'updated_at': datetime.datetime(2014, 8, 6, 1, 3, 46, tzinfo=<iso8601.iso8601.Utc object at 0x2fed650>), 'id': u'30186104-64ae-404e-b833-9890d3f6685d', 'owner': u'63bba5d3297c4b38af6699e7a06a0468', 'min_ram': 0, 'checksum': u'1ec332a350e0a839f03c967c1c568623', 'min_disk': 0, 'is_public': None, 'deleted_at': None, 'properties': {}, 'size': 210829312}}

2014-08-19 15:03:40.085 11638 INFO cinder.volume.drivers.netapp.nfs [req-be800531-2aa7-47aa-b4b2-517315084da9 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Checking image clone 30186104-64ae-404e-b833-9890d3f6685d from glance share.

2014-08-19 15:03:40.104 11638 INFO cinder.brick.remotefs.remotefs [req-be800531-2aa7-47aa-b4b2-517315084da9 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Already mounted: /var/lib/cinder/mnt/7a2ad54f8ce5ac4c4f092e6cb1f1b39b

2014-08-19 15:03:40.115 11638 INFO cinder.brick.remotefs.remotefs [req-be800531-2aa7-47aa-b4b2-517315084da9 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Already mounted: /var/lib/cinder/mnt/ea9ebfcad8827c2df164a5792194d80e

2014-08-19 15:03:40.124 11638 INFO cinder.brick.remotefs.remotefs [req-be800531-2aa7-47aa-b4b2-517315084da9 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Already mounted: /var/lib/cinder/mnt/5fa24de47b0ced340706db7fde94ab64

2014-08-19 15:03:40.134 11638 INFO cinder.brick.remotefs.remotefs [req-be800531-2aa7-47aa-b4b2-517315084da9 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Already mounted: /var/lib/cinder/mnt/980e55e4b6bcbba7ff1f0adc1016e52e

2014-08-19 15:03:40.788 11638 INFO cinder.volume.drivers.netapp.nfs [req-be800531-2aa7-47aa-b4b2-517315084da9 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] casted to 10.130.208.55:/vol2_dedup

2014-08-19 15:03:41.539 11638 ERROR cinder.volume.drivers.netapp.nfs [req-be800531-2aa7-47aa-b4b2-517315084da9 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Copy offload workflow unsuccessful. Unexpected error while running command.

Command: /etc/cinder/coppyoffload/na_copyoffload 10.130.208.55 10.130.208.55 /images/30186104-64ae-404e-b833-9890d3f6685d /vol2_dedup/3b7c81f2-8c53-4e6a-b5b5-08112b039c1b

Exit code: 13

Stdout: 'Program exiting with return code 13.\n'

Stderr: ''

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs Traceback (most recent call last):

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/netapp/nfs.py", line 1124, in copy_image_to_volume

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs     self._try_copyoffload(context, volume, image_service, image_id)

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/netapp/nfs.py", line 1150, in _try_copyoffload

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs     image_id)

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs   File "/usr/lib/python2.7/site-packages/cinder/volume/drivers/netapp/nfs.py", line 1235, in _copy_from_img_service

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs     check_exit_code=0)

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs   File "/usr/lib/python2.7/site-packages/cinder/utils.py", line 136, in execute

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs     return processutils.execute(*cmd, **kwargs)

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs   File "/usr/lib/python2.7/site-packages/cinder/openstack/common/processutils.py", line 173, in execute

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs     cmd=' '.join(cmd))

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs ProcessExecutionError: Unexpected error while running command.

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs Command: /etc/cinder/coppyoffload/na_copyoffload 10.130.208.55 10.130.208.55 /images/30186104-64ae-404e-b833-9890d3f6685d /vol2_dedup/3b7c81f2-8c53-4e6a-b5b5-08112b039c1b

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs Exit code: 13

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs Stdout: 'Program exiting with return code 13.\n'

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs Stderr: ''

2014-08-19 15:03:41.539 11638 TRACE cinder.volume.drivers.netapp.nfs

2014-08-19 15:03:57.855 11639 INFO cinder.volume.manager [-] Updating volume status

2014-08-19 15:04:03.184 11638 INFO cinder.volume.drivers.netapp.nfs [req-be800531-2aa7-47aa-b4b2-517315084da9 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Copied image to volume volume-118812eb-0c05-4e0e-92a2-05160301cacf using regular download.

2014-08-19 15:04:03.185 11638 INFO cinder.volume.drivers.netapp.nfs [req-be800531-2aa7-47aa-b4b2-517315084da9 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Registering image in cache img-cache-30186104-64ae-404e-b833-9890d3f6685d

2014-08-19 15:04:03.187 11638 INFO cinder.volume.drivers.netapp.nfs [req-be800531-2aa7-47aa-b4b2-517315084da9 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Cloning from cache to destination img-cache-30186104-64ae-404e-b833-9890d3f6685d

2014-08-19 15:04:03.746 11638 INFO cinder.volume.manager [-] Updating volume status

2014-08-19 15:04:03.835 11638 INFO cinder.brick.remotefs.remotefs [-] Already mounted: /var/lib/cinder/mnt/7a2ad54f8ce5ac4c4f092e6cb1f1b39b

2014-08-19 15:04:03.848 11638 INFO cinder.brick.remotefs.remotefs [-] Already mounted: /var/lib/cinder/mnt/ea9ebfcad8827c2df164a5792194d80e

2014-08-19 15:04:03.861 11638 INFO cinder.volume.flows.manager.create_volume [req-be800531-2aa7-47aa-b4b2-517315084da9 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Volume volume-118812eb-0c05-4e0e-92a2-05160301cacf (118812eb-0c05-4e0e-92a2-05160301cacf): created successfully

2014-08-19 15:04:03.865 11638 INFO cinder.brick.remotefs.remotefs [-] Already mounted: /var/lib/cinder/mnt/5fa24de47b0ced340706db7fde94ab64

2014-08-19 15:04:03.880 11638 INFO cinder.brick.remotefs.remotefs [-] Already mounted: /var/lib/cinder/mnt/980e55e4b6bcbba7ff1f0adc1016e52e

2014-08-19 15:04:04.395 11638 INFO cinder.volume.drivers.netapp.ssc_utils [-] Running stale ssc refresh job for server: 10.130.202.180 and vserver demo-nfs-svm

2014-08-19 15:04:04.800 11638 INFO cinder.volume.drivers.netapp.ssc_utils [-] Successfully completed stale refresh job for server: 10.130.202.180 and vserver demo-nfs-svm

2014-08-19 15:04:49.644 11638 WARNING cinder.context [-] Arguments dropped when creating context: {'user': u'5548c424732047ca99b29a6fd0aca7df', 'tenant': u'63bba5d3297c4b38af6699e7a06a0468', 'user_identity': u'5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -'}

2014-08-19 15:04:49.675 11638 INFO cinder.volume.manager [req-36c66b15-fd07-48e5-8ab8-852ab09186c8 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] volume 118812eb-0c05-4e0e-92a2-05160301cacf: deleting

2014-08-19 15:04:49.708 11638 INFO cinder.brick.remotefs.remotefs [req-36c66b15-fd07-48e5-8ab8-852ab09186c8 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Already mounted: /var/lib/cinder/mnt/ea9ebfcad8827c2df164a5792194d80e

2014-08-19 15:04:49.839 11638 INFO cinder.volume.manager [req-36c66b15-fd07-48e5-8ab8-852ab09186c8 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] volume 118812eb-0c05-4e0e-92a2-05160301cacf: deleted successfully

2014-08-19 15:04:49.872 11638 INFO cinder.volume.manager [req-36c66b15-fd07-48e5-8ab8-852ab09186c8 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Updating volume status

2014-08-19 15:04:49.884 11638 INFO cinder.brick.remotefs.remotefs [req-36c66b15-fd07-48e5-8ab8-852ab09186c8 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Already mounted: /var/lib/cinder/mnt/7a2ad54f8ce5ac4c4f092e6cb1f1b39b

2014-08-19 15:04:49.896 11638 INFO cinder.brick.remotefs.remotefs [req-36c66b15-fd07-48e5-8ab8-852ab09186c8 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Already mounted: /var/lib/cinder/mnt/ea9ebfcad8827c2df164a5792194d80e

2014-08-19 15:04:49.908 11638 INFO cinder.brick.remotefs.remotefs [req-36c66b15-fd07-48e5-8ab8-852ab09186c8 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Already mounted: /var/lib/cinder/mnt/5fa24de47b0ced340706db7fde94ab64

2014-08-19 15:04:49.919 11638 INFO cinder.brick.remotefs.remotefs [req-36c66b15-fd07-48e5-8ab8-852ab09186c8 5548c424732047ca99b29a6fd0aca7df 63bba5d3297c4b38af6699e7a06a0468 - - -] Already mounted: /var/lib/cinder/mnt/980e55e4b6bcbba7ff1f0adc1016e52e

2014-08-19 15:04:50.472 11638 INFO cinder.volume.drivers.netapp.ssc_utils [-] Running stale ssc refresh job for server: 10.130.202.180 and vserver demo-nfs-svm

2014-08-19 15:04:50.837 11638 INFO cinder.volume.drivers.netapp.ssc_utils [-] Successfully completed stale refresh job for server: 10.130.202.180 and vserver demo-nfs-svm

I would like to get details on return code 13.

I noticed that the NetApp copy offload tool is executed as the cinder user, but I have not found the solution for this error yet.

The Glance image files have mode 640, so the cinder user cannot read them.

Incidentally, the copy offload succeeded after I changed those permissions to 644.

Takeshi.K


glenng

Hi Takeshi,

Error code 13 is the standard Linux errno EACCES, meaning access was denied due to file permissions. The copy offload tool runs as the configured "OpenStack" user, and that process does not have access rights to your backend storage volume. Changing file access permissions on the backend storage volume will correct this issue.
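The errno-13-to-EACCES mapping can be confirmed quickly from Python's errno module (a generic sketch, nothing specific to the copy offload tool):

```shell
# Confirm that errno 13 is EACCES / "Permission denied" on Linux.
python3 -c 'import errno, os; print(errno.EACCES, os.strerror(errno.EACCES))'
# 13 Permission denied
```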

Let's say that the configured OpenStack user is "glenng". When the copy offload tool runs, it is trying to access the storage as user "glenng". One would need to look at the backend volume configuration to determine if "glenng" will have access rights to the volume. If you see:

    -rwx------  root root  volume-as7836485-4846733

then it is clear that "glenng" would not be able to access the volume: in this example it is owned by root, and no one else has access rights. Changing the rights to allow "glenng" access will solve this problem.
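The permission check and workaround can be sketched with a scratch directory standing in for the real Glance store (the directory and file name here are only illustrative):

```shell
# Scratch directory standing in for /var/lib/glance/images.
IMAGE_DIR=$(mktemp -d)
IMG="$IMAGE_DIR/30186104-64ae-404e-b833-9890d3f6685d"
touch "$IMG"

# Glance creates image files with mode 640: group/other cannot read,
# so a process running as the cinder user gets EACCES (errno 13).
chmod 640 "$IMG"
stat -c '%a' "$IMG"

# Workaround: make the file world-readable so the copy offload tool,
# running as the cinder user, can open it.
chmod 644 "$IMG"
stat -c '%a' "$IMG"
```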

Cheers,

Glenn Gobeli

takeshik

Thank you.

Now I understand this error code.

When I run "cinder create --image-id <image id>" to try the copy offload, the "OpenStack" user (the "cinder" user on my system) accesses the image file in the image service directory (e.g. /var/lib/glance/images).

These image files are created with permission 640 by the "glance" user when they are stored by the image service.

For example;

-rw-r----- 1 glance glance 344457216  8月  2 19:06 6fb20bf7-fe5f-4bdc-8a74-6b985e3fbbe4

-rw-r----- 1 glance glance 261030400  7月 26 11:51 74e36b24-7b98-444a-b7b1-d9026bb7abac

It is clear that the "cinder" user cannot read these files in this example, and therefore cannot use the copy offload tool either.

Now I know this is the cause of error code 13.

I can avoid this error by changing the permissions manually with chmod. However, it is not practical to change the permissions of the image files by hand every time; I think this is an issue that should be solved by the tools themselves.

What do you think?

Takeshi.K

takeshik

This was a known bug in Glance -

https://bugs.launchpad.net/glance/+bug/1264302
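For reference, my understanding is that the fix for that bug added a configuration option to Glance's filesystem store so image files can be created with readable permissions in the first place (option name as I recall it from the fix; please verify the name and default against your release's glance-api.conf reference):

```ini
# glance-api.conf (sketch, assumption: filesystem store backend)
[DEFAULT]
# Create image files with this mode so the cinder user can read them
# without a manual chmod after every upload.
filesystem_store_file_perm = 644
```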

Thank you.
