OpenStack Discussions
Hello guys!
I am using the NetApp Cinder NFS driver in the Kilo release. When I try to execute a live migration I receive the error:
"f3d3e2ac4e420fb3951984998c3f66 3336ff9d2c464d78b39d8458af37c866 - - -] [instance: 02c278ba-b79a-49bb-bfd0-b5e7c5667679] Live Migration failure: Unsafe migration: Migration may lead to data corruption if disks use cache != none"
What is the problem?
All the best !
Hi Rodrigo,
Are you certain that /var/lib/nova/instances is hosted on a shared file system such as NFS or Gluster? In most distributions it is hosted on local storage by default, and live migration will not work in that case.
See here if you're using RHEL-OSP7 (Kilo):
Upstream documentation:
http://docs.openstack.org/admin-guide-cloud/compute-configuring-migrations.html
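A quick way to verify the shared-storage requirement is to create a marker file under the instances path on one compute node and check that it appears on another. A minimal sketch of that check; on a real deployment the directory would be /var/lib/nova/instances and the two halves would run on different nodes, while here a temp directory stands in so the flow is self-contained:

```shell
# Stand-in for the shared NFS mount (use /var/lib/nova/instances on real nodes).
DIR=$(mktemp -d)

# Run this on the source compute node:
touch "$DIR/.migration_check"

# Run this on the destination compute node; if the marker is visible,
# the directory really is shared storage.
if [ -e "$DIR/.migration_check" ]; then
  echo "shared storage OK"
else
  echo "NOT shared: live migration will fail"
fi

rm -rf "$DIR"
```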
Hello, dcain!
Exactly: /var/lib/nova/instances is on local disks on each compute node.
At the moment I have 3 compute nodes. I will try to create a new NFS share and mount the same share on all nodes.
I will post the result here in a few days.
Thank you for the answer!
Hello, dcain!
I changed /var/lib/nova/instances on all compute nodes, but I still have the same problem.
Error:
<179>Feb 29 17:43:53 node-3 nova-compute 2016-02-29 17:43:53.943 4270 ERROR nova.virt.libvirt.driver [req-dc1a9aba-5997-47ee-9e05-d28884b6fdfb 05f3d3e2ac4e420fb3951984998c3f66 3336ff9d2c464d78b39d8458af37c866 - - -] [instance: 02c278ba-b79a-49bb-bfd0-b5e7c5667679] Live Migration failure: Unsafe migration: Migration may lead to data corruption if disks use cache != none
<179>Feb 29 17:43:54 node-3 nova-compute 2016-02-29 17:43:54.440 4270 ERROR nova.virt.libvirt.driver [req-dc1a9aba-5997-47ee-9e05-d28884b6fdfb 05f3d3e2ac4e420fb3951984998c3f66 3336ff9d2c464d78b39d8458af37c866 - - -] [instance: 02c278ba-b79a-49bb-bfd0-b5e7c5667679] Migration operation has aborted
Config:
root@node-3:~# cat /etc/nova/nova.conf | grep live_migration_flag
#live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE, VIR_MIGRATE_PEER2PEER, VIR_MIGRATE_LIVE, VIR_MIGRATE_TUNNELLED
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST
Mount Point:
root@node-3:~# df -h
Filesystem                        Size  Used Avail Use% Mounted on
udev                               16G   12K   16G   1% /dev
tmpfs                             3.2G   16M  3.2G   1% /run
/dev/dm-1                          27G  2.7G   23G  11% /
none                              4.0K     0  4.0K   0% /sys/fs/cgroup
none                              5.0M     0  5.0M   0% /run/lock
none                               16G     0   16G   0% /run/shm
none                              100M     0  100M   0% /run/user
/dev/sda3                         196M   44M  143M  24% /boot
/dev/mapper/vm-nova                47G   33M   47G   1% /var/lib/nova
10.250.3.253:/vol/openstack_inst   10G  1.2M   10G   1% /var/lib/nova/instances
10.250.3.253:/vol/openstack       750G  647G  104G  87% /var/lib/cinder/mnt/36f09148d1243e64fbe04aa72c497a2e
10.250.3.253:/vol/openstack       750G  647G  104G  87% /var/lib/nova/mnt/36f09148d1243e64fbe04aa72c497a2e
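One way to see exactly which cache mode libvirt is objecting to is the cache= attribute on each disk's <driver> element in the domain XML; on a compute node that would be `virsh dumpxml <domain> | grep cache`. A sketch of that check follows; the XML below is an inlined sample, not taken from this deployment, so the grep itself is demonstrable:

```shell
# libvirt's unsafe-migration check inspects the cache= attribute on each
# disk's <driver> element. On a real compute node you would run:
#   virsh dumpxml <domain> | grep cache
# Inlined sample XML (hypothetical) stands in for the dumpxml output here.
XML=$(mktemp)
cat > "$XML" <<'EOF'
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
</disk>
EOF
grep -o "cache='[a-z]*'" "$XML"   # any value other than 'none' trips the check
rm -f "$XML"
```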
How are the permissions set up on the "10.250.3.253:/vol/openstack_inst" directory? It should be 0744, owned by nova:nova, if memory serves. If this is a distribution with SELinux enabled, are the appropriate allowances configured?
How about debug logs for nova-scheduler and the associated nova-compute logs on the two hosts in question? Any insight there?
Hello, dcain!
We are here again 😉
The permissions are correct:
root@node-3:~# ls -lh
total 4.0K
-rwx------ 1 root root 611 Feb 11 01:41 openrc
root@node-3:~# ls -la /var/lib/nova/
total 8
drwxr-xr-x 13 nova nova  148 Feb 29 17:25 .
drwxr-xr-x 57 root root 4096 Feb  1 22:04 ..
drwxr-xr-x  2 nova nova    6 Oct 19 12:52 buckets
drwxr-xr-x  6 nova nova   58 Feb  1 21:59 CA
drwx------  2 nova nova   33 Feb 29 17:25 .cache
drwxr-xr-x  3 nova nova   45 Feb  2 13:07 .cinderclient
drwxr-xr-x  2 nova nova    6 Oct 19 12:52 images
drwxr-xr-x 27 nova nova 4096 Mar  7 12:55 instances
drwxr-xr-x  2 nova nova    6 Oct 19 12:52 keys
drwxr-xr-x  3 nova nova   45 Feb  2 12:48 mnt
drwxr-xr-x  2 nova nova    6 Oct 19 12:52 networks
drwx------  2 nova nova   71 Feb  1 22:02 .ssh
drwxr-xr-x  2 nova nova    6 Oct 19 12:52 tmp
root@node-3:~# ls -la /var/lib/nova/instances/
total 108
drwxr-xr-x 27 nova nova 4096 Mar  7 12:55 .
drwxr-xr-x 13 nova nova  148 Feb 29 17:25 ..
drwxrwxr-x  2 nova nova 4096 Feb 29 17:30 02c278ba-b79a-49bb-bfd0-b5e7c5667679
drwxr-xr-x  2 nova nova 4096 Mar  4 22:51 0d7cbb6d-afd7-4699-acde-ea3c14c70856
drwxrwxr-x  2 nova nova 4096 Feb 29 17:27 1118709c-8b9e-46da-afe0-e0516c12a54e
drwxr-xr-x  2 nova nova 4096 Mar  5 00:25 136113f4-5986-47fc-b894-0a1d0adbb654
drwxrwxr-x  2 nova nova 4096 Feb 29 17:28 14ae38ad-a1d3-4087-89b6-f0c67f8fa729
drwxr-xr-x  2 nova nova 4096 Mar  7 12:52 16839f46-78ac-4e16-bfb5-eab3e9dd528a
drwxrwxr-x  2 nova nova 4096 Feb 29 17:32 25b923cb-9b5f-4d61-acba-0f3d7c3ff2a0
drwxr-xr-x  2 nova nova 4096 Mar  4 19:16 2cbe37cd-914e-43cb-a71e-48643c0eab28
drwxrwxr-x  2 nova nova 4096 Feb 29 17:34 30cab62a-6f2f-4ce3-a19d-7f6e2a5409f6
drwxrwxr-x  2 nova nova 4096 Feb 29 17:29 3df4ce9d-958c-4e6f-91e2-8c0244714b0c
drwxrwxr-x  2 nova nova 4096 Feb 29 17:31 4820728f-4ed4-4df7-b19e-45b1bc37f168
drwxr-xr-x  2 nova nova 4096 Mar  4 23:33 4d4a864e-3a55-4acf-9f87-b5fd3c011d85
drwxr-xr-x  2 nova nova 4096 Mar  3 12:53 518f29bd-f765-491c-b77a-02b08a3226e1
drwxr-xr-x  2 nova nova 4096 Mar  7 12:52 556ed1b6-49b8-4cb0-b108-76f86e019648
drwxr-xr-x  2 nova nova 4096 Mar  7 12:53 666b92f9-6cce-41ce-8ddd-894dd891a28b
drwxrwxr-x  2 nova nova 4096 Feb 29 17:35 6ca16ad8-3c9c-47a9-83a4-9bd69fae95bf
drwxr-xr-x  2 nova nova 4096 Mar  7 12:52 6fb5b4cb-1bc8-4b6a-8896-330e7cf190d2
drwxrwxr-x  2 nova nova 4096 Feb 29 17:33 8b15ab17-0e36-4cc4-b033-27471d81b572
drwxr-xr-x  2 nova nova 4096 Mar  7 12:55 b11f8803-f5d1-4896-badf-24a432aa61f9
drwxr-xr-x  2 nova nova 4096 Mar  7 12:53 _base
drwxr-xr-x  2 nova nova 4096 Mar  4 19:40 ca6141e5-3b7f-4e1a-8316-1252163df3f8
-rw-r--r--  1 nova nova  129 Mar  8 21:03 compute_nodes
drwxr-xr-x  2 nova nova 4096 Mar  7 12:53 d03f9276-178f-4379-8fa6-67e0dd6ff30d
drwxr-xr-x  2 nova nova 4096 Mar  7 12:53 db4cac52-b284-49a1-8b0d-91d93e852a68
drwxr-xr-x  2 nova nova 4096 Mar  7 12:53 ff830ed3-6d27-4b13-8de8-690733c186c7
drwxr-xr-x  2 nova nova 4096 Mar  7 12:51 locks
root@node-3:~# df -h
Filesystem                        Size  Used Avail Use% Mounted on
udev                               16G   12K   16G   1% /dev
tmpfs                             3.2G   18M  3.2G   1% /run
/dev/dm-1                          27G  2.8G   23G  11% /
none                              4.0K     0  4.0K   0% /sys/fs/cgroup
none                              5.0M     0  5.0M   0% /run/lock
none                               16G     0   16G   0% /run/shm
none                              100M     0  100M   0% /run/user
/dev/sda3                         196M   44M  143M  24% /boot
/dev/mapper/vm-nova                47G   33M   47G   1% /var/lib/nova
10.250.3.253:/vol/openstack_inst   31G   16G   16G  51% /var/lib/nova/instances
10.250.3.253:/vol/openstack       1.7T  706G  974G  43% /var/lib/cinder/mnt/36f09148d1243e64fbe04aa72c497a2e
10.250.3.253:/vol/openstack       1.7T  706G  974G  43% /var/lib/nova/mnt/36f09148d1243e64fbe04aa72c497a2e
root@node-3:~#
If I add the VIR_MIGRATE_UNSAFE flag in nova.conf I can execute a live migration, but I am NOT sure this option is safe.
root@node-3:~# cat /etc/nova/nova.conf | grep live_migration_flag
live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST
# live_migration_flag=VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_UNSAFE
root@node-3:~#
Best regards !
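For what it's worth, VIR_MIGRATE_UNSAFE only tells libvirt to skip its safety check; it does not make the migration safe. The check can usually be satisfied instead by forcing cache=none on the instance disks with nova's disk_cachemodes option. A sketch, with the caveats that the section placement is assumed for Kilo and that cache=none relies on O_DIRECT support on the NFS mount:

```ini
# /etc/nova/nova.conf (sketch; section placement assumed for Kilo)
[libvirt]
# Force cache=none on file- and block-backed disks so libvirt's
# "cache != none" safety check passes without VIR_MIGRATE_UNSAFE.
# Note: cache=none needs the NFS mount to support O_DIRECT.
disk_cachemodes = file=none,block=none
```

After changing this, nova-compute must be restarted, and running instances pick up the new cache mode only after a hard reboot or rebuild.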