I am using SolidFire with an OpenStack environment running the Queens release to provision volumes for volume-backed instances. I am looking for a way to back up bootable root volumes and restore them to new OpenStack instances.
Backups and restores work and appear to be consistent when restoring to the original volume. For example, if I back up an instance's root volume, delete some files, shut down the instance, restore the volume from backup, and boot the instance, the files are restored and the instance is functional. However, I am having trouble restoring to new volumes to create a new instance in the event the original was removed.
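For reference, the backup-and-restore-to-original workflow I described is roughly the following (volume, backup, and instance identifiers are placeholders, not my actual IDs):

```shell
# Back up the instance's root volume (--force allows backing up
# an attached, in-use volume)
openstack volume backup create --name root-backup --force <root-volume-id>

# Later: stop the instance, restore the backup onto the original
# volume, then boot again
openstack server stop <instance-id>
openstack volume backup restore <backup-id> <root-volume-id>
openstack server start <instance-id>
```

Restoring this way, to the same volume the backup was taken from, works consistently for me.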
I have tried two approaches: creating a new instance and then restoring a backup to its root volume, and creating a blank volume, restoring a backup to it, and then creating an instance from that volume. In both cases the instance will boot, but the data appears to be corrupted; there are corruption and checksum errors in the kernel log and syslog. In an attempt to resolve the corruption I attached the volume to another instance as a second disk and ran disk checks on the partitions. This modified the partition and fixed some errors; however, if I chroot into the root partition of the restored volume, most commands still fail with errors like "invalid ELF header" and "cannot execute binary file: Exec format error".
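For clarity, the second approach (restore to a blank volume, then boot from it) looks roughly like this; the size, names, and flavor here are placeholders and the restore target must be at least as large as the original volume:

```shell
# Create a blank volume the same size as (or larger than) the original
openstack volume create --size 20 restored-root

# Restore the backup into the new volume
openstack volume backup restore <backup-id> restored-root

# Boot a new volume-backed instance from the restored volume
openstack server create --flavor m1.small --volume restored-root new-instance
```

It is instances created this way that show the checksum errors and corrupted binaries described above.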
When this volume is detached and used as a bootable root volume to create a new instance, it boots only to the GRUB rescue prompt.
I would appreciate suggestions or advice from anyone with experience running an OpenStack environment with volume-backed instances on SolidFire. Suggestions for other backup solutions are also welcome.