I am trying to add our OpenStack environment to our OCI.
The data source shows the following error: Internal error: None of the hypervisors builds KVM successfully
In the log I can see the following error: <IP> - Could not communicate with the device: Auth fail
For the KVM sudo user we are using a key file. The private key file is located on the server, and its path is added in the data source configuration.
Does someone have experience with that?
I have already added every host to the known_hosts file.
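For reference, one way to populate the known_hosts file for all hypervisors in a single step is with ssh-keyscan (a sketch; the hostnames are placeholders for your compute nodes):

```shell
# Fetch and hash the host keys of each hypervisor and append them
# to the acquisition user's known_hosts file
ssh-keyscan -H compute01 compute02 >> ~/.ssh/known_hosts
```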
I think OCI may be looking for an OpenSSH-formatted key, not a .ppk.
If the file worked with PuTTY, it is probably in PuTTY .ppk format.
It may be worth using PuTTYgen to convert the key to OpenSSH format.
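On Linux, the command-line puttygen (from the putty-tools package) can do the conversion; on Windows, the PuTTYgen GUI offers "Export OpenSSH key" under the Conversions menu. A minimal sketch, where key.ppk and key.pem are placeholder filenames:

```shell
# Convert a PuTTY .ppk private key to OpenSSH format
puttygen key.ppk -O private-openssh -o key.pem
# Restrict permissions, as OpenSSH-style tooling expects
chmod 600 key.pem
```

The converted key.pem is what you would then point the OCI data source configuration at.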
I have the same issue as described above, but I do see correct login attempts on the KVM server (OpenStack Compute node).
I'm running OnCommand (R) Insight 7.3.2 on all servers (OCI server, OCI DWH, and one Acquisition Unit), and am using superuser/password authentication (not key files).
Still I'm getting the error: "Internal error: None of the hypervisors builds KVM successfully"
Thank you for any hints,
My issue is still open. I have an active support case on that.
Which distribution do you use for OpenStack? We are using SuSE Cloud 5.
Do you have a support case reference or a link to it? I would be interested if any further information is available.
Our OpenStack is set up on Ubuntu 16.04 LTS, NetApp OCI is running on Red Hat 7.4 and Red Hat 7.5 hosts.
We configured the controller node, the requested OpenStack admin user and related KVM user with sudo rights on the OpenStack nodes.
I can see the login attempts on the Compute node, and the "Test" step in the OpenStack Controller data source configuration completes successfully.
In the logs I see the related error messages:
I think your issue is different - your error message implies that OCI is successfully authenticating, but is not receiving the prompt it expects.
Can you send me a data source error report?
I did some additional investigation this afternoon.
I will send you the error report via PM accordingly.
Thank you and best regards,
#1. I think the latest OpenStack releases have completely changed their performance API infrastructure, and OCI does not yet support that stuff. But that is not the problem at hand.
#2. You have self-diagnosed the inventory problem completely - OCI is firing
sudo pvdisplay --colon
and is not getting the response it expects.
Correct - we fired up another compute host and are getting a result for the command "pvdisplay -c".
We are now investigating the root cause of getting no result for the same command on the previous host...
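A quick way to reproduce what OCI runs, independently of the data source, is to fire the same command over SSH as the configured KVM user (a sketch; the user and host names are placeholders):

```shell
# Run the exact inventory command OCI issues, as the acquisition user
ssh oci-svc@computenode01 'sudo pvdisplay --colon'
```

If this returns nothing or prompts for a sudo password, the problem is on the host side (LVM packages, sudoers configuration) rather than in OCI.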
Thank you so far for your support!
After fixing missing installation packages, NetApp OCI throws the following error message when we try to poll information from VMs that have volumes attached:
"Unsupported KVM instance disk type BLOCK with disk ID /dev/disk/by-path/pci-0000:1a:00.0-fc-0x500507680c111a9a-lun-0"
OCI is firing perfectly using the following command:
root@computenode02:/home/pfeifan1# pvdisplay -c
Do you have any hints regarding this issue?
Thank you and best regards,
I was on holiday in Italy, and unfortunately lost track of this thread.
#1. Can you elaborate on what packages were not installed that caused the error?
#2. If you send me a data source error report for the current failure, I can take a look
We received a patch from NetApp team which fixed the issue.
I guess it's already part of the newest release.
Thank you for your support and best regards,