Support multiple hypervisors on each virtualization node #3259
Comments
@dann1 I would love to see this feature come true. It would be awesome to have multiple hypervisors converge on the same host without the problems you detailed before. I understand that, to prevent these problems, the team established these dependencies on the hypervisor binaries so that multiple hypervisors cannot be installed on the same host, but this is something the competition already did, and I am sure that OpenNebula could do it as well. Having KVM, LXC and Firecracker on the same host in OpenNebula: I hope to see it, at least for ON. Keep up the hard work 💪
I saw this good write-up: https://opennebula.io/blog/experiences/using-lxd-and-kvm-on-the-same-host/ Then I tried to install Firecracker on a RHEL 8.9 system (an OpenNebula KVM node). Let's see:

```
rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
Retrieving https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
Verifying...                          ################################# [100%]
dnf install opennebula-node-firecracker
Updating Subscription Management repositories.
```
Hello, I'm aiming to launch KVM guests on ONE 6.8 LXC testbed hardware to complement the current ONE LXC limitations (unprivileged containers prevent desktop containers)... It fails at the installation of opennebula-node-kvm:
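One quick way to see why the package refuses to install is to inspect its declared conflicts and what is already on the node. This is a hypothetical troubleshooting sketch, not from the original report; the exact output depends on the configured repositories:

```shell
# List what opennebula-node-kvm declares conflicts with
dnf repoquery --conflicts opennebula-node-kvm

# Show which opennebula-node-* packages are already installed
rpm -qa | grep opennebula-node
```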
A single-host testbed for ONE makes a lot of sense when trying out ONE KVM, LXC and Firecracker, for example. It would greatly ease the evaluation work prior to ONE Open Cluster adoption. Open had better be ... open :) I strongly support enabling this possibility.
Partial, simple support would be perfectly OK as a first stage, for evaluation only (e.g. requiring several names for the same IP address).
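The workaround hinted at above (several names for the same IP) could be sketched as follows, assuming standard `onehost` CLI usage; the hostnames `node01-kvm`/`node01-lxc` and the address are made up for illustration:

```shell
# Hypothetical sketch: register one physical host twice under different
# names (aliases of the same IP), once per hypervisor driver.
echo "192.0.2.10 node01-kvm node01-lxc" >> /etc/hosts

# Each registration selects a different IM/VMM driver pair
onehost create node01-kvm --im kvm --vm kvm
onehost create node01-lxc --im lxc --vm lxc
```

This keeps the scheduler's per-host driver model intact, at the cost of ONE seeing the same capacity twice.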
Description
A Linux OS can run KVM and LXD simultaneously, acting as a virtualization node that deploys both containers and VMs. Currently, there are some limitations to using a node as both a KVM and an LXD hypervisor in OpenNebula:

- the driver files under /var/tmp/one/ get overwritten when the node is added for the 2nd time, which could undo tinkering an admin made on the virtualization node.

Use case
Properly set up a node as both a KVM and an LXD virtualization node.
Interface Changes
There could be a lot of changes, since the VMM driver that runs when deploying a container is selected based on the destination node, not on whether the VM template states that the VM is a container or a regular VM. Wild VMs would also need to be classified.
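To make that concrete, a VM template could carry the hypervisor choice explicitly. This is a hypothetical fragment illustrating the behavior proposed in this issue, not current OpenNebula semantics; the attribute values are made up:

```
# Hypothetical VM template fragment: the VMM driver would be chosen
# from a required HYPERVISOR attribute instead of the destination host.
NAME       = "alpine-container"
HYPERVISOR = "lxc"
CPU        = 1
MEMORY     = 256
DISK       = [ IMAGE = "Alpine Linux" ]
```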
Additional Context
Proxmox treats its virtualization nodes this way, clearly differentiating a container from a VM. In the case of OpenNebula, it would just be a matter of marking the hypervisor setting in the template as a required field and selecting the vmm_drivers based on it.

Progress Status