From e4c459d4ff7f757f80e218a364504ad9dd25d751 Mon Sep 17 00:00:00 2001 From: =?UTF-8?q?Tino=20V=C3=A1zquez?= Date: Thu, 11 Jul 2024 15:00:13 +0200 Subject: [PATCH] M #-: Remove Firecracker --- .../frontend_installation/overview.rst | 2 +- .../release_notes/platform_notes.rst | 31 +-- source/legacy_components/index.rst | 1 - .../common_node/apparmor.txt | 7 - .../common_node/epel.txt | 22 -- .../common_node/networking.txt | 25 -- .../common_node/next_steps.txt | 6 - .../common_node/passwordless_ssh.txt | 133 ----------- .../common_node/repositories.txt | 1 - .../common_node/selinux.txt | 16 -- .../common_node/storage.txt | 3 - .../firecracker_node/firecracker_driver.rst | 213 ------------------ .../firecracker_node_installation.rst | 121 ---------- .../firecracker_node/index.rst | 12 - .../firecracker_node/overview.rst | 21 -- .../open_cluster_deployment/index.rst | 12 - .../cloud_architecture_design.rst | 10 +- 17 files changed, 10 insertions(+), 626 deletions(-) delete mode 100644 source/legacy_components/open_cluster_deployment/common_node/apparmor.txt delete mode 100644 source/legacy_components/open_cluster_deployment/common_node/epel.txt delete mode 100644 source/legacy_components/open_cluster_deployment/common_node/networking.txt delete mode 100644 source/legacy_components/open_cluster_deployment/common_node/next_steps.txt delete mode 100644 source/legacy_components/open_cluster_deployment/common_node/passwordless_ssh.txt delete mode 100644 source/legacy_components/open_cluster_deployment/common_node/repositories.txt delete mode 100644 source/legacy_components/open_cluster_deployment/common_node/selinux.txt delete mode 100644 source/legacy_components/open_cluster_deployment/common_node/storage.txt delete mode 100644 source/legacy_components/open_cluster_deployment/firecracker_node/firecracker_driver.rst delete mode 100644 source/legacy_components/open_cluster_deployment/firecracker_node/firecracker_node_installation.rst delete mode 100644 source/legacy_components/open_cluster_deployment/firecracker_node/index.rst delete mode 100644 source/legacy_components/open_cluster_deployment/firecracker_node/overview.rst delete mode 100644 source/legacy_components/open_cluster_deployment/index.rst diff --git a/source/installation_and_configuration/frontend_installation/overview.rst b/source/installation_and_configuration/frontend_installation/overview.rst index e88cf6d0ef..fb7a080880 100644 --- a/source/installation_and_configuration/frontend_installation/overview.rst +++ b/source/installation_and_configuration/frontend_installation/overview.rst @@ -13,7 +13,7 @@ Before reading this chapter make sure you are familiar with the :ref:`Architectu The aim of this chapter is to give you a quick-start guide to deploying OpenNebula. This is the simplest possible installation, but it is also the foundation for a more complex setup. First, you should go through the :ref:`Database Setup ` section, especially if you expect to use OpenNebula for production. Then move on to the configuration of :ref:`OpenNebula Repositories `, from which you'll install the required components. And finally, proceed with the :ref:`Front-end Installation ` section. You'll end up running a fully featured OpenNebula Front-end. -After reading this chapter, you can go on to add the :ref:`KVM `, :ref:`LXC `, :ref:`Firecracker ` hypervisor nodes, or :ref:`vCenter `. +After reading this chapter, you can go on to add the :ref:`KVM ` or :ref:`LXC ` hypervisor nodes. 
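As a quick preview of that next step (a minimal sketch only; ``node01`` is an illustrative hostname and the node packages are assumed to be installed already), registering a KVM Node from the Front-end CLI looks like this:

.. prompt:: bash $ auto

   # run as oneadmin on the Front-end; replace node01 with your node's hostname or IP
   $ onehost create node01 -i kvm -v kvm

   # after up to a minute the host should switch to the "on" state
   $ onehost list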
To scale from a single-host Front-end deployment to several hosts for better performance or reliability (HA), continue to the following chapters on :ref:`Large-scale Deployment `, :ref:`High Availability ` and :ref:`Data Center Federation `. diff --git a/source/intro_release_notes/release_notes/platform_notes.rst b/source/intro_release_notes/release_notes/platform_notes.rst index e359ce53d9..f15375611e 100644 --- a/source/intro_release_notes/release_notes/platform_notes.rst +++ b/source/intro_release_notes/release_notes/platform_notes.rst @@ -21,9 +21,9 @@ Front-End Components +--------------------------+--------------------------------------------------------+-------------------------------------------------------+ | AlmaLinux | 8, 9 | :ref:`Front-End Installation ` | +--------------------------+--------------------------------------------------------+-------------------------------------------------------+ -| Ubuntu Server | 20.04 (LTS), 22.04 (LTS) | :ref:`Front-End Installation ` | +| Ubuntu Server | 22.04 (LTS), 24.04 (LTS) | :ref:`Front-End Installation ` | +--------------------------+--------------------------------------------------------+-------------------------------------------------------+ -| Debian | 10, 11 | :ref:`Front-End Installation `.| +| Debian | 11, 12 | :ref:`Front-End Installation `.| | | | Not certified to manage VMware infrastructures | +--------------------------+--------------------------------------------------------+-------------------------------------------------------+ | MariaDB or MySQL | Version included in the Linux distribution | :ref:`MySQL Setup ` | @@ -62,9 +62,9 @@ KVM Nodes +--------------------------+---------------------------------------------------------+-----------------------------------------+ | AlmaLinux | 8, 9 | :ref:`KVM Driver ` | +--------------------------+---------------------------------------------------------+-----------------------------------------+ -| Ubuntu Server | 20.04 (LTS), 22.04 (LTS) | :ref:`KVM Driver ` | +| Ubuntu Server | 22.04 (LTS), 24.04 (LTS) | :ref:`KVM Driver ` | +--------------------------+---------------------------------------------------------+-----------------------------------------+ -| Debian | 10, 11 | :ref:`KVM Driver ` | +| Debian | 11, 12 | :ref:`KVM Driver ` | +--------------------------+---------------------------------------------------------+-----------------------------------------+ | KVM/Libvirt | Support for version included in the Linux distribution. | :ref:`KVM Node Installation ` | | | For RHEL the packages from ``qemu-ev`` are used. 
| | @@ -76,33 +76,15 @@ LXC Nodes +---------------+--------------------------------------------------------+-----------------------------------------+ | Component | Version | More information | +===============+========================================================+=========================================+ -| Ubuntu Server | 20.04 (LTS), 22.04 (LTS) | :ref:`LXC Driver ` | +| Ubuntu Server | 22.04 (LTS), 24.04 (LTS) | :ref:`LXC Driver ` | +---------------+--------------------------------------------------------+-----------------------------------------+ -| Debian | 10, 11 | :ref:`LXC Driver ` | +| Debian | 11, 12 | :ref:`LXC Driver ` | +---------------+--------------------------------------------------------+-----------------------------------------+ | AlmaLinux | 8, 9 | :ref:`LXC Driver ` | +---------------+--------------------------------------------------------+-----------------------------------------+ | LXC | Support for version included in the Linux distribution | :ref:`LXC Node Installation ` | +---------------+--------------------------------------------------------+-----------------------------------------+ -Firecracker Nodes --------------------------------------------------------------------------------- - -+--------------------------+-------------------------------------------------+----------------------------------+ -| Component | Version | More information | -+==========================+=================================================+==================================+ -| Red Hat Enterprise Linux | 8, 9 | :ref:`Firecracker Driver ` | -+--------------------------+-------------------------------------------------+----------------------------------+ -| AlmaLinux | 8, 9 | :ref:`Firecracker Driver ` | -+--------------------------+-------------------------------------------------+----------------------------------+ -| Ubuntu Server | 20.04 (LTS), 22.04 (LTS) | :ref:`Firecracker Driver ` | -+--------------------------+-------------------------------------------------+----------------------------------+ -| Debian | 10, 11 | :ref:`Firecracker Driver ` | -+--------------------------+-------------------------------------------------+----------------------------------+ -| KVM/Firecracker | Support for Firecracker and KVM versions | :ref:`Firecracker Node | -| | included in the Linux distribution. | Installation ` | -+--------------------------+-------------------------------------------------+----------------------------------+ - .. _context_supported_platforms: `Linux and Windows Contextualization Packages `__ @@ -264,4 +246,3 @@ Debian 10 and Ubuntu 18 Upgrade -------------------------------------------------------------------------------- When upgrading your nodes from Debian 10 or Ubuntu 18 you may need to update the opennebula sudoers file because of the */usr merge* feature implemented for Debian11/Ubuntu20. You can `find more information and a recommended work around in this issue `__. 
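For the sudoers note above, a rough way to spot stale command paths after the */usr merge* is to check that every absolute path referenced by the packaged rules still resolves. This is only a sketch and assumes the rules are installed under ``/etc/sudoers.d/opennebula*``; the linked issue remains the authoritative workaround:

.. prompt:: bash # auto

   # grep -hoE '/[a-z/]*bin/[A-Za-z0-9._-]+' /etc/sudoers.d/opennebula* | sort -u | while read -r cmd; do [ -e "$cmd" ] || echo "missing: $cmd"; done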
- diff --git a/source/legacy_components/index.rst b/source/legacy_components/index.rst index 695f096bc2..93bbebfbef 100644 --- a/source/legacy_components/index.rst +++ b/source/legacy_components/index.rst @@ -12,4 +12,3 @@ Legacy Components refers to functionality that is present in this current versio Ruby Sunstone VMware Integration - Open Cluster Deployment diff --git a/source/legacy_components/open_cluster_deployment/common_node/apparmor.txt b/source/legacy_components/open_cluster_deployment/common_node/apparmor.txt deleted file mode 100644 index ea96f028e1..0000000000 --- a/source/legacy_components/open_cluster_deployment/common_node/apparmor.txt +++ /dev/null @@ -1,7 +0,0 @@ -Depending on the type of OpenNebula deployment, the AppArmor can block some operations initiated by the OpenNebula Front-end, which results in a failure of the particular operation. It's **not recommended to disable** the apparmor on production environments, as it degrades the security of your server, but to investigate and workaround each individual problem, a good starting point is `AppArmor HowToUse Guide `__. The administrator might disable the AppArmor to temporarily workaround the problem or on non-production deployments the steps for disabling it can be found `here `__. - -.. note:: Depending on your OpenNebula deployment type, the following lines might be required at ``/etc/apparmor.d/abstractions/libvirt-qemu`` profile: - - .. prompt:: bash # auto - - # /var/lib/one/datastores/** rwk, \ No newline at end of file diff --git a/source/legacy_components/open_cluster_deployment/common_node/epel.txt b/source/legacy_components/open_cluster_deployment/common_node/epel.txt deleted file mode 100644 index 745894d07e..0000000000 --- a/source/legacy_components/open_cluster_deployment/common_node/epel.txt +++ /dev/null @@ -1,22 +0,0 @@ -Repository EPEL -^^^^^^^^^^^^^^^ - -OpenNebula depends on packages which aren't in the base distribution repositories. Execute one of the commands below (distinguished by the host platform) to configure access to additional `EPEL `__ (Extra Packages for Enterprise Linux) repository: - -**AlmaLinux** - -.. prompt:: bash # auto - - # yum -y install epel-release - -**RHEL 8** - -.. prompt:: bash # auto - - # rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm - -**RHEL 9** - -.. prompt:: bash # auto - - # rpm -ivh https://dl.fedoraproject.org/pub/epel/epel-release-latest-9.noarch.rpm diff --git a/source/legacy_components/open_cluster_deployment/common_node/networking.txt b/source/legacy_components/open_cluster_deployment/common_node/networking.txt deleted file mode 100644 index 3d4cc935a4..0000000000 --- a/source/legacy_components/open_cluster_deployment/common_node/networking.txt +++ /dev/null @@ -1,25 +0,0 @@ -.. image:: /images/network-02.png - :width: 30% - :align: center - -.. TODO - This needs rework or drop. - -Network connection is needed by the OpenNebula Front-end daemons to access, manage and monitor the Hosts, and to transfer the Image files. It is highly recommended to use a dedicated network for this purpose. - -There are various models for Virtual Networks, check the :ref:`Open Cloud Networking ` chapter to find the ones supported by OpenNebula. - -You may want to use the simplest network model, that corresponds to the :ref:`bridged ` driver. For this driver, you will need to setup a Linux bridge and include a physical device in the bridge. 
Later on, when defining the network in OpenNebula, you will specify the name of this bridge and OpenNebula will know that it should connect the VM to this bridge, thus giving it connectivity with the physical network device connected to the bridge. For example, a typical host with two physical networks, one for public IP addresses (attached to an ``eth0`` NIC for example) and the other for private virtual LANs (NIC ``eth1`` for example) should have two bridges: - -.. prompt:: bash # auto - - # ip link show type bridge - 4: br0: ... - 5: br1: ... - - # ip link show master br0 - 2: eth0: ... - - # ip link show master br1 - 3: eth1: ... - -.. note:: Remember that this is only required in the Hosts, not in the Front-end. Also remember that the exact name of the resources is not important (``br0``, ``br1``, etc...), however it's important that the bridges and NICs have the same name in all the Hosts. diff --git a/source/legacy_components/open_cluster_deployment/common_node/next_steps.txt b/source/legacy_components/open_cluster_deployment/common_node/next_steps.txt deleted file mode 100644 index 10e8d7345f..0000000000 --- a/source/legacy_components/open_cluster_deployment/common_node/next_steps.txt +++ /dev/null @@ -1,6 +0,0 @@ -Now, you can continue with: - -- configuring :ref:`Storage ` and :ref:`Networking ` -- exploring :ref:`Management and Operations ` guide - -to extend and control your cloud. diff --git a/source/legacy_components/open_cluster_deployment/common_node/passwordless_ssh.txt b/source/legacy_components/open_cluster_deployment/common_node/passwordless_ssh.txt deleted file mode 100644 index 2ae96ddd09..0000000000 --- a/source/legacy_components/open_cluster_deployment/common_node/passwordless_ssh.txt +++ /dev/null @@ -1,133 +0,0 @@ -The OpenNebula Front-end connects to the hypervisor Nodes using SSH. Following connection types are being established: - -- from Front-end to Front-end, -- from Front-end to hypervisor Node, -- from Front-end to hypervisor Node with another connection within to another Node (for migration operations), -- from Front-end to hypervisor Node with another connection within back to Front-end (for data copy back). - -.. important:: - - It must be ensured that Front-end and all Nodes **can connect to each other** over SSH without manual intervention. - -When OpenNebula server package is installed on the Front-end, a SSH key pair is automatically generated for the ``oneadmin`` user into ``/var/lib/one/.ssh/id_rsa`` and ``/var/lib/one/.ssh/id_rsa.pub``, the public key is also added into ``/var/lib/one/.ssh/authorized_keys``. It happens only if these files don't exist yet, existing files (e.g., leftovers from previous installations) are not touched! For new installations, the :ref:`default SSH configuration ` is placed for the ``oneadmin`` from ``/usr/share/one/ssh`` into ``/var/lib/one/.ssh/config``. - -To enable passwordless connections you must distribute the public key of the ``oneadmin`` user from the Front-end to ``/var/lib/one/.ssh/authorized_keys`` on all hypervisor Nodes. There are many methods to achieve the distribution of the SSH keys. Ultimately the administrator should choose a method; the recommendation is to use a configuration management system (e.g., Ansible or Puppet). In this guide, we are going to manually use SSH tools. - -**Since OpenNebula 5.12**. The Front-end runs a dedicated **SSH authentication agent** service which imports the ``oneadmin``'s private key on start. 
Access to this agent is delegated (forwarded) from the OpenNebula Front-end to the hypervisor Nodes for the operations which need to connect between Nodes or back to the Front-end. While the authentication agent is used, you **don't need to distribute private SSH key from Front-end** to hypervisor Nodes! - -To learn more about the SSH, read the :ref:`Advanced SSH Usage ` guide. - -A. Populate Host SSH Keys -------------------------- - -You should prepare and further manage the list of host SSH public keys of your nodes (a.k.a. ``known_hosts``) so that all communicating parties know the identity of the other sides. The file is located in ``/var/lib/one/.ssh/known_hosts`` and we can use the command ``ssh-keyscan`` to manually create it. It should be executed on your Front-end under the ``oneadmin`` user and copied on all your Nodes. - -.. important:: - - You'll need to update and redistribute file with host keys every time any host is reinstalled or its keys are regenerated. - -.. important:: - - If :ref:`default SSH configuration ` shipped with OpenNebula is used, the SSH client automatically accepts host keys on the first connection. That makes this step optional, as the ``known_hosts`` will be incrementally automatically generated on your infrastructure when the various connections happen. While this simplifies the initial deployment, it lowers the security of your infrastructure. We highly recommend to populate ``known_hosts`` on your infrastructure in controlled manner! - -Make sure you are logged in on your Front-end and run the commands as ``oneadmin``, e.g. by typing: - -.. prompt:: bash $ auto - - # su - oneadmin - -Create the ``known_hosts`` file by running following command with all the Node names including the Front-end as parameters: - -.. prompt:: bash $ auto - - $ ssh-keyscan ... >> /var/lib/one/.ssh/known_hosts - -B. Distribute Authentication Configuration ------------------------------------------- - -To enable passwordless login on your infrastructure, you must copy authentication configuration for ``oneadmin`` user from Front-end to all your nodes. We'll distribute only ``known_hosts`` created in the previous section and ``oneadmin``'s SSH public key from Front-end to your nodes. We **don't need to distribute oneadmin's SSH private key** from Front-end, as it'll be securely delegated from Front-end to hypervisor Nodes with the default **SSH authentication agent** service running on the Front-end. - -Make sure you are logged in on your Front-end and run the commands as ``oneadmin``, e.g. by typing: - -.. prompt:: bash $ auto - - # su - oneadmin - -Enable passwordless logins by executing the following command for each of your Nodes. For example: - -.. prompt:: bash $ auto - - $ ssh-copy-id -i /var/lib/one/.ssh/id_rsa.pub - $ ssh-copy-id -i /var/lib/one/.ssh/id_rsa.pub - $ ssh-copy-id -i /var/lib/one/.ssh/id_rsa.pub - -If the list of host SSH public keys was created in the previous section, distribute the ``known_hosts`` file to each of your Nodes. For example: - -.. prompt:: bash $ auto - - $ scp -p /var/lib/one/.ssh/known_hosts :/var/lib/one/.ssh/ - $ scp -p /var/lib/one/.ssh/known_hosts :/var/lib/one/.ssh/ - $ scp -p /var/lib/one/.ssh/known_hosts :/var/lib/one/.ssh/ - -Without SSH Authentication Agent (Optional) -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -.. warning:: - - **Not Recommended**. 
If you don't use integrated SSH authentication agent service (which is initially enabled) on the Front-end, you'll have to distribute also ``oneadmin``'s private SSH key on your hypervisor Nodes to allow connections among Nodes and from Nodes to Front-end. For security reasons, it's recommended to use SSH authentication agent service and **avoid this step**. - - If you need to distribute ``oneadmin``'s private SSH key on your Nodes, proceed with steps above and continue with following extra commands for all your Nodes. For example: - - .. prompt:: bash $ auto - - $ scp -p /var/lib/one/.ssh/id_rsa :/var/lib/one/.ssh/ - $ scp -p /var/lib/one/.ssh/id_rsa :/var/lib/one/.ssh/ - $ scp -p /var/lib/one/.ssh/id_rsa :/var/lib/one/.ssh/ - -C. Validate Connections ------------------------ - -You should verify that none of these connections (under user ``oneadmin``) fail and none require password: - -* from the Front-end to Front-end itself -* from the Front-end to all Nodes -* from all Nodes to all Nodes -* from all Nodes back to Front-end - -For example, execute on the Front-end: - -.. prompt:: bash $ auto - - # from Front-end to Front-end itself - $ ssh - $ exit - - # from Front-end to node, back to Front-end and to other nodes - $ ssh - $ ssh - $ exit - $ ssh - $ exit - $ ssh - $ exit - $ exit - - # from Front-end to node, back to Front-end and to other nodes - $ ssh - $ ssh - $ exit - $ ssh - $ exit - $ ssh - $ exit - $ exit - - # from Front-end to nodes and back to Front-end and other nodes - $ ssh - $ ssh - $ exit - $ ssh - $ exit - $ ssh - $ exit - $ exit diff --git a/source/legacy_components/open_cluster_deployment/common_node/repositories.txt b/source/legacy_components/open_cluster_deployment/common_node/repositories.txt deleted file mode 100644 index 4eda64c059..0000000000 --- a/source/legacy_components/open_cluster_deployment/common_node/repositories.txt +++ /dev/null @@ -1 +0,0 @@ -Refer to :ref:`OpenNebula Repositories ` guide to add the **Enterprise** and **Community** Edition software repositories. diff --git a/source/legacy_components/open_cluster_deployment/common_node/selinux.txt b/source/legacy_components/open_cluster_deployment/common_node/selinux.txt deleted file mode 100644 index 96c563965d..0000000000 --- a/source/legacy_components/open_cluster_deployment/common_node/selinux.txt +++ /dev/null @@ -1,16 +0,0 @@ -Depending on the type of OpenNebula deployment, the SELinux can block some operations initiated by the OpenNebula Front-end, which results in a failure of the particular operation. It's **not recommended to disable** the SELinux on production environments, as it degrades the security of your server, but to investigate and workaround each individual problem based on the `SELinux User's and Administrator's Guide `__. The administrator might disable the SELinux to temporarily workaround the problem or on non-production deployments by changing following line in ``/etc/selinux/config``: - -.. code-block:: bash - - SELINUX=disabled - -After the change, you have to reboot the machine. - -.. note:: Depending on your OpenNebula deployment type, the following may be required on your SELinux-enabled nodes: - - * package ``util-linux`` newer than 2.23.2-51 installed - * SELinux boolean ``virt_use_nfs`` enabled (with datastores on NFS): - - .. 
prompt:: bash # auto - - # setsebool -P virt_use_nfs on \ No newline at end of file diff --git a/source/legacy_components/open_cluster_deployment/common_node/storage.txt b/source/legacy_components/open_cluster_deployment/common_node/storage.txt deleted file mode 100644 index 7574bf27b2..0000000000 --- a/source/legacy_components/open_cluster_deployment/common_node/storage.txt +++ /dev/null @@ -1,3 +0,0 @@ -In default OpenNebula configuration, the local storage is used for storing Images and running Virtual Machines. This is enough for basic use and you don't need to take any extra steps now unless you want to deploy an advanced storage solution. - -Follow the :ref:`Open Cloud Storage Setup ` guide to learn how to use Ceph, NFS, LVM, etc. diff --git a/source/legacy_components/open_cluster_deployment/firecracker_node/firecracker_driver.rst b/source/legacy_components/open_cluster_deployment/firecracker_node/firecracker_driver.rst deleted file mode 100644 index 5aa4d82906..0000000000 --- a/source/legacy_components/open_cluster_deployment/firecracker_node/firecracker_driver.rst +++ /dev/null @@ -1,213 +0,0 @@ -.. _fcmg: - -================================================================================ -Firecracker Driver -================================================================================ - -Requirements -============ - -Firecracker requires a Linux kernel version >= 4.14 and the KVM kernel module. - -The specific information containing the supported platforms for Firecracker can be found in the `code repository `__. - -Considerations & Limitations -================================================================================ - -microVM CPU Usage --------------------------------------------------------------------------------- - -There are two main limitations regarding CPU usage for microVM: - -- OpenNebula deploys microVMs by using `Firecracker's Jailer `__. The Jailer takes care of increasing the security and isolation of the microVM and is the Firecracker's recommended way of deploying microVMs in production environments. The Jailer forces the microVM to be isolated in a NUMA node; OpenNebula takes care of evenly distributing microVMs among the available NUMA nodes. One of the following policies can be selected in ``/var/lib/one/remotes/etc/vmm/firecracker/firecrackerrc``: - - - ``rr``: schedule the microVMs in a RR way across NUMA nodes based on the VM id. - - ``random``: schedule the microVMs randomly across NUMA nodes. - -.. note:: Currently Firecracker only supports the isolation at NUMA level so OpenNebula NUMA & CPU pinning options are not available for Firecracker microVMs. - -- Firecracker microVMs support hyperthreading but in a very specific way. When hyperthreading is enabled the number of threads per core will be always two (e.g., with ``VCPU=8`` the VM will have four cores with two threads each). In order to enable hyperthreading for microVM, the ``TOPOLOGY/THREADS`` value can be used in the microVM template as shown below: - -.. code:: - - TOPOLOGY = [ - CORES = "4", - PIN_POLICY = "NONE", - SOCKETS = "1", - THREADS = "2" ] - -Storage Limitations --------------------------------------------------------------------------------- - -- ``qcow2`` images are **not supported**. Firecracker only supports ``raw`` format images. - -- The Firecracker driver is only compatible with :ref:`NFS/NAS Datastores ` and :ref:`Local Storage Datastores `. 
- -- As Firecracker Jailer performs a ``chroot`` operation under the microVM location, persistent images are not supported when using ``TM_MAD=shared``. In order to use persistent images when using ``TM_MAD=shared`` the system ``TM_MAD`` must be overwritten to use ``TM_MAD=ssh`` this can be easily achieved by adding ``TM_MAD_SYSTEM=ssh`` at the microVM template. More info on how to combine different ``TM_MADs`` can be found :ref:`here `. - -MicroVM Actions --------------------------------------------------------------------------------- - -Some of the :ref:`actions ` supported by OpenNebula for VMs and containers are not supported for microVM due to Firecracker's limitations. - -The following actions are not currently supported: - -- ``Disk hot-plugging`` -- ``NIC hot-plugging`` -- ``Migration`` -- ``Recontextualization`` -- ``Reboot`` -- ``Pause`` -- ``Capacity resize`` -- ``Disk resize`` -- ``Disk saving`` -- ``System snapshots`` -- ``Disk snapshots`` - -Configuration -================================================================================ - -Driver Specifics Configuration --------------------------------------------------------------------------------- - -Firecracker specifics configurations are available in the ``/var/lib/one/remotes/etc/vmm/firecracker/firecrackerrc`` file in the OpenNebula Front-end node. The following list contains the supported configuration attributes and a brief description of each one: - -+----------------------------+-------------------------------------------------------+ -| NAME | Description | -+============================+=======================================================+ -| ``:vnc`` | Options to customize the VNC access to the | -| | microVM. ``:width``, ``:height`` and ``:timeout`` | -| | can be set | -+----------------------------+-------------------------------------------------------+ -| ``:datastore_location`` | Default path for the datastores. This only needs to be| -| | changed if the corresponding value in oned.conf has | -| | been modified | -+----------------------------+-------------------------------------------------------+ -| ``:uid`` | UID for starting microVMs corresponds with ``--uid`` | -| | Jailer parameter | -+----------------------------+-------------------------------------------------------+ -| ``:gid`` | GID for starting microVMs corresponds with ``--gid`` | -| | Jailer parameter | -+----------------------------+-------------------------------------------------------+ -| ``:firecracker_location`` | Firecracker binary location | -+----------------------------+-------------------------------------------------------+ -| ``:shutdown_timeout`` | Timeout (in seconds) for executing cancel action if | -| | shutdown gets stuck | -+----------------------------+-------------------------------------------------------+ -| ``:cgroup_location`` | Path where group file system is mounted | -+----------------------------+-------------------------------------------------------+ -| ``:cgroup_cpu_shares`` | If true the cpu.shares value will be set according to | -| | the VM CPU value, if false the cpu.shares is left by | -| | default which means that all the resources are shared | -| | equally across the VMs. | -+----------------------------+-------------------------------------------------------+ -| ``:cgroup_delete_timeout`` | Timeout to wait for a cgroup to be empty after | -| | shutdown/cancel a microVM | -+----------------------------+-------------------------------------------------------+ - -.. 
note:: Firecracker only supports cgroup v1. - -Drivers Generic Configuration --------------------------------------------------------------------------------- - -The Firecracker driver is enabled by default in OpenNebula ``/etc/one/oned.conf`` on your Front-end Host. The configuration parameters: ``-r``, ``-t``, ``-l``, ``-p`` and ``-s`` are already preconfigured with reasonable defaults. If you change them, you will need to restart OpenNebula. - -Read the :ref:`oned Configuration ` to understand these configuration parameters and :ref:`Virtual Machine Drivers Reference ` to know how to customize and extend the drivers. - -Storage -================================================================================ - -Unlike common VMs, Firecracker microVMs do not use full disk images (with partition tables, MBR...). Instead, Firecracker microVMs use a root file system image together with an uncompressed Linux Kernel binary file. - -Root File System Images --------------------------------------------------------------------------------- - -The root file system can be uploaded as a raw image (``OS`` type) to any OpenNebula image datastore. Once the image is available it can be added as a new disk to the microVM template. - -Also, root file system images can be downloaded directly to OpenNebula from `Docker Hub `__, `Linux Containers `__ and `Turnkey Linux `__ Marketplaces. Check :ref:`Public Marketplaces ` chapter for more information. - -.. note:: Custom images can also be created by using common linux tools like ``mkfs`` command for creating the file system and ``dd`` for copying an existing file system inside the new one. - -Kernels --------------------------------------------------------------------------------- - -The kernels must be uploaded to a :ref:`Kernels & Files Datastore ` with the "Kernel" type. Once the kernel is available it can be referenced by using the attribute ``KERNEL_DS`` inside ``OS`` section at microVM template. - -Kernel images can build the desired kernel version, with the configuration attribute required for the use case. In order to improve the performance, the kernel image can be compiled with the minimal options required. Firecracker project provides a suggested configuration file in the `official repository `__ - -.. _fc_network: - -Networking -================================================================================ - -Firecracker works with all OpenNebula networking drivers. - -As Firecracker does not manage the tap devices used for microVM networking, OpenNebula takes care of managing these devices and plugs then inside the pertinent bridge. In order to enable this functionality the following actions have to be carried out manually when networking is desired for MicroVMs. - -.. code:: - - # In the frontend for each driver to be use with firecracker - $ cp /var/lib/one/remotes/vnm/hooks/pre/firecracker /var/lib/one/remotes/vnm//pre.d/firecracker - $ cp /var/lib/one/remotes/vnm/hooks/clean/firecracker /var/lib/one/remotes/vnm//clean.d/firecracker - $ onehost sync -f - - -.. note:: Execute the ``cp`` commands for every networking driver which is going to be used with MicroVMs. And make sure ``oneadmin`` user has enough permissions to run the scripts. - -Usage -================================================================================ - -MicroVM Template ------------------------ - -Below there is a minimum microVM Template: - -.. 
code:: - - CPU="1" - MEMORY="146" - VCPU="2" - CONTEXT=[ - NETWORK="YES", - SSH_PUBLIC_KEY="$USER[SSH_PUBLIC_KEY]" ] - DISK=[ - IMAGE="Alpine Linux 3.11", - IMAGE_UNAME="oneadmin" ] - GRAPHICS=[ - LISTEN="0.0.0.0", - TYPE="VNC" ] - NIC=[ - NETWORK="vnet", - NETWORK_UNAME="oneadmin", - SECURITY_GROUPS="0" ] - OS=[ - BOOT="", - KERNEL_CMD="console=ttyS0 reboot=k panic=1 pci=off i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd", - KERNEL_DS="$FILE[IMAGE_ID=2]"] - -MicroVMs ``OS`` sections need to contain a ``KERNEL_DS`` attribute referencing a linux kernel from a File & Kernel datastore: - -.. code:: - - OS=[ - BOOT="", - KERNEL_CMD="console=ttyS0 reboot=k panic=1 pci=off i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd", - KERNEL_DS="$FILE[IMAGE_ID=2]"] - -Remote Access ------------------------ - -MicroVMs supports remote access via VNC protocol which allows easy access to microVMs. The following section must be added to the microVM template to configure the VNC access: - -.. code:: - - GRAPHICS=[ - LISTEN="0.0.0.0", - TYPE="VNC" ] - -Troubleshooting -================================================================================ - -Apart from the :ref:`system logs `, Firecracker generates a microVMs log inside the `jailed` folder. This log can be found in: ``/var/lib/one/datastores///logs.fifo``. - -.. note:: This log cannot be forwarded outside the VM folder, as while the Firecracker microVMs run, the Firecracker process is isolated in their VM folder to increase the security. More information on how Firecracker isolates the microVM can be found in the Firecracker `official documentation `__. diff --git a/source/legacy_components/open_cluster_deployment/firecracker_node/firecracker_node_installation.rst b/source/legacy_components/open_cluster_deployment/firecracker_node/firecracker_node_installation.rst deleted file mode 100644 index 24a816e697..0000000000 --- a/source/legacy_components/open_cluster_deployment/firecracker_node/firecracker_node_installation.rst +++ /dev/null @@ -1,121 +0,0 @@ -.. _fc_node: - -========================================== -Firecracker Node Installation -========================================== - - -This page shows you how to configure the OpenNebula Firecracker Node from the binary packages. - -.. note:: Before reading this chapter, you should have at least installed your :ref:`Front-end node `. - -Step 1. Add OpenNebula Repositories -=================================== - -.. include:: ../common_node/repositories.txt - -Step 2. Installing the Software -=============================== - -Installing on AlmaLinux/RHEL ----------------------------- - -.. include:: ../common_node/epel.txt - -Install OpenNebula Firecracker Node Package -^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ - -Execute the following commands to install the OpenNebula Firecracker Node package: - -.. prompt:: bash # auto - - # yum -y install opennebula-node-firecracker - -For further configuration, check the specific :ref:`guide `. - -Installing on Debian/Ubuntu ---------------------------- - -Execute the following commands to install the OpenNebula Firecracker Node package: - -.. prompt:: bash # auto - - # apt-get update - # apt-get -y install opennebula-node-firecracker - -For further configuration check the specific :ref:`guide `. - -Step 3. Disable SELinux on AlmaLinux/RHEL (Optional) -==================================================== - -.. include:: ../common_node/selinux.txt - -Step 4. Configure Passwordless SSH -================================== - -.. 
include:: ../common_node/passwordless_ssh.txt - -Step 5. Networking Configuration -================================ - -.. include:: ../common_node/networking.txt - -.. important:: Firecracker microVM Networking needs to be enabled in the hypervisor Node. Please check the :ref:`Network ` section in Firecracker Driver guide. - -Step 6. Storage Configuration -============================= - -.. include:: ../common_node/storage.txt - -Step 7. Adding Host to OpenNebula -================================= - -In this step, we'll register the hypervisor Node we have configured above into the OpenNebula Front-end, so that OpenNebula can launch Virtual Machines on it. This step is documented for Sunstone GUI and CLI, but both accomplish the same result. Select and proceed with just one of the two options. - -Learn more in :ref:`Hosts and Clusters Management `. - -.. note:: If the Host turns to ``err`` state instead of ``on``, check OpenNebula log ``/var/log/one/oned.log``. The problem might be with connecting over SSH. - -Add Host with Sunstone ----------------------- - -Open Sunstone as documented :ref:`here `. On the left side menu go to **Infrastructure** → **Hosts**. Click on the ``+`` button. - -|sunstone_select_create_host| - -Then fill in the hostname, FQDN, or IP of the Node in the ``Hostname`` field. - -|sunstone_create_host_dialog| - -Finally, return back to the **Hosts** list and check that the Host has switched to ``ON`` status. It can take up to one minute. You can click on the refresh button to check the status more frequently. - -|sunstone_list_hosts| - -Add Host with CLI ------------------ - -To add a Node to the cloud, run this command as ``oneadmin`` in the Front-end (replace ```` with your Node hostname): - -.. code:: - - $ onehost create -i firecracker -v firecracker - - $ onehost list - ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT - 1 localhost default 0 - - init - - # After some time (up to 1 minute) - - $ onehost list - ID NAME CLUSTER RVM ALLOCATED_CPU ALLOCATED_MEM STAT - 0 node01 default 0 0 / 400 (0%) 0K / 7.7G (0%) on - -Next steps -================================================================================ - -.. include:: ../common_node/next_steps.txt - -.. |image3| image:: /images/network-02.png -.. |sunstone_create_host_dialog| image:: /images/sunstone_create_host_dialog_fc.png -.. |sunstone_list_hosts| image:: /images/sunstone_list_hosts.png -.. |sunstone_select_create_host| image:: /images/sunstone_select_create_host.png diff --git a/source/legacy_components/open_cluster_deployment/firecracker_node/index.rst b/source/legacy_components/open_cluster_deployment/firecracker_node/index.rst deleted file mode 100644 index 437b8671ff..0000000000 --- a/source/legacy_components/open_cluster_deployment/firecracker_node/index.rst +++ /dev/null @@ -1,12 +0,0 @@ -.. _firecracker_node_deployment: - -================================================================================ -Firecracker Node Deployment -================================================================================ - -.. toctree:: - :maxdepth: 2 - - Overview - Firecracker Node Installation - Firecracker Driver diff --git a/source/legacy_components/open_cluster_deployment/firecracker_node/overview.rst b/source/legacy_components/open_cluster_deployment/firecracker_node/overview.rst deleted file mode 100644 index bdc2897fb1..0000000000 --- a/source/legacy_components/open_cluster_deployment/firecracker_node/overview.rst +++ /dev/null @@ -1,21 +0,0 @@ -.. 
_firecracker_node_deployment_overview: - -================================================================================ -Overview -================================================================================ - -`Firecracker `__ is an open source virtual machine monitor (VMM) developed by AWS. It's widely used as part of its Fargate and Lambda services⁠. Firecracker is especially designed for creating and managing secure, multi-tenant container and function-based services. It enables you to deploy workloads in lightweight VMs (called **microVMs**) which provide enhanced security and workload isolation over traditional VMs, while enabling the speed and resource efficiency of containers. - -Firecracker uses the Linux Kernel-based Virtual Machine (KVM) to create and manage microVMs. It has a minimalist design, excluding unnecessary devices and guest functionality to reduce the memory footprint and attack surface area of each microVM. - -How Should I Read This Chapter -================================================================================ - -This chapter focuses on the configuration options for Firecracker-based Nodes. Read the :ref:`installation ` section to add a Firecracker Node to your OpenNebula cloud to start deploying microVMs. Continue with the :ref:`driver ` section in order to understand the specific requirements, functionalities, and limitations of the Firecracker driver. - -You can then finish off with the Open Cloud :ref:`Storage ` and :ref:`Networking ` chapters to be able to deploy your Virtual Machines on your Firecracker Nodes and access them remotely over the network. - -Hypervisor Compatibility -================================================================================ - -This chapter applies only to Firecracker. diff --git a/source/legacy_components/open_cluster_deployment/index.rst b/source/legacy_components/open_cluster_deployment/index.rst deleted file mode 100644 index 8fd5ea425b..0000000000 --- a/source/legacy_components/open_cluster_deployment/index.rst +++ /dev/null @@ -1,12 +0,0 @@ -.. _legacy_ocd: -.. _legacy_vmmg: -.. _legacy_open_cluster_deployment: - -================================================================================ -Open Cluster Deployment -================================================================================ - -.. toctree:: - :maxdepth: 2 - - Firecracker Node Deployment diff --git a/source/overview/cloud_architecture_and_design/cloud_architecture_design.rst b/source/overview/cloud_architecture_and_design/cloud_architecture_design.rst index 9a9cecc901..57acc86bd5 100644 --- a/source/overview/cloud_architecture_and_design/cloud_architecture_design.rst +++ b/source/overview/cloud_architecture_and_design/cloud_architecture_design.rst @@ -36,12 +36,8 @@ The first step in building a customized cluster is to decide on the hypervisor t - **Virtualization and Cloud Management on KVM**. Many companies use OpenNebula to manage data center virtualization, consolidate servers, and integrate existing IT assets for computing, storage, and networking. In this deployment model, OpenNebula directly integrates with KVM and has complete control over virtual and physical resources, providing advanced features for capacity management, resource optimization, high availability and business continuity. Some of these deployments additionally use OpenNebula’s **Cloud Management and Provisioning** features when they want to federate data centers, implement cloud bursting, or offer self-service portals for end-users. 
-- **Cloud Management on VMware vCenter**. Other companies use OpenNebula to provide a multi-tenant, cloud-like provisioning layer on top of VMware vCenter. These deployments are looking for provisioning, elasticity and multi-tenancy cloud features like virtual data centers provisioning, datacenter federation or hybrid cloud computing to connect in-house infrastructures with public clouds, while the infrastructure is managed by already familiar tools for infrastructure management and operation, such as vSphere and vCenter Operations Manager. - - **Containerization with LXC**. Containers are the next step towards virtualization. They have a minimal memory footprint and skip the compute intensive and sometimes unacceptable performance degradation inherent to hardware emulation. You can have a very high density of containers per virtualization node and run workloads close to bare-metal metrics. LXC focuses on system containers unlike similar technologies like Docker, which focuses on application containers. -- **Lightweight Virtualization on Firecracker**. Firecracker MicroVMs provide enhanced security and workload isolation over traditional container solutions while preserving their speed and resource efficiency. MicroVMs are especially designed for creating and managing secure, multi-tenant container (CaaS) and function-based (FaaS) services. - After having installed the cloud with one hypervisor, you may add other hypervisors. You can deploy heterogeneous multi-hypervisor environments managed by a single OpenNebula instance. An advantage of using OpenNebula on VMware is the strategic path to openness as companies move beyond virtualization toward a private cloud. OpenNebula can leverage existing VMware infrastructure, protecting IT investments, and at the same time gradually integrate other open source hypervisors, therefore avoiding future vendor lock-in and strengthening the negotiating position of the company. |OpenNebula Hypervisors| @@ -49,20 +45,20 @@ After having installed the cloud with one hypervisor, you may add other hypervis 3.2. Install the Virtualization hosts ------------------------------------------------- -Now you are ready to **add the virtualization nodes**. The OpenNebula packages bring support for :ref:`KVM `, :ref:`LXC `, :ref:`Firecracker ` and :ref:`vCenter ` nodes. In the case of vCenter, a host represents a vCenter cluster with all its ESX hosts. You can add different hypervisors to the same OpenNebula instance. +Now you are ready to **add the virtualization nodes**. The OpenNebula packages bring support for :ref:`KVM ` and :ref:`LXC` nodes. In the case of vCenter, a host represents a vCenter cluster with all its ESX hosts. You can add different hypervisors to the same OpenNebula instance. 3.3. Integrate with Data Center Infrastructure ------------------------------------------------------------ Now you should have an OpenNebula cloud up and running with at least one virtualization node. The next step is to configure OpenNebula to work with your infrastructure. When using the vCenter driver, no additional configurations are needed. -However, when using KVM, LXC or Firecracker, OpenNebula directly manages the hypervisor, networking and storage platforms, and you may need additional configuration: +However, OpenNebula directly manages the hypervisor, networking and storage platforms, and you may need additional configuration: - **Networking setup** with :ref:`802.1Q VLANs `, :ref:`Open vSwitch ` or :ref:`VXLAN `. 
- **Storage setup** with :ref:`NFS/NAS datastore `, :ref:`Local Storage datastore `, :ref:`SAN datastore `, :ref:`Ceph `, :ref:`Dev `, or :ref:`iSCSI ` datastore (a minimal registration example is sketched after this list). -- **Host setup** with the configuration options for the :ref:`KVM hosts `, :ref:`LXC hosts `, :ref:`Firecracker hosts ` :ref:`Monitoring subsystem `, :ref:`Virtual Machine HA ` or :ref:`PCI Passthrough `. +- **Host setup** with the configuration options for the :ref:`KVM hosts `, :ref:`LXC hosts `, :ref:`Monitoring subsystem `, :ref:`Virtual Machine HA ` or :ref:`PCI Passthrough `. - **Authentication setup**: OpenNebula comes by default with an internal **user/password authentication system**, but it can use an external Authentication driver like :ref:`ssh `, :ref:`x509 `, :ref:`ldap ` or :ref:`Active Directory `.
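To make the storage setup item above concrete, here is a minimal sketch of registering a shared (NFS/NAS-backed) Images Datastore from the CLI. The datastore name is a placeholder and the ``fs``/``shared`` driver pair is only the common choice for this kind of backend; adapt it to the datastore guide that matches your storage:

.. prompt:: bash $ auto

   # template for an Images Datastore on a shared (NFS-mounted) filesystem
   $ cat > nfs_images.txt <<'EOF'
   NAME   = "nfs_images"
   DS_MAD = "fs"
   TM_MAD = "shared"
   EOF

   # register it and verify it is monitored
   $ onedatastore create nfs_images.txt
   $ onedatastore list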