OneKE Architecture
OneKE is available as a Virtual Appliance from the OpenNebula Public MarketPlace.
Let's take a closer look at the OneKE 1.27 MarketPlace app:
$ onemarketapp list -f NAME~'OneKE 1.27' -l NAME --no-header
OneKE 1.27 Storage
OneKE 1.27 OS disk
Service OneKE 1.27
OneKE 1.27
OneKE 1.27 VNF
OneKE 1.27 Storage disk
A specific version of OneKE consists of:
- A single OneFlow template, Service OneKE 1.27, used to instantiate a cluster.
- VM templates OneKE 1.27 VNF, OneKE 1.27, and OneKE 1.27 Storage, used to instantiate VMs.
- Disk images OneKE 1.27 OS disk and OneKE 1.27 Storage disk (all Kubernetes nodes are cloned from the OneKE 1.27 OS disk image).
Note
Different versions of OneKE follow the same structure but with different version numbers, e.g. Service OneKE 1.24.
Note
A service template links to VM templates, which in turn link to disk images, so that everything is recursively downloaded when importing the Virtual Appliance into the OpenNebula Cloud.
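For example, the whole appliance (service template, VM templates, and disk images) can be pulled into your cloud in one step by exporting the marketplace app; the target datastore name below (default) is just an assumption:
$ onemarketapp export 'Service OneKE 1.27' 'Service OneKE 1.27' --datastore default
This registers the OneFlow service template together with the VM templates and disk images it references.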
The OneKE virtual appliance is implemented as a OneFlow Service. OneFlow allows you to define, execute, and manage multi-tiered applications, known as Services, composed of interconnected Virtual Machines with deployment dependencies between them. Each group of Virtual Machines is deployed and managed as a single entity, called a Role.
Note
For a full OneFlow API/template reference, please refer to the OneFlow Specification.
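To illustrate the Service/Role structure (this is not the actual OneKE template), a minimal OneFlow service template with two roles and a deployment dependency could look like the sketch below; the role names, cardinalities, and VM template IDs are purely illustrative:
{
  "name": "example-service",
  "deployment": "straight",
  "roles": [
    { "name": "frontend", "cardinality": 1, "vm_template": 0 },
    { "name": "backend", "cardinality": 2, "vm_template": 1, "parents": ["frontend"] }
  ]
}
With "deployment": "straight", roles are instantiated following their parent dependencies, so the backend VMs are only deployed once the frontend role is running.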
OneKE Service comprises four different Roles:
- VNF: Load Balancer for Control-Plane and Ingress Traffic
- Master: Control-Plane nodes
- Worker: Nodes to run application workloads
- Storage: Dedicated storage nodes for Persistent Volume replicas
To check the roles defined in the service template, use the following command:
$ oneflow-template show -j 'Service OneKE 1.27' | jq -r '.DOCUMENT.TEMPLATE.BODY.roles[].name'
vnf
master
worker
storage
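The same JSON body also records each role's cardinality (the number of VMs deployed for that role), which can be inspected with a similar jq expression; the default cardinalities depend on the appliance version:
$ oneflow-template show -j 'Service OneKE 1.27' | jq -r '.DOCUMENT.TEMPLATE.BODY.roles[] | "\(.name) \(.cardinality)"'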
VNF is a multi-node service that provides Routing, NAT, and Load-Balancing to OneKE clusters. It has been implemented on top of Keepalived, which enables basic HA/failover functionality via Virtual IPs (VIPs).
OneKE operates in a dual-subnet environment: the VNF handles NAT and Routing between the public and private VNETs. When the public VNET acts as a gateway to the public Internet, the VNF also provides Internet connectivity to all internal VMs.
Dedicated documentation for VNF can be found in the VNF documentation.
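As a rough illustration of the Keepalived-based failover (not the exact configuration generated by the appliance), a VRRP instance holding the private VIP could look like this; the router ID, priority, and password are placeholders:
vrrp_instance vip_eth1 {
    state BACKUP                 # all nodes start as BACKUP; the VRRP election picks the leader
    interface eth1               # private interface on the VNF node
    virtual_router_id 51         # placeholder router ID
    priority 100                 # higher priority wins the election
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass changeme       # placeholder secret
    }
    virtual_ipaddress {
        172.20.0.86/32           # the private VIP used in the examples below
    }
}
The node that wins the election adds the VIP to its interface (as shown in the ip address listing further down), and the VIP moves to another VNF node on failover.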
The master role is responsible for running RKE2's Control Plane: the etcd database, API server, controller manager, and scheduler, and for managing the worker nodes. It has been implemented according to the principles defined in RKE2's High Availability section. Specifically, the fixed registration address is an HAProxy instance exposing TCP port 9345 on a VNF node.
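In practice this means that nodes joining the cluster register against the VIP rather than against any particular master. A hedged sketch of the corresponding RKE2 configuration on a joining node (the VIP and token values are placeholders; the real file is generated by the appliance's contextualization) could be:
# /etc/rancher/rke2/config.yaml (sketch)
server: https://172.20.0.86:9345   # fixed registration address exposed by the VNF
token: <cluster-join-token>        # shared secret distributed by the first server node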
The worker role deploys standard RKE2 nodes without any taints or labels and serves as the default destination for regular workloads.
The storage role deploys labeled and tainted nodes specifically designated to run only Longhorn replicas.
Node selectors and tolerations can be applied to schedule pods onto storage nodes. Here's an example in YAML format:
tolerations:
  - key: node.longhorn.io/create-default-disk
    value: "true"
    operator: Equal
    effect: NoSchedule
nodeSelector:
  node.longhorn.io/create-default-disk: "true"
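Put together in a complete manifest, a pod pinned to the storage nodes could look like the following sketch; the pod name and image are arbitrary examples:
apiVersion: v1
kind: Pod
metadata:
  name: storage-debug              # arbitrary example name
spec:
  nodeSelector:
    node.longhorn.io/create-default-disk: "true"
  tolerations:
    - key: node.longhorn.io/create-default-disk
      value: "true"
      operator: Equal
      effect: NoSchedule
  containers:
    - name: shell
      image: alpine:3.19           # arbitrary example image
      command: ["sleep", "infinity"]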
OneKE includes a retain version of the default Longhorn storage class, defined as:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn-retain
provisioner: driver.longhorn.io
allowVolumeExpansion: true
reclaimPolicy: Retain
volumeBindingMode: Immediate
parameters:
  fsType: "ext4"
  numberOfReplicas: "3"
  staleReplicaTimeout: "2880"
  fromBackup: ""
Further information about Kubernetes storage classes can be found in the storage classes documentation.
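For instance, a PersistentVolumeClaim whose underlying Longhorn volume is kept after the claim is deleted (thanks to the Retain reclaim policy) could reference this class as in the sketch below; the claim name and size are arbitrary:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-retain                # arbitrary example name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn-retain
  resources:
    requests:
      storage: 10Gi                # arbitrary example size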
Warning: Each storage node requires a dedicated storage block device attached to the VM (/dev/vdb by default) to hold Longhorn's replicas (mounted at /var/lib/longhorn/). Deleting a cluster also removes all of its Longhorn replicas, so always back up your data!
OneKE's OneFlow Service requires two networks: a public and a private VNET. These can be simple bridged networks.
Consider the following network settings:
- The public VNET/subnet: 10.2.11.0/24 with the IPv4 range 10.2.11.200-10.2.11.249, providing public Internet access via NAT.
- The private VNET/subnet: 172.20.0.0/24 with the IPv4 range 172.20.0.100-172.20.0.199, DNS context value 1.1.1.1, and complete isolation from the public Internet.
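As an illustration, the private VNET above could be defined with a template along these lines; the bridge name br1 is an assumption that depends on your hosts, DNS is the context value handed to the Kubernetes nodes, and the address range covers 172.20.0.100-172.20.0.199 (the public VNET would be defined analogously):
NAME   = "private"
VN_MAD = "bridge"
BRIDGE = "br1"
DNS    = "1.1.1.1"
AR     = [ TYPE = "IP4", IP = "172.20.0.100", SIZE = "100" ]
It could then be created with onevnet create private.tmpl.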
Avoid including VIP addresses within VNET ranges to prevent possible conflicts. For example:
| VIP | IPv4 |
|---|---|
| ONEAPP_VROUTER_ETH0_VIP0 | 10.2.11.86 |
| ONEAPP_VROUTER_ETH1_VIP0 | 172.20.0.86 |
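These VIP addresses are provided as user inputs when instantiating the Service. A hedged sketch of setting them from the CLI (other required inputs, such as the networks, are omitted for brevity, and the exact attribute names should be checked against the template's custom attributes) could be:
$ cat > oneke-vips.json <<'EOF'
{
  "custom_attrs_values": {
    "ONEAPP_VROUTER_ETH0_VIP0": "10.2.11.86",
    "ONEAPP_VROUTER_ETH1_VIP0": "172.20.0.86"
  }
}
EOF
$ oneflow-template instantiate 'Service OneKE 1.27' oneke-vips.json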
graph LR;
internet --- vnf;
vnf --- master & worker & storage;
internet((Internet));
style vnf text-align:left
style master text-align:left
style worker text-align:left
style storage text-align:left
vnf[["vnf (NAT 🔀)"<br><hr>eth0:10.2.11.86<br><hr>eth1:172.20.0.86]];
master[master<br><hr>eth0:172.20.0.101<br><hr>GW:172.20.0.86<br>DNS:1.1.1.1];
worker[worker<br><hr>eth0:172.20.0.102<br><hr>GW:172.20.0.86<br>DNS:1.1.1.1];
storage[storage<br><hr>eth0:172.20.0.103<br><hr>GW:172.20.0.86<br>DNS:1.1.1.1];
On the leader VNF node, the IP/NAT configuration will look like these listings:
localhost:~# ip address list
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 02:00:0a:02:0b:c8 brd ff:ff:ff:ff:ff:ff
inet 10.2.11.200/24 scope global eth0
valid_lft forever preferred_lft forever
inet 10.2.11.86/32 scope global eth0
valid_lft forever preferred_lft forever
inet6 fe80::aff:fe02:bc8/64 scope link
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 02:00:ac:14:00:64 brd ff:ff:ff:ff:ff:ff
inet 172.20.0.100/24 scope global eth1
valid_lft forever preferred_lft forever
inet 172.20.0.86/32 scope global eth1
valid_lft forever preferred_lft forever
inet6 fe80::acff:fe14:64/64 scope link
valid_lft forever preferred_lft forever
localhost:~# iptables -t nat -vnL POSTROUTING
Chain POSTROUTING (policy ACCEPT 20778 packets, 1247K bytes)
pkts bytes target prot opt in out source destination
2262 139K MASQUERADE all -- * eth0 0.0.0.0/0 0.0.0.0/0
On the Kubernetes nodes, the Routing/DNS configuration will look like these listings:
root@oneke-ip-172-20-0-101:~# ip route list
default via 172.20.0.86 dev eth0
10.42.0.0/24 via 10.42.0.166 dev cilium_host src 10.42.0.166
10.42.0.166 dev cilium_host scope link
10.42.1.0/24 via 10.42.0.166 dev cilium_host src 10.42.0.166 mtu 1450
172.20.0.0/24 dev eth0 proto kernel scope link src 172.20.0.101
root@oneke-ip-172-20-0-101:~# cat /etc/resolv.conf
nameserver 1.1.1.1
Note
Please refer to the Virtual Networks document for more info about networking in OpenNebula.
Note
The default gateway on every Kubernetes node is automatically set to the private VIP address, which facilitates (NATed) access to the public Internet.