multiple nodes on same host? #732
Comments
Indeed, the request in #235 was to have more than one node locally. What we delivered is a way to connect two or more MicroK8s instances. We do not cover the step of provisioning the substrate where these instances reside. You may want them in a VM, in an LXC container, or in a Docker container. Of these options, the VM solution is the most straightforward, LXC works with a special profile, and I haven't tried Docker although it would have been a really cool setup (self-hosted nodes?). If you want, I can provide you with more details for each setup.
Thanks @ktsakalozos! VMs seem pretty straightforward. I'll go with that. I am also pretty tempted by the Docker approach. I also saw that you can run multiple instances of the same snap app simultaneously, but I suspect I would need to do some tweaking on the network side of things.
Hi @ktsakalozos! I'm currently trying to build a small infrastructure on a local server for development and testing purposes only. Briefly, I would like a multi-node Kubernetes cluster on the same machine without the overhead of VMs or any other kind of virtualization. Docker seems the best option for this. There's the kind (Kubernetes in Docker) alternative, but it uses a custom image without all the benefits of MicroK8s (add-ons like Istio, Prometheus, and the dashboard, which are ready to integrate). Could you provide further details on how to accomplish the Docker alternative using MicroK8s? Thanks, and I look forward to your answer!
Hi @manuelnucci, @ktsakalozos. I'm also looking to build a multi-node k8s cluster for development purposes. I'm looking forward to your answer, thanks.
With LXC containers (instead of Docker) you can set up a local cluster on your machine. The process is not well tested but seems to work.
On a system with LXD 3.0, configure LXD with the special MicroK8s profile [1]. Then launch as many nodes as you want, and in each node install MicroK8s with snap. This process is proven to work but is not well tested, so please report back anything you find. It would be nice if we patched the profile [1] where needed.

[1] https://github.com/ubuntu/microk8s/blob/master/tests/lxc/microk8s.profile
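A minimal sketch of that LXC flow, assuming LXD 3.0 is already initialised; the node name microk8s-node-1 and the ubuntu:18.04 image are illustrative choices, and the profile is the one referenced in [1]:

```bash
# Create an LXD profile from the MicroK8s profile in the repository [1]
lxc profile create microk8s
wget https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s.profile -O microk8s.profile
lxc profile edit microk8s < microk8s.profile

# Launch as many nodes as you want with that profile applied
lxc launch -p default -p microk8s ubuntu:18.04 microk8s-node-1

# Install MicroK8s inside each node
lxc exec microk8s-node-1 -- snap install microk8s --classic
```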
Hi @ktsakalozos, I am trying to get MicroK8s with 3 nodes on VirtualBox VMs, managed by Vagrant. I managed to get the master and the two nodes added to the cluster, but it seems I cannot get the VM network configured properly, as the control plane cannot be reached from the nodes. It looks like the pods cannot connect to the API service:
I am using a combination of NAT and a Host-Only interface in VirtualBox; the nodes can ping each other, but only the master can connect to the API service. So in this case the VM approach is not straightforward. Any suggestion will be highly appreciated. Thanks.
@acampesino this sounds similar to the issue reported in:
Thanks @ktsakalozos, but the iptables configuration was not the issue. I guess the problem is more related to VirtualBox + Vagrant and how the virtual networks are handled by them.

The problem is that, in order to configure the VM, Vagrant needs NAT virtual networking enabled so it can ssh into the VM. VirtualBox assigns IP 10.0.2.2 as the default gateway in the VM instances, which corresponds to the IP of the host as seen from the VM. The MicroK8s apiserver service attaches itself to the default gateway interface, so the 10.152.183.1 IP is attached to the NAT subnet, which is not reachable from the other VMs, only from the host via NAT port forwarding.

My first attempt was to use a multi-VM NAT network mode, so all the VM instances were attached to the same subnet, 10.0.2.x, but since Vagrant still needs the NAT interface, VirtualBox selects 10.0.2.2 as the default gateway. I didn't manage to get Vagrant working properly without the NAT interface.

My final attempt was to configure what Vagrant calls a private network, which basically is a number of VMs that share a Host-Only interface with private IPs. VMs can talk to each other, and each VM is reachable from the host using NAT and port forwarding. Still the same issue with the default gateway and the apiserver service, so all the pods and services in the worker nodes cannot reach the apiserver.

If I configure the Host-Only (private network) gateway as the default gateway and restart the apiserver, everything looks fine, but I lose access to the outside world from the master node. So what I did, and verified to be working, is to configure the Host-Only network gateway as the default gateway, restart the apiserver, and then restore the old gateway. If you do not need outside access, you can just keep the Host-Only gateway as the default gateway. Probably there is a better way to configure the networks and the routing, but I could not get anything else working, so if someone knows, please comment here.

So, for anybody interested in having a multi-node MicroK8s cluster on their computer with VirtualBox + Vagrant, this is what I did after installing MicroK8s, installing the plugins you need, and adding the worker nodes to the cluster:
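A minimal sketch of the gateway switch described above, assuming the Vagrant private network is 192.168.50.0/24 with a host-only gateway at 192.168.50.1 (hypothetical addresses; adjust to your Vagrantfile) and the default VirtualBox NAT gateway at 10.0.2.2:

```bash
# Run on the master VM.

# 1. Point the default route at the host-only gateway so the apiserver
#    attaches to the host-only subnet instead of the NAT one
sudo ip route replace default via 192.168.50.1

# 2. Restart MicroK8s so the apiserver picks up the new default interface
sudo snap restart microk8s

# 3. Restore the NAT gateway so the master keeps outbound internet access
sudo ip route replace default via 10.0.2.2
```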
@ktsakalozos, for cases with multiple subnets on the cluster nodes, it would be good to be able to configure which gateway interface the MicroK8s apiserver listens on. Thanks for all the help.
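Such a knob is not confirmed anywhere in this thread, but one common escape hatch is the kube-apiserver arguments file shipped inside the snap; the sketch below assumes that file lives at /var/snap/microk8s/current/args/kube-apiserver and uses a hypothetical host-only address 192.168.50.10:

```bash
# Advertise the apiserver on a specific address instead of the default
# gateway interface (192.168.50.10 is a hypothetical host-only IP)
echo '--advertise-address=192.168.50.10' | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver

# Restart the MicroK8s services so the new flag is picked up
sudo snap restart microk8s
```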
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
I'd like to start by saying great project :)
I am very happy to see multi-node support has been added (#235).
However, one of the things requested in feature request #235 was for multiple nodes on the same host, but the docs so far imply different hosts. It really helps me to be able to simulate multiple nodes on the same laptop.