
multiple nodes on same host? #732

Closed
simonhardingforgerock opened this issue Oct 16, 2019 · 9 comments

@simonhardingforgerock

I'd like to start by saying great project :)

I am very happy to see multi node support has been added (#235).

However, one of the things requested on feature request #235 was for multiple nodes on the same host, but the docs so far imply different hosts. It really helps me to be able to simulate multiple nodes on the same laptop.

  1. Is this possible at present?
  2. If so, is it using some snap related tricks?
@ktsakalozos
Member

Hi @simonhardingforgerock,

Indeed, the request in #235 was to have more than one node locally. What we delivered is a way to connect two or more MicroK8s instances. We do not cover provisioning the substrate where these instances reside: you may want them in a VM, in an LXC container, or in a Docker container. Of these options, the VM solution is the most straightforward, LXC works with a special profile, and I haven't tried Docker, although it would be a really cool setup (self-hosted nodes?).

If you want, I can provide you with more details for each setup.

@simonhardingforgerock
Author

Thanks @ktsakalozos !

VMs seem pretty straightforward. I'll go with that. I am also pretty tempted by the Docker approach.

I also saw that you can run multiple instances of the same snap app simultaneously, but I suspect I would need to do some tweaking on the network side of things.

@manuelnucci

manuelnucci commented Dec 10, 2019

Hi @ktsakalozos! I'm currently trying to build a small infrastructure on a local server for development and testing purposes only.

Briefly, I would like a multi-node Kubernetes cluster on the same machine without the overhead of VMs or any other kind of virtualization. Docker seems the best option for this. There's the kind (Kubernetes in Docker) alternative, but it uses a custom image without all the benefits of MicroK8s (add-ons like Istio, Prometheus, and the dashboard, which are ready to integrate).

Could you provide further details on how to accomplish the Docker alternative using MicroK8s?

Thanks, and I look forward to your answer!

@SebastianCanonaco

Hi @manuelnucci, @ktsakalozos. I'm also looking to build a multi-node k8s cluster for development purposes. I'm looking forward to your answer, thanks.

@ktsakalozos
Member

With LXC containers (instead of Docker) you can set up a local cluster on your machine. The process is not well tested but seems to work.

On a system with LXD 3.0, configure LXD with lxd init and create a new storage pool with the dir backend. Then create a microk8s profile like the one found in [1]:

lxc profile copy default microk8s
curl https://raw.githubusercontent.com/ubuntu/microk8s/master/tests/lxc/microk8s.profile | lxc profile edit microk8s

Then launch as many nodes as you want: lxc launch -p default -p microk8s ubuntu:18.04 microk8s-node.

In each node, install MicroK8s with lxc exec microk8s-node -- snap install microk8s --classic, and then form a cluster.
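The steps above can be scripted. This sketch only prints the lxc commands for an N-node setup (the node names and the count of three are my own assumptions), so you can review them before piping the output to sh:

```shell
#!/bin/sh
# Sketch: generate the lxc commands for an N-node local MicroK8s cluster.
# Nothing is executed here; pipe the printed commands to sh to run them.
print_node_commands() {
  n="$1"
  i=1
  while [ "$i" -le "$n" ]; do
    node="microk8s-node-$i"   # hypothetical node naming scheme
    echo "lxc launch -p default -p microk8s ubuntu:18.04 $node"
    echo "lxc exec $node -- snap install microk8s --classic"
    i=$((i + 1))
  done
}

# Print the commands for a three-node cluster.
print_node_commands 3
```

After the nodes are up, you would still run the add-node/join step on them by hand to form the cluster.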

This process has been shown to work but is not well tested, so please report back anything you find.

It would be nice if we patched the microk8s.add-node script [2] to create such nodes when an argument like --use-lxd is passed. Anyone up for this task? :)

[1] https://github.com/ubuntu/microk8s/blob/master/tests/lxc/microk8s.profile
[2] https://github.com/ubuntu/microk8s/blob/master/microk8s-resources/wrappers/microk8s-add-node.wrapper

@acampesino

Hi @ktsakalozos,

I am trying to get MicroK8s with 3 nodes on VirtualBox VMs, managed by Vagrant.

I managed to get the master and the two nodes added to the cluster, but it seems I cannot get the VM network configured properly, as the control plane cannot be reached from the nodes. It looks like the pods cannot connect to the API service:

Get https://10.152.183.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.152.183.1:443: connect: connection refused 10.183.152.1:443

I am using a combination of NAT and Host-Only interfaces in VirtualBox; the nodes can ping each other, but only the master can connect to the API service.

So the VM approach is not as straightforward as it seems. Any suggestions would be highly appreciated.

Thanks.

@ktsakalozos
Member

@acampesino this sounds similar to the issue reported in:
#854 (comment)

@acampesino

Thanks @ktsakalozos,

But the iptables configuration was not the issue. I guess the problem is more related to VirtualBox + Vagrant and how they handle the virtual networks.

The problem is that, in order to configure the VM, Vagrant needs NAT virtual networking enabled so it can ssh into the VM. VirtualBox assigns 10.0.2.2 as the default gateway in the VM instances, which corresponds to the IP of the host. The MicroK8s apiserver service attaches itself to the default gateway interface, so the 10.152.183.1 IP is attached to the NAT subnet, which is not reachable from the other VMs, only from the host via NAT port forwarding.

My first attempt was to use a multi-VM NAT network mode, so that all the VM instances were attached to the same subnet, 10.0.2.x, but since Vagrant still needs the NAT interface, VirtualBox selects 10.0.2.2 as the default gateway. I didn't manage to get Vagrant working properly without the NAT interface.

My final attempt was to configure what Vagrant calls a private network, which is basically a number of VMs that share a Host-Only interface with private IPs. The VMs can talk to each other, and each VM is reachable from the host using NAT and port forwarding. Still the same issue with the default gateway and the apiserver service, so none of the pods and services on the worker nodes can reach the apiserver.
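For reference, a private network of this shape can be declared in a Vagrantfile roughly like the following (a config sketch only; the box name, IPs, node count, and provisioning line are my assumptions, not a tested setup):

```ruby
# Vagrantfile sketch for a three-node MicroK8s private network.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/bionic64"

  (1..3).each do |i|
    config.vm.define "microk8s-#{i}" do |node|
      node.vm.hostname = "microk8s-#{i}"
      # Host-only ("private") network so the VMs can reach each other directly;
      # Vagrant still adds the NAT interface it needs for ssh.
      node.vm.network "private_network", ip: "192.168.50.#{10 + i}"
      node.vm.provision "shell", inline: "snap install microk8s --classic"
    end
  end
end
```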

If I configure the Host-Only (private network) gateway as the default gateway and restart the apiserver, everything looks fine, but I lose access to the outside world from the master node.

So what I did, and verified is working, is to configure the Host-Only network gateway as the default gateway, restart the apiserver, and then restore the old gateway.

If you do not need outside access, you can just keep the Host-Only network gateway as the default gateway.

There is probably a better way to configure the networks and the routing, but I could not get anything else working, so if anyone knows, please comment here...

So, for anybody interested in running a multi-node MicroK8s on their computer with VirtualBox + Vagrant, this is what I did after installing MicroK8s, installing the plugins I needed, and adding the worker nodes to the cluster:

sudo route del default gw 10.0.2.2
sudo route add default gw 192.168.50.1
sudo systemctl restart snap.microk8s.daemon-apiserver.service
sudo route del default gw 192.168.50.1
sudo route add default gw 10.0.2.2

@ktsakalozos, for cases with multiple subnets across the cluster nodes, it would be good to be able to configure which gateway interface the MicroK8s apiserver listens on.
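As a possible interim workaround, kube-apiserver has an --advertise-address flag, and with MicroK8s the apiserver arguments live in /var/snap/microk8s/current/args/kube-apiserver. Adding a line like the one below (the IP is an assumed host-only address) and then restarting snap.microk8s.daemon-apiserver.service might avoid the gateway juggling, though I have not verified this on the Vagrant setup described above:

```
# /var/snap/microk8s/current/args/kube-apiserver (excerpt; add one line)
--advertise-address=192.168.50.11
```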

Thanks for all the help.

@stale

stale bot commented Nov 11, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the inactive label Nov 11, 2020
@stale stale bot closed this as completed Dec 11, 2020