
Add multi-node support #94

Closed
vishh opened this issue May 19, 2016 · 61 comments
Labels:
- co/multinode: Issues related to multinode clusters
- help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
- kind/feature: Categorizes issue or PR as related to a new feature.
- lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
- priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
- roadmap/2019: Items on the 2019 roadmap

Comments

@vishh
Contributor

vishh commented May 19, 2016

To provide a complete Kubernetes experience, users might want to play with and experience features like scheduling, DaemonSets, etc. If minikube can emulate multiple Kubernetes nodes, users can then use most Kubernetes features.

This is probably not necessary in the first few versions of minikube. Once the single node setup is stable, we can look at emulating multiple nodes.

Upvotes from users on this issue can be used as a signal to start working on this feature.

@vishh vishh added kind/feature Categorizes issue or PR as related to a new feature. priority/P3 labels May 19, 2016
@ernestoalejo

Another use case: this would allow users to reproduce and play with failover scenarios locally (health-check configuration, scheduling onto new machines, etc.) for high-availability apps when a node fails or machine resources are exhausted. A VM can be stopped manually to test this, for example.
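
For example, a quick failover experiment could look like this (a hypothetical sketch: the profile and node names are made up, and per-node commands like these only arrived in much later minikube releases):

$ kubectl get pods -o wide -w                # watch where the pods get rescheduled
$ minikube node stop m02 -p multinode-demo   # simulate a worker failure
# or, with VirtualBox, power the node VM off directly:
$ VBoxManage controlvm multinode-demo-m02 poweroff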

@pgray
Contributor

pgray commented Sep 28, 2016

This would be awesome as a feature in minikube, but for anyone looking for something passable in the meantime, this might help.

Is the scope of a feature like this large because minikube supports 5 virtualization types? I see the P3 priority, but I'm not sure if that means it's already being worked on, or that there's enough other work to do that it's not worth attempting yet.

@marun
Contributor

marun commented Sep 28, 2016

I don't think it's large. It could be as simple as running docker-in-docker "nodes" as pods on the master node.

@marun
Contributor

marun commented Sep 28, 2016

It would be nice if minikube could be updated to use kubeadm to make adding new nodes easier. Any plans for that?
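
For reference, the stock kubeadm flow for adding a node goes roughly like this (a sketch only: the address and secrets are placeholders, and the --print-join-command helper only exists in newer kubeadm releases):

# on the master:
$ sudo kubeadm init
$ sudo kubeadm token create --print-join-command   # prints a ready-made join command
# on each additional node, run the printed command, e.g.:
$ sudo kubeadm join 192.168.99.100:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>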

@aaron-prindle
Contributor

aaron-prindle commented Nov 2, 2016

This might be worth looking into for this issue:
https://github.com/marun/nkube

@fabiand
Contributor

fabiand commented Feb 24, 2017

/me also thought about the kubeadm style of adding additional nodes

@fabiand
Contributor

fabiand commented Mar 7, 2017

Using kubeadm would also help align with other K8s setups, which eases debugging.

@jellonek

https://github.com/Mirantis/kubeadm-dind-cluster solves this case. It also solves other multi-node setup cases needed during the development process, listed in https://github.com/ivan4th/kubeadm/blob/27edb59ba62124b6c2a7de3c75d866068d3ea9ca/docs/proposals/local-cluster.md.
It also does not require any VMs in the process.

There is also a demo of virtlet based on it, which shows how, in a few simple steps, you can start a multi-node setup, patch one node with an injected image for a CRI-runtime DaemonSet, and then start an example pod on it.
You can read through all of this in https://github.com/Mirantis/virtlet/blob/master/deploy/demo.sh
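
The basic usage is roughly this (going from memory of that repo's README; the exact script name under fixed/ varies with the Kubernetes version):

$ git clone https://github.com/Mirantis/kubeadm-dind-cluster
$ cd kubeadm-dind-cluster
$ ./fixed/dind-cluster-v1.8.sh up     # master + 2 workers, all as Docker containers
$ kubectl get nodes                   # three "nodes", no VMs involved
$ ./fixed/dind-cluster-v1.8.sh down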

@MichielDeMey

@pgray I've used that setup for a long time but it looks like they won't support K8s 1.6+ 😞
coreos/coreos-kubernetes#881

@nukepuppy

Definitely think it should be a target to use minikube for that, like minikube start nodes=3, etc. I haven't looked at the backend, but it would fill a tremendous gap right now for developing from desktop to production in the same fashion, which will pay for itself in adoption faster than other things.

@ccampo

ccampo commented Jun 9, 2017

I am using a Mac and I can already bring up a second minikube with "minikube start --profile=second" (using VirtualBox). So all I am missing is a way to connect the two, so that the default minikube can also deploy to the second (virtual) node.

@MichielDeMey

@ccampo I believe that spins up a second cluster, not a second node?

@ccampo

ccampo commented Jun 12, 2017

So the difference is basically that both minikube instances have their own master (API server, etc.). So if the second minikube could use the master of the first minikube, that would get me closer to my goal, right?

@MichielDeMey

Yes, basically. You can, however, use kubefed (https://kubernetes.io/docs/concepts/cluster-administration/federation/) to manage multiple clusters, available since k8s 1.6.
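
The federation-v1 flow looked roughly like this (a sketch based on the docs linked above; the context and zone names are placeholders, and federation v1 was later deprecated):

$ kubefed init myfed --host-cluster-context=minikube \
    --dns-provider=coredns --dns-zone-name="example.com."
$ kubefed join second --host-cluster-context=minikube \
    --cluster-context=second
$ kubectl --context=myfed get clusters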

@ccampo

ccampo commented Jun 12, 2017

OK, I will look at federation, thanks. Is there an easy way that you know of to make the second cluster or node use the API of the first cluster?

@fabiand
Contributor

fabiand commented Jun 12, 2017

Kubefed manages independent clusters, right?
But isn't the goal here to create a single cluster with multiple VMs?

@MichielDeMey

@fabiand Correct, but it seems I've derailed it a bit, apologies. :)
@ccampo I'm not very familiar with the internals of Kubernetes (or Minikube) but I know for a fact that it's possible to have multiple master nodes in a cluster setup.

You might want to look at https://github.com/kelseyhightower/kubernetes-the-hard-way if you're interested in the internals and want to get something working.

@PerArneng

This would be very nice: an easy way to play with scalability across nodes.

@pbitty
Contributor

pbitty commented Feb 11, 2018

I made a multi-node prototype in #2539, if anyone is interested in seeing one way it could be implemented, using individual VMs for each node.

@pbitty
Contributor

pbitty commented Feb 11, 2018

Demo here:
[asciicast recording]

@YiannisGkoufas

Hi there @pbitty, great job!
I built it and started the master, but when adding one worker it fails with:

~/go/src/k8s.io/minikube$ out/minikube node start
Starting nodes...
Starting node: node-1
Moving assets into node...
Setting up certs...
Joining node to cluster...
E0510 13:03:34.368403    3605 start.go:63] Error bootstrapping node:  Error joining node to cluster: kubeadm init error running command: sudo /usr/bin/kubeadm join --token 5a0dw7.2af6rci1fuzl5ak5 192.168.99.100:8443: Process exited with status 2

Any idea how I can debug it?
Thanks!

@pbitty
Contributor

pbitty commented May 10, 2018

Hi @YiannisGkoufas, you can ssh into the node with

out/minikube node ssh node-1

and then try to run the same command from the shell:

sudo /usr/bin/kubeadm join --token 5a0dw7.2af6rci1fuzl5ak5 192.168.99.100:8443

(It would be great if the log message contained the command output. I can't remember why it doesn't. I think it would have required some refactoring and the PoC was a bit of a hack with minimal refactoring done.)

@YiannisGkoufas

Thanks! I didn't realize you could ssh into the node that way.
So I tried:

sudo /usr/bin/kubeadm join --token jcgflt.1iqcoi62819z1yw2 192.168.99.100:8443

I got:

[preflight] Running pre-flight checks.
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 17.03
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Some fatal errors occurred:
	[ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Then added the --ignore-preflight-errors parameter and executed:

sudo /usr/bin/kubeadm join --ignore-preflight-errors=all --token jcgflt.1iqcoi62819z1yw2 192.168.99.100:8443

I got:

[preflight] Running pre-flight checks.
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 17.03
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING FileExisting-crictl]: crictl not found in system path
discovery: Invalid value: "": using token-based discovery without DiscoveryTokenCACertHashes can be unsafe. set --discovery-token-unsafe-skip-ca-verification to continue

Then I added the suggested flag and executed:

sudo /usr/bin/kubeadm join --ignore-preflight-errors=all --token jcgflt.1iqcoi62819z1yw2 192.168.99.100:8443 --discovery-token-unsafe-skip-ca-verification

I got:

[preflight] Running pre-flight checks.
	[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.06.0-ce. Max validated version: 17.03
	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
	[WARNING Swap]: running with swap on is not supported. Please disable swap
	[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server "192.168.99.100:8443"
[discovery] Created cluster-info discovery client, requesting info from "https://192.168.99.100:8443"
[discovery] Failed to request cluster info, will try again: [Unauthorized]
[discovery] Failed to request cluster info, will try again: [Unauthorized]
...

Can't figure out what to try next.
Thanks again!
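
Update, for anyone else who lands here: the repeated [Unauthorized] is typically an expired bootstrap token (kubeadm tokens default to a 24h TTL). A sketch of what should work, using standard kubeadm/openssl commands and the CA path minikube uses:

# on the master: mint a fresh token
$ sudo /usr/bin/kubeadm token create
# compute the CA cert hash so discovery can be verified instead of skipped
$ openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
# on the worker: join with the fresh token and the hash
$ sudo /usr/bin/kubeadm join 192.168.99.100:8443 --token <fresh-token> \
    --discovery-token-ca-cert-hash sha256:<hash>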

@gauthamsunjay

@YiannisGkoufas out/minikube start --kubernetes-version v1.8.0 --bootstrapper kubeadm worked for me. I think I was facing the same issue as you: it looks like the default bootstrapper is localkube, so kubeadm init was not happening on the master, and hence we were not able to add worker nodes. Hope this helps! Thanks @pbitty

@ghostsquad

@andersthorsen Host or Guest OS?

@andersthorsen

andersthorsen commented Jan 1, 2020

@ghostsquad as the host OS. They support Windows 10 as a host OS, though.

@afbjorklund afbjorklund removed their assignment Jan 4, 2020
@tstromberg tstromberg removed this from the v1.7.0 milestone Jan 22, 2020
@MartinKaburu

Is this still being developed? I've been waiting and following for ages.

@sharifelgamal
Collaborator

@MartinKaburu yes, I'm actively working on this.

@MartinKaburu

@sharifelgamal do you need a hand on this?

@sharifelgamal
Collaborator

Experimental multi-node support will be available in the upcoming 1.9 release, and in the next 1.9 beta as well.

@yusufharip

Hey @sharifelgamal, I'm running minikube v1.9.0 on macOS Catalina and I get this error:

$ minikube node add
🤷 This control plane is not running! (state=Stopped)
❗ This is unusual - you may want to investigate using "minikube logs"
👉 To fix this, run: minikube start

I had first installed minikube with this command:
$ minikube start --driver=docker

@sharifelgamal
Collaborator

@yusufharip can you open up a new issue and give us a little more detail so we can debug better?

minikube start --driver=docker -v=3 --alsologtostderr and minikube logs would be helpful.

@petersaints

I'm interested in this feature. Will this allow us to simulate Cluster Autoscaler (https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) scenarios locally?

@tstromberg
Contributor

tstromberg commented Apr 19, 2020

This feature is now available experimentally. We even have documentation:

https://minikube.sigs.k8s.io/docs/tutorials/multi_node/

The UX is pretty rough, and there are many issues to resolve, but multi-node support has now been added. We're now working off a newer, more specific issue to address the usability issues and other bugs:

#7538

We look forward to releasing v1.10 within the next 2 weeks, which will greatly improve the experience.

Thank you for being so patient! This was by far minikube's most popular request for many years.
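
The short version of that tutorial looks like this (the profile name is just an example; the node subcommands are still experimental):

$ minikube start --nodes 2 -p multinode-demo   # control plane + one worker
$ kubectl get nodes                            # multinode-demo, multinode-demo-m02
$ minikube node add -p multinode-demo          # add another worker later
$ minikube status -p multinode-demo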

@foobarbecue

Does this allow you to run on multiple physical machines?

@aasmall

aasmall commented May 6, 2020

Non-master nodes do not get an InternalAddress:

$ kubectl get nodes -o wide
NAME           STATUS   ROLES    AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE               KERNEL-VERSION   CONTAINER-RUNTIME
minikube       Ready    master   173m   v1.18.0   192.168.39.83   <none>        Buildroot 2019.02.10   4.19.107         docker://19.3.8
minikube-m02   Ready    <none>   80m    v1.18.0   <none>          <none>        Buildroot 2019.02.10   4.19.107         docker://19.3.8
minikube-m03   Ready    <none>   80m    v1.18.0   <none>          <none>        Buildroot 2019.02.10   4.19.107         docker://19.3.8
$ kubectl describe nodes | grep InternalIP     
  InternalIP:  192.168.39.83

This appears to be because we are specifying --node-ip as a kubelet argument.

From the minikube master VM:

$ hostname
minikube
$ systemctl cat kubelet.service
# /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.83 --pod-manifest-path=/etc/kubernetes/manifests

[Install]

From minikube-m02:

$ hostname
minikube-m02
$ systemctl cat kubelet.service
# /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/

[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms

[Install]
WantedBy=multi-user.target

# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.83 --pod-manifest-path=/etc/kubernetes/manifests

[Install]

Note that the --node-ip arguments are the same in both cases.
This results in an inability to get logs from, or exec into, pods scheduled on non-master nodes:

$ kubectl get pods -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE           NOMINATED NODE   READINESS GATES
dice-magic-app-86d4bc958-phx6j   2/2     Running   0          76m   10.244.29.196   minikube-m02   <none>           <none>
dice-magic-app-86d4bc958-qfw2t   2/2     Running   0          76m   10.244.23.5     minikube-m03   <none>           <none>
redis-2mvbc                      1/1     Running   0          76m   10.244.23.4     minikube-m03   <none>           <none>
redis-xrh9q                      1/1     Running   0          76m   10.244.29.195   minikube-m02   <none>           <none>
redis-xtgjh                      1/1     Running   0          76m   10.244.39.8     minikube       <none>           <none>
www-c57b7f645-5vwd5              1/1     Running   0          76m   10.244.29.197   minikube-m02   <none>           <none>

Scheduled on the master (minikube):

$ kubectl logs redis-xtgjh
10:C 06 May 2020 08:47:55.461 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
10:C 06 May 2020 08:47:55.461 # Redis version=6.0.1, bits=64, commit=00000000, modified=0, pid=10, just started
10:C 06 May 2020 08:47:55.461 # Configuration loaded
10:M 06 May 2020 08:47:55.462 * No cluster configuration found, I'm 5b67e68d6d6944abce833f7d1a7310fef3cecf85
10:M 06 May 2020 08:47:55.465 * Running mode=cluster, port=6379.
10:M 06 May 2020 08:47:55.465 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
10:M 06 May 2020 08:47:55.465 # Server initialized
10:M 06 May 2020 08:47:55.466 * Ready to accept connections

Scheduled on a non-master (m02):

$ kubectl logs redis-xrh9q
Error from server: no preferred addresses found; known addresses: []
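
A quick way to confirm this per node (a sketch; assumes the -n/--node flag that came in with the multi-node work):

$ minikube ssh -n m02 -- systemctl cat kubelet.service | grep -o -- '--node-ip=[0-9.]*'
# expected: each node's own IP; actual: 192.168.39.83 on every node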

@MatayoshiMariano

MatayoshiMariano commented May 6, 2020

After running minikube start --nodes 2 -p multinode-demo --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.244.0.0/16 --disk-size 3GB

When describing the nodes with kubectl describe nodes, I get the following for both nodes:

Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Wed, 06 May 2020 10:39:42 -0300   Wed, 06 May 2020 10:29:00 -0300   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Wed, 06 May 2020 10:39:42 -0300   Wed, 06 May 2020 10:29:00 -0300   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Wed, 06 May 2020 10:39:42 -0300   Wed, 06 May 2020 10:29:00 -0300   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Wed, 06 May 2020 10:39:42 -0300   Wed, 06 May 2020 10:29:00 -0300   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

Take a look at: runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

So when using node affinity I'm getting 0/2 nodes are available: 2 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.

Am I missing something?

@sharifelgamal
Collaborator

@aasmall good catch, #8018 should fix it.

@aasmall

aasmall commented May 6, 2020

@MatayoshiMariano - I think you need to actually install a CNI. The demo page has a flannel YAML that works. Personally, I went through Calico the hard way...

@sharifelgamal - That's awesome! Thank you. For now I think I'll have to use a different cluster tech for multi-node development, but I can't wait until minikube is ready.

@sharifelgamal
Collaborator

The next minikube release (1.10) will automatically apply a CNI for multinode clusters, but for the current latest release, you do need to apply a CNI manually.

@MatayoshiMariano

@aasmall yeah, that was it! I forgot to install flannel:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
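
A quick way to check that the CNI actually came up afterwards (the pod label is the one used in that manifest):

$ kubectl -n kube-system get pods -l app=flannel -o wide   # one flannel pod per node
$ kubectl get nodes                                        # both nodes flip to Ready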

@sudoflex

@sharifelgamal how does minikube dashboard work in a multi-node environment?

Here's an overview of the cluster I'm dealing with:

kubectl get po -A -o wide -w
NAMESPACE              NAME                                        READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
kube-system            coredns-f9fd979d6-jfkm7                     1/1     Running   0          67m   172.18.0.2   t0       <none>           <none>
kube-system            etcd-t0                                     1/1     Running   0          67m   172.17.0.3   t0       <none>           <none>
kube-system            kindnet-j6b6p                               1/1     Running   0          66m   172.17.0.4   t0-m02   <none>           <none>
kube-system            kindnet-rmrzm                               1/1     Running   0          66m   172.17.0.3   t0       <none>           <none>
kube-system            kube-apiserver-t0                           1/1     Running   0          67m   172.17.0.3   t0       <none>           <none>
kube-system            kube-controller-manager-t0                  1/1     Running   0          67m   172.17.0.3   t0       <none>           <none>
kube-system            kube-proxy-8jzh7                            1/1     Running   0          67m   172.17.0.3   t0       <none>           <none>
kube-system            kube-proxy-gbm79                            1/1     Running   0          66m   172.17.0.4   t0-m02   <none>           <none>
kube-system            kube-scheduler-t0                           1/1     Running   0          67m   172.17.0.3   t0       <none>           <none>
kube-system            metrics-server-d9b576748-j97rs              1/1     Running   0          62m   172.18.0.2   t0-m02   <none>           <none>
kube-system            storage-provisioner                         1/1     Running   1          67m   172.17.0.3   t0       <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-c95fcf479-27v7x   1/1     Running   0          61m   172.18.0.4   t0-m02   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-5c448bc4bf-xqkgw       1/1     Running   0          61m   172.18.0.3   t0-m02   <none>           <none>

The following command gets stuck indefinitely:

minikube dashboard --url -p t0
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...
