Add multi-node support #94
Comments
Other use case: this would allow reproducing and playing locally with failover scenarios (health-check configuration, scheduling onto new machines, etc.) for high-availability apps when a node fails or machine resources are exhausted. For example, a VM could be stopped manually to test this.
This would be awesome as a feature in minikube, but for anyone looking for something passable in the meantime, this might help. Is the scope of a feature like this large because minikube supports 5 virtualization types? I see the priority of P3 but I'm not sure if that means it's already being worked on or that there's enough work to do on other stuff that it's not worth trying to do yet.
I don't think it's large. It could be as simple as running docker-in-docker nodes as pods on the master node.
It would be nice if minikube could be updated to use kubeadm to make adding new nodes easier. Any plans for that?
This might be something to look into regarding this:
/me also thought about the
Using
https://github.com/Mirantis/kubeadm-dind-cluster solves this case. It also solves other cases needed for multi-node setups during development, listed in https://github.com/ivan4th/kubeadm/blob/27edb59ba62124b6c2a7de3c75d866068d3ea9ca/docs/proposals/local-cluster.md There is also a demo of virtlet based on it, which shows how, in a few simple steps, you can start a multi-node setup, patch one node with an injected image for a CRI runtime DaemonSet, and then start an example pod on it.
@pgray I've used that setup for a long time but it looks like they won't support K8s 1.6+ 😞
I definitely think it should be a goal to use minikube for that, like minikube start nodes=3, etc. I haven't looked at the backend, but it would fill a tremendous gap right now for developing from desktop to production in the same fashion, which would pay for itself in adoption faster than other things.
I am using a Mac and I can already bring up a second minikube with "minikube start --profile=second" (using VirtualBox). So all I am missing is a way to connect the two so that the default minikube can also deploy to the second (virtual) node.
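For anyone trying the profile approach above, a minimal sketch of what it looks like (flag and subcommand names vary between minikube versions; --vm-driver was later renamed to --driver):

```shell
# Each profile is an independent single-node cluster with its own API server.
minikube start --profile=second --vm-driver=virtualbox
minikube profile list            # shows both clusters
kubectl config get-contexts      # each profile gets its own kubeconfig context
```

This is also why the two instances cannot share a master out of the box: every profile bootstraps its own control plane.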
@ccampo I believe that spins up a second cluster, not a second node?
So the difference is basically that both minikube instances have their own master (API server, etc.). So if the second minikube could use the master of the first minikube, that would get me closer to my goal, right?
Yes, basically. You can however use
OK, I will look at federation, thanks. Is there an easy way that you know of to make the second cluster or node use the API of the first cluster?
Kube fed manages independent clusters, right?
@fabiand Correct, but it seems I've derailed it a bit, apologies. :) You might want to look at https://github.com/kelseyhightower/kubernetes-the-hard-way if you're interested in the internals and want to get something working.
It would be very nice to be able to play with scalability across nodes in an easy way.
I made a multi-node prototype in #2539, if anyone is interested in seeing one way it could be implemented, using individual VMs for each node.
Hi there @pbitty, great job!
Any idea how I can debug it?
Hi @YiannisGkoufas, you can ssh into the node with
and then try to run the same command from the shell:
(It would be great if the log message contained the command output. I can't remember why it doesn't. I think it would have required some refactoring, and the PoC was a bit of a hack with minimal refactoring done.)
Thanks! Didn't realize you could ssh into the node that way.
I got:
Then added the --ignore-preflight-errors parameter and executed:
I got:
Then I added the suggested flag and executed:
I got:
Can't figure out what to try next.
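For readers following along: --ignore-preflight-errors is a kubeadm flag, so the elided commands above were presumably variations of a kubeadm join run from inside the node. The sketch below is illustrative only; the placeholders are not the actual values from this thread:

```shell
# Illustrative sketch of joining a worker to an existing control plane from inside the node VM.
sudo kubeadm join <control-plane-ip>:<port> \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --ignore-preflight-errors=all
```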
@YiannisGkoufas
@andersthorsen Host or Guest OS?
@ghostsquad as the host OS. They support Windows 10 as the host OS though.
Is this still being developed? I've been waiting and following for ages.
@MartinKaburu yes, I'm actively working on this.
@sharifelgamal do you need a hand on this?
Experimental multi-node support will ship in the upcoming 1.9 release, and will also be available in the next 1.9 beta.
Hey @sharifelgamal, I'm running minikube v1.9.0 on macOS Catalina and get this error from $ minikube node add. I first installed minikube with this command:
@yusufharip can you open up a new issue and give us a little more detail so we can debug better?
I'm interested in this feature. Will this allow us to simulate Cluster Autoscaler (https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler) scenarios locally?
This feature is now available experimentally. We even have documentation: https://minikube.sigs.k8s.io/docs/tutorials/multi_node/ The UX is pretty rough and there are many issues to resolve, but multi-node has now been added. We're now working off a newer, more specific issue to address the usability issues and other bugs: We look forward to releasing v1.10 within the next 2 weeks, which will greatly improve the experience. Thank you for being so patient! This was by far minikube's most popular request for many years.
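For reference, the basic flow from the linked tutorial looks roughly like this; flags may change between releases, so treat it as a sketch rather than definitive syntax:

```shell
# Start a two-node cluster under its own profile, then inspect and grow it.
minikube start --nodes 2 -p multinode-demo
kubectl get nodes
minikube node add -p multinode-demo    # add another worker later
minikube node list -p multinode-demo
```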
Does this allow you to run on multiple physical machines?
Non-master nodes do not get an InternalAddress:

$ kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready master 173m v1.18.0 192.168.39.83 <none> Buildroot 2019.02.10 4.19.107 docker://19.3.8
minikube-m02 Ready <none> 80m v1.18.0 <none> <none> Buildroot 2019.02.10 4.19.107 docker://19.3.8
minikube-m03 Ready <none> 80m v1.18.0 <none> <none> Buildroot 2019.02.10 4.19.107 docker://19.3.8

$ kubectl describe nodes | grep InternalIP
InternalIP: 192.168.39.83

This appears to be because we are specifying the --node-ip as a kubelet argument. From the minikube master VM:

$ hostname
minikube
$ systemctl cat kubelet.service
# /usr/lib/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.83 --pod-manifest-path=/etc/kubernetes/manifests
[Install]

From minikube-m02:

$ hostname
minikube-m02
$ systemctl cat kubelet.service
# /etc/systemd/system/kubelet.service
[Unit]
Description=kubelet: The Kubernetes Node Agent
Documentation=http://kubernetes.io/docs/
[Service]
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet
Restart=always
StartLimitInterval=0
# Tuned for local dev: faster than upstream default (10s), but slower than systemd default (100ms)
RestartSec=600ms
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
[Unit]
Wants=docker.socket
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.18.0/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=systemd --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-domain=cluster.local --config=/var/lib/kubelet/config.yaml --container-runtime=docker --fail-swap-on=false --hostname-override=minikube-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.83 --pod-manifest-path=/etc/kubernetes/manifests
[Install]

Note that the --node-ip arguments are the same in both cases.

$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dice-magic-app-86d4bc958-phx6j 2/2 Running 0 76m 10.244.29.196 minikube-m02 <none> <none>
dice-magic-app-86d4bc958-qfw2t 2/2 Running 0 76m 10.244.23.5 minikube-m03 <none> <none>
redis-2mvbc 1/1 Running 0 76m 10.244.23.4 minikube-m03 <none> <none>
redis-xrh9q 1/1 Running 0 76m 10.244.29.195 minikube-m02 <none> <none>
redis-xtgjh 1/1 Running 0 76m 10.244.39.8 minikube <none> <none>
www-c57b7f645-5vwd5 1/1 Running 0 76m 10.244.29.197 minikube-m02 <none> <none>

Scheduled on the master (minikube):

$ kubectl logs redis-xtgjh
10:C 06 May 2020 08:47:55.461 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
10:C 06 May 2020 08:47:55.461 # Redis version=6.0.1, bits=64, commit=00000000, modified=0, pid=10, just started
10:C 06 May 2020 08:47:55.461 # Configuration loaded
10:M 06 May 2020 08:47:55.462 * No cluster configuration found, I'm 5b67e68d6d6944abce833f7d1a7310fef3cecf85
10:M 06 May 2020 08:47:55.465 * Running mode=cluster, port=6379.
10:M 06 May 2020 08:47:55.465 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
10:M 06 May 2020 08:47:55.465 # Server initialized
10:M 06 May 2020 08:47:55.466 * Ready to accept connections

Scheduled on a non-master node (m02):

$ kubectl logs redis-xrh9q
Error from server: no preferred addresses found; known addresses: []
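A quick way to confirm the duplicated flag described above, assuming minikube ssh accepts the -n/--node selector in this version (the drop-in path is the one shown in the systemctl output):

```shell
# Print the --node-ip each kubelet was started with; all three nodes report the master's IP here.
for n in minikube minikube-m02 minikube-m03; do
  echo "== $n =="
  minikube ssh -n "$n" -- "grep -o -- '--node-ip=[0-9.]*' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf"
done
```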
After running [...] When describing the nodes [...]
Take a look at [...] So when using nodeAffinity I'm getting [...] Am I missing something?
@MatayoshiMariano - I think you need to actually install a CNI. The demo page has a flannel YAML that works. Personally I went through Calico the hard way... @sharifelgamal - That's awesome! Thank you. For now I think I'll have to use a different cluster tech for multi-node development, but I can't wait until minikube is ready.
The next minikube release (1.10) will automatically apply a CNI for multi-node clusters, but for the current latest release you do need to manually apply a CNI.
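A hedged example of applying one manually on a pre-1.10 multi-node cluster; the flannel manifest URL below is the commonly used upstream one, not something quoted in this thread:

```shell
# Apply flannel as the CNI, then check that one flannel pod comes up per node.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl -n kube-system get pods -l app=flannel -o wide
```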
@aasmall yeah, that was it! Forgot to install flannel.
@sharifelgamal how does [...] Here's an overview of the cluster I'm dealing with:

kubectl get po -A -o wide -w
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
kube-system coredns-f9fd979d6-jfkm7 1/1 Running 0 67m 172.18.0.2 t0 <none> <none>
kube-system etcd-t0 1/1 Running 0 67m 172.17.0.3 t0 <none> <none>
kube-system kindnet-j6b6p 1/1 Running 0 66m 172.17.0.4 t0-m02 <none> <none>
kube-system kindnet-rmrzm 1/1 Running 0 66m 172.17.0.3 t0 <none> <none>
kube-system kube-apiserver-t0 1/1 Running 0 67m 172.17.0.3 t0 <none> <none>
kube-system kube-controller-manager-t0 1/1 Running 0 67m 172.17.0.3 t0 <none> <none>
kube-system kube-proxy-8jzh7 1/1 Running 0 67m 172.17.0.3 t0 <none> <none>
kube-system kube-proxy-gbm79 1/1 Running 0 66m 172.17.0.4 t0-m02 <none> <none>
kube-system kube-scheduler-t0 1/1 Running 0 67m 172.17.0.3 t0 <none> <none>
kube-system metrics-server-d9b576748-j97rs 1/1 Running 0 62m 172.18.0.2 t0-m02 <none> <none>
kube-system storage-provisioner 1/1 Running 1 67m 172.17.0.3 t0 <none> <none>
kubernetes-dashboard dashboard-metrics-scraper-c95fcf479-27v7x 1/1 Running 0 61m 172.18.0.4 t0-m02 <none> <none>
kubernetes-dashboard kubernetes-dashboard-5c448bc4bf-xqkgw 1/1 Running 0 61m 172.18.0.3 t0-m02 <none> <none>

The following command gets stuck indefinitely:

minikube dashboard --url -p t0
🤔 Verifying dashboard health ...
🚀 Launching proxy ...
🤔 Verifying proxy health ...
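Not from this thread, but one possible workaround while the hang is investigated, assuming the dashboard addon's default namespace, service name, and port:

```shell
# Bypass `minikube dashboard` and port-forward to the dashboard service directly.
kubectl -n kubernetes-dashboard port-forward svc/kubernetes-dashboard 8080:80
# Then open http://localhost:8080/ in a browser.
```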
To provide a complete Kubernetes experience, users might want to experiment with features like scheduling, DaemonSets, etc. If minikube can emulate multiple Kubernetes nodes, users can then use most Kubernetes features.
This is probably not necessary in the first few versions of minikube. Once the single-node setup is stable, we can look at emulating multiple nodes.
Upvotes from users on this issue can be used as a signal to start working on this feature.