kubeadm join on slave node fails preflight checks #1
Comments
@dgoodwin I think this issue is fixed with your PR kubernetes/kubernetes#36083, right?
@luxas No, I don't think so, as this is /var/lib/kubelet, which I didn't touch in that PR. When combined with kubernetes/kubernetes#37063, I think something is up, probably deb-specific. It almost seems like kubeadm init is being run out of the box after package installation. @ndtreviv, what was actually in /var/lib/kubelet? Did it look like this gist? https://gist.github.com/bronger/92d8cf703628c6d1ff9e93aa920515de
@luxas The issue still persists. The directory /var/lib/kubelet contains two empty directories, pods and plugins. I faced the above issue when I executed the command just after installation of the kube* packages.
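A quick way to see exactly what the preflight check trips over (a minimal sketch using standard tools; on affected machines it shows only the empty pods/ and plugins/ directories left behind by package installation):

```sh
# List everything under /var/lib/kubelet, including hidden files.
ls -lA /var/lib/kubelet
# Print every entry below the directory itself.
find /var/lib/kubelet -mindepth 1
```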
I am facing the same issue on a slave node, with the same set of steps:

```
root@kube-node-3:~# kubeadm join --token=<...> 10.2.15.7
root@kube-node-3:~# cat /home/kubeadmin/install.sh
apt-get update && apt-get install -y apt-transport-https
# Install docker if you don't have it already.
apt-get install -y docker.io
root@kube-node-3:~# history
root@kube-node-3:~# kubeadm version
root@kube-node-3:~# uname -a
```

I tried skipping the pre-flight checks and it seems to work, but I am not sure whether it is missing anything:

```
root@kube-node-2:~# kubeadm join --token=<..> 10.2.15.7 --skip-preflight-checks
Node join complete:

Run 'kubectl get nodes' on the master to see this machine join.
```
As a workaround for this, I run:
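The command itself did not survive the copy; presumably something along these lines (an assumption, not necessarily what this commenter ran — clearing the leftover empty directories before retrying the join):

```sh
# Assumption: remove the empty directories the package install left
# under /var/lib/kubelet so the preflight check passes, then retry.
systemctl stop kubelet
rm -rf /var/lib/kubelet
kubeadm join --token=<token> <master-ip>
```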
From @ndtreviv on November 17, 2016 12:13
Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see http://kubernetes.io/docs/troubleshooting/.):
No.
What keywords did you search in Kubernetes issues before filing this one? (If you have found any duplicates, you should instead reply there.):
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Kubernetes version (use kubectl version):

Environment:
- Cloud provider or hardware configuration: AWS EC2 Classic, ami-45b69e52, m3.2xlarge instance type, in the us-east-1 region.
- OS (e.g. from /etc/os-release):
- Kernel (e.g. uname -a): N/A
- Install tools: N/A
What happened:
I was standing up a 3-node cluster following the instructions here: http://kubernetes.io/docs/getting-started-guides/kubeadm/
Commands I ran on each slave:
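The command block itself was lost in the copy; per the linked guide, the slave setup of that era looked roughly like this (repo URL and package list assumed from the guide, not recovered from the original report):

```sh
# 2016-era setup per http://kubernetes.io/docs/getting-started-guides/kubeadm/
apt-get update && apt-get install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF > /etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
# Install docker if you don't have it already.
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
```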
Additional commands run on the master:
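Likewise only recoverable as a sketch; the guide's master flow amounted to roughly:

```sh
# Initialize the master; kubeadm prints the join token used on the slaves.
kubeadm init
# Install the flannel pod network (manifest shown as flannel.yml below).
kubectl apply -f flannel.yml
```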
flannel.yml:
SSH'd onto a slave node and ran, as per the documentation (anonymised to protect the not-so-innocent):
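Per the transcripts in the comments above, the anonymised command would have had this shape:

```sh
kubeadm join --token=<token> <master-ip>
```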
Output was:
What you expected to happen:
I expected the node to join the cluster and output data according to the docs: http://kubernetes.io/docs/getting-started-guides/kubeadm/
As I understand the architecture, each node has to have a kubelet on it. I have a feeling that some preflight checks were added for something else, reused here, and aren't quite appropriate. I may be wrong, though!
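For reference, the failing check appears to boil down to an is-the-directory-empty test; a sketch of that logic (not the actual kubeadm source):

```sh
# Sketch: fail if /var/lib/kubelet exists and is non-empty, even when
# it only holds empty subdirectories left over from package install.
if [ -d /var/lib/kubelet ] && [ -n "$(ls -A /var/lib/kubelet)" ]; then
    echo "preflight: /var/lib/kubelet is not empty" >&2
    exit 1
fi
```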
How to reproduce it (as minimally and precisely as possible):
Create three AWS EC2 Classic instances (AMI ami-45b69e52, m3.2xlarge instance type) in the us-east-1 region.
Run the commands as I did.
Anything else we need to know:
No.
Copied from original issue: kubernetes/kubernetes#36987