kubeadm should make the --node-ip option available #203
I'm currently having issues setting up a Kubernetes cluster in DigitalOcean because of this. By default, kubelet will bind and expose/broadcast the IP of the default gateway's interface, which in these cases is the public IP facing Internet traffic. Then, when getting to the point of setting up a pod networking add-on (like Weave), all hell breaks loose because the master's advertise IP is the internal network's IP address but the worker nodes are trying to expose the public one :/ The solution is to update the unit file dropped under /etc/systemd/system/kubelet.service.d/.
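(For anyone debugging the same thing: kubelet typically falls back to the address on the interface holding the default route, so a quick way to see which IP it will pick is to ask the kernel which source address it would use for outbound traffic. A minimal sketch; the probe address 1.1.1.1 is arbitrary.)

```sh
# Print the source IP the kernel selects for outbound traffic,
# i.e. the address kubelet is likely to advertise by default
ip route get 1.1.1.1 | awk '{ for (i = 1; i <= NF; i++) if ($i == "src") print $(i + 1) }'
```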
I just wanted to confirm that adding the `--node-ip` option worked for me.
I'm having the same problem. I tried your method and it didn't work for me either. Did you manage to make it work?
@agsergi I was trying to set up a k8s cluster in DigitalOcean with the Private Networking option enabled, which led me to this issue. Disabling that feature did it for me, but I'm not sure if you're in the same boat. I guess this issue will still be present if you have more than two NICs attached to the machine.
Guys, the above worked for me.
@luxas Why is this tagged as kind/support? Is there any way to use kubeadm with a node IP which isn't the source of the default route?
@evocage There is no `--apiserver-advertise-address=<host_only_adapter_IP>` option for my kubeadm. How did you get that option? Many thanks.
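(Side note: the flag name changed between kubeadm releases; the original request at the bottom of this issue uses the older `--api-advertise-addresses` spelling, while newer kubeadm versions spell it `--apiserver-advertise-address`. A sketch, with a placeholder host-only adapter IP:)

```sh
# Pin the API server's advertised address to the host-only adapter
kubeadm init --apiserver-advertise-address=192.168.56.10
```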
I ran into the same issue on Scaleway. When I initialized the master I passed the flag, but when I attach a public IP to my node and reboot it, everything goes well 🤔 Any idea?
Same problem when trying to set up Kubernetes on a VM host with a single IP where only some ports can be forwarded to the VMs. Any workaround to get it to work?
Just wanted to +1. I'm trying to run across a set of DigitalOcean VMs with a private IP on all of them, yet the public-facing address keeps working its way into the cluster somehow.
I managed to get private IPs working by running this on master:
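(The exact commands were lost in formatting above; the following is a rough reconstruction of the approach discussed in this thread, not the poster's original snippet. The private IP and the drop-in path are assumptions.)

```sh
# Advertise the private address from the API server...
kubeadm init --apiserver-advertise-address=10.132.0.2

# ...and pin kubelet to the same address via its systemd drop-in
echo 'Environment="KUBELET_EXTRA_ARGS=--node-ip=10.132.0.2"' \
  >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
systemctl daemon-reload && systemctl restart kubelet
```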
Adding
@mongrelion What type of communication between master<->node is still using public interfaces? I wasn't able to replicate this, so I'd be interested to know if Kubernetes is behaving unexpectedly.
That did it! The combination of the two flags worked. Per this request though, having that option built into kubeadm would still be useful.
Thank you @jamiehannaford for that summary. Do we think we should document this more visibly?
@luxas Yeah, I think having this use case explicitly documented would be useful.
@jamiehannaford If you also want to add this to the document reshuffle for v1.9, please send me the paragraph to add to kubernetes/website#6103
@fabriziopandini Sure! Done
Hi, I wish to give some feedback as well. I'm trying to use kubeadm to build a secure single-node cluster. I'm using this command, and the cluster is created:
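(The command was lost in formatting; as a hedged sketch of a single-node setup of this kind, where the placeholder IP stands in for the machine's address and the taint removal allows workloads on the master:)

```sh
kubeadm init --apiserver-advertise-address=203.0.113.10
# Allow regular pods to schedule on the master (single-node cluster)
kubectl taint nodes --all node-role.kubernetes.io/master-
```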
However
I will try the `--node-ip` approach, but I would appreciate it if you can help me find a solution for this. My use case: I have a beefy machine that I wish to use as a staging environment, and maybe as a production environment for small projects where I don't care about HA. I can use SSH and kubectl to control the cluster. Thanks,
I had exactly the same issue as @mosho1 and got to the bottom of it. I use DO and CoreOS, but this really is related to neither and could happen on other providers and distros. It is also unrelated to DO's private networks being enabled or disabled: I reproduced the issue in both cases.
EDIT: Thanks to @klausenbusk, it seems kubelet picks the anchor IP up under the assumption that it could be useful when it's not. See details below. The solution is indeed to tell kubelet which IP to use. It can be the public one, or the private one if you use the optional private network. Here's how I made use of it:

```sh
$ DROPLET_IP_ADDRESS=$(ip addr show dev eth0 | awk 'match($0,/inet (([0-9]|\.)+).* scope global eth0$/,a) { print a[1]; exit }')
$ echo $DROPLET_IP_ADDRESS # check this, just in case
$ echo "Environment=\"KUBELET_EXTRA_ARGS=--node-ip=$DROPLET_IP_ADDRESS\"" >> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
$ systemctl daemon-reload
$ systemctl restart kubelet
```
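(If it helps anyone following these steps: you can verify the change took effect by checking the address the node reports, assuming kubectl is configured on the same host.)

```sh
# INTERNAL-IP should now show the address passed via --node-ip
kubectl get nodes -o wide
```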
@lloeki Thanks for the write-up. Would you mind updating the docs, possibly here: https://github.com/kubernetes/website/blob/master/docs/setup/independent/troubleshooting-kubeadm.md
Are you sure about that? It could just be the anchor IP (compare with the address reported by the droplet metadata).
@klausenbusk you're absolutely correct, that was entertaining speculation on my part, sorry! The following is from the master node, now using `--node-ip`. So it seems kubelet picks that one up under the assumption that it could be useful when it's not?
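(For reference, the anchor address can be read from the droplet's link-local metadata service, which makes it easy to confirm that's the IP kubelet picked up. The endpoint path below is my reading of DigitalOcean's metadata API, so treat it as an assumption.)

```sh
# Query the droplet metadata service for the anchor IPv4 address
curl -s http://169.254.169.254/metadata/v1/interfaces/public/0/anchor_ipv4/address
```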
@jamiehannaford Sounds like I can do that :)
This is a common source of befuddlement when using local hypervisors or cloud providers with peculiar interface setups, IP addressing, or network policies, for which kubelet cannot guess the right IP to use. `awk` is used so that the example works on CoreOS Container Linux too. Requested in kubernetes/kubeadm#203.
I'm ok with closing this one.
Just note that in Kubernetes 1.11, setting KUBELET_EXTRA_ARGS in /etc/systemd/system/kubelet.service.d/20-custom.conf doesn't work anymore: it should be set in /etc/sysconfig/kubelet (the syntax of these files differs a bit).
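(The syntax difference @stepin mentions: the systemd drop-in wraps the setting in an `Environment=` directive, while the sysconfig file takes a plain variable assignment. A sketch with a placeholder IP:)

```sh
# /etc/sysconfig/kubelet (RPM-based distros; Debian-based use /etc/default/kubelet)
KUBELET_EXTRA_ARGS=--node-ip=10.0.0.2
```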
@stepin I just set up a 1.11 cluster using kubeadm and got this afterwards
If what you said is true, it appears the game of config hot potato continues. I was able to find yet another location for kubelet config. For future travellers (for the next week, at least), this seems to work:
Obviously you'll need to change the IP to whatever yours is on that particular node. Disclaimer: I'm just checking out Kubernetes, so I don't guarantee this isn't doing something terrible.
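(In case the location above was eaten by formatting: in 1.11 kubeadm also writes node-specific kubelet flags to an env file that the service sources, and appending `--node-ip` there works, with the caveat that kubeadm regenerates the file on upgrade. The path and the surrounding flags are illustrative assumptions.)

```sh
# /var/lib/kubelet/kubeadm-flags.env after editing; flags other than
# --node-ip are illustrative. Restart kubelet after changing it:
#   systemctl restart kubelet
KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni --node-ip=10.0.0.2
```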
@jazoom - Thanks for your comment; it finally led me to read the systemd unit file more closely. I thought I was going crazy, as I could bring up the same config in 1.10 and everything worked... bring up the config in 1.11 and the custom `--node-ip` setting was ignored.
@geerlingguy You're welcome. At least it wasn't a case of "it works every second time I bring up a 1.11 cluster". Those irreproducible issues will really make you go crazy.
Just ran into this with kubeadm 1.13. Fixed it using the following:
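(The snippet above didn't survive formatting; one 1.13-era way to pin both addresses is a kubeadm config file rather than raw flags. A sketch assuming the v1beta1 config API, with placeholder IPs:)

```sh
# Write an InitConfiguration that advertises the private address and
# passes --node-ip to kubelet, then init from it
cat > kubeadm-config.yaml <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.0.0.2
nodeRegistration:
  kubeletExtraArgs:
    node-ip: 10.0.0.2
EOF
kubeadm init --config kubeadm-config.yaml
```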
^ If anyone is aware of how to get that to work with floating IPs that don't appear on the NIC, please let me know. Otherwise it was successful.
Hi, please try this: https://wiki.hetzner.de/index.php/Cloud_floating_IP_persistent/en
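(Roughly what that wiki page boils down to, in case it moves: the floating IP has to be added to an interface before kubelet can bind it. A non-persistent sketch with a placeholder address; the wiki covers making it survive reboots.)

```sh
# Attach the floating IP to eth0 for the current boot only
ip addr add 203.0.113.50/32 dev eth0
```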
This worked perfectly. Thank you.
Thanks a lot.
and add `--node-ip=<your_ip>` into any possible EnvironmentFiles, like the ones mentioned above.
FEATURE REQUEST
If kubeadm is used to deploy a K8s cluster, it seems that by default the cloud provider's internal IP addresses are used. However, it would be really helpful (for cross-cloud deployment use cases) to provide an option to set the --node-ip option of the kubelet (see https://kubernetes.io/docs/admin/kubelet/).
So, a kubeadm init call on a node with <public_master_ip> could look like this:

```sh
kubeadm init --token=<token> --api-advertise-addresses=<public_master_ip> --node-ip=<public_master_ip>
```
And a kubeadm join on a node with <public_worker_ip> would look like this:

```sh
kubeadm join --token=<token> --node-ip=<public_worker_ip>
```
Having this, kubeadm could easily be used for cross-cloud-provider deployments. If there are other options I am not aware of, I would like to hear about them, but my search did not turn up a solution (using kubeadm).