Install on a system using systemd-resolved leads to broken DNS #273
Comments
So kubeadm doesn't lay down the kubelet startup; that's done in the systemd unit file, which comes from here: https://github.com/kubernetes/release /cc @marcoceppi @castrojo - this appears to be an Ubuntu default for desktop setups.
@timothysc @marcoceppi @castrojo Critical for v1.7?
@luxas no.
Sorry, not sure if anybody will still look at closed issues. #272 is not resolved by the solution suggested here.
Please reopen #272 or start working on this issue considering the other context as well.
I'm hitting this when I try to use kubeadm with GCE's ubuntu-1710 image, so it looks like it's not limited to the desktop install.
As an FYI: as I commented on kubernetes/kubernetes#45828, I don't believe that overriding the kubelet's resolv.conf reference will work anyway. This will just dump a broken resolv.conf (referencing 127.0.0.53) into all the pods and bypass cluster-local resolution. The current state of affairs is that only external resolution is broken, because kube-dns has a broken upstream but is still able to stub the cluster-local zones off to k8s. The only fix I can see is adding / editing config in kube-dns / CoreDNS.
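For readers hitting this: the mismatch described above is easy to see on an affected host. A minimal check, assuming a stock systemd-resolved setup with the stub listener enabled:

```sh
# /etc/resolv.conf points at systemd-resolved's local stub, which is not
# reachable from inside containers; the file maintained by systemd-resolved
# itself lists the real uplink nameservers.
cat /etc/resolv.conf                   # typically: nameserver 127.0.0.53
cat /run/systemd/resolve/resolv.conf   # the actual upstream nameservers
```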
@mt-inside that's why pointing the kubelet at /run/systemd/resolve/resolv.conf (via --resolv-conf) is suggested, instead of the default /etc/resolv.conf.
@antoineco I agree that'll get
By default, if no dnsPolicy is specified, Pods run with ClusterFirst and use the cluster DNS. What you described is the behaviour of dnsPolicy: Default, where Pods inherit the node's resolver configuration. ref https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/#pods-dns-policy
@antoineco Ah, you're right. I was confused about dnsPolicy. I was confused about what coredns is running as, because Default isn't the default. I also confused myself by looking at a ClusterFirst Pod that was falling back to Default when I didn't specify --cluster-dns in some of my tests. Also the scope of --resolv-conf (not applying to ClusterFirst) and --cluster-dns (not applying to Default) isn't documented, and I didn't think of it until I really grokked the different DNS modes. I agree this fix is perfectly sensible.
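As an illustration of the dnsPolicy point above, a throwaway pod can be used to inspect the resolv.conf it actually receives: a ClusterFirst pod should list the cluster DNS Service IP, while a dnsPolicy: Default pod inherits whatever the kubelet's --resolv-conf points at. The pod name and image below are only illustrative:

```sh
# One-shot pod (default dnsPolicy is ClusterFirst) to see the generated resolv.conf
kubectl run dns-test --image=busybox:1.36 --restart=Never --rm -it -- cat /etc/resolv.conf
```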
So what is the consensus?
@timothysc Sorry, it's not spelt out. It's a combination of what @antoineco says here and what @thockin says on kubernetes/kubernetes#45828, though I'll defer to the kubeadm authors on the specifics.
I've hit the very same issue with kubeadm 1.10.0 and CoreDNS, with even worse results: asked to resolve any external name, CoreDNS starts looping to itself, consuming all allowed RAM and getting OOM-killed. Obviously it can be fixed either on the kubelet side (the --resolv-conf workaround above) or on the CoreDNS side. I've raised an issue in the CoreDNS tracker for better handling of such a misconfiguration on the CoreDNS side: coredns/coredns#1647
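To check whether a host is in the configuration that triggers this loop (stub listener active and /etc/resolv.conf pointing at it), something like the following works; the exact commands depend on the systemd version:

```sh
# A symlink target ending in stub-resolv.conf means the 127.0.0.53 stub is in use
readlink -f /etc/resolv.conf
# Shows the real uplink DNS servers (on newer systemd: resolvectl status)
systemd-resolve --status
```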
/assign @detiber @timothysc
Seems like a duplicate of #787.
Automatic merge from submit-queue (batch tested with PRs 63673, 63712, 63691, 63684). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

kubeadm - add preflight warning when using systemd-resolved

**What this PR does / why we need it**: This PR adds a preflight warning when the host is running systemd-resolved. Newer Ubuntu releases (artful and bionic in particular) run systemd-resolved by default and, in the default configuration, have an /etc/resolv.conf file that references 127.0.0.53, which is not accessible from containers running on the host. We will now provide a warning to the user to tell them that the kubelet args should include `--resolv-conf=/run/systemd/resolve/resolv.conf`.

**Which issue(s) this PR fixes**: This does not resolve the following issues, but it does provide better output to the users affected by them: kubernetes/kubeadm#273 kubernetes/kubeadm#787

**Release note**:
```release-note
NONE
```
As we have the preflight check (added in kubernetes/kubernetes#63691), I'm going to close this. Thanks a lot to everyone who has contributed to fixing this!
Make sure kubelets use `/run/systemd/resolve/resolv.conf` and not `/etc/resolv.conf`, so that any dnsmasq / resolved installed on the workers does not interfere with the cluster's DNS resolution. Refs: kubernetes/kubeadm#273 https://blog.sophaskins.net/blog/misadventures-with-kube-dns/
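A quick, purely illustrative way to confirm that a worker's kubelet was actually started with the override:

```sh
# Inspect the running kubelet's command line for the flag
ps -o args= -C kubelet | tr ' ' '\n' | grep -- '--resolv-conf'
# expected: --resolv-conf=/run/systemd/resolve/resolv.conf
```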
This problem occurs because kube-dns on systems using systemd-resolved copies 127.0.0.53 from the host's /etc/resolv.conf. Since 127.0.0.53 is a loopback address, DNS queries never get past kube-dns, causing our conformance tests to fail on DNS-related issues. More discussion here: kubernetes/kubernetes#45828 Related issues: kubernetes/kubeadm#787 kubernetes/kubeadm#273 kubernetes/kubeadm#845 The upstream fix is now in v1.11.

Without the fix, the kubedns and dnsmasq containers would copy the host's `/etc/resolv.conf`:

```
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients to the
# internal DNS stub resolver of systemd-resolved. This file lists all
# configured search domains.
#
# Run "systemd-resolve --status" to see details about the uplink DNS servers
# currently in use.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 127.0.0.53
search platform9.sys
```

After the fix:

```
# This file is managed by man:systemd-resolved(8). Do not edit.
#
# This is a dynamic resolv.conf file for connecting local clients directly to
# all known uplink DNS servers. This file lists all configured search domains.
#
# Third party programs must not access this file directly, but only through the
# symlink at /etc/resolv.conf. To manage man:resolv.conf(5) in a different way,
# replace this symlink by a static file or a different symlink.
#
# See man:systemd-resolved.service(8) for details about the supported modes of
# operation for /etc/resolv.conf.

nameserver 10.105.16.2
nameserver 10.105.16.4
search platform9.sys
```
What keywords did you search in kubeadm issues before filing this one?
systemd resolved dns
Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions

kubeadm version (use `kubeadm version`): v1.6.3

Environment:
- Kubernetes version (use `kubectl version`): v1.6.3
- Kernel (`uname -a`): Linux gjc-XPS-8500 4.10.0-21-generic #23-Ubuntu SMP Fri Apr 28 16:14:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

What happened?
Installed Kubernetes on bare metal using kubeadm. DNS inside pods did not work.
What you expected to happen?
Would expect DNS inside pods to work.
How to reproduce it (as minimally and precisely as possible)?
Anything else we need to know?
As noted in kubernetes/kubernetes#45828, the problem is due to the fact that on a normal Ubuntu desktop (and maybe other desktop Linux OSes), `/etc/resolv.conf` contains `127.0.0.53`, which doesn't work inside Pods.

The correct thing to do is to add `--resolv-conf=/run/systemd/resolve/resolv.conf` to the kubelet config in case `systemd-resolved` is running with `DNSStubListener` and `/etc/resolv.conf` is configured with the local resolver (solution suggested by @antoineco and @thockin).
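For reference, a minimal sketch of applying that workaround via a kubelet systemd drop-in; the drop-in path and the KUBELET_EXTRA_ARGS variable are the ones commonly used by kubeadm-based installs and may differ on other setups:

```sh
# Create a drop-in that points the kubelet at the real resolv.conf
sudo mkdir -p /etc/systemd/system/kubelet.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/kubelet.service.d/20-resolv-conf.conf
[Service]
Environment="KUBELET_EXTRA_ARGS=--resolv-conf=/run/systemd/resolve/resolv.conf"
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```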