Ubuntu 18.04 DNS problems in pods #448
Comments
Update: After restarting kubelet, also delete the kube-dns pods, otherwise DNS queries will stop working.
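The update above boils down to two commands, roughly like this (a sketch; the label selector assumes the stock kube-dns deployment in the kube-system namespace):

```shell
# Restart kubelet on the node, then recreate the kube-dns pods so they
# pick up the corrected resolv.conf.
sudo systemctl restart kubelet
kubectl -n kube-system delete pods -l k8s-app=kube-dns
```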
Seems like the issue isn't necessarily specific to Ubuntu 18.04. The fix will, however, need to be specific to the local resolver in use...
The upstream kubernetes fix for 1.11 seems to have ... Our fix for this in pharos 1.3 would be to upgrade to kube 1.11, which would fix this for new installs. To fix this for pharos 1.2, as well as existing 1.2 -> 1.3 upgrades, we will need to detect this configuration and set the flag ourselves.
Symptoms:
kubectl port-forward doesn't work: running a port-forward command fails instead of establishing the tunnel.
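For example, a port-forward attempt like the following fails on an affected cluster (pod name and ports are illustrative):

```shell
# Hypothetical reproduction: forward local port 8080 to port 80 of a pod.
# On an affected node this errors out instead of establishing the tunnel.
kubectl port-forward nginx 8080:80
```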
Getting the kubelet log on the worker node results in something like an error pointing to getaddrinfo("localhost")..., which means the pod is not able to resolve localhost.
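The kubelet log can be inspected on the worker node roughly like this (assumes a systemd-managed kubelet, which is the common setup):

```shell
# Tail the kubelet unit's log and look for the resolution failure.
journalctl -u kubelet --no-pager | tail -n 50
# An error mentioning getaddrinfo("localhost") indicates the pod-side
# resolver cannot resolve even "localhost".
```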
Other commands that use port-forward, such as helm version and other helm commands (helm interacts with the tiller server using port forwards), show the same symptoms.
Cause
Ubuntu 18.04 uses systemd-resolved, which changes /etc/resolv.conf to point at a local stub DNS resolver. On these systems, kubelet needs to be started with the flag --resolv-conf=/run/systemd/resolve/resolv.conf.
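A quick way to detect this condition (a sketch; the helper name is ours): systemd-resolved's stub listener lives at 127.0.0.53, so an affected /etc/resolv.conf contains a nameserver line pointing there.

```shell
# Hypothetical helper: succeeds when the given resolv.conf points at
# systemd-resolved's local stub listener (127.0.0.53), i.e. kubelet
# needs the --resolv-conf override.
uses_stub_resolver() {
  grep -q '^nameserver 127\.0\.0\.53' "${1:-/etc/resolv.conf}"
}
```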
Possible Solutions
This has been addressed by kubernetes/kubeadm#787 and will probably land in Kubernetes 1.11. As a workaround, there are two easy fixes:
Restart kubelet:
Do the same on all machines.
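One way to pass the flag before restarting (a sketch assuming kubeadm's conventional systemd drop-in directory; the drop-in file name is illustrative):

```shell
# Add a systemd drop-in that passes --resolv-conf to kubelet, then restart it.
sudo mkdir -p /etc/systemd/system/kubelet.service.d
sudo tee /etc/systemd/system/kubelet.service.d/20-resolv-conf.conf <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--resolv-conf=/run/systemd/resolve/resolv.conf"
EOF
sudo systemctl daemon-reload
sudo systemctl restart kubelet
```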
Pharos Installer
I suggest that pharos-cluster check for the existence of the file on Ubuntu 18.04 and perform one of the fixes above. We need to check again after the kubeadm fix is released to make sure it does not conflict (possibly resulting in the flag being added twice). Check the pull request to see how it was fixed on the kubeadm side.