kube-proxy shows up as a single pod #2931
Comments
There is a related problem here: we may pay a (small, in the grand scheme of things) performance penalty because under some circumstances the probes will be unable to determine which host these system pods run on, which leads these pods to be reported by all probes. That's because one path through that logic identifies local pods by their …
There is just one place in the kubernetes code base which sets the … That looks like a bug. What I haven't worked out yet is what subsequently replaces the pod's UID, since by the time we get hold of it from the apiserver it has a regular UID, which might explain why the problem has gone unnoticed for so long: it only manifests in places where the originally assigned ID is used, such as the …
The implication of us seeing the problem across the three clusters I've looked at is that kubelet on these clusters obtained the manifest from a URL rather than a file. But that's not the case - I see a …
Found it. The hash is created as follows:

```go
hasher := md5.New()
if isFile {
    fmt.Fprintf(hasher, "host:%s", nodeName)
    fmt.Fprintf(hasher, "file:%s", source)
} else {
    fmt.Fprintf(hasher, "url:%s", source)
}
hash.DeepHashObject(hasher, pod)
pod.UID = types.UID(hex.EncodeToString(hasher.Sum(nil)[0:]))
```

Alas, …
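The comment is truncated here, so the following is a reading of the bug rather than a quote: if `hash.DeepHashObject` resets the hasher before hashing the pod, everything written beforehand (including the `host:`/`file:` prefix) is discarded, and identical static-pod manifests then yield the same UID on every node. Below is a minimal, self-contained sketch of that behaviour; `deepHashObject`, `podSpec` and `configHashUID` are illustrative stand-ins, not the kubelet's actual code.

```go
// Simplified reproduction of the suspected UID collision: if the deep-hash
// step resets the hasher, the "host:<node>" prefix written earlier is lost,
// so the same static-pod manifest on different nodes yields the same UID.
package main

import (
    "crypto/md5"
    "encoding/hex"
    "fmt"
    "hash"
)

// deepHashObject stands in for k8s.io/kubernetes/pkg/util/hash.DeepHashObject;
// the Reset() models the behaviour assumed in the discussion above.
func deepHashObject(hasher hash.Hash, obj interface{}) {
    hasher.Reset()
    fmt.Fprintf(hasher, "%#v", obj)
}

// podSpec is a stand-in for the pod object being hashed.
type podSpec struct {
    Name  string
    Image string
}

// configHashUID mirrors the kubelet snippet quoted above, for the file case.
func configHashUID(nodeName, source string, pod podSpec) string {
    hasher := md5.New()
    fmt.Fprintf(hasher, "host:%s", nodeName)
    fmt.Fprintf(hasher, "file:%s", source)
    deepHashObject(hasher, pod) // wipes everything written so far
    return hex.EncodeToString(hasher.Sum(nil))
}

func main() {
    pod := podSpec{Name: "kube-proxy", Image: "kube-proxy:v1.8"}
    a := configHashUID("node-a", "/etc/kubernetes/manifests/kube-proxy.yaml", pod)
    b := configHashUID("node-b", "/etc/kubernetes/manifests/kube-proxy.yaml", pod)
    fmt.Println(a == b) // prints "true": the node name never made it into the digest
}
```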
There is one other call site to … Seems to me that the … And the config hash code should be updated to always mix in the node name.
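A hypothetical sketch of that last suggestion, reusing the identifiers from the kubelet snippet quoted above (this is not the actual upstream patch): write the per-source prefix after the deep hash of the pod, so that a reset inside `hash.DeepHashObject` can no longer discard the node name.

```go
// Hypothetical ordering: hash the pod first, then mix in the per-node and
// per-source prefixes so they survive into the final digest.
hasher := md5.New()
hash.DeepHashObject(hasher, pod) // may reset the hasher internally
if isFile {
    fmt.Fprintf(hasher, "host:%s", nodeName)
    fmt.Fprintf(hasher, "file:%s", source)
} else {
    fmt.Fprintf(hasher, "url:%s", source)
}
pod.UID = types.UID(hex.EncodeToString(hasher.Sum(nil)))
```

With the node name surviving into the digest, the same manifest on different nodes would produce different UIDs, so the per-host pods would no longer collapse into one.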
@rade thanks for reporting and digging into this! I sent a PR kubernetes/kubernetes#57135 upstream. Please take a look :)
I'm not sure why …
@xiangpengzhao Thanks. PR looks good with the caveat you noted, i.e. it's preserving a somewhat questionable API.
The fix has at last been merged in Kubernetes: kubernetes/kubernetes#87461. (Nowadays kube-proxy is typically installed as a DaemonSet rather than as static pods, which means the original symptom is rarely seen.)
The kube-proxy shows up as a single pod, even though there is one running on each host.

On closer inspection, this issue also shows up for etcd, kube-scheduler, kube-apiserver and kube-controller-manager.

The underlying problem is that we identify certain system pods by their `kubernetes.io/config.hash` annotation, which turns out to be the same on all hosts, and hence Scope merges all the information into a single node (hence we see multiple kube-proxy containers and processes in the above). We use that value instead of the usual `metadata.uid` field because, for these system pods, the `io.kubernetes.pod.uid` container label we use for determining which pod a container belongs to contains that value. See #1412 for when this special case was introduced.
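To make the merging behaviour concrete, here is a simplified model; it is not Scope's actual report-merging code, and the types, field names and hash value are illustrative only. Keying topology nodes by an identifier that is identical on every host collapses all per-host kube-proxy pods into one node, while a genuinely per-host identifier keeps them apart.

```go
// Simplified model of merging per-probe pod reports into topology nodes
// keyed by some identifier (e.g. kubernetes.io/config.hash or metadata.uid).
package main

import "fmt"

type podInfo struct {
    Host string
    Name string
}

// mergeByKey groups pods by whatever key function the caller supplies.
func mergeByKey(pods []podInfo, key func(podInfo) string) map[string][]podInfo {
    merged := map[string][]podInfo{}
    for _, p := range pods {
        merged[key(p)] = append(merged[key(p)], p)
    }
    return merged
}

func main() {
    pods := []podInfo{
        {Host: "node-a", Name: "kube-proxy"},
        {Host: "node-b", Name: "kube-proxy"},
        {Host: "node-c", Name: "kube-proxy"},
    }

    // The config hash is identical on every host (see the kubelet hashing bug
    // above), so all three pods collapse into a single topology node.
    byConfigHash := mergeByKey(pods, func(p podInfo) string {
        return "d3adbeefd3adbeefd3adbeefd3adbeef" // same value on every host
    })

    // A genuinely per-host identifier keeps them apart.
    byHostUID := mergeByKey(pods, func(p podInfo) string {
        return p.Host + "/" + p.Name
    })

    fmt.Println(len(byConfigHash), len(byHostUID)) // prints: 1 3
}
```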