Failing e2e: restart kube-controller-manager fails #68246
The other test cases that test kubelet restart have also become very flaky at the same time. I suspect those tests will fail setup if they are run after this kube-controller-manager restart test case.
It looks like the kube-controller-manager is started as a static pod in the gci e2e test. When it first starts, there is no kubernetes service, so the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT env vars are not set inside the container. When the kube-controller-manager is restarted, the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT env vars are set, but the static pod has no service account, i.e. /var/run/secrets/kubernetes.io/serviceaccount/token does not exist. We made the latter fatal in rest.InClusterConfig and its use to set up delegated authn/z.
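For reference, a minimal standalone sketch of how a caller sees the two failure modes described above (this is not kube-controller-manager code; only `rest.InClusterConfig` from k8s.io/client-go is a real API here):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		// On the static pod's first start, KUBERNETES_SERVICE_HOST and
		// KUBERNETES_SERVICE_PORT are unset, so this is rest.ErrNotInCluster.
		// After a restart the env vars are set, but reading
		// /var/run/secrets/kubernetes.io/serviceaccount/token fails because
		// the static pod has no service account, so a different error is
		// returned. Treating either case as fatal kills the component.
		fmt.Printf("no usable in-cluster config: %v\n", err)
		return
	}
	fmt.Println("in-cluster config built for", cfg.Host)
}
```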
Automatic merge from submit-queue (batch tested with PRs 68265, 68273). If you want to cherry-pick this change to another branch, please follow the instructions here: https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md

apiserver: make InClusterConfig errs for delegated authn/z non-fatal

Fixes kubernetes/kubernetes#68246.

Background: In the gci e2e tests the kube-controller-manager is started as a static pod. When it first starts, there is no kubernetes service, so the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT env vars are not set inside the container. When the kube-controller-manager is restarted, those env vars are set, but the static pod has no service account, i.e. /var/run/secrets/kubernetes.io/serviceaccount/token does not exist. We made the latter fatal in rest.InClusterConfig and its use to set up delegated authn/z.

Kubernetes-commit: 2c933695fa61d57d1c6fa5defb89caed7d49f773
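The shape of the fix described in that commit message is to log and continue rather than exit when in-cluster config cannot be built for delegated authn/z. A rough sketch of that pattern follows; it is illustrative only, not the actual apiserver code, and `setupDelegatedAuth` plus the use of klog are assumptions for the example:

```go
package main

import (
	"k8s.io/client-go/rest"
	"k8s.io/klog/v2"
)

// setupDelegatedAuth is a hypothetical helper showing the non-fatal pattern:
// if in-cluster config cannot be built (no env vars on first start, or no
// service account token after a restart), warn and continue without
// delegated authn/z instead of treating the error as fatal.
func setupDelegatedAuth() *rest.Config {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Warningf("delegated authn/z disabled, in-cluster config unavailable: %v", err)
		return nil
	}
	return cfg
}

func main() {
	if cfg := setupDelegatedAuth(); cfg == nil {
		klog.Info("running without delegated authn/z")
	} else {
		klog.Infof("delegated authn/z will use the apiserver at %s", cfg.Host)
	}
}
```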
Is this a BUG REPORT or FEATURE REQUEST?: BUG REPORT
@kubernetes/sig-storage-bugs
@kubernetes/sig-auth-bugs
What happened:
Since 8/31 10am, the following test case has been failing. It kills the kube-controller-manager and waits for it to restart. However, the kube-controller-manager fails to come back up:
[sig-storage] NFSPersistentVolumes[Disruptive][Flaky] when kube-controller-manager restarts should delete a bound PVC from a clientPod, restart the kube-control-manager, and ensure the kube-controller-manager does not crash
Testgrid
Logs
kube-controller-manager failed to come up with this error: