The recording webhook's resource updating is racy #744
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:
- Mark this issue or PR as fresh with /remove-lifecycle stale
- Close this issue or PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

/remove-lifecycle stale
Especially with multiple replicas, we've seen the webhooks error out due to a conflict. This is problematic because the webhooks have a hard-fail policy. Let's retry multiple times instead. Fixes: kubernetes-sigs#744
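For illustration, a minimal sketch of that retry approach, assuming a controller-runtime client and client-go's `retry.RetryOnConflict` helper; the `updateWithRetry` function and the ConfigMap stand-in are hypothetical and not taken from the operator's actual code:

```go
package recording

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/util/retry"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// updateWithRetry (hypothetical helper) re-reads the object and re-applies the
// mutation on every attempt, so a 409 Conflict caused by a concurrent writer
// (for example another webhook replica) is retried instead of hard-failing.
func updateWithRetry(
	ctx context.Context,
	c client.Client,
	key types.NamespacedName,
	mutate func(*corev1.ConfigMap),
) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		// Fetch the latest resourceVersion before mutating.
		obj := &corev1.ConfigMap{}
		if err := c.Get(ctx, key, obj); err != nil {
			return err
		}
		mutate(obj)
		// Update fails with a conflict if another writer got in between;
		// RetryOnConflict then re-runs this closure with fresh state.
		return c.Update(ctx, obj)
	})
}
```

In the actual fix the mutation would target the recording resources the webhook maintains; the point is only that a conflicting concurrent write causes the update to be retried with a fresh resourceVersion instead of failing the admission request outright.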
What happened:
I deleted pods in a deployment to trigger recording a policy and saw the recording webhook error out with a resource conflict.
What you expected to happen:
No errors
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know?:
N/A
Environment:
- OS (e.g. `cat /etc/os-release`): RHCOS 4.9
- Kernel (e.g. `uname -a`): N/A