Webhook requests fail on gatekeeper pod startup/shutdown #3776
Comments
@l0wl3vel Thank you for sharing this information. I am looking into this.
@l0wl3vel we check if the webhook is able to serve requests or not as a part of the health check.
The first thing is that the health check is not of interest to us. Rather, the readiness probe causes our problems, since that controls whether traffic is routed to a pod, and with the current implementation the readiness probe does not indicate whether the webhook is able to serve traffic. I already created a pull request for this issue. The second issue is that preStop with the sleep type is only supported on 1.30+ without modifying feature gates, which means gatekeeper would have to bump its oldest supported K8s version to 1.30. 1.29 goes EOL at the end of February 2025, so just 1.5 months to go according to the version skew policy of gatekeeper. This is one of the few times a "sleep for x" is the proper solution, I think. Unless someone has an idea how to wait until an endpoint has been removed from a service and the changes are propagated through the routing tables, I do not see any alternatives.
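For reference, a sleep-type preStop hook looks roughly like this; it needs Kubernetes 1.30+, or the PodLifecycleSleepAction feature gate enabled on 1.29, and the duration below is only illustrative:

```yaml
lifecycle:
  preStop:
    sleep:
      seconds: 15   # illustrative; long enough for endpoint removal to propagate
```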
@JaydipGabani I have created a PR containing the fix for this issue here: #3780
Co-authored by @nilsfed
What steps did you take and what happened:
On our clusters we use Gatekeeper with an AssignImage mutation to rewrite Pod images to use our private mirror. On about 2% of pods this rewrite does not happen when rolling over our node pools.
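For context, the mutation we use looks roughly like the following sketch; the mirror domain, resource name, and match block are placeholders, not our real configuration:

```yaml
apiVersion: mutations.gatekeeper.sh/v1alpha1
kind: AssignImage
metadata:
  name: rewrite-to-private-mirror
spec:
  applyTo:
    - groups: [""]
      kinds: ["Pod"]
      versions: ["v1"]
  location: "spec.containers[name:*].image"
  parameters:
    # Prepend the private mirror to every container image.
    assignDomain: "mirror.example.com"
  match:
    scope: Namespaced
    kinds:
      - apiGroups: ["*"]
        kinds: ["Pod"]
```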
We run Gatekeeper in a fail-open configuration with three gatekeeper-controller-manager replicas.
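Fail-open means the mutating webhook is registered with failurePolicy: Ignore, so the API server admits Pods even when the webhook is unreachable and the missed mutation goes unnoticed. An excerpt of such a webhook registration is shown below; the names and path follow the upstream Gatekeeper defaults as we understand them and should be treated as illustrative:

```yaml
# Excerpt of a MutatingWebhookConfiguration running fail-open.
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: gatekeeper-mutating-webhook-configuration
webhooks:
  - name: mutation.gatekeeper.sh
    failurePolicy: Ignore          # fail open: admit requests if the webhook is unavailable
    clientConfig:
      service:
        name: gatekeeper-webhook-service
        namespace: gatekeeper-system
        path: /v1/mutate
    # ...rules, sideEffects, admissionReviewVersions omitted for brevity
```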
We investigated this by installing Gatekeeper in a controlled environment (minikube), querying the webhook endpoint with curl in a loop as fast as possible, and recording failures. Our test setup is outlined below. During scaling events we observed failing requests. We root-caused them to the following two problems in Gatekeeper:
1. Pods report a ready state before the webhook server can serve requests, so they are unable to serve webhook traffic for a brief moment due to Services updating asynchronously.
2. Gatekeeper does not have a grace period on server shutdown, leading to refused connections.
What did you expect to happen:
Gatekeeper pods can handle requests whenever they are registered as endpoints of the Service.
Mitigations:
We found that the health and readiness probes are misconfigured. They indicate a ready state as soon as the manager has started, even though the webhook is not responding to requests yet.
While we can reconfigure the health probe to validate that the webhook server is able to serve requests by passing --enable-tls-healthcheck=true to the gatekeeper-controller-manager, this is not yet possible for the readiness probe. If webhooks are enabled, the readiness probe should check actual service health. We therefore propose to implement the behavior of --enable-tls-healthcheck=true for the readiness probe as well and to enable it by default for the readiness probe only.
We also found that adding a preStop hook to the gatekeeper-controller-manager further prevents failing requests caused by the webhook server terminating before the endpoint is removed from the K8s Service.
Both mitigations yield zero failed requests over a test time frame of 30 minutes
with the test setup outlined below. Without the mitigations we saw requests
failing after less than a minute.
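A minimal sketch of what the two mitigations look like on the gatekeeper-controller-manager Deployment follows; the health port 9090 and the sleep duration are assumptions based on the default manifests and should be adjusted to your environment:

```yaml
# Illustrative patch of the Deployment spec, not the exact attached hotfix.
spec:
  template:
    spec:
      containers:
        - name: manager
          args:
            - --enable-tls-healthcheck=true   # /healthz now also probes the webhook over TLS
          readinessProbe:
            httpGet:
              path: /healthz                  # reuse the TLS-checked health endpoint for readiness
              port: 9090                      # assumed health port of the manager
          lifecycle:
            preStop:
              sleep:
                seconds: 15                   # keep serving while endpoint removal propagates (K8s 1.30+)
```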
Anything else you would like to add:
Our test setup:
deployment.yaml
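The attached deployment.yaml is the real test client; in outline it is a Deployment that curls the webhook Service in a tight loop and logs every attempt that does not produce an HTTP response. The sketch below assumes the default Service name, namespace, and mutation path of a standard install:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webhook-curl-loop
spec:
  replicas: 1
  selector:
    matchLabels: {app: webhook-curl-loop}
  template:
    metadata:
      labels: {app: webhook-curl-loop}
    spec:
      containers:
        - name: curl
          image: curlimages/curl:8.10.1
          command: ["/bin/sh", "-c"]
          args:
            - |
              while true; do
                # -k: the webhook serves a self-signed cert; any HTTP status counts as "served",
                # only refused/reset/timed-out connections are logged as failures.
                curl -sk -o /dev/null -m 2 \
                  https://gatekeeper-webhook-service.gatekeeper-system.svc:443/v1/mutate \
                  || echo "$(date -Iseconds) request failed"
              done
```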
We have attached the kustomization we use to work around this problem until the final fix lands upstream. It is a hack: it enables TLS health checks, points the readiness probe at the /healthz endpoint, and adds the preStop hook to buy the pod time to be disconnected from the Service and to finish in-flight requests.
Kustomization
kustomization.yaml
hotfix.yaml
Environment:
Kubernetes version (kubectl version): 1.31.0