Reconcile on all private service events in KPA. #7514
Conversation
@markusthoemmes: 0 warnings.
In response to this:
Proposed Changes
We used to have a metrics service which the KPA watched and reacted to. We got rid of that and now have a race condition: if we specify minScale but the SKS becomes ready with fewer than minScale pods, nothing after that will ever kick the KPA to reconcile again.
This fixes that by watching the private services so the KPA sees every change in deployment size. The KPA itself also keys off the private services to calculate the current replica count, closing the loop.
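As a rough, self-contained sketch of that pattern (using plain client-go rather than Knative's actual controller scaffolding; the label key and the way the owning PodAutoscaler is derived are illustrative assumptions, not the exact code from this PR), the idea is to enqueue the owning PodAutoscaler for every add/update/delete event on a labeled private Service:

```go
// Minimal sketch: watch Services and re-enqueue the owning PodAutoscaler on
// every event, so a change in deployment size after SKS readiness still
// triggers another reconcile. Label key and owner lookup are assumptions.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/workqueue"
)

// Hypothetical label carried by a revision's private service, pointing back
// at the PodAutoscaler that owns it.
const ownerLabel = "autoscaling.knative.dev/kpa"

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	queue := workqueue.NewRateLimitingQueue(workqueue.DefaultControllerRateLimiter())
	factory := informers.NewSharedInformerFactory(client, 10*time.Minute)

	// Subscribe to all Service events; enqueueOwner filters to private services.
	factory.Core().V1().Services().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc:    func(obj interface{}) { enqueueOwner(queue, obj) },
		UpdateFunc: func(_, obj interface{}) { enqueueOwner(queue, obj) },
		DeleteFunc: func(obj interface{}) { enqueueOwner(queue, obj) },
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)

	// Drain the queue; a real reconciler would recompute the current replica
	// count from the private service here and update the PodAutoscaler status.
	for {
		key, shutdown := queue.Get()
		if shutdown {
			return
		}
		fmt.Println("reconcile PodAutoscaler:", key)
		queue.Done(key)
	}
}

// enqueueOwner pushes "namespace/name" of the owning PodAutoscaler whenever a
// matching private Service changes.
func enqueueOwner(queue workqueue.RateLimitingInterface, obj interface{}) {
	svc, ok := obj.(*corev1.Service)
	if !ok {
		return
	}
	if owner, ok := svc.Labels[ownerLabel]; ok {
		queue.Add(svc.Namespace + "/" + owner)
	}
}
```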
Release Note
Fixed a bug where a revision with minScale > 1 set would never become ready.
/assign @vagababov
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from d5f3cad to 4835c3e.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: markusthoemmes
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
/lgtm
The following is the coverage report on the affected files.
* Propagate Ingress status once ObservedGeneration matches generation. (#7606) Currently, status propagation is gated by both the ObservedGeneration check and a readiness check. In effect, as long as the Ingress is not ready, the route will only ever show "IngressNotConfigured", which isn't very helpful to the user. This lifts the readiness check to allow the actual status from the Ingress to bubble up into the route (and thus potentially into the Service), enabling finer-grained diagnostics for the user.
* Reconcile on all private service events in KPA. (#7514) We used to have a metrics service which the KPA watched and reacted to. We got rid of that and now have a race condition: if we specify minScale but the SKS becomes ready with fewer than minScale pods, nothing after that will ever kick the KPA to reconcile again. This fixes that by watching the private services so the KPA sees every change in deployment size. The KPA itself also keys off the private services to calculate the current replica count, closing the loop.
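To make the gating change for #7606 concrete, here is a toy sketch with simplified stand-in types (not Knative's real Ingress/Route API types): once the controller has observed the latest generation, the Ingress's actual condition is surfaced instead of the generic "IngressNotConfigured".

```go
// Toy illustration of the ObservedGeneration gate; the types below are
// simplified stand-ins for illustration, not Knative's real API types.
package main

import "fmt"

type IngressStatus struct {
	ObservedGeneration int64
	Ready              bool
	Reason             string // concrete failure reason reported by the Ingress
}

type Ingress struct {
	Generation int64
	Status     IngressStatus
}

// routeMessage returns what the Route should surface for this Ingress.
// Previously status only bubbled up once the Ingress was Ready, so an unready
// Ingress always showed the generic "IngressNotConfigured". Gating on
// ObservedGeneration alone lets the Ingress's real condition become visible.
func routeMessage(ing Ingress) string {
	if ing.Status.ObservedGeneration != ing.Generation {
		return "IngressNotConfigured" // controller hasn't seen this spec yet
	}
	if !ing.Status.Ready {
		return ing.Status.Reason // e.g. a concrete misconfiguration error
	}
	return "Ready"
}

func main() {
	fmt.Println(routeMessage(Ingress{
		Generation: 2,
		Status:     IngressStatus{ObservedGeneration: 2, Ready: false, Reason: "LoadBalancerNotReady"},
	}))
}
```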