
Reconcile on all private service events in KPA. #7514

Merged

Conversation

markusthoemmes
Contributor

Proposed Changes

We used to have a metrics service which the KPA watched and reacted to. With that gone, there is now a race condition: if minScale is specified but the SKS becomes ready with fewer than minScale pods, nothing will ever kick the KPA to reconcile again.

This fixes that by watching the private services so that the KPA sees every change in deployment size. The KPA also keys its current-replica calculation off the private services, closing the loop.
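
To illustrate the wiring, here is a minimal, self-contained sketch using plain client-go rather than the knative.dev/pkg controller Impl the real controller uses; names such as setupWatch, enqueueOwner, and kpaLabelKey are illustrative assumptions, not the PR's actual code. It registers a filtered event handler on the Service informer that re-enqueues the owning PodAutoscaler whenever a private service changes.

```go
package kpa

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/util/workqueue"
)

// kpaLabelKey is a hypothetical label the private Service carries to point
// back at the PodAutoscaler that owns it.
const kpaLabelKey = "autoscaling.internal.knative.dev/podAutoscaler"

// setupWatch registers an event handler that reacts to every add/update/delete
// of a private Service and enqueues the owning PodAutoscaler for reconciliation.
func setupWatch(client kubernetes.Interface, queue workqueue.RateLimitingInterface) cache.SharedIndexInformer {
	factory := informers.NewSharedInformerFactory(client, 0)
	svcInformer := factory.Core().V1().Services().Informer()

	svcInformer.AddEventHandler(cache.FilteringResourceEventHandler{
		// Only Services labeled as private services of a PodAutoscaler are interesting.
		FilterFunc: func(obj interface{}) bool {
			svc, ok := obj.(*corev1.Service)
			return ok && svc.Labels[kpaLabelKey] != ""
		},
		Handler: cache.ResourceEventHandlerFuncs{
			AddFunc:    func(obj interface{}) { enqueueOwner(queue, obj) },
			UpdateFunc: func(_, obj interface{}) { enqueueOwner(queue, obj) },
			DeleteFunc: func(obj interface{}) { enqueueOwner(queue, obj) },
		},
	})
	return svcInformer
}

// enqueueOwner keys the work queue off the PodAutoscaler named in the label,
// so every change in deployment size reflected on the private Service kicks
// the KPA reconciler again.
func enqueueOwner(queue workqueue.RateLimitingInterface, obj interface{}) {
	svc, ok := obj.(*corev1.Service)
	if !ok || svc.Labels[kpaLabelKey] == "" {
		return
	}
	queue.Add(svc.Namespace + "/" + svc.Labels[kpaLabelKey])
}
```

On the reconcile side, the same private service (via its endpoints) is what the KPA uses to count current ready replicas, so the object that triggers the reconcile is also the object the replica count is derived from, which is what closes the loop.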

Release Note

Fixed a bug where a revision with minScale > 1 set would never become ready.

/assign @vagababov

@googlebot googlebot added the cla: yes Indicates the PR's author has signed the CLA. label Apr 6, 2020
@knative-prow-robot knative-prow-robot added the size/S Denotes a PR that changes 10-29 lines, ignoring generated files. label Apr 6, 2020
Contributor

@knative-prow-robot knative-prow-robot left a comment


@markusthoemmes: 0 warnings.


@knative-prow-robot knative-prow-robot added approved Indicates a PR has been approved by an approver from all required OWNERS files. area/API API objects and controllers area/autoscale labels Apr 6, 2020
@knative-prow-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: markusthoemmes

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

Contributor

@vagababov vagababov left a comment


/lgtm

@knative-prow-robot knative-prow-robot added the lgtm Indicates that a PR is ready to be merged. label Apr 6, 2020
@knative-metrics-robot

The following is the coverage report on the affected files.
Say /test pull-knative-serving-go-coverage to re-run this coverage report

File | Old Coverage | New Coverage | Delta
--- | --- | --- | ---
pkg/reconciler/autoscaling/kpa/controller.go | 94.3% | 94.1% | -0.2

@knative-prow-robot knative-prow-robot merged commit 2ac7eb1 into knative:master Apr 6, 2020
markusthoemmes added a commit to markusthoemmes/knative-serving that referenced this pull request Apr 6, 2020
openshift-merge-robot pushed a commit to openshift/knative-serving that referenced this pull request Apr 7, 2020
markusthoemmes added a commit to markusthoemmes/knative-serving that referenced this pull request May 4, 2020
markusthoemmes added a commit to markusthoemmes/knative-serving that referenced this pull request May 4, 2020
knative-prow-robot pushed a commit that referenced this pull request May 5, 2020
* Propagate Ingress status once ObservedGeneration matches generation. (#7606)

Currently, status propagation is gated by both the ObservedGeneration check and a readiness check. In effect, as long as the Ingress is not ready, the route will always only show "IngressNotConfigured", which isn't super helpful to the user.

This lifts this check to allow the actual status from the Ingress to bubble up into the route (and thus into the Service potentially) to allow for finer grained diagnostics towards the user.

* Reconcile on all private service events in KPA. (#7514)

Labels
approved: Indicates a PR has been approved by an approver from all required OWNERS files.
area/API: API objects and controllers
area/autoscale
cla: yes: Indicates the PR's author has signed the CLA.
lgtm: Indicates that a PR is ready to be merged.
size/S: Denotes a PR that changes 10-29 lines, ignoring generated files.