Scaling Roadmap 2019 #3040
Conversation
/assign mattmoor
Hey @mattmoor, I would love your input on this draft 2019 scaling roadmap.
/lgtm
Co-Authored-By: josephburnett <josephburnett79@gmail.com>
docs/roadmap/scaling-2019.md (Outdated)
The goal of Overload Handling is to enqueue requests at the revision level. Scale-from-zero should not result in overload as long as autoscaling can react in a reasonable amount of time to provide additional pods; when new pods come online, they should be able to take load off the existing pods. Even when scaled above zero, brief spikes of overload should be handled by enqueuing requests at the revision level. The depth of the revision-level queue should also be configurable, because even the Revision as a whole needs to guard against overload.
The overall problem touches both Networking and Autoscaling, two different working groups. Much of the overload handling will be implemented in the Activator, which is part of ingress, so this project is shared jointly between the two working groups.
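As an aside, a minimal Go sketch can make the revision-level queueing idea concrete. This is not the actual Activator implementation; the handler below (with hypothetical names such as `revisionQueue` and `maxQueueDepth`) simply bounds how many requests a revision admits at once and sheds the excess with a 503, which is the behavior a configurable revision-level queue depth is meant to provide.

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// revisionQueue is a hypothetical, minimal sketch of revision-level
// overload handling: a buffered channel acts as the bounded queue,
// and its capacity is the configurable queue depth.
type revisionQueue struct {
	slots chan struct{}
}

func newRevisionQueue(maxQueueDepth int) *revisionQueue {
	return &revisionQueue{slots: make(chan struct{}, maxQueueDepth)}
}

// Handle admits a request if a queue slot is free, otherwise it sheds
// load with a 503 so the Revision as a whole is guarded against overload.
func (q *revisionQueue) Handle(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		select {
		case q.slots <- struct{}{}:
			// A slot is free: serve the request and release the slot afterwards.
			defer func() { <-q.slots }()
			next.ServeHTTP(w, r)
		default:
			// Queue is full: reject instead of piling load onto the pods.
			http.Error(w, "revision overloaded", http.StatusServiceUnavailable)
		}
	})
}

func main() {
	backend := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "hello")
	})
	// A depth of 100 is an arbitrary example value; in practice it
	// would come from per-revision configuration.
	log.Fatal(http.ListenAndServe(":8080", newRevisionQueue(100).Handle(backend)))
}
```

The buffered channel here stands in for the bounded queue; making its capacity configurable per revision is the knob the paragraph above describes.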
cc @tcnghia
@mattmoor this is ready for another look.
@josephburnett thanks for the ping - added a PR to your fork & branch proposing something around migrating K8s Deployments to Knative.
Propose section on migrating K8s Deployments
So there's good news and bad news. 👍 The good news is that everyone who needs to sign a CLA (the pull request submitter and all commit authors) has done so. Everything is all good there. 😕 The bad news is that it appears that one or more commits were authored or co-authored by someone other than the pull request submitter. We need to confirm that all authors are ok with their commits being contributed to this project. Please have them confirm that here in the pull request. Note to project maintainer: this is a terminal state, meaning the CLA status will not change from this state on its own.
Sorry I messed up the CLA bot. I confirm that I am ok with my commits being contributed to this project. And feel free to rebase and take my wording without my actual commit if that makes it easier.
A Googler has manually verified that the CLAs look good. (Googler, please make sure the reason for overriding the CLA status is clearly documented in these comments.)
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
A few comments: I feel like some of the points are narrowed down too much in a specific direction.
Reworked parts of the Scaling roadmap.
- Unified some wording (mostly capitalization).
- Removed prescriptive key steps. These should be captured by the respective projects, which will be more dynamically changeable than this document.
@evankanderson can you manually add the cla:yes label please? @markusthoemmes just added some edits to my branch.
/cla yes
A Googler has manually verified that the CLAs look good. (Googler, please make sure the reason for overriding the CLA status is clearly documented in these comments.)
@mattmoor can you lgtm?
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED.
This pull request has been approved by: evankanderson, josephburnett.
Move to corev1.PodSpec now that vN-1 supports the containers field. (knative#4221)
Previously we defined our own partial PodSpec because the corev1 version lacks `omitempty` and appears as `containers: null` in requests from generated clients, even if unspecified, which would have broken webhook validation. Now that the field has been out for a release, we can switch to the common PodSpec. (An illustrative sketch of the `omitempty` behavior follows this list.)

Scaling Roadmap 2019 (knative#3040)
* Scaling 2019 roadmap stub.
* Descriptions for all 2019 goals.
* Goals, POCs and Github projects for each.
* Remove recap (will do later).
* Remove indent.
* Add Pluggability and HPA line item.
* Yanwei as POC for layering.
* Update docs/roadmap/scaling-2019.md (Co-Authored-By: josephburnett <josephburnett79@gmail.com>)
* Update docs/roadmap/scaling-2019.md (Co-Authored-By: josephburnett <josephburnett79@gmail.com>)
* Clarify overload handling for 0 and non-0 cases.
* Refactor cold-start goal.
* Remove POC.
* Autoscaler scalability.
* More edits.
* HPA Interaction.
* Minor edits.
* Propose section on migrating K8s Deployments.
* Reworked parts of the Scaling roadmap: unified some wording (mostly capitalization); removed prescriptive key steps, which should be captured by the respective projects since those will be more dynamically changeable than this document.

Enable setting the resource request/limits via annotations for the queue-proxy side-car container (knative#4151)
* Enable setting the resource request/limits via annotations for the queue-proxy side-car container
* Last PR comments
* more
* added integration tests
* more
* testfix
* integrationtest
* comments
* integration test fix
* PR comments
* more
* final
* more pr comments
* added error ErrInvalidValue
* code coverage of queue.go

Remove unused constants. (knative#4238)

Update DEVELOPMENT.md (knative#4230)
Auto TLS landed in v0.6, so this documentation is out of date.

golang format tools (knative#4241)
Produced via:
`gofmt -s -w $(find -path './vendor' -prune -o -type f -name '*.go' -print)`
`goimports -w $(find -name '*.go' | grep -v vendor)`

Move Metric interfaces into the general autoscaling package. (knative#4236)
* Move Metric interfaces into the general autoscaling package. This used to be KPA specific but will soon need to be used by HPA resources as well to trigger metric collection. Decider interfaces and types stay KPA specific.
* Move the Metrics resource interface next to the metric implementation.
* Move Deciders interface for consistency.

Apply various fixes pointed out by staticcheck. (knative#4242)
* Transform string(buf.Bytes()) to buf.String().
* Remove a bunch of unused code.
* Fix error capitalization.
* Fix issue with error overlapping.
* Fix deprecated usage of Apps without version.
* Fix file permission resolution.
* Fix comparison to boolean.
* Fix issue with variable never being used.
* Remove unused conditionsets.
* Fix error checks after fixing capitalization.
* Remove unused values in performance tests.
* Remove some more unused code.

steadier state

Format markdown (knative#4240)
Produced via:
`prettier --write --prose-wrap=always $(find -name '*.md' | grep -v vendor | grep -v .github)`

Drop DeprecatedName from service_test.go (knative#4243)

some junk things work
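The corev1.PodSpec entry above turns on a detail of Go's encoding/json: a slice field whose tag lacks `omitempty` is serialized as an explicit `null` when unset, which is exactly what a validating webhook would then receive. A small self-contained sketch (toy types, not the real corev1 structs) shows the difference:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Toy stand-ins for a PodSpec-like type; these are illustrative only
// and not the real corev1 definitions.
type withoutOmitempty struct {
	Containers []string `json:"containers"`
}

type withOmitempty struct {
	Containers []string `json:"containers,omitempty"`
}

func main() {
	// Without omitempty, an unset (nil) slice is serialized as an explicit null.
	a, _ := json.Marshal(withoutOmitempty{})
	fmt.Println(string(a)) // {"containers":null}

	// With omitempty, the unset field is dropped from the output entirely.
	b, _ := json.Marshal(withOmitempty{})
	fmt.Println(string(b)) // {}
}
```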
This is what we hope to accomplish in 2019.