Metrics Server (for resource metrics API) #271
I think this work is strictly under SIG Instrumentation, not SIG Autoscaling. I also feel that someone from that SIG should be an approver. Another reviewer should be someone from API Machinery, to make sure we are using the aggregator correctly.
Thanks @DirectXMan12 for creating the issue; I updated it a bit. I agree that this is purely sig-instrumentation work. @DirectXMan12 is involved in sig-instrumentation too, and I think he is the best main approver here, though it would probably make sense to have someone from api-machinery involved as well.
Fine by me (although, FWIW, I've been standing up a lot of different add-on API servers recently, so I can probably review the aggregator-related stuff too, at least for the initial reviews).
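For context on the aggregator wiring being discussed, here is a minimal sketch of the `APIService` object that registers an add-on server like metrics-server with the kube-aggregator. The Service name and namespace are assumptions taken from the typical incubator manifests, not from this thread:

```sh
# Register the metrics.k8s.io/v1beta1 API group with the aggregator.
# Assumes metrics-server runs behind a Service named "metrics-server"
# in kube-system; insecureSkipTLSVerify is only reasonable for test clusters.
cat <<'EOF' | kubectl apply -f -
apiVersion: apiregistration.k8s.io/v1beta1
kind: APIService
metadata:
  name: v1beta1.metrics.k8s.io
spec:
  service:
    name: metrics-server
    namespace: kube-system
  group: metrics.k8s.io
  version: v1beta1
  insecureSkipTLSVerify: true
  groupPriorityMinimum: 100
  versionPriority: 100
EOF
```

With this object in place, the main apiserver proxies requests under `/apis/metrics.k8s.io/v1beta1` to the metrics-server Service, which is the aggregator behavior the reviewers above are being asked to vet.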
@DirectXMan12 please provide the design proposal link and the docs PR link (and update the features tracking spreadsheet with them).
This is @piosz's project, but I'm uncertain whether it's going to make 1.7.
@piosz?
@idvoretskyi the design was reviewed and approved in the doc; I forgot to turn it into a proposal on GitHub and will do so ASAP. Regarding the feature itself, it's almost implemented (the last PR is in flight: kubernetes-sigs/metrics-server#4). I'll also add some documentation there. Since this is an alpha feature in the incubator, I don't think we need official documentation in the Kubernetes docs.
@idvoretskyi I'd like to graduate this to beta in 1.8. Should I create a new feature entry or reuse this one?
@piosz Thanks, yeah, please reuse.
@piosz yes, please reuse the current one (as it will be the same feature, in a different stage).
Automatic merge from submit-queue (batch tested with PRs 49727, 51792).

Introducing metrics-server

ref kubernetes/enhancements#271

There is still some work blocked on problems with repo synchronization:
- migrate to `v1beta1` introduced in #51653
- bump deps to HEAD

Will do it in follow-up PRs once the issue is resolved.

```release-note
Introduced Metrics Server
```
The remaining work here is to sync the metrics-server repo with the recent changes in Kubernetes (kubernetes/kubernetes#51653). The problem is that the syncing script is broken, so the sync needs to be done manually.
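For anyone following along, once the `v1beta1` API group is registered, the resource metrics flow can be exercised directly through the main apiserver. A quick sketch (paths follow the `metrics.k8s.io/v1beta1` group discussed above; `kubectl top` requires a client new enough to use the metrics API):

```sh
# Raw queries against the aggregated resource metrics API.
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/kube-system/pods"

# The same data backs the kubectl top subcommands.
kubectl top nodes
kubectl top pods --namespace kube-system
```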
@piosz do you have an update on the missing docs? The PR for this is due today.
Automatic merge from submit-queue (batch tested with PRs 52488, 52548). If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Bumped Metrics Server to v0.2.0

ref kubernetes/enhancements#271

**Release note**:

```release-note
Introduced Metrics Server in version v0.2.0. For more details see https://github.com/kubernetes-incubator/metrics-server/releases/tag/v0.2.0.
```
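A quick way to confirm which Metrics Server version a cluster actually runs after a bump like this one; the deployment name and namespace are assumed from the standard incubator manifests:

```sh
# Print the container image (and thus the version tag) of the running deployment.
kubectl -n kube-system get deployment metrics-server \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
```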
Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions [here](https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md).

Added OWNERS for metrics-server

ref kubernetes/enhancements#271
@piosz we need docs for this ASAP in order to make the 1.8 documentation deadline.
@jdumars I'm working on this. It will be ready today or tomorrow.
@piosz when you have a PR, could you please link it to this issue? Thanks!
Filed kubernetes/kubernetes#52811 to track the missing e2e tests.
This was discussed today during our release team burndown. We're extremely worried about adding e2e tests this late in the cycle. Best case, those tests get in today or Monday, we wait a day for the e2e tests to give us signal, then we have to debug any flakes in the tests or identify any actual failures. This is all before the release on Wednesday. Can someone address these concerns? What's the risk if we leave this feature off by default? cc @kubernetes/kubernetes-release-managers
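Pending proper e2e coverage, a rough manual smoke test along the lines of what kubernetes/kubernetes#52811 asks for might look like the following. This is a sketch, not the actual test plan from that issue:

```sh
# Check that the aggregated API is registered and reporting Available.
kubectl get apiservice v1beta1.metrics.k8s.io \
  -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'

# Check that node metrics are actually being served.
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
```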
Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. Prevent issues from auto-closing with a `/lifecycle frozen` comment. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/remove-lifecycle stale
Issues go stale after 90d of inactivity. Mark the issue as fresh with `/remove-lifecycle stale`. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
@piosz If so, can you please ensure the feature is up-to-date with the appropriate:
cc @idvoretskyi
Stale issues rot after 30d of inactivity. Mark the issue as rotten with `/lifecycle rotten`. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
Rotten issues close after 30d of inactivity. Reopen the issue with `/reopen`. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.