
Dynamic Maximum volume count #554

Closed
gnufied opened this issue Mar 29, 2018 · 73 comments
Labels
  • kind/feature: Categorizes issue or PR as related to a new feature.
  • sig/storage: Categorizes an issue or PR as relevant to SIG Storage.
  • stage/stable: Denotes an issue tracking an enhancement targeted for Stable/GA status.

Comments

@gnufied (Member) commented Mar 29, 2018

Feature Description

@gnufied changed the title from "Dynamic Max. volume count" to "Dynamic Maximum volume count" on Mar 29, 2018
@gnufied (Member, Author) commented Mar 29, 2018

/assign

@klausenbusk commented:

Is it intended for this to work with flexvolume plugins? We still lack a solution for that use-case.

@gnufied (Member, Author) commented Apr 6, 2018

It is intended to work with all volume types, including FlexVolume and CSI.
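
For context, the CSI side of this works by letting a driver advertise a per-node attach limit through the NodeGetInfo RPC; kubelet then publishes that value so the scheduler can count volumes per node. Below is a minimal Go sketch of that call using the CSI spec's Go bindings; the struct name, node ID, and the limit of 25 are illustrative assumptions, not taken from any particular driver.

```go
package driver

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeServer is a hypothetical CSI node service; only NodeGetInfo is shown.
type nodeServer struct{}

// NodeGetInfo reports this node's ID and how many volumes the driver can
// attach to it. Kubelet picks up MaxVolumesPerNode and exposes it so the
// scheduler can enforce the limit when placing pods that use volumes.
func (ns *nodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId:            "example-node-1", // illustrative node ID
		MaxVolumesPerNode: 25,               // illustrative per-node limit
	}, nil
}
```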

@idvoretskyi (Member) commented:

@kubernetes/sig-storage-feature-requests

@k8s-ci-robot added the sig/storage and kind/feature labels on Apr 12, 2018
@idvoretskyi (Member) commented:

/assign @gnufied

@idvoretskyi added this to the v1.11 milestone on Apr 12, 2018
@justaugustus added the stage/alpha and tracked/yes labels on Apr 29, 2018
@mdlinville commented:

@gnufied please fill out the appropriate line item of the 1.11 feature tracking spreadsheet and open a placeholder docs PR against the release-1.11 branch by 5/25/2018 (tomorrow as I write this) if new docs or docs changes are needed and a relevant PR has not yet been opened.

@justaugustus (Member) commented:

@gnufied -- What's the current status of this feature?
As we haven't heard from you regarding some items, this feature has been moved to the Milestone risks sheet within the 1.11 Features tracking spreadsheet.

Please update the line item for this feature on the Milestone risks sheet ASAP and ping me and @idvoretskyi so we can assess the feature status; otherwise we will need to officially remove it from the milestone.

@gnufied (Member, Author) commented Jun 1, 2018

The PR is still on track for 1.11. The implementation PR is kubernetes/kubernetes#64154.

We have approval from @saad-ali and are waiting on approvals from @liggitt and @bsalamat. I just had to rebase the PR because upstream changed and Jordan requested some naming changes.

@justaugustus (Member) commented:

@gnufied -- there needs to be a Docs PR issued as well, as Misty mentioned above.
Please update the Features tracking sheet with that information, so that we can remove this feature from the Milestone risks tab.

@gnufied (Member, Author) commented Jun 1, 2018

Added docs PR - kubernetes/website#8871

@justaugustus (Member) commented:

@gnufied thanks for the update! I've moved this feature back into the main sheet.

@justaugustus removed the tracked/yes label on Jul 18, 2018
k8s-github-robot pushed a commit to kubernetes/kubernetes that referenced this issue Jul 21, 2018
Automatic merge from submit-queue (batch tested with PRs 66410, 66398, 66061, 66397, 65558). If you want to cherry-pick this change to another branch, please follow the instructions at https://github.com/kubernetes/community/blob/master/contributors/devel/cherry-picks.md.

Fix volume limit for EBS on m5 and c5 instances

This is a fix for the lower volume limits on m5 and c5 instance types while we wait for kubernetes/enhancements#554 to land in GA.

This problem became urgent because many of our users are trying to migrate to those instance types in light of the Spectre/Meltdown vulnerabilities, but the lower volume limit on those instance types often causes cluster instability. Yes, they can work around it by configuring the scheduler with a lower limit, but that often becomes difficult when the cluster has mixed instance types.

The newer default limits were picked from https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html

Text about Spectre/Meltdown is available at https://community.bitnami.com/t/spectre-variant-2/54961/5

/sig storage
/sig scheduling

```release-note
Fix volume limit for EBS on m5 and c5 instance types
```
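
The gist of the fix is that the EBS attach limit becomes a function of the instance type instead of a single constant. Here is a minimal Go sketch of that shape; the prefixes and the numbers 25/39 are illustrative assumptions for the sketch, not the actual plugin code (the real values follow the AWS volume_limits documentation linked above).

```go
package main

import (
	"fmt"
	"strings"
)

// volumeLimitForInstanceType returns an illustrative EBS attach limit for an
// EC2 instance type: Nitro-based families such as m5/c5 get a lower limit
// than older instance families. Prefixes and numbers are assumptions only.
func volumeLimitForInstanceType(instanceType string) int {
	for _, nitroPrefix := range []string{"m5", "c5"} {
		if strings.HasPrefix(instanceType, nitroPrefix) {
			return 25 // assumed limit for Nitro-based instances
		}
	}
	return 39 // assumed default for older instance families
}

func main() {
	for _, t := range []string{"m4.large", "m5.large", "c5.xlarge"} {
		fmt.Printf("%s -> %d volumes\n", t, volumeLimitForInstanceType(t))
	}
}
```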
@kacole2 commented Jul 23, 2018

@gnufied This feature was worked on in the previous milestone, so we'd like to check in and see if there are any plans for it to graduate to the next stage in Kubernetes 1.12, as mentioned in your original post. It still has the alpha tag as well, so we need to update it accordingly.

If there are any updates, please explicitly ping @justaugustus, @kacole2, @robertsandoval, @rajendar38 to note that it is ready to be included in the Features Tracking Spreadsheet for Kubernetes 1.12.


Please note that the Features Freeze is July 31st, after which any incomplete Feature issues will require an Exception request to be accepted into the milestone.

In addition, please be aware of the following relevant deadlines:

  • Docs deadline (open placeholder PRs): 8/21
  • Test case freeze: 8/28

Please make sure all PRs for features have relevant release notes included as well.

Happy shipping!

@gnufied (Member, Author) commented Jul 23, 2018

@kacole2 For 1.12 the plan is to further expand this feature to cover more volume types and add CSI support. The decision on whether to move to beta in 1.12 will be made in a day or two.
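
For anyone following along: once the feature is enabled, the per-node limits surface as attachable-volumes-* resources in each node's allocatable set. Below is a minimal client-go sketch that lists them, assuming a reachable cluster via the default kubeconfig; the program itself is only illustrative and is not part of the feature.

```go
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config); assumes a reachable cluster.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, node := range nodes.Items {
		for name, qty := range node.Status.Allocatable {
			// Dynamic volume limits show up as "attachable-volumes-*" resources,
			// e.g. attachable-volumes-aws-ebs for the in-tree EBS plugin.
			if strings.HasPrefix(string(name), "attachable-volumes-") {
				fmt.Printf("%s: %s = %s\n", node.Name, name, qty.String())
			}
		}
	}
}
```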

@kcmartin commented Oct 2, 2019

Hello @gnufied -- 1.17 Enhancement Shadow here! 🙂

I wanted to reach out to see whether this enhancement will be graduating to stable in 1.17.

Please let me know so that this enhancement can be added to the 1.17 tracking sheet.

Thank you!

🔔Friendly Reminder

The current release schedule is

  • Monday, September 23 - Release Cycle Begins
  • Tuesday, October 15, EOD PST - Enhancements Freeze
  • Thursday, November 14, EOD PST - Code Freeze
  • Tuesday, November 19 - Docs must be completed and reviewed
  • Monday, December 9 - Kubernetes 1.17.0 Released

@bertinatto (Member) commented:

/assign

@bertinatto (Member) commented:

Hi @kcmartin; we intend to graduate this feature to GA in v1.17.

@kcmartin commented Oct 4, 2019

Thanks @bertinatto !
/milestone v1.17

@k8s-ci-robot added this to the v1.17 milestone on Oct 4, 2019
@kcmartin added the tracked/yes label and removed the tracked/no label on Oct 4, 2019
@kcmartin commented Oct 4, 2019

/stage stable

@k8s-ci-robot added the stage/stable label and removed the stage/beta label on Oct 4, 2019
@irvifa (Member) commented Oct 21, 2019

Hello @gnufied, I'm one of the v1.17 docs shadows.
Does this enhancement (or the work planned for v1.17) require any new docs or modifications to existing docs? If not, can you please update the 1.17 Enhancement Tracker Sheet (or let me know and I'll do so)?

If so, just a friendly reminder that we're looking for a PR against k/website (branch dev-1.17) by Friday, November 8th; it can just be a placeholder PR at this time. Let me know if you have any questions!

@irvifa (Member) commented Nov 1, 2019

@gnufied

Since we're approaching the Docs placeholder PR deadline on Nov 8th, please try to get one in against the k/website dev-1.17 branch.

@bertinatto (Member) commented:

@irvifa: created placeholder PR here: kubernetes/website#17432

@kcmartin commented Nov 6, 2019

Hi @gnufied
I am one of the Enhancements Shadows for the 1.17 Release Team. We are very near Code Freeze (Nov 14th) for this release cycle, so I'm just checking in on the progress of this enhancement. I see that https://github.com/kubernetes/kubernetes/pull/77595 was filed in relation to this.

Are there any other PRs related to this enhancement? If yes, can you please link them here?

Thank you in advance 😄

@bertinatto (Member) commented:

Hi @kcmartin.
These are the open PRs related to this feature:
Docs: kubernetes/website#17432
Kubernetes: kubernetes/kubernetes#83568

@kcmartin commented Nov 8, 2019

Thank you @bertinatto !

@jeremyrickard (Contributor) commented:

Hey @gnufied @bertinatto , Happy New Year! 1.18 Enhancements lead here 👋 Thanks for getting this across the line in 1.17!!

I'm going through and doing some cleanup for the milestone and checking on things that graduated in the last release. Since this graduated to GA in 1.17, I'd like to close this issue out, but the KEP is still marked as implementable. Could you submit a PR to update the KEP to implemented, and then we can close this issue out?

Thanks so much!

@jeremyrickard removed the tracked/yes label on Jan 3, 2020
@bertinatto (Member) commented:

> Hey @gnufied @bertinatto, Happy New Year! 1.18 Enhancements lead here 👋 Thanks for getting this across the line in 1.17!!
>
> I'm going through and doing some cleanup for the milestone and checking on things that graduated in the last release. Since this graduated to GA in 1.17, I'd like to close this issue out but the KEP is still marked as implementable. Could you submit a PR to update the KEP to implemented and then we can close this issue out?
>
> Thanks so much!

Hi @jeremyrickard,

I just created #1433 to update the KEP status.

@fejta-bot commented:

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 5, 2020
@palnabarun (Member) commented:

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label on Apr 5, 2020
@palnabarun (Member) commented:

Hi @bertinatto, thank you so much for updating the status. :)

@palnabarun (Member) commented:

Closing this enhancement issue since the KEP has been implemented.

/close

@k8s-ci-robot (Contributor) commented:

@palnabarun: Closing this issue.

In response to this:

> Closing this enhancement issue since the KEP has been implemented.
>
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
