RFE: change cluster config as day 2 operation #1581
possible overlap with:
So here are some use cases I do see. @anguslees, maybe you want to chime in and add your thoughts / use cases?
@neolit123 I've deliberately avoided providing specific flags, as it is very hard to know what folks want to change in a prod env. As such, my view is that all flags should be allowed to be changed.
@fabriziopandini
@fabriziopandini I've changed the title, and we can use this ticket as the tracking issue for "change the cluster". We can update the OP with user stories, pending PRs, and KEP links. In terms of KEPs, the deadline has passed, so this would need an exception (the deadline for exceptions is the 19th of Aug). Please move the milestone to 1.17 if you see fit.
@neolit123
@fabriziopandini
Closing in favor of: (it links to this issue for an example user story)
MOVED
"change the cluster" is now tracked in:
#970
Is this a BUG REPORT or FEATURE REQUEST?
Choose one: FEATURE REQUEST
Feature description
Following deployment of a K8s cluster using
kubeadm
in a cloud and/or on-prem environment, I'd like to be able to change the default values for the K8s core components (scheduler, API server, kube-proxy, controller manager, kubelet) as well as addons like CoreDNS, without having to scale up / add a new control plane node. On Slack we had a very useful conversation, and with help from @fabriziopandini we found out that there is already a discussion going on - see here.
While I understand that day 2 operations might be covered by cluster-api, I think we should definitely have this option in kubeadm (standalone), simply because a lot of folks have already started using kubeadm while cluster-api is not yet ready for consumption, and missing this feature causes a bad experience for operators.
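For context, here is a rough sketch of the kind of manual workaround operators resort to today for a day-2 control-plane config change. This is not an official kubeadm procedure; the `kubeadm-config` ConfigMap name, its `ClusterConfiguration` data key, and the `init phase` command reflect kubeadm v1.15-era behavior and may differ in other releases:

```shell
# Manual day-2 config change (illustrative workaround, not an official workflow).
# Requires access to a live cluster and a control-plane node.

# 1. Edit the cluster-wide configuration that kubeadm stores in-cluster:
kubectl -n kube-system edit configmap kubeadm-config

# 2. Dump the updated ClusterConfiguration to a local file:
kubectl -n kube-system get configmap kubeadm-config \
  -o jsonpath='{.data.ClusterConfiguration}' > clusterconfig.yaml

# 3. On each control-plane node, regenerate the affected static Pod
#    manifest so the kubelet restarts the component with the new flags:
kubeadm init phase control-plane apiserver --config clusterconfig.yaml
```

This illustrates the pain point: the change has to be repeated on every control-plane node by hand, which is exactly what a first-class "change the cluster" operation in kubeadm would avoid.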