re-evaluate kubeadm-config download and dynamic defaults #2328

Open · 3 tasks
neolit123 opened this issue Oct 16, 2020 · 10 comments
Labels
  • kind/cleanup: Categorizes issue or PR as related to cleaning up code, process, or technical debt.
  • kind/design: Categorizes issue or PR as related to design.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/backlog: Higher priority than priority/awaiting-more-evidence.
Comments

@neolit123
Member

neolit123 commented Oct 16, 2020

We saw a few problems with how kubeadm handles downloading the configuration from the cluster and then applying defaults to it:
#2323

A couple of tasks here are:

@neolit123 neolit123 added the kind/design, kind/cleanup, and priority/important-longterm labels on Oct 16, 2020
@neolit123 neolit123 self-assigned this Oct 16, 2020
@neolit123 neolit123 added this to the v1.20 milestone Oct 16, 2020
@fabriziopandini
Member

I'm still making up my mind on this topic, but what about rephrasing the second goal as:

  • "don't apply dynamic defaults when downloading the config"

My assumption is that defaults should be applied only the first time a config is processed, during init or join; all the other commands should then rely on the values stored in the ConfigMap. N.B. since we are not storing node-specific configuration, there could be some exceptions to this rule, e.g. for upgrades or certificate renewal, but this requires further investigation.
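For illustration only, here is a minimal Go sketch of that rule; the type and helper names (ClusterConfig, applyDynamicDefaults, LoadConfig, detectIP) are hypothetical and not kubeadm's actual API:

```go
// Sketch: apply dynamic defaults only when a config is first processed
// (init/join); skip them when the config is downloaded back from the
// kubeadm-config ConfigMap.
package config

// ClusterConfig is a stand-in for kubeadm's ClusterConfiguration;
// the real type lives in the kubeadm API packages.
type ClusterConfig struct {
	AdvertiseAddress     string
	ControlPlaneEndpoint string
}

// applyDynamicDefaults fills in values that depend on the local host,
// e.g. the detected node IP. Hypothetical helper, for illustration.
func applyDynamicDefaults(cfg *ClusterConfig, detectIP func() string) {
	if cfg.AdvertiseAddress == "" {
		cfg.AdvertiseAddress = detectIP()
	}
}

// LoadConfig models the proposed rule: dynamic defaults run only on the
// init/join path; when the config comes from the cluster, the stored
// values are used as-is.
func LoadConfig(fromCluster bool, stored ClusterConfig, detectIP func() string) ClusterConfig {
	cfg := stored
	if !fromCluster { // first time the config is processed (init or join)
		applyDynamicDefaults(&cfg, detectIP)
	}
	return cfg
}
```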

@neolit123
Member Author

"don't apply dynamic defaults when downloading the config"

would that work for join --control-plane too?

@fabriziopandini
Member

"don't apply dynamic defaults when downloading the config"

would that work for join --control-plane too?

My assumption is that defaults should be applied only the first time a config is processed, during init or join (or join control-plane)

@neolit123
Member Author

this PR reduced the unit test overhead by using static defaults (instead of dynamic) in most places:
kubernetes/kubernetes#98638
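
For context, a hypothetical test sketch of why static defaults reduce that overhead: they never touch the host (no IP or hostname detection), so the test stays fast and deterministic. All names below are made up for illustration and are not the actual kubeadm test code:

```go
package config

import "testing"

// setStaticDefaults uses fixed values instead of probing the local host,
// so a unit test needs no network interfaces or hostname lookups.
func setStaticDefaults(cfg *ClusterConfig) {
	if cfg.AdvertiseAddress == "" {
		cfg.AdvertiseAddress = "1.2.3.4" // fixed, deterministic placeholder
	}
}

func TestStaticDefaults(t *testing.T) {
	cfg := ClusterConfig{}
	setStaticDefaults(&cfg)
	if cfg.AdvertiseAddress != "1.2.3.4" {
		t.Fatalf("unexpected default: %q", cfg.AdvertiseAddress)
	}
}
```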

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 7, 2021
@neolit123 neolit123 modified the milestones: v1.22, v1.23 Jul 5, 2021
@neolit123
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 26, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 24, 2021
@neolit123
Member Author

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 24, 2021
@neolit123 neolit123 modified the milestones: v1.23, v1.24 Nov 23, 2021
@neolit123 neolit123 changed the title from "revaluate kubeadm-config download and dynamic defaults" to "re-evaluate kubeadm-config download and dynamic defaults" on Jan 11, 2022
@neolit123 neolit123 removed this from the v1.24 milestone Mar 29, 2022
@neolit123 neolit123 added this to the v1.25 milestone Mar 29, 2022
@neolit123 neolit123 removed their assignment May 10, 2022
@neolit123 neolit123 added the priority/backlog label and removed the priority/important-longterm label on May 11, 2022
@neolit123 neolit123 modified the milestones: v1.25, Next May 11, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 9, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 8, 2022
@neolit123 neolit123 added the lifecycle/frozen label and removed the lifecycle/rotten label on Sep 8, 2022