
machine-controller pods should restart on rotation of credentials #2180

Closed
dharapvj opened this issue Jul 22, 2022 · 5 comments · Fixed by #2214
Labels
kind/feature Categorizes issue or PR as related to a new feature. sig/cluster-management Denotes a PR or issue as being assigned to SIG Cluster Management.

Comments

@dharapvj
Contributor

Description of the feature you would like to add / User story

Currently, for a few cloud providers, such as OpenStack and Azure, the service account credentials expire after a certain number of days. For some of these providers it is not possible to obtain non-expiring passwords for service accounts, so we must rotate the credentials by running kubeone apply --force-upgrade.

But currently, after rotation, the machine-controller pods (both the controller and the webhook) do not get restarted and continue to use the old credentials.

So we could do one of the following:

  1. Change the kubeone logic to restart machine-controller in such cases.
  2. Add stakater Reloader annotations to the machine-controller and webhook Deployments, ship Reloader as a kubeone addon, and let it take care of restarting the machine-controller pods when the Secrets are updated with new credentials.
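For option 2, a minimal sketch of what the Reloader annotation would look like on the Deployment (the Secret name `machine-controller-credentials` and the namespace are illustrative assumptions, not taken from the actual manifests):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: machine-controller
  namespace: kube-system
  annotations:
    # Tells stakater Reloader to perform a rolling restart of this
    # Deployment whenever the referenced Secret changes.
    # (Secret name here is hypothetical.)
    secret.reloader.stakater.com/reload: "machine-controller-credentials"
```

Alternatively, `reloader.stakater.com/auto: "true"` would watch all ConfigMaps and Secrets referenced by the pod spec instead of a named one.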
@dharapvj dharapvj added kind/feature Categorizes issue or PR as related to a new feature. sig/cluster-management Denotes a PR or issue as being assigned to SIG Cluster Management. labels Jul 22, 2022
@stroebitzer
Member

I had a very similar issue with Grafana and KKP version 2.20.4.

I had misconfigured the URL in the Grafana section of the values.yaml file.

Fixing the URL in values.yaml did not cause the Grafana Pod to be restarted, even though the change was rolled out via Helm.

@dharapvj
Contributor Author

I have added an issue on Kubermatic for Grafana as well. I will provide a fix for Grafana via a checksum annotation.

@kron4eg
Member

kron4eg commented Jul 23, 2022

The easiest way to handle this is the same approach we use in KKP: all dependent resources are hashed, and their hashes are used as annotations in the pod spec. Once a hash changes, the pod spec is altered, and that causes an automatic rollout of the Deployment.
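The checksum idea can be sketched as a Helm-style template fragment (illustrative only: KKP computes these hashes in its Go reconciler rather than in templates, and the file path and annotation key below are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: machine-controller
spec:
  template:
    metadata:
      annotations:
        # Hash of the rendered credentials Secret. When the Secret's
        # content changes, this annotation changes too, the pod template
        # differs, and Kubernetes rolls out the Deployment automatically.
        checksum/credentials: '{{ include (print $.Template.BasePath "/credentials-secret.yaml") . | sha256sum }}'
```

Any deterministic hash over the dependent resources works; the only requirement is that it lands in the pod template so that a change forces a new ReplicaSet.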

@ahmedwaleedmalik
Member

ahmedwaleedmalik commented Jul 24, 2022

Reloader would be the simplest solution for this, tbh. Unlike KKP, there is no active reconciliation going on for KubeOne, so it might be helpful as an addon in general as well.

@kron4eg
Member

kron4eg commented Jul 25, 2022

We redeploy machine-controller every time, regardless of whether the Deployment has changed or not; it's kind of like reconciliation 🤷
