kubeadm upgrade does not apply etcd configuration changes #1564
Comments
/assign @fabriziopandini @rosti We talked about this issue, but I forgot: are we addressing it for 1.15?
I think this is a valid observation.
@timothysc @neolit123 I still don't buy into the idea of running upgrade to apply changes to the cluster. However, upgrade has many implementation assumptions, and what was discussed recently is that changes to the certificates are never applied. This issue hits another upgrade-vs-change-the-cluster "glitch".
I am also -1 on this as a use case. I need a clearer picture of all the reasons we added this.
Is there a better approach to handling configuration changes? I find running upgrade useful because I can just run it whenever the configuration file changes, no matter what changed in it.
@mxey the only alternative ATM is to use phases; there was some discussion about creating a better alternative, but there is no real consensus/commitment yet.
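For example, regenerating just the local etcd static pod manifest from an updated configuration file looks roughly like the sketch below; the file path is an assumption:

```sh
# Re-render only the local etcd static pod manifest from an updated
# kubeadm configuration file; the path here is illustrative.
kubeadm init phase etcd local --config /etc/kubernetes/kubeadm-config.yaml
```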
Unfortunately, that's not true (e.g. changes to certificates are never applied, and I suspect there are more gaps with respect to HA clusters). It works in some cases, but it was not designed/implemented with those use cases in mind.
Punting to the tracking issue for cluster reconfiguration.
+1
Faced the same issue with the etcd listen-metrics-urls argument (to bypass the client certificate flow for Prometheus metrics). Successfully patched it via kubeadm on a running cluster. Added the following lines to the kubeadm-config ConfigMap in the kube-system namespace:

...
etcd:
  local:
    extraArgs:
      listen-metrics-urls: http://0.0.0.0:2381
...

and then applied the patch via kubeadm.
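For readers reconstructing that workflow: the exact apply command is not preserved above, but the ConfigMap edit itself can be done in place, e.g. (a sketch, assuming kubectl access to the cluster):

```sh
# Edit the cluster-wide kubeadm ClusterConfiguration; the ConfigMap
# name and namespace are taken from the comment above.
kubectl edit configmap kubeadm-config -n kube-system
```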
Original report

Is this a BUG REPORT or FEATURE REQUEST?
BUG REPORT
Versions
kubeadm version (use kubeadm version): v1.14.1

Environment:
- Kubernetes version (use kubectl version): v1.14.1
- Kernel (e.g. uname -a): 3.10.0-957.10.1.el7.x86_64

What happened?
I changed etcd.local.extraArgs in my kubeadm configuration and ran kubeadm upgrade. kubeadm did not update the etcd static pod manifest. I had to run kubeadm init phase etcd local, which does not have the same checks and potential rollback that kubeadm uses for the other pods.
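For illustration, the re-run that failed to pick up the change would look roughly like the sketch below; the version is taken from the report, the config path is an assumption, and re-applying an identical version may additionally require --force:

```sh
# Re-run the upgrade after editing the configuration; per this report,
# on v1.14 this does not re-render the etcd static pod manifest.
kubeadm upgrade apply v1.14.1 --config /etc/kubernetes/kubeadm-config.yaml
```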
What you expected to happen?
kubeadm should update the etcd static pod manifest in the same way as when I change extraArgs for any of the other control-plane pods.
How to reproduce it (as minimally and precisely as possible)?
Anything else we need to know?
I think the problem is in https://github.com/kubernetes/kubernetes/blob/v1.14.1/cmd/kubeadm/app/phases/upgrade/staticpods.go#L301. If the current and target etcd versions are the same, the upgrade is skipped entirely. That check is not necessary: if the new pod manifest hash is the same as before, the upgrade will be skipped by the next check anyway.
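To make the argument concrete, here is a simplified Go paraphrase of the control flow being described; the function and parameter names are illustrative and do not match the actual kubeadm source:

```go
package main

import "fmt"

// shouldUpgradeEtcd is an illustrative paraphrase of the upgrade gate
// described above; identifiers do not match the real kubeadm code.
func shouldUpgradeEtcd(currentVersion, desiredVersion, currentHash, newHash string) bool {
	// The check at issue: identical etcd versions short-circuit the
	// upgrade, so extraArgs-only manifest changes are never applied.
	if currentVersion == desiredVersion {
		return false
	}
	// The later check the reporter points to: an unchanged manifest
	// hash already skips the upgrade, so dropping the version check
	// above would still avoid needless pod restarts.
	return currentHash != newHash
}

func main() {
	// Same etcd version but a changed manifest hash: the version
	// check wins and the configuration change is silently dropped.
	fmt.Println(shouldUpgradeEtcd("3.3.10", "3.3.10", "oldhash", "newhash")) // false
}
```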