
kubeadm upgrade does not apply etcd configuration changes #1564

Closed
mxey opened this issue May 14, 2019 · 8 comments
Labels
kind/feature: Categorizes issue or PR as related to a new feature.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@mxey

mxey commented May 14, 2019

Is this a BUG REPORT or FEATURE REQUEST?

Choose one: BUG REPORT

Versions

kubeadm version (use kubeadm version): v1.14.1

Environment:

  • Kubernetes version (use kubectl version): v1.14.1
  • Cloud provider or hardware configuration: QEMU/KVM
  • OS (e.g. from /etc/os-release): CentOS Linux 7 (Core)
  • Kernel (e.g. uname -a): 3.10.0-957.10.1.el7.x86_64
  • Others:

What happened?

I changed etcd.local.extraArgs in my kubeadm configuration and ran kubeadm upgrade. kubeadm did not update the etcd static pod manifest. I had to run kubeadm init phase etcd local, which does not have the same checks and potential rollback that kubeadm uses for the other pods.
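For reference, the manual workaround described above amounts to something like this (a sketch of the invocation; the config file name is the one from the reproduction steps below, not quoted from an actual run):

# Regenerates /etc/kubernetes/manifests/etcd.yaml from the given config.
# Unlike kubeadm upgrade apply, this phase does not compare manifest hashes
# or roll back if the new etcd pod fails to come up.
kubeadm init phase etcd local --config ./kubeadm.yaml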

What you expected to happen?

kubeadm should update the etcd static pod manifest, in the same way it does when I change extraArgs for any of the control-plane pods.

How to reproduce it (as minimally and precisely as possible)?

cat > kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
etcd:
  local:
    extraArgs: {}
EOF

kubeadm init --config ./kubeadm.yaml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml


cat > kubeadm.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
etcd:
  local:
    extraArgs: 
        listen-metrics-urls: http://localhost:9379/
EOF

kubeadm upgrade apply --yes --config ./kubeadm.yaml

grep metrics /etc/kubernetes/manifests/etcd.yaml 
(no output)
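If the upgrade had applied the change, the grep would be expected to find the new flag rendered into the etcd container command, roughly like this (illustrative output, not captured from a real run):

grep metrics /etc/kubernetes/manifests/etcd.yaml
    - --listen-metrics-urls=http://localhost:9379/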

Anything else we need to know?

I think the problem is in https://github.com/kubernetes/kubernetes/blob/v1.14.1/cmd/kubeadm/app/phases/upgrade/staticpods.go#L301. If the version is the same, the upgrade is not performed. That check is not necessary, because if the pod manifest hash is the same as before, the upgrade will also be skipped in the next check anyway.

@timothysc
Member

/assign @fabriziopandini @rosti

We talked about this issue, but I forgot whether we are addressing it for 1.15?

@timothysc added the priority/important-longterm, kind/feature, and triage/needs-information labels May 14, 2019
@neolit123
Member

If the version is the same, the upgrade is not performed. That check is not necessary, because if the pod manifest hash is the same as before, the upgrade will also be skipped in the next check anyway.

i think this is a valid observation.

@fabriziopandini
Member

fabriziopandini commented May 14, 2019

@timothysc @neolit123 I still don't buy into the idea of running kubeadm upgrade to change flags...

However, upgrade has many implementation assumptions, and what was discussed recently is that changes to the certificates are never applied. This issue hits another "upgrade vs. change the cluster" glitch.

@neolit123
Member

I still don't buy into the idea of running kubeadm upgrade to change flags...

i am also -1 on this as a use case.
but we allow that by passing --config on apply.

i need a clearer picture of all the reasons we added --config.

@mxey
Author

mxey commented May 15, 2019

I still don't buy into the idea of running kubeadm upgrade to change flags...

Is there a better approach to handling configuration changes? I find it useful, because I can just run it whenever the configuration file changes, no matter what changed in it.

@fabriziopandini
Member

@mxey the only alternative ATM is to use phases; there was some discussion about creating a better alternative, but there is no real consensus/commitment yet

I can just run it whenever the configuration file changes, no matter what changed in it.

Unfortunately, that's not true (e.g. changes to certificates are not applied, and I suspect there are also some more stop-gaps wrt HA clusters). It works in some cases, but it was not designed/implemented with those use cases in mind.

@neolit123
Member

punting to the following tracking issue for cluster reconf
#1581

Unfortunately, that's not true (e.g. changes to certificates are not applied, and I suspect there are also some more stop-gaps wrt HA clusters). It works in some cases, but it was not designed/implemented with those use cases in mind.

+1

@ofen

ofen commented Dec 25, 2020

Faced the same issue with the etcd listen-metrics-urls argument (to bypass the client certificate flow for Prometheus metrics). Successfully patched it via kubeadm in a running cluster.

Added the following lines to the kubeadm-config ConfigMap in the kube-system namespace:

...
etcd:
  local:
    extraArgs:
      listen-metrics-urls: http://0.0.0.0:2381
...

And applied the patch via kubeadm init phase etcd local --config <(kubeadm config view) (on each master).
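Spelled out, that workaround is roughly the following sequence (a sketch; the kubectl edit step is an assumed way of changing the ConfigMap, and kubeadm config view was still available in the kubeadm versions discussed here):

# 1. Add the extraArgs entry shown above to the ClusterConfiguration
#    stored in the kubeadm-config ConfigMap.
kubectl -n kube-system edit configmap kubeadm-config

# 2. On each control-plane node, regenerate the etcd static pod manifest
#    from the cluster configuration; the kubelet picks up the changed
#    /etc/kubernetes/manifests/etcd.yaml and restarts etcd.
kubeadm init phase etcd local --config <(kubeadm config view)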
