
kubeadm.skip-phases=addon/kube-proxy is not honored when restarting an existing cluster #12389

Closed
twz123 opened this issue Sep 2, 2021 · 6 comments · Fixed by #13121 or #13538
Labels

  • co/kubeadm — Issues relating to kubeadm
  • kind/bug — Categorizes issue or PR as related to a bug.
  • priority/important-soon — Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

Comments


twz123 commented Sep 2, 2021

Steps to reproduce the issue:

#!/usr/bin/env sh

set -eux

minikube_start() {
  minikube start \
    --driver kvm \
    --extra-config=kubeadm.skip-phases=addon/kube-proxy
  #--kubernetes-version=v1.18.14 \
}

export LANG=C
export MINIKUBE_PROFILE=proof

minikube version
minikube delete || true
minikube_start

sleep 5
kubectl --context "$MINIKUBE_PROFILE" -n kube-system get daemonset -owide -l k8s-app=kube-proxy
sleep 5
kubectl --context "$MINIKUBE_PROFILE" -n kube-system get daemonset -owide -l k8s-app=kube-proxy

minikube stop
minikube_start

sleep 5
kubectl --context "$MINIKUBE_PROFILE" -n kube-system get daemonset -owide -l k8s-app=kube-proxy

Output is as follows for me:

+ export LANG=C
+ LANG=C
+ export MINIKUBE_PROFILE=proof
+ MINIKUBE_PROFILE=proof
+ minikube version
minikube version: v1.22.0
commit: a03fbcf166e6f74ef224d4a63be4277d017bb62e
+ minikube delete
* Deleting "proof" in kvm2 ...
* Removed all traces of the "proof" cluster.
+ minikube_start
+ minikube start --driver kvm --extra-config=kubeadm.skip-phases=addon/kube-proxy
* [proof] minikube v1.22.0 on Nixos 21.05.20210831.60712d4 (Okapi)
  - MINIKUBE_PROFILE=proof
* Using the kvm2 driver based on user configuration
* Starting control plane node proof in cluster proof
* Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
* Preparing Kubernetes v1.21.2 on Docker 20.10.6 ...
  - kubeadm.skip-phases=addon/kube-proxy
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass

! /home/t.wieczorek/.local/bin/kubectl is version 1.19.14, which may have incompatibilites with Kubernetes 1.21.2.
  - Want kubectl v1.21.2? Try 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "proof" cluster and "default" namespace by default
+ sleep 5
+ kubectl --context proof -n kube-system get daemonset -owide -l k8s-app=kube-proxy
No resources found in kube-system namespace.
+ sleep 5
+ kubectl --context proof -n kube-system get daemonset -owide -l k8s-app=kube-proxy
No resources found in kube-system namespace.
+ minikube stop
* Stopping node "proof"  ...
* 1 nodes stopped.
+ minikube_start
+ minikube start --driver kvm --extra-config=kubeadm.skip-phases=addon/kube-proxy
* [proof] minikube v1.22.0 on Nixos 21.05.20210831.60712d4 (Okapi)
  - MINIKUBE_PROFILE=proof
* Using the kvm2 driver based on existing profile
* Starting control plane node proof in cluster proof
* Restarting existing kvm2 VM for "proof" ...
* Preparing Kubernetes v1.21.2 on Docker 20.10.6 ...
  - kubeadm.skip-phases=addon/kube-proxy
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass

! /home/t.wieczorek/.local/bin/kubectl is version 1.19.14, which may have incompatibilites with Kubernetes 1.21.2.
  - Want kubectl v1.21.2? Try 'minikube kubectl -- get pods -A'
* Done! kubectl is now configured to use "proof" cluster and "default" namespace by default
+ sleep 5
+ kubectl --context proof -n kube-system get daemonset -owide -l k8s-app=kube-proxy
NAME         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE   CONTAINERS   IMAGES                          SELECTOR
kube-proxy   0         0         0       0            0           kubernetes.io/os=linux   5s    kube-proxy   k8s.gcr.io/kube-proxy:v1.21.2   k8s-app=kube-proxy

After the restart, the kube-proxy DaemonSet is installed, which shouldn't be the case given the skip-phases flag.
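Until this is fixed, one possible (untested) stopgap is to delete the objects that the restart re-creates, assuming nothing else in the cluster depends on them:

# Untested stopgap: remove the DaemonSet that the restart re-created.
kubectl --context "$MINIKUBE_PROFILE" -n kube-system delete daemonset kube-proxy
# The addon phase also re-creates a kube-proxy ConfigMap; remove it as well if unwanted.
kubectl --context "$MINIKUBE_PROFILE" -n kube-system delete configmap kube-proxy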

Full logs (minikube logs --file=minikube.log): minikube.log (attached)


RA489 commented Sep 6, 2021

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Sep 6, 2021
@spowelljr (Member)

Hi @twz123, thanks for reporting your issue with minikube!

I was able to reproduce using your steps and can confirm this is a bug with minikube.

@spowelljr spowelljr added kind/bug Categorizes issue or PR as related to a bug. priority/backlog Higher priority than priority/awaiting-more-evidence. co/kubeadm Issues relating to kubeadm and removed kind/support Categorizes issue or PR as a support question. labels Sep 7, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 6, 2021
@spowelljr spowelljr added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 6, 2021
@spowelljr spowelljr self-assigned this Dec 6, 2021
@spowelljr spowelljr added this to the 1.25.0 milestone Dec 6, 2021
@spowelljr spowelljr removed their assignment Dec 6, 2021
@spowelljr spowelljr added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Dec 6, 2021
@spowelljr spowelljr removed this from the 1.25.0 milestone Dec 6, 2021
@spowelljr (Member)

So the main issue is that we go down a different code path on restart, and that path contains:

_, err := k.c.RunCmd(exec.Command("/bin/bash", "-c", fmt.Sprintf("%s phase addon all --config %s", baseCmd, conf)))

So we're hard-coding the installation of all addons, regardless of any skip-phases setting.
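For illustration, here is roughly what that invocation expands to on the node, versus what honoring the flag would require. This is a sketch: the config path /var/tmp/minikube/kubeadm.yaml and the coredns phase name are assumptions based on kubeadm's standard phase layout, not something stated in this issue.

# What the restart path effectively runs today: every addon phase, unconditionally.
sudo kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml

# To honor --extra-config=kubeadm.skip-phases=addon/kube-proxy, the restart path
# would have to run only the addon phases that are not skipped, e.g.:
sudo kubeadm init phase addon coredns --config /var/tmp/minikube/kubeadm.yaml

In other words, a fix would need to thread the user's skip-phases extra-config through to this restart path instead of hard-coding "addon all".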

@spowelljr spowelljr self-assigned this Dec 7, 2021
@spowelljr spowelljr added this to the 1.25.0 milestone Dec 7, 2021
@spowelljr spowelljr added priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels Dec 7, 2021
twz123 (Author) commented Jan 11, 2022

@spowelljr Thanks for fixing this. Unfortunately, since the fix recreates the cluster, it comes up empty afterwards and all of its contents have to be recreated. This means that restarting such a cluster is effectively not possible at all. Any chance of making restarts work again?

@spowelljr (Member)

Once the revert takes place, this can be revisited.
