
K8s deletes the node when I shut down the VM? #469

Open
shenxg13 opened this issue Mar 28, 2018 · 1 comment

@shenxg13

What happened:
K8s deletes the node when I shut down the VM, and the node rejoins the cluster when the VM restarts, but the node's label is no longer there.

What you expected to happen:

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know?:
[root@k8s-192-168-0-52-kube01 mongodb]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-192-168-0-52-kube01 Ready 21h v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-52-kube01,level=1
k8s-192-168-0-53-kube02 Ready 24m v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-53-kube02,level=2
k8s-192-168-0-54-kube03 Ready 21h v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-54-kube03,level=2

After I shut down node k8s-192-168-0-53-kube02:
[root@k8s-192-168-0-52-kube01 mongodb]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-192-168-0-52-kube01 Ready 22h v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-52-kube01,level=1
k8s-192-168-0-54-kube03 Ready 21h v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-54-kube03,level=2

After I restart that node, the node rejoins the cluster, but the label level=2 is no longer there.
[root@k8s-192-168-0-52-kube01 mongodb]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-192-168-0-52-kube01 Ready 22h v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-52-kube01,level=1
k8s-192-168-0-53-kube02 NotReady 1s v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-53-kube02
k8s-192-168-0-54-kube03 Ready 21h v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-54-kube03,level=2
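
A possible workaround, assuming the level labels were originally applied with kubectl label (the report does not say how they were set): re-apply the label after the node rejoins, or pass it to the kubelet so it is set again every time the node registers. The file and variable names in the second part (/etc/sysconfig/kubelet, KUBELET_EXTRA_ARGS) depend on how the kubelet was installed and are only an example.

# re-apply the lost label by hand once the node has rejoined
kubectl label node k8s-192-168-0-53-kube02 level=2

# or make the label part of kubelet registration so it survives node object re-creation,
# e.g. in /etc/sysconfig/kubelet or the kubelet systemd drop-in (paths are installation-dependent)
KUBELET_EXTRA_ARGS=--node-labels=level=2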

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:59:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:52:38Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration:
    vSphere 6.5

  • OS (e.g. from /etc/os-release):
    CentOS Linux release 7.4.1708 (Core)

  • Kernel (e.g. uname -a):

  • Install tools:

  • Others:

@embano1

embano1 commented Jun 7, 2018

Just to clarify: the node has to rejoin the cluster, i.e. the node object in the K8s API is deleted and a new one is created in your case?

Just rebooting a worker VM should not cause this behaviour.
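
One quick way to check this (a suggestion, not something from the report above): compare the node object's UID and creationTimestamp before and after the reboot. If either changes, the object was deleted and recreated rather than just going NotReady; the AGE of 1s in the output above already points in that direction.

# print the node object's identity; a new UID/creationTimestamp after the reboot means the object was recreated
kubectl get node k8s-192-168-0-53-kube02 -o jsonpath='{.metadata.uid}{"\n"}{.metadata.creationTimestamp}{"\n"}'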
