This repository has been archived by the owner on Sep 7, 2022. It is now read-only.
What happened:
Kubernetes deleted the node object when I shut down the VM, and the node rejoined the cluster when the VM restarted, but the custom node label was no longer there.
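For context, the custom level labels below were presumably added with kubectl label (an assumption; the report doesn't say how they were applied). Labels added this way live only on the Node API object, not in the kubelet's own configuration:

```shell
# Hypothetical: how the level label may have been applied originally.
# This writes the label to the Node API object only; the kubelet does
# not know about it and will not re-apply it if the node re-registers.
kubectl label node k8s-192-168-0-53-kube02 level=2
```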
What you expected to happen:
The custom label (level=2) should still be on the node after it rejoins the cluster.
How to reproduce it (as minimally and precisely as possible):
Add a custom label to a node, shut down that node's VM until the node object is deleted, then restart the VM and check the node's labels.
Anything else we need to know?:
[root@k8s-192-168-0-52-kube01 mongodb]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-192-168-0-52-kube01 Ready 21h v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-52-kube01,level=1
k8s-192-168-0-53-kube02 Ready 24m v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-53-kube02,level=2
k8s-192-168-0-54-kube03 Ready 21h v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-54-kube03,level=2
After I shut down node k8s-192-168-0-53-kube02:
[root@k8s-192-168-0-52-kube01 mongodb]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-192-168-0-52-kube01 Ready 22h v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-52-kube01,level=1
k8s-192-168-0-54-kube03 Ready 21h v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-54-kube03,level=2
After I restarted that node, it rejoined the cluster, but the label level=2 was no longer there:
[root@k8s-192-168-0-52-kube01 mongodb]# kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
k8s-192-168-0-52-kube01 Ready 22h v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-52-kube01,level=1
k8s-192-168-0-53-kube02 NotReady 1s v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-53-kube02
k8s-192-168-0-54-kube03 Ready 21h v1.8.7 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=k8s-192-168-0-54-kube03,level=2
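This matches the kubelet's self-registration behavior: once the Node object is deleted, the rejoining kubelet creates a fresh one carrying only its default labels plus anything passed via its --node-labels flag, so labels added with kubectl label are lost. A possible workaround (a sketch under assumptions about your install; file locations and unit names vary) is to have the kubelet register the label itself:

```shell
# Sketch of a workaround: make the kubelet own the label so it survives
# re-registration. Add to the kubelet's startup arguments (e.g. in a
# systemd drop-in such as /etc/systemd/system/kubelet.service.d/ --
# the exact path is an assumption for this install):
#
#   --node-labels=level=2
#
# Then reload and restart the kubelet so the new flag takes effect:
systemctl daemon-reload
systemctl restart kubelet
```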
Environment:
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:59:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:52:38Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration: vSphere 6.5
OS (e.g. from /etc/os-release): CentOS Linux release 7.4.1708 (Core)
Kernel (e.g. uname -a):
Install tools:
Others: