
none: cache images to local daemon if running instead of as tarballs on machine #7254

Closed
eoinreilly93 opened this issue Mar 26, 2020 · 16 comments
Labels
co/none-driver, kind/feature, lifecycle/rotten, priority/important-longterm

Comments

@eoinreilly93

I am trying to install Minikube 1.8.2 on a work machine that has no access to the internet. To do this, I installed minikube locally on my Windows 10 laptop, which worked fine apart from a warning message about not being able to access 'k8s.gcr.io', but I was told in this ticket I raised that this should not cause any issues. I then brought the '~/.minikube/cache' directory across to the other environment, which is a VM running CentOS 7.7 and Docker 19.03.3. According to the cache documentation, minikube should use the data in the cache if it is present instead of trying to download dependencies. However, when I run the start command, it errors out with the message below, saying it timed out trying to pull images.

I have tried changing permissions on the cache directory to be 777, but this has not helped. Please see my cache structure below:

[root@SR-SVR-206 cache]# pwd
/root/.minikube/cache
[root@SR-SVR-206 cache]# ls -R
.:
images  iso  linux

./images:
gcr.io  k8s.gcr.io  kubernetesui

./images/gcr.io:
k8s-minikube

./images/gcr.io/k8s-minikube:
storage-provisioner_v1.8.1

./images/k8s.gcr.io:
coredns_1.6.2  etcd_3.3.15-0  kube-apiserver_v1.16.3  kube-controller-manager_v1.16.3  kube-proxy_v1.16.3  kube-scheduler_v1.16.3  pause_3.1

./images/kubernetesui:
dashboard_v2.0.0-beta8  metrics-scraper_v1.0.2

./iso:
minikube-v1.8.0.iso

./linux:
v1.16.3

./linux/v1.16.3:
kubeadm  kubectl  kubelet
[root@SR-SVR-206 cache]#

As you can see, all the images it is trying to pull exist in the cache, so I do not understand why it is still attempting to pull them from the docker registry.
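(For context, the offline preparation flow described above amounts to roughly the sketch below. The copy command is illustrative only, as the exact transfer method was not specified; the --download-only flag quoted later in this thread can populate the cache without starting a cluster.)

  # on the internet-connected machine: populate ~/.minikube/cache
  minikube start --kubernetes-version=1.16.3 --download-only
  # copy the cache across to the offline host (transfer method assumed)
  scp -r ~/.minikube/cache root@SR-SVR-206:/root/.minikube/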

The exact command to reproduce the issue:
minikube start --kubernetes-version=1.16.3 --vm-driver=none

The full output of the command that failed:


! minikube v1.8.2 on Centos 7.7.1908

  • Using the none driver based on user configuration
  • Running on localhost (CPUs=8, Memory=62753MB, Disk=49096MB) ...
  • OS release is CentOS Linux 7 (Core)
    ! Node may be unable to resolve external DNS records
    ! VM is unable to access k8s.gcr.io, you may need to configure a proxy or set --image-repository
  • Preparing Kubernetes v1.16.3 on Docker 19.03.3 ...
  • Launching Kubernetes ...

X Error starting cluster: init failed (full stdout/stderr reproduced below): /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": exit status 1
stdout:
[init] Using Kubernetes version: v1.16.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'

stderr:
[WARNING Firewalld]: firewalld is active, please ensure ports [8443 10250] are open or your cluster may not function correctly
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING FileExisting-socat]: socat not found in system path
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.3. Latest validated version: 18.09
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-apiserver:v1.16.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-controller-manager:v1.16.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-scheduler:v1.16.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/kube-proxy:v1.16.3: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/pause:3.1: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/etcd:3.3.15-0: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
[ERROR ImagePull]: failed to pull image k8s.gcr.io/coredns:1.6.2: output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
To see the stack trace of this error execute with --v=5 or higher

The output of the minikube logs command:

  • ==> Docker <==
  • -- Logs begin at Wed 2020-03-25 17:01:47 GMT, end at Wed 2020-03-25 17:43:19 GMT. --
  • Mar 25 17:17:01 SR-SVR-206.dev.os.net systemd[1]: Starting Docker Application Container Engine...
  • Mar 25 17:17:01 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:01.910514437Z" level=info msg="Starting up"
  • Mar 25 17:17:01 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:01.912839167Z" level=info msg="parsed scheme: "unix"" module=grpc
  • Mar 25 17:17:01 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:01.912862214Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
  • Mar 25 17:17:01 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:01.912886760Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
  • Mar 25 17:17:01 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:01.912900398Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
  • Mar 25 17:17:01 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:01.937236692Z" level=info msg="parsed scheme: "unix"" module=grpc
  • Mar 25 17:17:01 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:01.937272615Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
  • Mar 25 17:17:01 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:01.937296485Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
  • Mar 25 17:17:01 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:01.937311767Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
  • Mar 25 17:17:02 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:02.038067507Z" level=info msg="Loading containers: start."
  • Mar 25 17:17:02 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:02.323413871Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
  • Mar 25 17:17:02 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:02.487213414Z" level=info msg="Loading containers: done."
  • Mar 25 17:17:02 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:02.518905943Z" level=info msg="Docker daemon" commit=a872fc2f86 graphdriver(s)=overlay2 version=19.03.3
  • Mar 25 17:17:02 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:02.519116887Z" level=info msg="Daemon has completed initialization"
  • Mar 25 17:17:02 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:17:02.553372204Z" level=info msg="API listen on /var/run/docker.sock"
  • Mar 25 17:17:02 SR-SVR-206.dev.os.net systemd[1]: Started Docker Application Container Engine.
  • Mar 25 17:18:33 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:18:33.538550091Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:18:33 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:18:33.539735451Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:18:33 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:18:33.539803931Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:19:53 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:19:53.465817198Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:19:53 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:19:53.465890063Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:19:53 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:19:53.465926860Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:21:15 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:21:15.331898866Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:21:15 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:21:15.331956255Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:21:15 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:21:15.331992661Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:22:46 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:22:46.620292439Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:22:46 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:22:46.621114673Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:22:46 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:22:46.621167708Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:24:21 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:24:21.754822997Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:24:21 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:24:21.754884060Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:24:21 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:24:21.754934353Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:25:42 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:25:42.843620760Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:25:42 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:25:42.843683186Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:25:42 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:25:42.843717767Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:27:02 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:27:02.219780306Z" level=warning msg="Error getting v2 registry: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:27:02 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:27:02.219841231Z" level=info msg="Attempting next endpoint for pull after error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • Mar 25 17:27:02 SR-SVR-206.dev.os.net dockerd[5128]: time="2020-03-25T17:27:02.219879073Z" level=error msg="Handler for POST /v1.40/images/create returned error: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
  • ==> container status <==
  • which: no crictl in (/root/.minikube/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
  • sudo: crictl: command not found
  • CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
  • ==> dmesg <==
  • dmesg: invalid option -- '='
  • Usage:
  • dmesg [options]
  • Options:
  • -C, --clear clear the kernel ring buffer
  • -c, --read-clear read and clear all messages
  • -D, --console-off disable printing messages to console
  • -d, --show-delta show time delta between printed messages
  • -e, --reltime show local time and time delta in readable format
  • -E, --console-on enable printing messages to console
  • -F, --file use the file instead of the kernel log buffer
  • -f, --facility restrict output to defined facilities
  • -H, --human human readable output
  • -k, --kernel display kernel messages
  • -L, --color colorize messages
  • -l, --level restrict output to defined levels
  • -n, --console-level set level of messages printed to console
  • -P, --nopager do not pipe output into a pager
  • -r, --raw print the raw message buffer
  • -S, --syslog force to use syslog(2) rather than /dev/kmsg
  • -s, --buffer-size buffer size to query the kernel ring buffer
  • -T, --ctime show human readable timestamp (could be inaccurate if you have used SUSPEND/RESUME)
  • -t, --notime don't print messages timestamp
  • -u, --userspace display userspace messages
  • -w, --follow wait for new messages
  • -x, --decode decode facility and level to readable string
  • -h, --help display this help and exit
  • -V, --version output version information and exit
  • Supported log facilities:
  • kern - kernel messages
  • user - random user-level messages
  • mail - mail system
  • daemon - system daemons
  • auth - security/authorization messages
  • syslog - messages generated internally by syslogd
  • lpr - line printer subsystem
  • news - network news subsystem
  • Supported log levels (priorities):
  • emerg - system is unusable
  • alert - action must be taken immediately
  • crit - critical conditions
  • err - error conditions
  • warn - warning conditions
  • notice - normal but significant condition
  • info - informational
  • debug - debug-level messages
  • For more details see dmesg(1).
  • ==> kernel <==
  • 17:43:20 up 41 min, 1 user, load average: 0.15, 0.06, 0.08
  • Linux SR-SVR-206.dev.os.net 3.10.0-1062.12.1.el7.x86_64 #1 SMP Tue Feb 4 23:02:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • PRETTY_NAME="CentOS Linux 7 (Core)"
  • ==> kubelet <==
  • -- Logs begin at Wed 2020-03-25 17:01:47 GMT, end at Wed 2020-03-25 17:43:20 GMT. --
  • Mar 25 17:43:16 SR-SVR-206.dev.os.net kubelet[1080]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:16 SR-SVR-206.dev.os.net kubelet[1080]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:16 SR-SVR-206.dev.os.net kubelet[1080]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:16 SR-SVR-206.dev.os.net kubelet[1080]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:16 SR-SVR-206.dev.os.net kubelet[1080]: F0325 17:43:16.822136 1080 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
  • Mar 25 17:43:16 SR-SVR-206.dev.os.net systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
  • Mar 25 17:43:16 SR-SVR-206.dev.os.net systemd[1]: Unit kubelet.service entered failed state.
  • Mar 25 17:43:16 SR-SVR-206.dev.os.net systemd[1]: kubelet.service failed.
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net systemd[1]: kubelet.service holdoff time over, scheduling restart.
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net systemd[1]: Started kubelet: The Kubernetes Node Agent.
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net kubelet[1094]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net kubelet[1094]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net kubelet[1094]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net kubelet[1094]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net kubelet[1094]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net kubelet[1094]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net kubelet[1094]: F0325 17:43:17.566439 1094 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net systemd[1]: Unit kubelet.service entered failed state.
  • Mar 25 17:43:17 SR-SVR-206.dev.os.net systemd[1]: kubelet.service failed.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net systemd[1]: kubelet.service holdoff time over, scheduling restart.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net systemd[1]: Started kubelet: The Kubernetes Node Agent.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net kubelet[1106]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net kubelet[1106]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net kubelet[1106]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net kubelet[1106]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net kubelet[1106]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net kubelet[1106]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net kubelet[1106]: F0325 17:43:18.320714 1106 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net systemd[1]: Unit kubelet.service entered failed state.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net systemd[1]: kubelet.service failed.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net systemd[1]: kubelet.service holdoff time over, scheduling restart.
  • Mar 25 17:43:18 SR-SVR-206.dev.os.net systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net systemd[1]: Started kubelet: The Kubernetes Node Agent.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1119]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1119]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1119]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1119]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1119]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1119]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1119]: F0325 17:43:19.062375 1119 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net systemd[1]: Unit kubelet.service entered failed state.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net systemd[1]: kubelet.service failed.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net systemd[1]: kubelet.service holdoff time over, scheduling restart.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net systemd[1]: Started kubelet: The Kubernetes Node Agent.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1198]: Flag --authorization-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1198]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1198]: Flag --client-ca-file has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1198]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1198]: Flag --fail-swap-on has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1198]: Flag --pod-manifest-path has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net kubelet[1198]: F0325 17:43:19.820641 1198 server.go:196] failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file "/var/lib/kubelet/config.yaml", error: open /var/lib/kubelet/config.yaml: no such file or directory
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net systemd[1]: Unit kubelet.service entered failed state.
  • Mar 25 17:43:19 SR-SVR-206.dev.os.net systemd[1]: kubelet.service failed.

The operating system version:
CentOS 7.7.1908
Docker 19.03.3
Minikube 1.8.2

@afbjorklund
Collaborator

@eoinreilly93: the cache is disabled automatically for the "none" driver:

  --cache-images=true: If true, cache docker images for the current bootstrapper and load them into the machine. **Always false with --driver=none.**
  --download-only=false: If true, only download and cache files for later use - don't install or start anything.

The (dubious) theory is that you will cache them in docker, rather than on disk.

#4059 3db7e9e

It would be possible to change the code to pull them in docker instead, for the none driver?

Instead of saving them as a "tarball" in the cache directory, it would save them to the "daemon" in docker.
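In docker CLI terms, the proposed change amounts to roughly the following (an illustrative sketch only, not minikube code; the path comes from the cache listing above):

  # today: the cached image sits as a tarball on disk, invisible to the daemon
  ls /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.3
  # proposed for the none driver: load it into the local daemon up front,
  # which is where kubeadm's preflight image pull actually looks
  docker load -i /root/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.16.3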

@afbjorklund added the co/none-driver and kind/feature labels Mar 26, 2020
@afbjorklund
Collaborator

As a workaround, you can load them yourself from the cache (using docker load -i on each).
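A minimal sketch of that workaround, with paths and file names taken from the reporter's ls -R listing above:

  cd /root/.minikube/cache/images
  for f in k8s.gcr.io/* kubernetesui/* gcr.io/k8s-minikube/*; do
    docker load -i "$f"              # each cached tarball carries its repo:tag
  done
  docker images | grep k8s.gcr.io    # verify the images reached the daemon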

@afbjorklund
Collaborator

afbjorklund commented Mar 26, 2020

Interesting that dmesg failed; it might need some legacy fallback for older operating systems...

@tstromberg changed the title from "Offline installation is not using the cache" to "none: Offline installation is not using the cache" Mar 27, 2020
@tstromberg added the kind/documentation label Mar 27, 2020
@eoinreilly93
Author

Thanks for the response. I tried what you suggested, loaded in the docker images, and re-ran the minikube start command, but now I am getting a different error, relating to kubelet I believe. I ran systemctl status kubelet, which shows that the service is active, so I don't know what the problem is here. Can you advise?
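For reference, the check run so far plus the next diagnostic steps, which are the ones kubeadm itself suggests in the output below:

  systemctl status kubelet                   # reports the service as active
  journalctl -xeu kubelet                    # kubelet's own logs
  docker ps -a | grep kube | grep -v pause   # find a crashed control-plane container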

The exact command to reproduce the issue:
minikube start --kubernetes-version=1.16.3 --vm-driver=none

The full output of the command that failed:


[root@SR-SVR-206 kubernetesui]# minikube start --vm-driver=none --kubernetes-version=1.16.3
! minikube v1.8.2 on Centos 7.7.1908

  • Using the none driver based on user configuration
  • Running on localhost (CPUs=8, Memory=62753MB, Disk=49096MB) ...
  • OS release is CentOS Linux 7 (Core)
    ! Node may be unable to resolve external DNS records
    ! VM is unable to access k8s.gcr.io, you may need to configure a proxy or set --image-repository
  • Preparing Kubernetes v1.16.3 on Docker 19.03.3 ...
  • Launching Kubernetes ...

X Error starting cluster: init failed (full stdout/stderr reproduced below): /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.16.3:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,SystemVerification": exit status 1
stdout:
[init] Using Kubernetes version: v1.16.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/var/lib/minikube/certs"
[certs] Using existing ca certificate authority
[certs] Using existing apiserver certificate and key on disk
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [sr-svr-206.dev.os.net localhost] and IPs [15.6.1.206 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [sr-svr-206.dev.os.net localhost] and IPs [15.6.1.206 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.

Unfortunately, an error has occurred:
timed out waiting for the condition

This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'

stderr:
[WARNING Firewalld]: firewalld is active, please ensure ports [8443 10250] are open or your cluster may not function correctly
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING Swap]: running with swap on is not supported. Please disable swap
[WARNING FileExisting-socat]: socat not found in system path
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.3. Latest validated version: 18.09
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher

The output of the minikube logs command:

[root@SR-SVR-206 kubernetesui]# minikube logs

  • ==> Docker <==
  • -- Logs begin at Fri 2020-03-27 10:35:25 GMT, end at Fri 2020-03-27 10:55:30 GMT. --
  • Mar 27 10:55:23 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:23.930185132Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:23 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:23.930794647Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:24 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:23.980766713Z" level=error msg="87e22f89144dfb2330c4ad6616084530aa5e3e360c7bf960de45242c2e464582 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:24 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:24.171161084Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:24 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:24.171261144Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:24 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:24.171974592Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:24 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:24.172118603Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:24 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:24.211796645Z" level=error msg="356c7c22f5b072679a2263e0cbc5e86add87e89e5f25deb36c7b9aa6ae9ab6d3 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:24 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:24.217986984Z" level=error msg="db7cbbe78fdf61de6c97e384f2534580eb88e6fa4f2cff0b1b502aabc6929379 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:24 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:24.319634582Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:24 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:24.319634246Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:24 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:24.352089675Z" level=error msg="71d1a395205875021ea75b2ca348b05cd6c7a90a0d5a0a5b4c93efc56117f485 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:25 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:25.122915474Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:25 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:25.123015185Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:25 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:25.154960999Z" level=error msg="45050e4302c2260f64b785a3b3279cdfe2c27c6cb53d7c8b9ae4fbf357344068 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:25 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:25.247503920Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:25 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:25.249437886Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:25 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:25.304361887Z" level=error msg="14d7edd15c55a2e2f2c697f31d9e26c8d8a99cba7b4a10d2606919873a6aa6b2 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:25 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:25.389730260Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:25 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:25.391103685Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:25 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:25.425337578Z" level=error msg="b1df86d24cbf36ac3a127168e0107ca66c945f28e5fe0216e7328f01cf977695 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:25 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:25.519677694Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:25 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:25.519727901Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:25 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:25.563937979Z" level=error msg="b91e61f915f096fed1a7ed47bdfcedc8ae4125efcf4ec3c9ea5721caf60d8387 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:26 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:26.383993441Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:26 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:26.386613459Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:26 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:26.426840125Z" level=error msg="de80dc463740f5851e70830b5742cc801dc30f18d6b56618a49ff3f3281e494d cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:26 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:26.542090904Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:26 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:26.542219560Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:26 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:26.593372269Z" level=error msg="2fabc2bcb5057e1fdf00d32f1b6cc95ea9b6bd9fba307b3cda59085ffc041f96 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:26 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:26.800414505Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:26 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:26.800528828Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:26 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:26.801960101Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:26 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:26.802030142Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:26 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:26.838733245Z" level=error msg="4eeb325b87cdc94178578b3dbab134e926b832d46a091e7a8e3d028e94030385 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:26 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:26.848080067Z" level=error msg="a7936915953ea64813b4e9d38dd3716a06b77b69fdc69255fcb943fad6255ec2 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:27 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:27.709337150Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:27 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:27.709502267Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:27 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:27.831643569Z" level=error msg="92d7adaaa14de6d5393c44e7d12af5a88c809a12a9616c41463590dbd5cd1f5e cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:27 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:27.843244799Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:27 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:27.846488432Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:27 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:27.885774330Z" level=error msg="670ba235945395676d408a4efa8d3ea2341bde411ea6f1b583c2ee4bdfcd322c cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:27 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:27.984451101Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:27 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:27.984978023Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:28 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:28.100033029Z" level=error msg="e54367c41b50693c88453783b95d4d94ca9066f65929e5054b6aa56f5350cb8e cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:28 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:28.110433318Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:28 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:28.110873887Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:28 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:28.147445009Z" level=error msg="1aeb7bc23eaa2c956342e10831566a56e4e73c4952ee965c1e143ad61e321d83 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:28 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:28.960235375Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:28 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:28.960242762Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:29.017165385Z" level=error msg="39eb53fee5aecb38a55d7fde88c550737dafd6b7c42ecc67bb567b1669c12980 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:29.099592731Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:29.099811510Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:29.141733409Z" level=error msg="e2deb465ddc6dcb96836a3dc57aeabee753807b8ac76acb129879e2863b7abd1 cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:29.243436795Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:29.243463364Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:29.284232025Z" level=error msg="a76680928784281394028db66739c6ab17bf3f6d48d3248fa01f3c7d771b8bce cleanup: failed to delete container from containerd: no such container"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:29.376118338Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:29.376152192Z" level=error msg="stream copy error: reading from a closed fifo"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net dockerd[5004]: time="2020-03-27T10:55:29.417473115Z" level=error msg="1b8f5a0eceed31352c246c1bdf79a6b4405c4b85a7d6198a0852db39aa42521c cleanup: failed to delete container from containerd: no such container"
  • ==> container status <==
  • which: no crictl in (/root/.minikube/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
  • sudo: crictl: command not found
  • CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
  • 682b5d744c96 k8s.gcr.io/pause:3.1 "/pause" Less than a second ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_388
  • e691fc8d399a k8s.gcr.io/pause:3.1 "/pause" Less than a second ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_388
  • c5151871541b k8s.gcr.io/pause:3.1 "/pause" Less than a second ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_386
  • 7b5d762f33a4 k8s.gcr.io/pause:3.1 "/pause" 1 second ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_389
  • 1b8f5a0eceed k8s.gcr.io/pause:3.1 "/pause" 2 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_385
  • a76680928784 k8s.gcr.io/pause:3.1 "/pause" 2 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_388
  • e2deb465ddc6 k8s.gcr.io/pause:3.1 "/pause" 2 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_387
  • 39eb53fee5ae k8s.gcr.io/pause:3.1 "/pause" 2 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_387
  • 1aeb7bc23eaa k8s.gcr.io/pause:3.1 "/pause" 3 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_386
  • e54367c41b50 k8s.gcr.io/pause:3.1 "/pause" 3 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_386
  • 670ba2359453 k8s.gcr.io/pause:3.1 "/pause" 3 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_384
  • 92d7adaaa14d k8s.gcr.io/pause:3.1 "/pause" 3 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_387
  • 4eeb325b87cd k8s.gcr.io/pause:3.1 "/pause" 4 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_385
  • a7936915953e k8s.gcr.io/pause:3.1 "/pause" 4 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_385
  • 2fabc2bcb505 k8s.gcr.io/pause:3.1 "/pause" 4 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_383
  • de80dc463740 k8s.gcr.io/pause:3.1 "/pause" 4 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_386
  • b91e61f915f0 k8s.gcr.io/pause:3.1 "/pause" 5 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_384
  • b1df86d24cbf k8s.gcr.io/pause:3.1 "/pause" 6 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_382
  • 14d7edd15c55 k8s.gcr.io/pause:3.1 "/pause" 6 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_385
  • 45050e4302c2 k8s.gcr.io/pause:3.1 "/pause" 6 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_384
  • 71d1a3952058 k8s.gcr.io/pause:3.1 "/pause" 7 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_384
  • 356c7c22f5b0 k8s.gcr.io/pause:3.1 "/pause" 7 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_383
  • db7cbbe78fdf k8s.gcr.io/pause:3.1 "/pause" 7 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_383
  • 87e22f89144d k8s.gcr.io/pause:3.1 "/pause" 7 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_381
  • 878c5ad6d118 k8s.gcr.io/pause:3.1 "/pause" 8 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_382
  • c032c4964d7b k8s.gcr.io/pause:3.1 "/pause" 8 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_380
  • 931c3ac8455b k8s.gcr.io/pause:3.1 "/pause" 8 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_383
  • fb8d0dc65141 k8s.gcr.io/pause:3.1 "/pause" 8 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_382
  • dccd045733d2 k8s.gcr.io/pause:3.1 "/pause" 9 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_381
  • c16462244a31 k8s.gcr.io/pause:3.1 "/pause" 9 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_381
  • cb5c3004c87e k8s.gcr.io/pause:3.1 "/pause" 9 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_379
  • 69e60207f7da k8s.gcr.io/pause:3.1 "/pause" 9 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_382
  • 242718c927c1 k8s.gcr.io/pause:3.1 "/pause" 10 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_380
  • 3607db04f8b1 k8s.gcr.io/pause:3.1 "/pause" 10 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_380
  • 7c6d06729d32 k8s.gcr.io/pause:3.1 "/pause" 11 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_378
  • 87ba691f918d k8s.gcr.io/pause:3.1 "/pause" 11 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_381
  • bccc75f83b3a k8s.gcr.io/pause:3.1 "/pause" 12 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_379
  • e20c4ed685fa k8s.gcr.io/pause:3.1 "/pause" 12 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_377
  • d62096752e70 k8s.gcr.io/pause:3.1 "/pause" 12 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_380
  • 5a3184eb3004 k8s.gcr.io/pause:3.1 "/pause" 12 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_379
  • 6e7ddbf14a1d k8s.gcr.io/pause:3.1 "/pause" 13 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_376
  • 6d6e182751e5 k8s.gcr.io/pause:3.1 "/pause" 13 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_379
  • 428a54d19c89 k8s.gcr.io/pause:3.1 "/pause" 13 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_378
  • b7ba3263cf38 k8s.gcr.io/pause:3.1 "/pause" 13 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_378
  • 6e6aa3475402 k8s.gcr.io/pause:3.1 "/pause" 14 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_377
  • 96f1e97a92ae k8s.gcr.io/pause:3.1 "/pause" 14 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_377
  • 06c3a25a14b7 k8s.gcr.io/pause:3.1 "/pause" 14 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_375
  • c294db33672f k8s.gcr.io/pause:3.1 "/pause" 14 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_378
  • 07580da2c97f k8s.gcr.io/pause:3.1 "/pause" 15 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_376
  • 5a5fbc9ce958 k8s.gcr.io/pause:3.1 "/pause" 15 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_376
  • 4111f8e09ac0 k8s.gcr.io/pause:3.1 "/pause" 16 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_374
  • 4a6753ed15c2 k8s.gcr.io/pause:3.1 "/pause" 16 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_377
  • 0624917cd4f2 k8s.gcr.io/pause:3.1 "/pause" 17 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_375
  • 6ccc94dba301 k8s.gcr.io/pause:3.1 "/pause" 17 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_373
  • 3b5f3ccbd4e9 k8s.gcr.io/pause:3.1 "/pause" 17 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_376
  • 14ee87c7a8ea k8s.gcr.io/pause:3.1 "/pause" 17 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_375
  • 7505302dd47a k8s.gcr.io/pause:3.1 "/pause" 18 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_374
  • 9de6a296027c k8s.gcr.io/pause:3.1 "/pause" 18 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_374
  • 9fdcca50e297 k8s.gcr.io/pause:3.1 "/pause" 18 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_372
  • bc456566fa9c k8s.gcr.io/pause:3.1 "/pause" 18 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_375
  • 45098f761d84 k8s.gcr.io/pause:3.1 "/pause" 19 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_371
  • eaec28463a04 k8s.gcr.io/pause:3.1 "/pause" 19 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_374
  • cde79ac2a15d k8s.gcr.io/pause:3.1 "/pause" 19 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_373
  • 47bcae813aa9 k8s.gcr.io/pause:3.1 "/pause" 19 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_373
  • c312c0d50d1b k8s.gcr.io/pause:3.1 "/pause" 20 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_372
  • 2c37ef89abca k8s.gcr.io/pause:3.1 "/pause" 20 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_372
  • c65836824275 k8s.gcr.io/pause:3.1 "/pause" 20 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_370
  • 765e17d4b1f8 k8s.gcr.io/pause:3.1 "/pause" 20 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_373
  • b156af1e60ce k8s.gcr.io/pause:3.1 "/pause" 21 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_369
  • 8563f27fce92 k8s.gcr.io/pause:3.1 "/pause" 21 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_372
  • 06639e86dd10 k8s.gcr.io/pause:3.1 "/pause" 21 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_371
  • a27659c0b16d k8s.gcr.io/pause:3.1 "/pause" 21 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_371
  • 4f79ad542229 k8s.gcr.io/pause:3.1 "/pause" 22 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_370
  • ff829a03deab k8s.gcr.io/pause:3.1 "/pause" 22 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_370
  • c84cb0959334 k8s.gcr.io/pause:3.1 "/pause" 23 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_368
  • d7ece838d28c k8s.gcr.io/pause:3.1 "/pause" 23 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_371
  • 721057c43408 k8s.gcr.io/pause:3.1 "/pause" 24 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_369
  • 0066e4908efd k8s.gcr.io/pause:3.1 "/pause" 24 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_369
  • 25eac8c1bf44 k8s.gcr.io/pause:3.1 "/pause" 24 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_367
  • abe5c9f8200f k8s.gcr.io/pause:3.1 "/pause" 24 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_370
  • 96f5ad9dd874 k8s.gcr.io/pause:3.1 "/pause" 25 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_369
  • cdd68980319f k8s.gcr.io/pause:3.1 "/pause" 25 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_368
  • 87c0ec85532b k8s.gcr.io/pause:3.1 "/pause" 25 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_368
  • 53917445514c k8s.gcr.io/pause:3.1 "/pause" 25 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_366
  • 5d53f857886c k8s.gcr.io/pause:3.1 "/pause" 26 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_365
  • ce4d369ba763 k8s.gcr.io/pause:3.1 "/pause" 26 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_368
  • 91c8abb9046f k8s.gcr.io/pause:3.1 "/pause" 26 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_367
  • 101b6d6dfc51 k8s.gcr.io/pause:3.1 "/pause" 26 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_367
  • 3ce4cc054828 k8s.gcr.io/pause:3.1 "/pause" 27 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_366
  • b751227f0320 k8s.gcr.io/pause:3.1 "/pause" 27 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_364
  • b982f35a700e k8s.gcr.io/pause:3.1 "/pause" 27 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_367
  • 14f16cc34dc1 k8s.gcr.io/pause:3.1 "/pause" 27 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_366
  • 4b44f32ff400 k8s.gcr.io/pause:3.1 "/pause" 28 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_365
  • 03c8a00a8900 k8s.gcr.io/pause:3.1 "/pause" 28 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_365
  • c53336e1e7fd k8s.gcr.io/pause:3.1 "/pause" 28 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_363
  • cd3f09f35bfb k8s.gcr.io/pause:3.1 "/pause" 28 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_366
  • 4e44e0c9a17e k8s.gcr.io/pause:3.1 "/pause" 29 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_362
  • 35b6668e168c k8s.gcr.io/pause:3.1 "/pause" 29 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_365
  • e2938cc974db k8s.gcr.io/pause:3.1 "/pause" 29 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_364
  • fc8e320e42d7 k8s.gcr.io/pause:3.1 "/pause" 29 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_364
  • 8f958fce8410 k8s.gcr.io/pause:3.1 "/pause" 30 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_361
  • 8e4ed9ea7aaa k8s.gcr.io/pause:3.1 "/pause" 30 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_364
  • d1080129993a k8s.gcr.io/pause:3.1 "/pause" 30 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_363
  • 1845755a73ce k8s.gcr.io/pause:3.1 "/pause" 30 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_363
  • fc392c4bdf0c k8s.gcr.io/pause:3.1 "/pause" 31 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_363
  • 5678dc47fec4 k8s.gcr.io/pause:3.1 "/pause" 31 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_362
  • 9e720554d813 k8s.gcr.io/pause:3.1 "/pause" 31 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_362
  • 4864aa5887e9 k8s.gcr.io/pause:3.1 "/pause" 31 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_360
  • 97a12ad2e1e6 k8s.gcr.io/pause:3.1 "/pause" 32 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_359
  • 104bf27b1379 k8s.gcr.io/pause:3.1 "/pause" 32 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_362
  • 33056482cfa1 k8s.gcr.io/pause:3.1 "/pause" 33 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_361
  • 093758d45913 k8s.gcr.io/pause:3.1 "/pause" 33 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_361
  • b3929b6d77b9 k8s.gcr.io/pause:3.1 "/pause" 34 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_360
  • d897c47989f9 k8s.gcr.io/pause:3.1 "/pause" 34 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_358
  • 5459119e43c3 k8s.gcr.io/pause:3.1 "/pause" 34 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_361
  • 7325e43e388d k8s.gcr.io/pause:3.1 "/pause" 34 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_360
  • d8a29d5d91ae k8s.gcr.io/pause:3.1 "/pause" 34 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_359
  • fda65e5db2a2 k8s.gcr.io/pause:3.1 "/pause" 35 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_357
  • a46e1bb34731 k8s.gcr.io/pause:3.1 "/pause" 35 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_360
  • 45edcc7c971d k8s.gcr.io/pause:3.1 "/pause" 35 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_359
  • d86a44893423 k8s.gcr.io/pause:3.1 "/pause" 36 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_358
  • 13541d49c9e9 k8s.gcr.io/pause:3.1 "/pause" 36 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_358
  • 66c4c76d8732 k8s.gcr.io/pause:3.1 "/pause" 36 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_356
  • 04db05113634 k8s.gcr.io/pause:3.1 "/pause" 36 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_359
  • 0d9bd7bd8c12 k8s.gcr.io/pause:3.1 "/pause" 37 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_358
  • b210d3594b39 k8s.gcr.io/pause:3.1 "/pause" 37 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_357
  • a11a836dd08d k8s.gcr.io/pause:3.1 "/pause" 37 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_357
  • ce3ec8fca85a k8s.gcr.io/pause:3.1 "/pause" 38 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_357
  • 507d48609b1c k8s.gcr.io/pause:3.1 "/pause" 38 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_356
  • 45e0bfee186c k8s.gcr.io/pause:3.1 "/pause" 38 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_356
  • 7018e0aca78b k8s.gcr.io/pause:3.1 "/pause" 39 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_355
  • 084cf997da37 k8s.gcr.io/pause:3.1 "/pause" 39 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_355
  • 8ef577981db8 k8s.gcr.io/pause:3.1 "/pause" 39 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_356
  • 61ff9e8309d5 k8s.gcr.io/pause:3.1 "/pause" 40 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_355
  • 26405f83fa8f k8s.gcr.io/pause:3.1 "/pause" 40 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_354
  • 23683bc190b2 k8s.gcr.io/pause:3.1 "/pause" 40 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_354
  • 13993f52ec1b k8s.gcr.io/pause:3.1 "/pause" 42 seconds ago Created k8s_POD_etcd-sr-svr-206.dev.os.net_kube-system_30d48268509a16a30f56d893a63e0073_355
  • 2171fc1c1b51 k8s.gcr.io/pause:3.1 "/pause" 42 seconds ago Created k8s_POD_kube-apiserver-sr-svr-206.dev.os.net_kube-system_ceae42b65349a7a3f714122c062acc98_354
  • b17b5205e3c1 k8s.gcr.io/pause:3.1 "/pause" 42 seconds ago Created k8s_POD_kube-controller-manager-sr-svr-206.dev.os.net_kube-system_6800c28a4c0640d4e2cdea5c7cd2dded_353
  • 8959da942bcf k8s.gcr.io/pause:3.1 "/pause" 42 seconds ago Created k8s_POD_kube-scheduler-sr-svr-206.dev.os.net_kube-system_4e1bd6e5b41d60d131353157588ab020_353
  • ==> dmesg <==
  • dmesg: invalid option -- '='
  • Usage:
  •  dmesg [options]
  • Options:
  •  -C, --clear                 clear the kernel ring buffer
  •  -c, --read-clear            read and clear all messages
  •  -D, --console-off           disable printing messages to console
  •  -d, --show-delta            show time delta between printed messages
  •  -e, --reltime               show local time and time delta in readable format
  •  -E, --console-on            enable printing messages to console
  •  -F, --file <file>           use the file instead of the kernel log buffer
  •  -f, --facility <list>       restrict output to defined facilities
  •  -H, --human                 human readable output
  •  -k, --kernel                display kernel messages
  •  -L, --color                 colorize messages
  •  -l, --level <list>          restrict output to defined levels
  •  -n, --console-level <level> set level of messages printed to console
  •  -P, --nopager               do not pipe output into a pager
  •  -r, --raw                   print the raw message buffer
  •  -S, --syslog                force to use syslog(2) rather than /dev/kmsg
  •  -s, --buffer-size <size>    buffer size to query the kernel ring buffer
  •  -T, --ctime                 show human readable timestamp (could be inaccurate if you have used SUSPEND/RESUME)
  •  -t, --notime                don't print messages timestamp
  •  -u, --userspace             display userspace messages
  •  -w, --follow                wait for new messages
  •  -x, --decode                decode facility and level to readable string
  •  -h, --help                  display this help and exit
  •  -V, --version               output version information and exit
  • Supported log facilities:
  •    kern - kernel messages
  •    user - random user-level messages
  •    mail - mail system
  •  daemon - system daemons
  •    auth - security/authorization messages
  •  syslog - messages generated internally by syslogd
  •     lpr - line printer subsystem
  •    news - network news subsystem
  • Supported log levels (priorities):
  •   emerg - system is unusable
  •   alert - action must be taken immediately
  •    crit - critical conditions
  •     err - error conditions
  •    warn - warning conditions
  •  notice - normal but significant condition
  •    info - informational
  •   debug - debug-level messages
  • For more details see dmesg(1).
  • ==> kernel <==
  • 10:55:30 up 20 min, 1 user, load average: 4.37, 3.30, 1.73
  • Linux SR-SVR-206.dev.os.net 3.10.0-1062.12.1.el7.x86_64 #1 SMP Tue Feb 4 23:02:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
  • PRETTY_NAME="CentOS Linux 7 (Core)"
  • ==> kubelet <==
  • -- Logs begin at Fri 2020-03-27 10:35:25 GMT, end at Fri 2020-03-27 10:55:30 GMT. --
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: W0327 10:55:29.018276 6155 container.go:409] Failed to create summary reader for "/kubepods/burstable/pod6800c28a4c0640d4e2cdea5c7cd2dded/39eb53fee5aecb38a55d7fde88c550737dafd6b7c42ecc67bb567b1669c12980": none of the resources are being tracked.
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.063909 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.142366 6155 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-apiserver-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.142441 6155 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-apiserver-sr-svr-206.dev.os.net_kube-system(ceae42b65349a7a3f714122c062acc98)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-apiserver-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.142463 6155 kuberuntime_manager.go:710] createPodSandbox for pod "kube-apiserver-sr-svr-206.dev.os.net_kube-system(ceae42b65349a7a3f714122c062acc98)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-apiserver-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.142523 6155 pod_workers.go:191] Error syncing pod ceae42b65349a7a3f714122c062acc98 ("kube-apiserver-sr-svr-206.dev.os.net_kube-system(ceae42b65349a7a3f714122c062acc98)"), skipping: failed to "CreatePodSandbox" for "kube-apiserver-sr-svr-206.dev.os.net_kube-system(ceae42b65349a7a3f714122c062acc98)" with CreatePodSandboxError: "CreatePodSandbox for pod "kube-apiserver-sr-svr-206.dev.os.net_kube-system(ceae42b65349a7a3f714122c062acc98)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-apiserver-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"write /proc/self/attr/keycreate: permission denied\"": unknown"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: W0327 10:55:29.142886 6155 container.go:409] Failed to create summary reader for "/kubepods/burstable/podceae42b65349a7a3f714122c062acc98/e2deb465ddc6dcb96836a3dc57aeabee753807b8ac76acb129879e2863b7abd1": none of the resources are being tracked.
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.164134 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.212434 6155 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.264305 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.284842 6155 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "etcd-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.284895 6155 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "etcd-sr-svr-206.dev.os.net_kube-system(30d48268509a16a30f56d893a63e0073)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "etcd-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.284912 6155 kuberuntime_manager.go:710] createPodSandbox for pod "etcd-sr-svr-206.dev.os.net_kube-system(30d48268509a16a30f56d893a63e0073)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "etcd-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.284979 6155 pod_workers.go:191] Error syncing pod 30d48268509a16a30f56d893a63e0073 ("etcd-sr-svr-206.dev.os.net_kube-system(30d48268509a16a30f56d893a63e0073)"), skipping: failed to "CreatePodSandbox" for "etcd-sr-svr-206.dev.os.net_kube-system(30d48268509a16a30f56d893a63e0073)" with CreatePodSandboxError: "CreatePodSandbox for pod "etcd-sr-svr-206.dev.os.net_kube-system(30d48268509a16a30f56d893a63e0073)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "etcd-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"write /proc/self/attr/keycreate: permission denied\"": unknown"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: W0327 10:55:29.285252 6155 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod30d48268509a16a30f56d893a63e0073/a76680928784281394028db66739c6ab17bf3f6d48d3248fa01f3c7d771b8bce": none of the resources are being tracked.
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.364472 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.412240 6155 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.417944 6155 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-scheduler-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.418001 6155 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-scheduler-sr-svr-206.dev.os.net_kube-system(4e1bd6e5b41d60d131353157588ab020)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-scheduler-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.418020 6155 kuberuntime_manager.go:710] createPodSandbox for pod "kube-scheduler-sr-svr-206.dev.os.net_kube-system(4e1bd6e5b41d60d131353157588ab020)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-scheduler-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.418081 6155 pod_workers.go:191] Error syncing pod 4e1bd6e5b41d60d131353157588ab020 ("kube-scheduler-sr-svr-206.dev.os.net_kube-system(4e1bd6e5b41d60d131353157588ab020)"), skipping: failed to "CreatePodSandbox" for "kube-scheduler-sr-svr-206.dev.os.net_kube-system(4e1bd6e5b41d60d131353157588ab020)" with CreatePodSandboxError: "CreatePodSandbox for pod "kube-scheduler-sr-svr-206.dev.os.net_kube-system(4e1bd6e5b41d60d131353157588ab020)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-scheduler-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"write /proc/self/attr/keycreate: permission denied\"": unknown"
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: W0327 10:55:29.418313 6155 container.go:409] Failed to create summary reader for "/kubepods/burstable/pod4e1bd6e5b41d60d131353157588ab020/1b8f5a0eceed31352c246c1bdf79a6b4405c4b85a7d6198a0852db39aa42521c": none of the resources are being tracked.
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.464775 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.565083 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.612368 6155 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dsr-svr-206.dev.os.net&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.665341 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.765511 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.813360 6155 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: I0327 10:55:29.846288 6155 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: I0327 10:55:29.846352 6155 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.865653 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: W0327 10:55:29.884841 6155 pod_container_deletor.go:75] Container "a76680928784281394028db66739c6ab17bf3f6d48d3248fa01f3c7d771b8bce" not found in pod's containers
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: I0327 10:55:29.901609 6155 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: I0327 10:55:29.901683 6155 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: W0327 10:55:29.939585 6155 pod_container_deletor.go:75] Container "1b8f5a0eceed31352c246c1bdf79a6b4405c4b85a7d6198a0852db39aa42521c" not found in pod's containers
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: I0327 10:55:29.958234 6155 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: I0327 10:55:29.958322 6155 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
  • Mar 27 10:55:29 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:29.965804 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: W0327 10:55:30.001871 6155 pod_container_deletor.go:75] Container "39eb53fee5aecb38a55d7fde88c550737dafd6b7c42ecc67bb567b1669c12980" not found in pod's containers
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: I0327 10:55:30.010522 6155 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: I0327 10:55:30.010610 6155 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.013673 6155 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dsr-svr-206.dev.os.net&limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: W0327 10:55:30.064858 6155 pod_container_deletor.go:75] Container "e2deb465ddc6dcb96836a3dc57aeabee753807b8ac76acb129879e2863b7abd1" not found in pod's containers
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.065953 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.166154 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.213556 6155 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.266336 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.290242 6155 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "etcd-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.290314 6155 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "etcd-sr-svr-206.dev.os.net_kube-system(30d48268509a16a30f56d893a63e0073)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "etcd-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.290340 6155 kuberuntime_manager.go:710] createPodSandbox for pod "etcd-sr-svr-206.dev.os.net_kube-system(30d48268509a16a30f56d893a63e0073)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "etcd-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.290434 6155 pod_workers.go:191] Error syncing pod 30d48268509a16a30f56d893a63e0073 ("etcd-sr-svr-206.dev.os.net_kube-system(30d48268509a16a30f56d893a63e0073)"), skipping: failed to "CreatePodSandbox" for "etcd-sr-svr-206.dev.os.net_kube-system(30d48268509a16a30f56d893a63e0073)" with CreatePodSandboxError: "CreatePodSandbox for pod "etcd-sr-svr-206.dev.os.net_kube-system(30d48268509a16a30f56d893a63e0073)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "etcd-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"write /proc/self/attr/keycreate: permission denied\"": unknown"
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: W0327 10:55:30.290737 6155 container.go:409] Failed to create summary reader for "/kubepods/besteffort/pod30d48268509a16a30f56d893a63e0073/7b5d762f33a4e426507aab4bf2a5bd03704800c1f75d306f199c52b5b8bf4272": none of the resources are being tracked.
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.366446 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.413527 6155 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp [::1]:8443: connect: connection refused
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.466619 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.497435 6155 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-scheduler-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.497487 6155 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "kube-scheduler-sr-svr-206.dev.os.net_kube-system(4e1bd6e5b41d60d131353157588ab020)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-scheduler-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.497504 6155 kuberuntime_manager.go:710] createPodSandbox for pod "kube-scheduler-sr-svr-206.dev.os.net_kube-system(4e1bd6e5b41d60d131353157588ab020)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-scheduler-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "write /proc/self/attr/keycreate: permission denied"": unknown
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:55:30.497558 6155 pod_workers.go:191] Error syncing pod 4e1bd6e5b41d60d131353157588ab020 ("kube-scheduler-sr-svr-206.dev.os.net_kube-system(4e1bd6e5b41d60d131353157588ab020)"), skipping: failed to "CreatePodSandbox" for "kube-scheduler-sr-svr-206.dev.os.net_kube-system(4e1bd6e5b41d60d131353157588ab020)" with CreatePodSandboxError: "CreatePodSandbox for pod "kube-scheduler-sr-svr-206.dev.os.net_kube-system(4e1bd6e5b41d60d131353157588ab020)" failed: rpc error: code = Unknown desc = failed to start sandbox container for pod "kube-scheduler-sr-svr-206.dev.os.net": Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused \"write /proc/self/attr/keycreate: permission denied\"": unknown"
  • Mar 27 10:55:30 SR-SVR-206.dev.os.net kubelet[6155]: W0327 10:55:30.498050 6155 container.go:409] Failed to create summary reader for "/kubepods/burstable/pod4e1bd6e5b41d60d131353157588ab020/c5151871541b97eec9ba6a7210af91ed6a50b71b127da2b7237e966130ace01a": none of the resources are being tracked.
    [root@SR-SVR-206 kubernetesui]#

The output of the systemctl status kubelet command:

[root@SR-SVR-206 kubernetesui]# systemctl status kubelet
● kubelet.service - kubelet: The Kubernetes Node Agent
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; disabled; vendor preset: disabled)
Drop-In: /etc/systemd/system/kubelet.service.d
└─10-kubeadm.conf
Active: active (running) since Fri 2020-03-27 10:47:04 GMT; 4min 55s ago
Docs: http://kubernetes.io/docs/
Main PID: 6155 (kubelet)
Tasks: 27
Memory: 48.6M
CGroup: /system.slice/kubelet.service
└─6155 /var/lib/minikube/binaries/v1.16.3/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-...

Mar 27 10:52:00 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:52:00.226896 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
Mar 27 10:52:00 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:52:00.232862 6155 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get...ion refused
Mar 27 10:52:00 SR-SVR-206.dev.os.net kubelet[6155]: W0327 10:52:00.254078 6155 pod_container_deletor.go:75] Container "912e25d9a50b24ddaab52e074789ac6ab1859f675b106d435b63... containers
Mar 27 10:52:00 SR-SVR-206.dev.os.net kubelet[6155]: I0327 10:52:00.254163 6155 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Mar 27 10:52:00 SR-SVR-206.dev.os.net kubelet[6155]: I0327 10:52:00.254284 6155 kubelet_node_status.go:286] Setting node annotation to enable volume controller attach/detach
Mar 27 10:52:00 SR-SVR-206.dev.os.net kubelet[6155]: W0327 10:52:00.300130 6155 pod_container_deletor.go:75] Container "a353e83918e9d813db800d8b1ff68818ef1827af30d131ca718f... containers
Mar 27 10:52:00 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:52:00.327021 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
Mar 27 10:52:00 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:52:00.427237 6155 kubelet.go:2267] node "sr-svr-206.dev.os.net" not found
Mar 27 10:52:00 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:52:00.433807 6155 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: ...ion refused
Mar 27 10:52:00 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:52:00.508257 6155 remote_runtime.go:105] RunPodSandbox from runtime service failed: rpc error: code = Unknown desc = fail...
Mar 27 10:52:00 SR-SVR-206.dev.os.net kubelet[6155]: E0327 10:52:00.508353 6155 kuberuntime_sandbox.go:68] CreatePodSandbox for pod "etcd-sr-svr-206.dev.os.net_kube-system(30d48268509...
Hint: Some lines were ellipsized, use -l to show in full.
[root@SR-SVR-206 kubernetesui]#

@afbjorklund (Collaborator) commented Mar 27, 2020

Seems like a timeout. If you haven't yet disabled SELinux, you need to do that.

https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/


# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
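
To confirm the change took effect, getenforce should now report Permissive:

# confirm the running SELinux mode
getenforce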

@eoinreilly93 (Author)

Thanks! That seems to have solved that problem. When I run minikube status it says everything is up and running; however, when I run minikube dashboard I get an error saying "kubectl not found in the PATH, but is required for the dashboard."

Do you know why this might be? I installed minikube as root using install minikube /usr/local/bin.

@priyawadhwa

Hey @eoinreilly93, what is the output of which kubectl?

@afbjorklund (Collaborator)

You need to install kubectl externally for the dashboard to work; minikube needs it to be on the PATH.
https://kubernetes.io/docs/tasks/tools/install-kubectl/

This might change in a future release to use the built-in minikube kubectl instead.
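
On an offline host like this one, one option (a sketch based on the cache layout shown earlier, not a step prescribed in this thread) is to reuse the kubectl binary minikube has already cached for the cluster version:

# assumes the cached binary matches the cluster version (v1.16.3 here)
install -m 0755 ~/.minikube/cache/linux/v1.16.3/kubectl /usr/local/bin/kubectl
# sanity check: should report the v1.16.3 client version
kubectl version --client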

@afbjorklund (Collaborator)

Once we start testing (#3552) and documenting (#6166) running on CentOS, this might get easier.

@afbjorklund afbjorklund added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Mar 28, 2020
@eoinreilly93 (Author)

@afbjorklund Thanks! I've installed it separately, which looks to have solved the issue. I have other problems starting the dashboard, but I've raised those in a new ticket.

@eoinreilly93 (Author)

@priyawadhwa /usr/bin/kubectl

@priyawadhwa priyawadhwa removed the kind/documentation Categorizes issue or PR as related to documentation. label Mar 30, 2020
@priyawadhwa priyawadhwa changed the title none: Offline installation is not using the cache none: cache images to local daemon if running instead of as tarballs on machine Mar 30, 2020
@sharifelgamal (Collaborator)

@eoinreilly93 is anything else needed here? It seems the problems reported in this issue have been resolved.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 21, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 20, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
