
Warning: VM is unable to access k8s.gcr.io #7229

Closed
eoinreilly93 opened this issue Mar 25, 2020 · 2 comments
Labels
area/dns (DNS issues), kind/documentation (Categorizes issue or PR as related to documentation.), kind/support (Categorizes issue or PR as a support question.), needs-faq-entry (Things that could use documentation in a FAQ)

Comments


eoinreilly93 commented Mar 25, 2020

I am trying to install the latest version of minikube (1.8.2) locally on my Windows 10 machine, so that I can bring the cache folder across to my corporate network, which has no internet access, and install it there. It appears to have installed correctly, but I see the warning message:

"Node may be unable to resolve external DNS record. VM is unable to access k8s.gcr.io, you may need to configure a proxy or set --image-repository".

I have seen a few other open issues about this, but they seem to involve users running in China (I am in the UK), or using Hyperkit rather than VirtualBox, as I am. Running curl https://k8s.gcr.io/ from my host returns the following response:

<TITLE>302 Moved</TITLE>

302 Moved

The document has moved here.
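A successful curl from the Windows host only shows that the host can reach k8s.gcr.io; the warning is raised from inside the VM. A quick way to check whether the VM itself can resolve and reach the registry is to run the same checks over `minikube ssh` (a sketch; it assumes the "minikube" VM is running and that nslookup/curl are available in the guest image):

```shell
# Check DNS resolution from inside the minikube VM
minikube ssh -- nslookup k8s.gcr.io

# Check HTTPS reachability from inside the VM (headers only)
minikube ssh -- curl -sI https://k8s.gcr.io/
```

If the nslookup fails inside the VM while the host curl succeeds, the problem is the VM's DNS configuration rather than general connectivity.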

My main question is: has the software downloaded everything it needs to run in an environment with no internet access, even with this error? If not, how can I resolve the issue so that it pulls all the images it needs into the .minikube\cache directory?
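For the offline-transfer goal, one possible workflow (a sketch; flag and subcommand names should be verified against `minikube start --help` and `minikube cache --help` for version 1.8.2, and the pause image tag below is illustrative):

```shell
# 1. On the internet-connected machine, download everything into the
#    cache without actually provisioning a cluster:
minikube start --vm-driver=virtualbox --kubernetes-version=1.16.3 --download-only

# 2. Copy the populated cache directory to the offline machine:
#    %USERPROFILE%\.minikube\cache  ->  same path on the corporate machine

# 3. Optionally pre-cache additional images by name:
minikube cache add k8s.gcr.io/pause:3.1
```

With the cache in place, a subsequent `minikube start` on the offline machine should load images from the cache instead of pulling from k8s.gcr.io.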

The exact command to reproduce the issue:
minikube start --vm-driver=virtualbox --kubernetes-version=1.16.3
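If the warning turns out to matter on the corporate network, the two remedies the message itself suggests look roughly like this (a sketch; the proxy URL and registry address are placeholders, not real endpoints):

```shell
# Option A: route minikube's traffic through a corporate proxy
# (Windows cmd syntax; set before starting the cluster)
set HTTPS_PROXY=http://proxy.example.com:3128
minikube start --vm-driver=virtualbox --kubernetes-version=1.16.3

# Option B: pull images from an internal mirror instead of k8s.gcr.io
minikube start --vm-driver=virtualbox --kubernetes-version=1.16.3 ^
  --image-repository=registry.example.com/google_containers
```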

The full output of the command that failed:

  • minikube v1.8.2 on Microsoft Windows 10 Enterprise 10.0.17763 Build 17763
  • Using the virtualbox driver based on user configuration

! 'virtualbox' driver reported an issue: C:\Program Files\Oracle\VirtualBox\VBoxManage.exe list hostinfo failed:

  • Suggestion: Install the latest version of VirtualBox

  • Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/virtualbox/

  • Downloading VM boot image ...

    minikube-v1.8.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s
    minikube-v1.8.0.iso: 173.56 MiB / 173.56 MiB [-] 100.00% 3.14 MiB p/s 56s

  • Creating virtualbox VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
    ! VM is unable to access k8s.gcr.io, you may need to configure a proxy or set --image-repository

  • Preparing Kubernetes v1.16.3 on Docker 19.03.6 ...

    kubelet.sha1: 41 B / 41 B [----------------------------] 100.00% ? p/s 0s
    kubectl.sha1: 41 B / 41 B [----------------------------] 100.00% ? p/s 0s
    kubeadm.sha1: 41 B / 41 B [----------------------------] 100.00% ? p/s 0s
    kubelet: 117.43 MiB / 117.43 MiB [-----------] 100.00% 1.63 MiB p/s 1m12s
    kubectl: 44.52 MiB / 44.52 MiB [-------------] 100.00% 908.58 KiB p/s 50s
    kubeadm: 42.20 MiB / 42.20 MiB [-------------] 100.00% 798.21 KiB p/s 55s

  • Launching Kubernetes ...

  • Enabling addons: default-storageclass, storage-provisioner

  • Waiting for cluster to come online ...

  • Done! kubectl is now configured to use "minikube"

The output of the minikube logs command:

  • ==> Docker <==
  • -- Logs begin at Wed 2020-03-25 11:29:00 UTC, end at Wed 2020-03-25 11:37:12 UTC. --
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.846565514Z" level=info msg="loading plugin "io.containerd.service.v1.snapshots-service"..." type=io.containerd.service.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.846708255Z" level=info msg="loading plugin "io.containerd.runtime.v1.linux"..." type=io.containerd.runtime.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.846963597Z" level=info msg="loading plugin "io.containerd.runtime.v2.task"..." type=io.containerd.runtime.v2
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.847189360Z" level=info msg="loading plugin "io.containerd.monitor.v1.cgroups"..." type=io.containerd.monitor.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.847990254Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848081392Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848176645Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848240912Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848302381Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848366471Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848426947Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848487628Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848553028Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848615496Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848675835Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848793908Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848860291Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848925603Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.848986961Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.849148448Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.849374054Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.849454913Z" level=info msg="containerd successfully booted in 0.012821s"
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.860982769Z" level=info msg="parsed scheme: "unix"" module=grpc
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.861187239Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.861353434Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.861574693Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.862991996Z" level=info msg="parsed scheme: "unix"" module=grpc
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.863120920Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.863199669Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.863264645Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.892840923Z" level=warning msg="Your kernel does not support cgroup blkio weight"
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.892878660Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.892887552Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.892912996Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.892920871Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.892928506Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.893121726Z" level=info msg="Loading containers: start."
  • Mar 25 11:29:17 minikube dockerd[2393]: time="2020-03-25T11:29:17.979066694Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
  • Mar 25 11:29:18 minikube dockerd[2393]: time="2020-03-25T11:29:18.042018186Z" level=info msg="Loading containers: done."
  • Mar 25 11:29:18 minikube dockerd[2393]: time="2020-03-25T11:29:18.073324601Z" level=info msg="Docker daemon" commit=369ce74a3c graphdriver(s)=overlay2 version=19.03.6
  • Mar 25 11:29:18 minikube dockerd[2393]: time="2020-03-25T11:29:18.073519877Z" level=info msg="Daemon has completed initialization"
  • Mar 25 11:29:18 minikube dockerd[2393]: time="2020-03-25T11:29:18.092750136Z" level=info msg="API listen on /var/run/docker.sock"
  • Mar 25 11:29:18 minikube systemd[1]: Started Docker Application Container Engine.
  • Mar 25 11:29:18 minikube dockerd[2393]: time="2020-03-25T11:29:18.093375634Z" level=info msg="API listen on [::]:2376"
  • Mar 25 11:31:58 minikube dockerd[2393]: time="2020-03-25T11:31:58.655594685Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f7dd14e3ad155c71d30e576793abd4c76434f16ad5940a3b5fafd4ef2b13a464/shim.sock" debug=false pid=3761
  • Mar 25 11:31:58 minikube dockerd[2393]: time="2020-03-25T11:31:58.658534052Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/429d8fd641d4355af4ff3503661a85c7acfd51c1f08ed258f419853623b1699e/shim.sock" debug=false pid=3762
  • Mar 25 11:31:58 minikube dockerd[2393]: time="2020-03-25T11:31:58.850873914Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d76cb0c8474a070c2c3a89425dac55820e9c6f9c23b631589acf8674cd705317/shim.sock" debug=false pid=3839
  • Mar 25 11:31:59 minikube dockerd[2393]: time="2020-03-25T11:31:59.124292508Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/54beef3e00347ee44de762b761a3b7a51eb2c963063ff7ff4a606aec9107241d/shim.sock" debug=false pid=3927
  • Mar 25 11:31:59 minikube dockerd[2393]: time="2020-03-25T11:31:59.246303460Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c1b7703ac1e8b77696aff9bd6d84dbe55dc18c97c6f9c7e7ddb48ff37bb68cfa/shim.sock" debug=false pid=3957
  • Mar 25 11:31:59 minikube dockerd[2393]: time="2020-03-25T11:31:59.265094819Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0517284e551a217938d3dfbe1f1cff5176c5d7589922d6dde906ca7315bdb900/shim.sock" debug=false pid=3968
  • Mar 25 11:31:59 minikube dockerd[2393]: time="2020-03-25T11:31:59.282756020Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5a5a0fc3b5c4cf5c23559efe865996a35c24d45076ca39e0474ac0fcc11c02e1/shim.sock" debug=false pid=3983
  • Mar 25 11:31:59 minikube dockerd[2393]: time="2020-03-25T11:31:59.654789909Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/0325fa454b2d95f2f44f2158ce90f37c6b57ebed5d1130cef070ed057934a1ff/shim.sock" debug=false pid=4114
  • Mar 25 11:32:22 minikube dockerd[2393]: time="2020-03-25T11:32:22.387189744Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/211c6828c0e1bf9343b2c91d4627992a4f2dc44a43a8ff300c5dc15564baa10d/shim.sock" debug=false pid=4471
  • Mar 25 11:32:22 minikube dockerd[2393]: time="2020-03-25T11:32:22.420661955Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bb8a55695b34bb1bc232b6e10e5a4a234e30233bac34ff9f150bd844c2cb3bc2/shim.sock" debug=false pid=4482
  • Mar 25 11:32:23 minikube dockerd[2393]: time="2020-03-25T11:32:23.442196942Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/13a26ccae63b20ec544986a83078a94f787668f19ac865e925c6b98936ef59e3/shim.sock" debug=false pid=4566
  • Mar 25 11:32:23 minikube dockerd[2393]: time="2020-03-25T11:32:23.498374564Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/20844aa8cdd97c8a3d91c14c594557b30c5da3abeaf3517750fd6e9db94f59ce/shim.sock" debug=false pid=4577
  • Mar 25 11:32:23 minikube dockerd[2393]: time="2020-03-25T11:32:23.550813555Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/468be718b18a5bc36ec2706f1cd2567e9ccebb38edae8ce4c8b04bd048cc43a2/shim.sock" debug=false pid=4593
  • Mar 25 11:32:23 minikube dockerd[2393]: time="2020-03-25T11:32:23.554987448Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a3a27c58edec6b0c540f03a8d4c91225c21bf660b0290f190f403e3851704299/shim.sock" debug=false pid=4595
  • Mar 25 11:32:24 minikube dockerd[2393]: time="2020-03-25T11:32:24.926257524Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/e235bdc8fda4147c0b024c6be14277ee4f38a6a909c27a09bf356823772033f9/shim.sock" debug=false pid=4768
  • Mar 25 11:32:24 minikube dockerd[2393]: time="2020-03-25T11:32:24.928549163Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a2d7785366ec88b0fcb44bb8324d8bcfebd2d79852a187b31d55f79235e1cfe1/shim.sock" debug=false pid=4769
  • ==> container status <==
  • CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
  • a2d7785366ec8 bf261d1579144 4 minutes ago Running coredns 0 468be718b18a5
  • e235bdc8fda41 bf261d1579144 4 minutes ago Running coredns 0 a3a27c58edec6
  • 20844aa8cdd97 9b65a0f78b091 4 minutes ago Running kube-proxy 0 bb8a55695b34b
  • 13a26ccae63b2 4689081edb103 4 minutes ago Running storage-provisioner 0 211c6828c0e1b
  • 0325fa454b2d9 b2756210eeabf 5 minutes ago Running etcd 0 54beef3e00347
  • 5a5a0fc3b5c4c 98fecf43a54fd 5 minutes ago Running kube-scheduler 0 d76cb0c8474a0
  • c1b7703ac1e8b bb16442bcd949 5 minutes ago Running kube-controller-manager 0 429d8fd641d43
  • 0517284e551a2 df60c7526a3dc 5 minutes ago Running kube-apiserver 0 f7dd14e3ad155
  • ==> coredns [a2d7785366ec] <==
  • .:53
  • 2020-03-25T11:32:28.014Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
  • 2020-03-25T11:32:28.014Z [INFO] CoreDNS-1.6.2
  • 2020-03-25T11:32:28.014Z [INFO] linux/amd64, go1.12.8, 795a3eb
  • CoreDNS-1.6.2
  • linux/amd64, go1.12.8, 795a3eb
  • ==> coredns [e235bdc8fda4] <==
  • .:53
  • 2020-03-25T11:32:28.014Z [INFO] plugin/reload: Running configuration MD5 = f64cb9b977c7dfca58c4fab108535a76
  • 2020-03-25T11:32:28.014Z [INFO] CoreDNS-1.6.2
  • 2020-03-25T11:32:28.014Z [INFO] linux/amd64, go1.12.8, 795a3eb
  • CoreDNS-1.6.2
  • linux/amd64, go1.12.8, 795a3eb
  • ==> dmesg <==
  • [ +5.002977] hpet1: lost 318 rtc interrupts
  • [ +5.002198] hpet1: lost 319 rtc interrupts
  • [ +8.003573] kauditd_printk_skb: 43 callbacks suppressed
  • [ +7.020274] hpet_rtc_timer_reinit: 3 callbacks suppressed
  • [ +0.000003] hpet1: lost 320 rtc interrupts
  • [ +5.006266] hpet1: lost 318 rtc interrupts
  • [ +5.006769] hpet1: lost 319 rtc interrupts
  • [ +5.004782] hpet1: lost 318 rtc interrupts
  • [ +5.014019] hpet1: lost 319 rtc interrupts
  • [ +5.010676] hpet1: lost 319 rtc interrupts
  • [Mar25 11:33] hpet1: lost 318 rtc interrupts
  • [ +5.004429] hpet1: lost 318 rtc interrupts
  • [ +5.005800] hpet1: lost 318 rtc interrupts
  • [ +5.001546] hpet1: lost 320 rtc interrupts
  • [ +5.002655] hpet1: lost 318 rtc interrupts
  • [ +5.007051] hpet1: lost 318 rtc interrupts
  • [ +5.006150] hpet1: lost 319 rtc interrupts
  • [ +5.005015] hpet1: lost 318 rtc interrupts
  • [ +5.007356] hpet1: lost 318 rtc interrupts
  • [ +5.012660] hpet1: lost 319 rtc interrupts
  • [ +5.008192] hpet1: lost 319 rtc interrupts
  • [ +5.012176] hpet1: lost 318 rtc interrupts
  • [Mar25 11:34] hpet1: lost 319 rtc interrupts
  • [ +5.016157] hpet1: lost 319 rtc interrupts
  • [ +5.006208] hpet1: lost 319 rtc interrupts
  • [ +5.004727] hpet1: lost 318 rtc interrupts
  • [ +5.005585] hpet1: lost 318 rtc interrupts
  • [ +5.007791] hpet1: lost 319 rtc interrupts
  • [ +5.007878] hpet1: lost 318 rtc interrupts
  • [ +5.006074] hpet1: lost 319 rtc interrupts
  • [ +5.007033] hpet1: lost 318 rtc interrupts
  • [ +5.005103] hpet1: lost 318 rtc interrupts
  • [ +5.004367] hpet1: lost 320 rtc interrupts
  • [ +5.005777] hpet1: lost 318 rtc interrupts
  • [Mar25 11:35] hpet1: lost 319 rtc interrupts
  • [ +5.005434] hpet1: lost 318 rtc interrupts
  • [ +5.005864] hpet1: lost 318 rtc interrupts
  • [ +5.006218] hpet1: lost 319 rtc interrupts
  • [ +5.008885] hpet1: lost 318 rtc interrupts
  • [ +5.006128] hpet1: lost 319 rtc interrupts
  • [ +5.006946] hpet1: lost 318 rtc interrupts
  • [ +5.005127] hpet1: lost 318 rtc interrupts
  • [ +5.005263] hpet1: lost 319 rtc interrupts
  • [ +5.009468] hpet1: lost 318 rtc interrupts
  • [ +5.017689] hpet1: lost 319 rtc interrupts
  • [ +5.010806] hpet1: lost 319 rtc interrupts
  • [Mar25 11:36] hpet1: lost 319 rtc interrupts
  • [ +4.993157] hpet1: lost 317 rtc interrupts
  • [ +5.001692] hpet1: lost 318 rtc interrupts
  • [ +5.002057] hpet1: lost 319 rtc interrupts
  • [ +5.001233] hpet1: lost 319 rtc interrupts
  • [ +5.008173] hpet1: lost 318 rtc interrupts
  • [ +4.996169] hpet1: lost 318 rtc interrupts
  • [ +5.004256] hpet1: lost 318 rtc interrupts
  • [ +5.002381] hpet1: lost 318 rtc interrupts
  • [ +5.001494] hpet1: lost 318 rtc interrupts
  • [ +5.000255] hpet1: lost 318 rtc interrupts
  • [ +5.000811] hpet1: lost 318 rtc interrupts
  • [Mar25 11:37] hpet1: lost 319 rtc interrupts
  • [ +4.995477] hpet1: lost 318 rtc interrupts
  • ==> kernel <==
  • 11:37:13 up 8 min, 0 users, load average: 1.10, 1.19, 0.68
  • Linux minikube 4.19.94 #1 SMP Fri Mar 6 11:41:28 PST 2020 x86_64 GNU/Linux
  • PRETTY_NAME="Buildroot 2019.02.9"
  • ==> kube-apiserver [0517284e551a] <==
  • I0325 11:32:04.011732 1 client.go:357] parsed scheme: "endpoint"
  • I0325 11:32:04.011930 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
  • I0325 11:32:04.022694 1 client.go:357] parsed scheme: "endpoint"
  • I0325 11:32:04.022910 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
  • I0325 11:32:04.040592 1 client.go:357] parsed scheme: "endpoint"
  • I0325 11:32:04.040946 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
  • W0325 11:32:04.282453 1 genericapiserver.go:404] Skipping API batch/v2alpha1 because it has no resources.
  • W0325 11:32:04.310899 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
  • W0325 11:32:04.335745 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
  • W0325 11:32:04.340163 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
  • W0325 11:32:04.373863 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
  • W0325 11:32:04.415925 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
  • W0325 11:32:04.416015 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
  • I0325 11:32:04.431613 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
  • I0325 11:32:04.432006 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
  • I0325 11:32:04.435322 1 client.go:357] parsed scheme: "endpoint"
  • I0325 11:32:04.435700 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
  • I0325 11:32:04.449799 1 client.go:357] parsed scheme: "endpoint"
  • I0325 11:32:04.450270 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
  • I0325 11:32:10.593931 1 secure_serving.go:123] Serving securely on [::]:8443
  • I0325 11:32:10.596003 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
  • I0325 11:32:10.596105 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
  • I0325 11:32:10.596126 1 available_controller.go:383] Starting AvailableConditionController
  • I0325 11:32:10.596138 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
  • I0325 11:32:10.596256 1 autoregister_controller.go:140] Starting autoregister controller
  • I0325 11:32:10.596282 1 cache.go:32] Waiting for caches to sync for autoregister controller
  • I0325 11:32:10.596374 1 crd_finalizer.go:274] Starting CRDFinalizer
  • I0325 11:32:10.597091 1 controller.go:81] Starting OpenAPI AggregationController
  • I0325 11:32:10.602119 1 log.go:172] http: TLS handshake error from 127.0.0.1:40540: EOF
  • I0325 11:32:10.613112 1 log.go:172] http: TLS handshake error from 127.0.0.1:40542: EOF
  • I0325 11:32:10.618339 1 log.go:172] http: TLS handshake error from 127.0.0.1:40544: EOF
  • I0325 11:32:10.653666 1 crdregistration_controller.go:111] Starting crd-autoregister controller
  • I0325 11:32:10.653959 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
  • I0325 11:32:10.654042 1 controller.go:85] Starting OpenAPI controller
  • I0325 11:32:10.654114 1 customresource_discovery_controller.go:208] Starting DiscoveryController
  • I0325 11:32:10.654174 1 naming_controller.go:288] Starting NamingConditionController
  • I0325 11:32:10.654229 1 establishing_controller.go:73] Starting EstablishingController
  • I0325 11:32:10.654281 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
  • I0325 11:32:10.654337 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
  • E0325 11:32:10.654572 1 controller.go:154] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.103, ResourceVersion: 0, AdditionalErrorMsg:
  • I0325 11:32:10.772501 1 shared_informer.go:204] Caches are synced for crd-autoregister
  • I0325 11:32:10.817178 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
  • I0325 11:32:10.817474 1 cache.go:39] Caches are synced for autoregister controller
  • I0325 11:32:10.897359 1 cache.go:39] Caches are synced for AvailableConditionController controller
  • I0325 11:32:11.594286 1 controller.go:107] OpenAPI AggregationController: Processing item
  • I0325 11:32:11.594306 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
  • I0325 11:32:11.594453 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
  • I0325 11:32:11.602199 1 storage_scheduling.go:139] created PriorityClass system-node-critical with value 2000001000
  • I0325 11:32:11.612231 1 storage_scheduling.go:139] created PriorityClass system-cluster-critical with value 2000000000
  • I0325 11:32:11.612308 1 storage_scheduling.go:148] all system priority classes are created successfully or already exist.
  • I0325 11:32:12.468094 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
  • I0325 11:32:12.527797 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
  • W0325 11:32:12.740474 1 lease.go:222] Resetting endpoints for master service "kubernetes" to [192.168.99.103]
  • I0325 11:32:12.742300 1 controller.go:606] quota admission added evaluator for: endpoints
  • I0325 11:32:13.808146 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
  • I0325 11:32:14.147172 1 controller.go:606] quota admission added evaluator for: serviceaccounts
  • I0325 11:32:14.189109 1 controller.go:606] quota admission added evaluator for: deployments.apps
  • I0325 11:32:14.427519 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
  • I0325 11:32:21.659270 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
  • I0325 11:32:22.022999 1 controller.go:606] quota admission added evaluator for: replicasets.apps
  • ==> kube-controller-manager [c1b7703ac1e8] <==
  • I0325 11:32:20.342333 1 shared_informer.go:197] Waiting for caches to sync for disruption
  • I0325 11:32:20.533905 1 controllermanager.go:534] Started "cronjob"
  • I0325 11:32:20.534196 1 cronjob_controller.go:96] Starting CronJob Manager
  • I0325 11:32:20.682615 1 controllermanager.go:534] Started "csrsigning"
  • I0325 11:32:20.682674 1 certificate_controller.go:113] Starting certificate controller
  • I0325 11:32:20.682788 1 shared_informer.go:197] Waiting for caches to sync for certificate
  • I0325 11:32:20.841889 1 controllermanager.go:534] Started "csrapproving"
  • I0325 11:32:20.842287 1 certificate_controller.go:113] Starting certificate controller
  • I0325 11:32:20.842407 1 shared_informer.go:197] Waiting for caches to sync for certificate
  • I0325 11:32:21.092666 1 controllermanager.go:534] Started "pv-protection"
  • I0325 11:32:21.092792 1 pv_protection_controller.go:81] Starting PV protection controller
  • I0325 11:32:21.093227 1 shared_informer.go:197] Waiting for caches to sync for PV protection
  • I0325 11:32:21.350680 1 controllermanager.go:534] Started "namespace"
  • I0325 11:32:21.351162 1 namespace_controller.go:186] Starting namespace controller
  • I0325 11:32:21.351482 1 shared_informer.go:197] Waiting for caches to sync for namespace
  • I0325 11:32:21.581321 1 controllermanager.go:534] Started "deployment"
  • I0325 11:32:21.581733 1 shared_informer.go:197] Waiting for caches to sync for resource quota
  • I0325 11:32:21.582181 1 deployment_controller.go:152] Starting deployment controller
  • I0325 11:32:21.582272 1 shared_informer.go:197] Waiting for caches to sync for deployment
  • W0325 11:32:21.626514 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="m01" does not exist
  • I0325 11:32:21.630227 1 shared_informer.go:204] Caches are synced for GC
  • I0325 11:32:21.640034 1 shared_informer.go:204] Caches are synced for daemon sets
  • I0325 11:32:21.643376 1 shared_informer.go:204] Caches are synced for certificate
  • I0325 11:32:21.649105 1 shared_informer.go:204] Caches are synced for taint
  • I0325 11:32:21.652245 1 node_lifecycle_controller.go:1208] Initializing eviction metric for zone:
  • W0325 11:32:21.652526 1 node_lifecycle_controller.go:903] Missing timestamp for Node m01. Assuming now as a timestamp.
  • I0325 11:32:21.652705 1 node_lifecycle_controller.go:1108] Controller detected that zone is now in state Normal.
  • I0325 11:32:21.654117 1 taint_manager.go:186] Starting NoExecuteTaintManager
  • I0325 11:32:21.656264 1 event.go:274] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"f8204460-7c6c-4d42-91dd-0b7689bad6e2", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node m01 event: Registered Node m01 in Controller
  • I0325 11:32:21.682090 1 shared_informer.go:204] Caches are synced for ReplicaSet
  • I0325 11:32:21.682771 1 shared_informer.go:204] Caches are synced for service account
  • I0325 11:32:21.683118 1 shared_informer.go:204] Caches are synced for certificate
  • I0325 11:32:21.687383 1 shared_informer.go:204] Caches are synced for HPA
  • I0325 11:32:21.687444 1 shared_informer.go:204] Caches are synced for job
  • I0325 11:32:21.696868 1 shared_informer.go:204] Caches are synced for ReplicationController
  • I0325 11:32:21.701557 1 shared_informer.go:204] Caches are synced for TTL
  • I0325 11:32:21.708448 1 event.go:274] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"32ae71ac-16f0-4d17-a30f-59f4e10c82e0", APIVersion:"apps/v1", ResourceVersion:"184", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-4btg2
  • I0325 11:32:21.726080 1 log.go:172] [INFO] signed certificate with serial number 325569609540307438026450107500662832423788835279
  • I0325 11:32:21.734640 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
  • I0325 11:32:21.762352 1 shared_informer.go:204] Caches are synced for namespace
  • I0325 11:32:21.790997 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
  • I0325 11:32:21.875202 1 shared_informer.go:204] Caches are synced for stateful set
  • I0325 11:32:21.882031 1 shared_informer.go:204] Caches are synced for PVC protection
  • I0325 11:32:21.894237 1 shared_informer.go:204] Caches are synced for PV protection
  • I0325 11:32:21.894457 1 shared_informer.go:204] Caches are synced for expand
  • I0325 11:32:21.933159 1 shared_informer.go:204] Caches are synced for persistent volume
  • I0325 11:32:21.956469 1 shared_informer.go:204] Caches are synced for bootstrap_signer
  • I0325 11:32:21.960523 1 shared_informer.go:204] Caches are synced for attach detach
  • I0325 11:32:22.007798 1 shared_informer.go:204] Caches are synced for deployment
  • I0325 11:32:22.042886 1 shared_informer.go:204] Caches are synced for disruption
  • I0325 11:32:22.043188 1 disruption.go:341] Sending events to api server.
  • I0325 11:32:22.104294 1 event.go:274] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"357df76d-1ddf-428f-9617-634b3a6251df", APIVersion:"apps/v1", ResourceVersion:"178", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-5644d7b6d9 to 2
  • I0325 11:32:22.113435 1 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"1d5a335f-7e3c-48f8-8a0e-7f4c7ad4fefb", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-g6vv2
  • I0325 11:32:22.156467 1 event.go:274] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-5644d7b6d9", UID:"1d5a335f-7e3c-48f8-8a0e-7f4c7ad4fefb", APIVersion:"apps/v1", ResourceVersion:"342", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-5644d7b6d9-rgsvj
  • I0325 11:32:22.182498 1 shared_informer.go:204] Caches are synced for resource quota
  • I0325 11:32:22.186918 1 shared_informer.go:204] Caches are synced for resource quota
  • I0325 11:32:22.193610 1 shared_informer.go:204] Caches are synced for garbage collector
  • I0325 11:32:22.211686 1 shared_informer.go:204] Caches are synced for garbage collector
  • I0325 11:32:22.211709 1 garbagecollector.go:139] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
  • I0325 11:32:22.231514 1 shared_informer.go:204] Caches are synced for endpoint
  • ==> kube-proxy [20844aa8cdd9] <==
  • W0325 11:32:24.575545 1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
  • I0325 11:32:24.611224 1 node.go:135] Successfully retrieved node IP: 192.168.99.103
  • I0325 11:32:24.611322 1 server_others.go:149] Using iptables Proxier.
  • W0325 11:32:24.611527 1 proxier.go:287] clusterCIDR not specified, unable to distinguish between internal and external traffic
  • I0325 11:32:24.612002 1 server.go:529] Version: v1.16.3
  • I0325 11:32:24.618137 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
  • I0325 11:32:24.618186 1 conntrack.go:52] Setting nf_conntrack_max to 131072
  • I0325 11:32:24.626465 1 conntrack.go:83] Setting conntrack hashsize to 32768
  • I0325 11:32:24.632568 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
  • I0325 11:32:24.633165 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
  • I0325 11:32:24.633864 1 config.go:313] Starting service config controller
  • I0325 11:32:24.633923 1 shared_informer.go:197] Waiting for caches to sync for service config
  • I0325 11:32:24.634025 1 config.go:131] Starting endpoints config controller
  • I0325 11:32:24.634041 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
  • I0325 11:32:24.734691 1 shared_informer.go:204] Caches are synced for service config
  • I0325 11:32:24.734733 1 shared_informer.go:204] Caches are synced for endpoints config
  • ==> kube-scheduler [5a5a0fc3b5c4] <==
  • I0325 11:32:00.923239 1 serving.go:319] Generated self-signed cert in-memory
  • W0325 11:32:10.688066 1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
  • W0325 11:32:10.688118 1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
  • W0325 11:32:10.688130 1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
  • W0325 11:32:10.688138 1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
  • I0325 11:32:10.701202 1 server.go:148] Version: v1.16.3
  • I0325 11:32:10.701434 1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
  • W0325 11:32:10.793947 1 authorization.go:47] Authorization is disabled
  • W0325 11:32:10.794103 1 authentication.go:79] Authentication is disabled
  • I0325 11:32:10.794132 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
  • I0325 11:32:10.795214 1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
  • E0325 11:32:10.827697 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
  • E0325 11:32:10.831717 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
  • E0325 11:32:10.832067 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
  • E0325 11:32:10.832464 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
  • E0325 11:32:10.833257 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:250: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
  • E0325 11:32:10.833421 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
  • E0325 11:32:10.833526 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
  • E0325 11:32:10.833588 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
  • E0325 11:32:10.833637 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
  • E0325 11:32:10.833684 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
  • E0325 11:32:10.833728 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
  • E0325 11:32:11.831559 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
  • E0325 11:32:11.834067 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
  • E0325 11:32:11.850320 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
  • E0325 11:32:11.850175 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
  • E0325 11:32:11.853159 1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:250: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
  • E0325 11:32:11.862063 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
  • E0325 11:32:11.862306 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
  • E0325 11:32:11.862430 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
  • E0325 11:32:11.864161 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
  • E0325 11:32:11.872821 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
  • E0325 11:32:11.875570 1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
  • I0325 11:32:13.001422 1 leaderelection.go:241] attempting to acquire leader lease kube-system/kube-scheduler...
  • I0325 11:32:13.115147 1 leaderelection.go:251] successfully acquired lease kube-system/kube-scheduler
  • ==> kubelet <==
  • -- Logs begin at Wed 2020-03-25 11:29:00 UTC, end at Wed 2020-03-25 11:37:13 UTC. --
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.028408 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.128882 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.231068 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.331872 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.433059 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:10 minikube kubelet[3377]: W0325 11:32:10.437403 3377 status_manager.go:529] Failed to get status for pod "kube-controller-manager-m01_kube-system(293cd0e794eac60a394a39ebf097e04f)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-m01: net/http: TLS handshake timeout
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.533887 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.594936 3377 controller.go:135] failed to ensure node lease exists, will retry in 3.2s, error: Get https://localhost:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m01?timeout=10s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.635849 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:10 minikube kubelet[3377]: I0325 11:32:10.649947 3377 trace.go:116] Trace[145790916]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/kubelet.go:459 (started: 2020-03-25 11:32:00.440406012 +0000 UTC m=+3.445667063) (total time: 10.209484454s):
  • Mar 25 11:32:10 minikube kubelet[3377]: Trace[145790916]: [10.209484454s] [10.209484454s] END
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.650130 3377 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dm01&limit=500&resourceVersion=0: net/http: TLS handshake timeout
  • Mar 25 11:32:10 minikube kubelet[3377]: I0325 11:32:10.680078 3377 trace.go:116] Trace[414055393]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/kubelet.go:450 (started: 2020-03-25 11:32:00.444664857 +0000 UTC m=+3.449925898) (total time: 10.235367455s):
  • Mar 25 11:32:10 minikube kubelet[3377]: Trace[414055393]: [10.235359888s] [10.235359888s] Objects listed
  • Mar 25 11:32:10 minikube kubelet[3377]: I0325 11:32:10.716408 3377 trace.go:116] Trace[1973489295]: "Reflector ListAndWatch" name:k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46 (started: 2020-03-25 11:32:00.447750713 +0000 UTC m=+3.453011763) (total time: 10.268535715s):
  • Mar 25 11:32:10 minikube kubelet[3377]: Trace[1973489295]: [10.26849804s] [10.26849804s] Objects listed
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.736865 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:10 minikube kubelet[3377]: I0325 11:32:10.817011 3377 reconciler.go:154] Reconciler: start to sync state
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.837884 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.880741 3377 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"m01.15ff88b7f7805d19", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"m01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"m01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6200db19, ext:575742501, loc:(*time.Location)(0x7982100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6200db19, ext:575742501, loc:(*time.Location)(0x7982100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
  • Mar 25 11:32:10 minikube kubelet[3377]: I0325 11:32:10.938374 3377 kubelet_node_status.go:75] Successfully registered node m01
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.938899 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:10 minikube kubelet[3377]: E0325 11:32:10.943902 3377 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"m01.15ff88b8011daa9b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"m01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node m01 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"m01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e289b, ext:737046396, loc:(*time.Location)(0x7982100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e289b, ext:737046396, loc:(*time.Location)(0x7982100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.006948 3377 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"m01.15ff88b8011dbe67", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"m01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node m01 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"m01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e3c67, ext:737051465, loc:(*time.Location)(0x7982100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e3c67, ext:737051465, loc:(*time.Location)(0x7982100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.039646 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.062821 3377 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"m01.15ff88b8011dc9df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"m01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node m01 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"m01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e47df, ext:737054401, loc:(*time.Location)(0x7982100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e47df, ext:737054401, loc:(*time.Location)(0x7982100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.122220 3377 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"m01.15ff88b8011daa9b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"m01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node m01 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"m01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e289b, ext:737046396, loc:(*time.Location)(0x7982100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b705d91a2, ext:816699529, loc:(*time.Location)(0x7982100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.141084 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.185412 3377 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"m01.15ff88b8011dbe67", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"m01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node m01 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"m01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e3c67, ext:737051465, loc:(*time.Location)(0x7982100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b705db243, ext:816707879, loc:(*time.Location)(0x7982100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.242485 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.245080 3377 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"m01.15ff88b8011dc9df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"m01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node m01 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"m01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e47df, ext:737054401, loc:(*time.Location)(0x7982100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b705dbf4b, ext:816711213, loc:(*time.Location)(0x7982100)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.299048 3377 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"m01.15ff88b80aa7779c", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"m01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"m01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b7527f59c, ext:897072266, loc:(*time.Location)(0x7982100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b7527f59c, ext:897072266, loc:(*time.Location)(0x7982100)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.343424 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.361098 3377 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"m01.15ff88b8011daa9b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"m01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node m01 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"m01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e289b, ext:737046396, loc:(*time.Location)(0x7982100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b7a3e2ced, ext:982414293, loc:(*time.Location)(0x7982100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.445138 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.545873 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.646325 3377 kubelet.go:2267] node "m01" not found
  • Mar 25 11:32:11 minikube kubelet[3377]: E0325 11:32:11.790449 3377 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"m01.15ff88b8011dbe67", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"m01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node m01 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"m01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e3c67, ext:737051465, loc:(*time.Location)(0x7982100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b7a3e6301, ext:982428137, loc:(*time.Location)(0x7982100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
  • Mar 25 11:32:12 minikube kubelet[3377]: E0325 11:32:12.139313 3377 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"m01.15ff88b8011dc9df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"m01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node m01 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"m01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e47df, ext:737054401, loc:(*time.Location)(0x7982100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b7a3e7401, ext:982432490, loc:(*time.Location)(0x7982100)}}, Count:3, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
  • Mar 25 11:32:12 minikube kubelet[3377]: E0325 11:32:12.537750 3377 event.go:256] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"m01.15ff88b8011daa9b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"m01", UID:"m01", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node m01 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"m01"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b6b9e289b, ext:737046396, loc:(*time.Location)(0x7982100)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf96ee2b94127213, ext:1342014197, loc:(*time.Location)(0x7982100)}}, Count:4, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!)
  • Mar 25 11:32:21 minikube kubelet[3377]: I0325 11:32:21.837275 3377 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-729fr" (UniqueName: "kubernetes.io/secret/1f0c35c6-2510-4079-a143-91c60177ac4a-kube-proxy-token-729fr") pod "kube-proxy-4btg2" (UID: "1f0c35c6-2510-4079-a143-91c60177ac4a")
  • Mar 25 11:32:21 minikube kubelet[3377]: I0325 11:32:21.839511 3377 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-8rv7v" (UniqueName: "kubernetes.io/secret/466c66fc-3481-4a7a-9f58-dfc8d5767fa4-storage-provisioner-token-8rv7v") pod "storage-provisioner" (UID: "466c66fc-3481-4a7a-9f58-dfc8d5767fa4")
  • Mar 25 11:32:21 minikube kubelet[3377]: I0325 11:32:21.839683 3377 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/1f0c35c6-2510-4079-a143-91c60177ac4a-kube-proxy") pod "kube-proxy-4btg2" (UID: "1f0c35c6-2510-4079-a143-91c60177ac4a")
  • Mar 25 11:32:21 minikube kubelet[3377]: I0325 11:32:21.839831 3377 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/1f0c35c6-2510-4079-a143-91c60177ac4a-xtables-lock") pod "kube-proxy-4btg2" (UID: "1f0c35c6-2510-4079-a143-91c60177ac4a")
  • Mar 25 11:32:21 minikube kubelet[3377]: I0325 11:32:21.840991 3377 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/466c66fc-3481-4a7a-9f58-dfc8d5767fa4-tmp") pod "storage-provisioner" (UID: "466c66fc-3481-4a7a-9f58-dfc8d5767fa4")
  • Mar 25 11:32:21 minikube kubelet[3377]: I0325 11:32:21.841350 3377 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/1f0c35c6-2510-4079-a143-91c60177ac4a-lib-modules") pod "kube-proxy-4btg2" (UID: "1f0c35c6-2510-4079-a143-91c60177ac4a")
  • Mar 25 11:32:22 minikube kubelet[3377]: I0325 11:32:22.373137 3377 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/58de67e8-49e5-48a2-b92d-c8e1b74b8fcc-config-volume") pod "coredns-5644d7b6d9-rgsvj" (UID: "58de67e8-49e5-48a2-b92d-c8e1b74b8fcc")
  • Mar 25 11:32:22 minikube kubelet[3377]: I0325 11:32:22.373341 3377 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-bvd8m" (UniqueName: "kubernetes.io/secret/58de67e8-49e5-48a2-b92d-c8e1b74b8fcc-coredns-token-bvd8m") pod "coredns-5644d7b6d9-rgsvj" (UID: "58de67e8-49e5-48a2-b92d-c8e1b74b8fcc")
  • Mar 25 11:32:22 minikube kubelet[3377]: I0325 11:32:22.373620 3377 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e68f2f60-0da4-41c7-bef4-593270ce4e80-config-volume") pod "coredns-5644d7b6d9-g6vv2" (UID: "e68f2f60-0da4-41c7-bef4-593270ce4e80")
  • Mar 25 11:32:22 minikube kubelet[3377]: I0325 11:32:22.373781 3377 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-bvd8m" (UniqueName: "kubernetes.io/secret/e68f2f60-0da4-41c7-bef4-593270ce4e80-coredns-token-bvd8m") pod "coredns-5644d7b6d9-g6vv2" (UID: "e68f2f60-0da4-41c7-bef4-593270ce4e80")
  • Mar 25 11:32:22 minikube kubelet[3377]: W0325 11:32:22.829020 3377 pod_container_deletor.go:75] Container "bb8a55695b34bb1bc232b6e10e5a4a234e30233bac34ff9f150bd844c2cb3bc2" not found in pod's containers
  • Mar 25 11:32:22 minikube kubelet[3377]: W0325 11:32:22.851001 3377 pod_container_deletor.go:75] Container "211c6828c0e1bf9343b2c91d4627992a4f2dc44a43a8ff300c5dc15564baa10d" not found in pod's containers
  • Mar 25 11:32:24 minikube kubelet[3377]: W0325 11:32:24.477384 3377 pod_container_deletor.go:75] Container "a3a27c58edec6b0c540f03a8d4c91225c21bf660b0290f190f403e3851704299" not found in pod's containers
  • Mar 25 11:32:24 minikube kubelet[3377]: W0325 11:32:24.480682 3377 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-g6vv2 through plugin: invalid network status for
  • Mar 25 11:32:24 minikube kubelet[3377]: W0325 11:32:24.490565 3377 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-rgsvj through plugin: invalid network status for
  • Mar 25 11:32:24 minikube kubelet[3377]: W0325 11:32:24.573251 3377 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-rgsvj through plugin: invalid network status for
  • Mar 25 11:32:24 minikube kubelet[3377]: W0325 11:32:24.575926 3377 pod_container_deletor.go:75] Container "468be718b18a5bc36ec2706f1cd2567e9ccebb38edae8ce4c8b04bd048cc43a2" not found in pod's containers
  • Mar 25 11:32:25 minikube kubelet[3377]: W0325 11:32:25.649430 3377 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-rgsvj through plugin: invalid network status for
  • Mar 25 11:32:25 minikube kubelet[3377]: W0325 11:32:25.696740 3377 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-5644d7b6d9-g6vv2 through plugin: invalid network status for
  • Mar 25 11:32:27 minikube kubelet[3377]: I0325 11:32:27.131045 3377 transport.go:132] certificate rotation detected, shutting down client connections to start using new credentials
  • ==> storage-provisioner [13a26ccae63b] <==

The operating system version:
Windows 10 Enterprise
VirtualBox-6.1.4-136177
kubectl 1.16.3

@tstromberg tstromberg added area/dns DNS issues needs-faq-entry Things that could use documentation in a FAQ labels Mar 25, 2020
@tstromberg
Contributor

The good news is that this is only a warning that something unusual is going on, which may prevent you from pulling new images later. minikube should have everything it needs to function offline.

It is a bit curious though that you were able to use curl: was that from within the minikube VM, or on your host?

This comes up often enough that I nominate it as needing a FAQ entry.
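For reference, one way to answer that question is to run the lookup from inside the VM over `minikube ssh`. A minimal sketch (assuming `minikube ssh -- <command>` pass-through works on your version; the `run` callable is injectable so the logic can be exercised without a live VM):

```python
import subprocess


def vm_can_resolve(host="k8s.gcr.io", run=subprocess.run):
    """Return True if the minikube VM can resolve `host` via DNS.

    Runs `nslookup` inside the VM through `minikube ssh`. The `run`
    parameter defaults to subprocess.run but can be swapped out for
    testing without minikube installed.
    """
    result = run(
        ["minikube", "ssh", "--", "nslookup", host],
        capture_output=True,
    )
    return result.returncode == 0
```

If this returns False while `curl` works on the host, the DNS problem is confined to the VM, which is exactly the case the warning is about.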

@tstromberg tstromberg added the kind/documentation Categorizes issue or PR as related to documentation. label Mar 25, 2020
@tstromberg tstromberg changed the title VM is unable to access k8s.gcr.io Warning: VM is unable to access k8s.gcr.io Mar 25, 2020
@eoinreilly93
Author

eoinreilly93 commented Mar 25, 2020

That's great, thanks for the quick response!

I ran that curl command from my host machine. Should I try from within the VM instead?

Would it be possible to update the information located here: https://minikube.sigs.k8s.io/docs/reference/disk_cache/? The reason I was unsure whether all the images had downloaded correctly is that my cache directory structure looks slightly different from the one outlined on that page. I know the page describes version 1.0, but updating it to the latest version (or at least documenting the layouts for different versions) would be very helpful to others!
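As a quick sanity check before copying the cache across, one can simply walk the directory and compare the file list between the connected and air-gapped machines. A small sketch (the `~/.minikube/cache` path and its internal layout vary by minikube version, so treat the exact structure as an assumption):

```python
from pathlib import Path


def list_cached_files(cache_dir):
    """Return sorted relative paths of every file under the cache directory,
    e.g. cached images and the kubernetes binaries for the pinned version."""
    root = Path(cache_dir)
    return sorted(
        str(p.relative_to(root)) for p in root.rglob("*") if p.is_file()
    )
```

Running this against `.minikube/cache` on both machines and diffing the output shows immediately whether anything is missing, regardless of how the layout has changed between versions.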
