
docker macOS: This control plane is not running! (state=Stopped) #7296

Closed
cwansart opened this issue Mar 28, 2020 · 20 comments · Fixed by #7310
Labels
co/docker-driver, kind/bug, os/macos, priority/important-soon

Comments

@cwansart

I tried to open a tunnel to use the LoadBalancer as described in the documentation (https://minikube.sigs.k8s.io/docs/tasks/loadbalancer/). It currently fails on my MacBook.

The exact command to reproduce the issue:

```
$ minikube start --container-runtime=docker --driver=docker
$ minikube tunnel
```
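For context, `minikube tunnel` only has something to route once a Service of type `LoadBalancer` exists in the cluster. A minimal manifest along the lines of the linked documentation might look like the sketch below; the name, selector, and port are illustrative assumptions, not values taken from this report:

```yaml
# Hypothetical Service used to exercise `minikube tunnel`.
# Name, selector, and ports are placeholder assumptions.
apiVersion: v1
kind: Service
metadata:
  name: hello-minikube
spec:
  type: LoadBalancer
  selector:
    app: hello-minikube
  ports:
    - port: 8080
      targetPort: 8080
```

With the tunnel running, such a Service would normally receive an external IP visible via `kubectl get svc`; here the tunnel fails before that point.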

The full output of the command that failed:

```
$ minikube start --container-runtime=docker --driver=docker
😄 minikube v1.9.0 on Darwin 10.14.6
✨ Using the docker driver based on user configuration
🚜 Pulling base image ...
🔥 Creating Kubernetes in docker container with (CPUs=2) (4 available), Memory=4000MB (5948MB available) ...
🐳 Preparing Kubernetes v1.18.0 on Docker 19.03.2 ...
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🌟 Enabling addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
```

```
$ minikube tunnel
🤷 This control plane is not running! (state=Stopped)
❗ This is unusual - you may want to investigate using "minikube logs"
👉 To fix this, run: minikube start
```

The output of the minikube logs command:

==> Docker <==
-- Logs begin at Sat 2020-03-28 09:38:06 UTC, end at Sat 2020-03-28 09:41:14 UTC. --
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.465616600Z" level=info msg="loading plugin "io.containerd.service.v1.tasks-service"..." type=io.containerd.service.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.465744804Z" level=info msg="loading plugin "io.containerd.internal.v1.restart"..." type=io.containerd.internal.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.465889093Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.465992204Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466068011Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466140591Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466213423Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466291643Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466364839Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466441731Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466514690Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466665718Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466793300Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466887456Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.466961366Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.467242373Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.467388174Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.467619476Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.467860326Z" level=info msg="containerd successfully booted in 0.042684s"
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.473027087Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0001300d0, READY" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.475899757Z" level=info msg="parsed scheme: "unix"" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.476008023Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.476090212Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.476165162Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.476339363Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006665b0, CONNECTING" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.476780598Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc0006665b0, READY" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.476416046Z" level=info msg="blockingPicker: the picked transport is not ready, loop back to repick" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.477595125Z" level=info msg="parsed scheme: "unix"" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.477636157Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.477653036Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.477661956Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.477701728Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000130ab0, CONNECTING" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.478118907Z" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc000130ab0, READY" module=grpc
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.481828312Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.491798318Z" level=info msg="Loading containers: start."
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.625413967Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.18.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.684470479Z" level=info msg="Loading containers: done."
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.706302751Z" level=info msg="Docker daemon" commit=6a30dfca03 graphdriver(s)=overlay2 version=19.03.2
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.706395685Z" level=info msg="Daemon has completed initialization"
Mar 28 09:38:18 minikube systemd[1]: Started Docker Application Container Engine.
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.732315007Z" level=info msg="API listen on /var/run/docker.sock"
Mar 28 09:38:18 minikube dockerd[504]: time="2020-03-28T09:38:18.733326173Z" level=info msg="API listen on [::]:2376"
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.399033579Z" level=info msg="shim containerd-shim started" address=/containerd-shim/3c4f3aac345dcafd53d2f0b47a22c028f032e4b5018dd7be2bba090579788683.sock debug=false pid=1643
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.429207998Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a80ced283495b5a9fe979978727190584ccbff8c00b8df5ab97944598a386cd1.sock debug=false pid=1662
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.438550687Z" level=info msg="shim containerd-shim started" address=/containerd-shim/01925aeed00689a217f4b5ed448385a2f34c904e9b02e0378618eeb328f44972.sock debug=false pid=1669
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.439814411Z" level=info msg="shim containerd-shim started" address=/containerd-shim/a6834529cace5a9d99dce23b26f6e416d1c0903bab0ef97febfa3c27cf820c89.sock debug=false pid=1678
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.703189263Z" level=info msg="shim containerd-shim started" address=/containerd-shim/1980b36e17f2fa019851afe74fc5aa9eb09794046ffd7b50958c362d97e84789.sock debug=false pid=1792
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.737713517Z" level=info msg="shim containerd-shim started" address=/containerd-shim/ed31bcc2b81462de1b4cb4beac02f3523b139381e5108722a4542d08cd65af74.sock debug=false pid=1812
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.744742173Z" level=info msg="shim containerd-shim started" address=/containerd-shim/7caced97bdb36a1e1b3ce62578f116e80e67ddf42b972319e4eca704895954e7.sock debug=false pid=1816
Mar 28 09:38:35 minikube dockerd[504]: time="2020-03-28T09:38:35.807439251Z" level=info msg="shim containerd-shim started" address=/containerd-shim/5728ee147335539cf9df33e3b00b12e483f1509ed05a304c74678fc716f5577b.sock debug=false pid=1857
Mar 28 09:39:02 minikube dockerd[504]: time="2020-03-28T09:39:02.916872491Z" level=info msg="shim containerd-shim started" address=/containerd-shim/24a986dcd63f3e37c38c2c107317fbdf1c8ce7e07bcb585f612e2cf01aebd65a.sock debug=false pid=2789
Mar 28 09:39:02 minikube dockerd[504]: time="2020-03-28T09:39:02.931445705Z" level=info msg="shim containerd-shim started" address=/containerd-shim/b2b3e6902b8faa44b57ddfa15cc1d9f3d0ce1291dcb0ef81f4b2070e892bec4f.sock debug=false pid=2801
Mar 28 09:39:03 minikube dockerd[504]: time="2020-03-28T09:39:03.216035019Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e4c32ac895b091f0ccce28c9858af808357da9f0b3fadc45e52027e9ca6879fe.sock debug=false pid=2870
Mar 28 09:39:03 minikube dockerd[504]: time="2020-03-28T09:39:03.222698158Z" level=info msg="shim containerd-shim started" address=/containerd-shim/8914f15eee61ec2de24cfdfaedc3030d73f03c0ac25e0312ccc2a32ad4c33c94.sock debug=false pid=2877
Mar 28 09:39:04 minikube dockerd[504]: time="2020-03-28T09:39:04.482886617Z" level=info msg="shim containerd-shim started" address=/containerd-shim/db9e6b306529b7ca4eb500ec6526f1ab873573fa4ffb479b9e0f996a0317c350.sock debug=false pid=3028
Mar 28 09:39:04 minikube dockerd[504]: time="2020-03-28T09:39:04.496947832Z" level=info msg="shim containerd-shim started" address=/containerd-shim/4e07753b3bfdf494c4716a9dee58ce1d977097b62ebeb156cd53472627303828.sock debug=false pid=3042
Mar 28 09:39:05 minikube dockerd[504]: time="2020-03-28T09:39:05.230774083Z" level=info msg="shim containerd-shim started" address=/containerd-shim/e0a5c77d6ccd1e294eaf68bd0b2d099e3dbdc6deed40557272a4da59d4d91153.sock debug=false pid=3168
Mar 28 09:39:05 minikube dockerd[504]: time="2020-03-28T09:39:05.247032243Z" level=info msg="shim containerd-shim started" address=/containerd-shim/612e0f178c637f33c8b4ba422e1c47b1259dc110cf28d04653ed18c31bf77777.sock debug=false pid=3183
Mar 28 09:39:10 minikube dockerd[504]: time="2020-03-28T09:39:10.798701611Z" level=info msg="shim containerd-shim started" address=/containerd-shim/472c518db7516c39702637337526ca18e652db0d56f5d5ec434794da2c84f2c7.sock debug=false pid=3400
Mar 28 09:39:10 minikube dockerd[504]: time="2020-03-28T09:39:10.968559127Z" level=info msg="shim containerd-shim started" address=/containerd-shim/322dc68c84414a30ec572d1dbb12cd7a7a10dba45c9a5a3fae59239daf0dfae9.sock debug=false pid=3430

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
0ef5a4adebde8 4689081edb103 2 minutes ago Running storage-provisioner 0 f3b145f1a0d95
d93a9441a43ca 67da37a9a360e 2 minutes ago Running coredns 0 0198de6532fe9
77658c66a2efd 67da37a9a360e 2 minutes ago Running coredns 0 7ddc004dc5f41
86eb97648b187 aa67fec7d7ef7 2 minutes ago Running kindnet-cni 0 1b95dcb8b2d6c
2586b1911767a 43940c34f24f3 2 minutes ago Running kube-proxy 0 b1b4292126068
222d9cc57a5f9 303ce5db0e90d 2 minutes ago Running etcd 0 a6fb8318314b0
b5f8ff08f2df6 d3e55153f52fb 2 minutes ago Running kube-controller-manager 0 22ff38cb17bf3
ed551651020b2 a31f78c7c8ce1 2 minutes ago Running kube-scheduler 0 11ca70aa0d5cc
9f597719254a9 74060cea7f704 2 minutes ago Running kube-apiserver 0 bbf9f4c1f7304

==> coredns [77658c66a2ef] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> coredns [d93a9441a43c] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.7
linux/amd64, go1.13.6, da7f65b

==> describe nodes <==
Name: minikube
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=minikube
kubernetes.io/os=linux
minikube.k8s.io/commit=48fefd43444d2f8852f527c78f0141b377b1e42a
minikube.k8s.io/name=minikube
minikube.k8s.io/updated_at=2020_03_28T10_38_44_0700
minikube.k8s.io/version=v1.9.0
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Sat, 28 Mar 2020 09:38:40 +0000
Taints:
Unschedulable: false
Lease:
HolderIdentity: minikube
AcquireTime:
RenewTime: Sat, 28 Mar 2020 09:41:14 +0000
Conditions:
Type Status LastHeartbeatTime LastTransitionTime Reason Message


MemoryPressure False Sat, 28 Mar 2020 09:39:14 +0000 Sat, 28 Mar 2020 09:38:36 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available
DiskPressure False Sat, 28 Mar 2020 09:39:14 +0000 Sat, 28 Mar 2020 09:38:36 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure
PIDPressure False Sat, 28 Mar 2020 09:39:14 +0000 Sat, 28 Mar 2020 09:38:36 +0000 KubeletHasSufficientPID kubelet has sufficient PID available
Ready True Sat, 28 Mar 2020 09:39:14 +0000 Sat, 28 Mar 2020 09:38:54 +0000 KubeletReady kubelet is posting ready status
Addresses:
InternalIP: 172.17.0.2
Hostname: minikube
Capacity:
cpu: 4
ephemeral-storage: 61255492Ki
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 6091056Ki
pods: 110
Allocatable:
cpu: 4
ephemeral-storage: 56453061334
hugepages-1Gi: 0
hugepages-2Mi: 0
memory: 5988656Ki
pods: 110
System Info:
Machine ID: ea7877d5ddb44eccbd34b60333376efb
System UUID: bbadecaf-4cdf-4bdd-984c-81ac84fa3b6f
Boot ID: ae5d8a5d-3015-4e39-9ee4-7d2967aed3a7
Kernel Version: 4.19.76-linuxkit
OS Image: Ubuntu 19.10
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://19.3.2
Kubelet Version: v1.18.0
Kube-Proxy Version: v1.18.0
PodCIDR: 10.244.0.0/24
PodCIDRs: 10.244.0.0/24
Non-terminated Pods: (9 in total)
Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE


kube-system coredns-66bff467f8-txjxd 100m (2%) 0 (0%) 70Mi (1%) 170Mi (2%) 2m14s
kube-system coredns-66bff467f8-zs5ms 100m (2%) 0 (0%) 70Mi (1%) 170Mi (2%) 2m14s
kube-system etcd-minikube 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m31s
kube-system kindnet-zbng4 100m (2%) 100m (2%) 50Mi (0%) 50Mi (0%) 2m14s
kube-system kube-apiserver-minikube 250m (6%) 0 (0%) 0 (0%) 0 (0%) 2m31s
kube-system kube-controller-manager-minikube 200m (5%) 0 (0%) 0 (0%) 0 (0%) 2m31s
kube-system kube-proxy-t5bhd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m14s
kube-system kube-scheduler-minikube 100m (2%) 0 (0%) 0 (0%) 0 (0%) 2m31s
kube-system storage-provisioner 0 (0%) 0 (0%) 0 (0%) 0 (0%) 2m5s
Allocated resources:
(Total limits may be over 100 percent, i.e., overcommitted.)
Resource Requests Limits


cpu 850m (21%) 100m (2%)
memory 190Mi (3%) 390Mi (6%)
ephemeral-storage 0 (0%) 0 (0%)
hugepages-1Gi 0 (0%) 0 (0%)
hugepages-2Mi 0 (0%) 0 (0%)
Events:
Type Reason Age From Message


Normal Starting 2m41s kubelet, minikube Starting kubelet.
Warning ImageGCFailed 2m41s kubelet, minikube failed to get imageFs info: unable to find data in memory cache
Normal NodeHasSufficientMemory 2m41s (x3 over 2m41s) kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal NodeHasNoDiskPressure 2m41s (x3 over 2m41s) kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m41s (x2 over 2m41s) kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeAllocatableEnforced 2m41s kubelet, minikube Updated Node Allocatable limit across pods
Normal NodeHasSufficientMemory 2m31s kubelet, minikube Node minikube status is now: NodeHasSufficientMemory
Normal Starting 2m31s kubelet, minikube Starting kubelet.
Normal NodeHasNoDiskPressure 2m31s kubelet, minikube Node minikube status is now: NodeHasNoDiskPressure
Normal NodeHasSufficientPID 2m31s kubelet, minikube Node minikube status is now: NodeHasSufficientPID
Normal NodeNotReady 2m31s kubelet, minikube Node minikube status is now: NodeNotReady
Normal NodeAllocatableEnforced 2m31s kubelet, minikube Updated Node Allocatable limit across pods
Normal NodeReady 2m21s kubelet, minikube Node minikube status is now: NodeReady
Normal Starting 2m12s kube-proxy, minikube Starting kube-proxy.

==> dmesg <==
[Mar28 08:30] virtio-pci 0000:00:01.0: can't derive routing for PCI INT A
[ +0.001161] virtio-pci 0000:00:01.0: PCI INT A: no GSI
[ +0.003167] virtio-pci 0000:00:07.0: can't derive routing for PCI INT A
[ +0.001123] virtio-pci 0000:00:07.0: PCI INT A: no GSI
[ +0.051801] Hangcheck: starting hangcheck timer 0.9.1 (tick is 180 seconds, margin is 60 seconds).
[ +0.014755] ahci 0000:00:02.0: can't derive routing for PCI INT A
[ +0.000851] ahci 0000:00:02.0: PCI INT A: no GSI
[ +0.612184] i8042: Can't read CTR while initializing i8042
[ +0.000833] i8042: probe of i8042 failed with error -5
[ +0.001747] ata1.00: ATA Identify Device Log not supported
[ +0.000001] ata1.00: Security Log not supported
[ +0.003449] ata1.00: ATA Identify Device Log not supported
[ +0.001311] ata1.00: Security Log not supported
[ +0.002588] ACPI Error: Could not enable RealTimeClock event (20180810/evxfevnt-184)
[ +0.001368] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20180810/evxface-620)
[ +0.210797] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +0.019778] FAT-fs (sr0): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +3.721949] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!
[ +0.074232] FAT-fs (sr2): utf8 is not a recommended IO charset for FAT filesystems, filesystem will be case sensitive!

==> etcd [222d9cc57a5f] <==
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-28 09:38:36.208919 I | etcdmain: etcd Version: 3.4.3
2020-03-28 09:38:36.209078 I | etcdmain: Git SHA: 3cf2f69b5
2020-03-28 09:38:36.209195 I | etcdmain: Go Version: go1.12.12
2020-03-28 09:38:36.209255 I | etcdmain: Go OS/Arch: linux/amd64
2020-03-28 09:38:36.209393 I | etcdmain: setting maximum number of CPUs to 4, total number of available CPUs is 4
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-03-28 09:38:36.209975 I | embed: peerTLS: cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-28 09:38:36.211017 I | embed: name = minikube
2020-03-28 09:38:36.211148 I | embed: data dir = /var/lib/minikube/etcd
2020-03-28 09:38:36.211228 I | embed: member dir = /var/lib/minikube/etcd/member
2020-03-28 09:38:36.211300 I | embed: heartbeat = 100ms
2020-03-28 09:38:36.211370 I | embed: election = 1000ms
2020-03-28 09:38:36.211439 I | embed: snapshot count = 10000
2020-03-28 09:38:36.211515 I | embed: advertise client URLs = https://172.17.0.2:2379
2020-03-28 09:38:36.220648 I | etcdserver: starting member b8e14bda2255bc24 in cluster 38b0e74a458e7a1f
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 switched to configuration voters=()
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 became follower at term 0
raft2020/03/28 09:38:36 INFO: newRaft b8e14bda2255bc24 [peers: [], term: 0, commit: 0, applied: 0, lastindex: 0, lastterm: 0]
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 became follower at term 1
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
2020-03-28 09:38:36.234756 W | auth: simple token is not cryptographically signed
2020-03-28 09:38:36.238881 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
2020-03-28 09:38:36.241727 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file =
2020-03-28 09:38:36.242053 I | embed: listening for metrics on http://127.0.0.1:2381
2020-03-28 09:38:36.242451 I | etcdserver: b8e14bda2255bc24 as single-node; fast-forwarding 9 ticks (election ticks 10)
2020-03-28 09:38:36.243368 I | embed: listening for peers on 172.17.0.2:2380
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 switched to configuration voters=(13322012572989635620)
2020-03-28 09:38:36.244290 I | etcdserver/membership: added member b8e14bda2255bc24 [https://172.17.0.2:2380] to cluster 38b0e74a458e7a1f
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 is starting a new election at term 1
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 became candidate at term 2
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 received MsgVoteResp from b8e14bda2255bc24 at term 2
raft2020/03/28 09:38:36 INFO: b8e14bda2255bc24 became leader at term 2
raft2020/03/28 09:38:36 INFO: raft.node: b8e14bda2255bc24 elected leader b8e14bda2255bc24 at term 2
2020-03-28 09:38:36.826449 I | etcdserver: published {Name:minikube ClientURLs:[https://172.17.0.2:2379]} to cluster 38b0e74a458e7a1f
2020-03-28 09:38:36.826758 I | embed: ready to serve client requests
2020-03-28 09:38:36.827032 I | etcdserver: setting up the initial cluster version to 3.4
2020-03-28 09:38:36.831215 I | embed: ready to serve client requests
2020-03-28 09:38:36.832483 I | embed: serving client requests on 127.0.0.1:2379
2020-03-28 09:38:36.833318 I | embed: serving client requests on 172.17.0.2:2379
2020-03-28 09:38:36.841334 N | etcdserver/membership: set the initial cluster version to 3.4
2020-03-28 09:38:36.897047 I | etcdserver/api: enabled capabilities for version 3.4
2020-03-28 09:38:57.492383 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-controller-manager" " with result "range_response_count:1 size:506" took too long (218.835202ms) to execute
2020-03-28 09:38:57.492418 W | etcdserver: read-only range request "key:"/registry/minions/" range_end:"/registry/minions0" " with result "range_response_count:1 size:5167" took too long (140.980503ms) to execute
2020-03-28 09:39:00.090050 W | etcdserver: read-only range request "key:"/registry/serviceaccounts/kube-system/replicaset-controller" " with result "range_response_count:1 size:210" took too long (189.603111ms) to execute
2020-03-28 09:39:02.522064 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-scheduler" " with result "range_response_count:1 size:480" took too long (389.983443ms) to execute
2020-03-28 09:39:02.522122 W | etcdserver: read-only range request "key:"/registry/minions/minikube" " with result "range_response_count:1 size:5394" took too long (373.291648ms) to execute
2020-03-28 09:39:02.522417 W | etcdserver: read-only range request "key:"/registry/namespaces/default" " with result "range_response_count:1 size:257" took too long (189.432827ms) to execute
2020-03-28 09:39:02.726565 W | etcdserver: read-only range request "key:"/registry/services/endpoints/default/kubernetes" " with result "range_response_count:1 size:286" took too long (146.232111ms) to execute
2020-03-28 09:39:10.751531 W | etcdserver: read-only range request "key:"/registry/leases/kube-system/kube-scheduler" " with result "range_response_count:1 size:480" took too long (148.581724ms) to execute

==> kernel <==
09:41:17 up 1:10, 0 users, load average: 0.36, 0.68, 0.55
Linux minikube 4.19.76-linuxkit #1 SMP Thu Oct 17 19:31:58 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"

==> kube-apiserver [9f597719254a] <==
W0328 09:38:38.881376 1 genericapiserver.go:409] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0328 09:38:38.892702 1 genericapiserver.go:409] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0328 09:38:38.910286 1 genericapiserver.go:409] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0328 09:38:38.920234 1 genericapiserver.go:409] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0328 09:38:38.944454 1 genericapiserver.go:409] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0328 09:38:38.958766 1 genericapiserver.go:409] Skipping API apps/v1beta2 because it has no resources.
W0328 09:38:38.958808 1 genericapiserver.go:409] Skipping API apps/v1beta1 because it has no resources.
I0328 09:38:38.965492 1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
I0328 09:38:38.965531 1 plugins.go:161] Loaded 10 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,CertificateSubjectRestriction,ValidatingAdmissionWebhook,ResourceQuota.
I0328 09:38:38.966920 1 client.go:361] parsed scheme: "endpoint"
I0328 09:38:38.966967 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0328 09:38:38.975166 1 client.go:361] parsed scheme: "endpoint"
I0328 09:38:38.975215 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0328 09:38:40.781098 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0328 09:38:40.781162 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0328 09:38:40.781400 1 dynamic_serving_content.go:130] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0328 09:38:40.781806 1 tlsconfig.go:240] Starting DynamicServingCertificateController
I0328 09:38:40.781798 1 secure_serving.go:178] Serving securely on [::]:8443
I0328 09:38:40.782100 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0328 09:38:40.782196 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0328 09:38:40.782108 1 available_controller.go:387] Starting AvailableConditionController
I0328 09:38:40.782341 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0328 09:38:40.782624 1 controller.go:81] Starting OpenAPI AggregationController
I0328 09:38:40.785075 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0328 09:38:40.785105 1 shared_informer.go:223] Waiting for caches to sync for cluster_authentication_trust_controller
I0328 09:38:40.785133 1 autoregister_controller.go:141] Starting autoregister controller
I0328 09:38:40.785137 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0328 09:38:40.785171 1 crd_finalizer.go:266] Starting CRDFinalizer
E0328 09:38:40.789020 1 controller.go:152] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/172.17.0.2, ResourceVersion: 0, AdditionalErrorMsg:
I0328 09:38:40.819119 1 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0328 09:38:40.819191 1 dynamic_cafile_content.go:167] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0328 09:38:40.819472 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0328 09:38:40.819500 1 shared_informer.go:223] Waiting for caches to sync for crd-autoregister
I0328 09:38:40.822145 1 controller.go:86] Starting OpenAPI controller
I0328 09:38:40.822188 1 customresource_discovery_controller.go:209] Starting DiscoveryController
I0328 09:38:40.822208 1 naming_controller.go:291] Starting NamingConditionController
I0328 09:38:40.822219 1 establishing_controller.go:76] Starting EstablishingController
I0328 09:38:40.822227 1 nonstructuralschema_controller.go:186] Starting NonStructuralSchemaConditionController
I0328 09:38:40.822263 1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0328 09:38:40.894207 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0328 09:38:40.894972 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0328 09:38:40.895687 1 shared_informer.go:230] Caches are synced for cluster_authentication_trust_controller
I0328 09:38:40.895713 1 cache.go:39] Caches are synced for autoregister controller
I0328 09:38:40.922665 1 shared_informer.go:230] Caches are synced for crd-autoregister
I0328 09:38:41.782050 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0328 09:38:41.782432 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0328 09:38:41.792862 1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
I0328 09:38:41.800677 1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
I0328 09:38:41.800800 1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
I0328 09:38:42.189109 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0328 09:38:42.231574 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0328 09:38:42.351592 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [172.17.0.2]
I0328 09:38:42.352935 1 controller.go:606] quota admission added evaluator for: endpoints
I0328 09:38:42.357931 1 controller.go:606] quota admission added evaluator for: endpointslices.discovery.k8s.io
I0328 09:38:43.617131 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0328 09:38:43.632632 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0328 09:38:43.811242 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0328 09:38:43.853944 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0328 09:39:01.274275 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0328 09:39:01.557048 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps

==> kube-controller-manager [b5f8ff08f2df] <==
I0328 09:39:01.150476 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for daemonsets.apps
I0328 09:39:01.150488 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for limitranges
I0328 09:39:01.150555 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for horizontalpodautoscalers.autoscaling
I0328 09:39:01.150597 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for ingresses.networking.k8s.io
I0328 09:39:01.150639 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for cronjobs.batch
I0328 09:39:01.150706 1 resource_quota_monitor.go:228] QuotaMonitor created object count evaluator for leases.coordination.k8s.io
I0328 09:39:01.150768 1 controllermanager.go:533] Started "resourcequota"
I0328 09:39:01.150841 1 resource_quota_controller.go:272] Starting resource quota controller
I0328 09:39:01.150851 1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0328 09:39:01.150865 1 resource_quota_monitor.go:303] QuotaMonitor running
I0328 09:39:01.151290 1 shared_informer.go:223] Waiting for caches to sync for garbage collector
I0328 09:39:01.191963 1 shared_informer.go:230] Caches are synced for ReplicationController
I0328 09:39:01.196572 1 shared_informer.go:230] Caches are synced for HPA
I0328 09:39:01.199685 1 shared_informer.go:230] Caches are synced for expand
I0328 09:39:01.199812 1 shared_informer.go:230] Caches are synced for job
I0328 09:39:01.203085 1 shared_informer.go:230] Caches are synced for stateful set
I0328 09:39:01.215034 1 shared_informer.go:230] Caches are synced for namespace
I0328 09:39:01.231468 1 shared_informer.go:230] Caches are synced for bootstrap_signer
I0328 09:39:01.238290 1 shared_informer.go:230] Caches are synced for PVC protection
I0328 09:39:01.266152 1 shared_informer.go:230] Caches are synced for PV protection
I0328 09:39:01.271528 1 shared_informer.go:230] Caches are synced for deployment
I0328 09:39:01.276712 1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"6123c7f4-a62d-47b1-b8cd-a3effade9814", APIVersion:"apps/v1", ResourceVersion:"190", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-66bff467f8 to 2
I0328 09:39:01.296555 1 shared_informer.go:230] Caches are synced for service account
I0328 09:39:01.307552 1 shared_informer.go:230] Caches are synced for ReplicaSet
I0328 09:39:01.315671 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"10138dac-8a85-4fbb-bd97-5c96bcc1d5f4", APIVersion:"apps/v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-zs5ms
I0328 09:39:01.327367 1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-66bff467f8", UID:"10138dac-8a85-4fbb-bd97-5c96bcc1d5f4", APIVersion:"apps/v1", ResourceVersion:"359", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-66bff467f8-txjxd
W0328 09:39:01.504571 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0328 09:39:01.506102 1 shared_informer.go:230] Caches are synced for persistent volume
I0328 09:39:01.512313 1 shared_informer.go:230] Caches are synced for attach detach
I0328 09:39:01.521994 1 shared_informer.go:230] Caches are synced for GC
I0328 09:39:01.546440 1 shared_informer.go:230] Caches are synced for daemon sets
I0328 09:39:01.547305 1 shared_informer.go:230] Caches are synced for taint
I0328 09:39:01.547346 1 taint_manager.go:187] Starting NoExecuteTaintManager
I0328 09:39:01.547372 1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone:
W0328 09:39:01.547585 1 node_lifecycle_controller.go:1048] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0328 09:39:01.547636 1 node_lifecycle_controller.go:1249] Controller detected that zone is now in state Normal.
I0328 09:39:01.547664 1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"2671bd9f-a9f6-40b1-b197-00a5c4bc9afa", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0328 09:39:01.573408 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kindnet", UID:"a6e8ed72-f57f-4d84-ae0d-9bb4902e06c1", APIVersion:"apps/v1", ResourceVersion:"216", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kindnet-zbng4
I0328 09:39:01.595325 1 shared_informer.go:230] Caches are synced for node
I0328 09:39:01.595362 1 range_allocator.go:172] Starting range CIDR allocator
I0328 09:39:01.595368 1 shared_informer.go:223] Waiting for caches to sync for cidrallocator
I0328 09:39:01.595374 1 shared_informer.go:230] Caches are synced for cidrallocator
I0328 09:39:01.599304 1 shared_informer.go:230] Caches are synced for certificate-csrapproving
I0328 09:39:01.612334 1 event.go:278] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"12557541-b7da-4b49-aca4-116336c7452a", APIVersion:"apps/v1", ResourceVersion:"200", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-t5bhd
I0328 09:39:01.612403 1 shared_informer.go:230] Caches are synced for TTL
I0328 09:39:01.634595 1 range_allocator.go:373] Set node minikube PodCIDR to [10.244.0.0/24]
I0328 09:39:01.703091 1 shared_informer.go:230] Caches are synced for certificate-csrsigning
I0328 09:39:01.704942 1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator
I0328 09:39:01.795124 1 shared_informer.go:230] Caches are synced for disruption
I0328 09:39:01.795180 1 disruption.go:339] Sending events to api server.
E0328 09:39:01.801453 1 daemon_controller.go:292] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"12557541-b7da-4b49-aca4-116336c7452a", ResourceVersion:"200", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63720985123, loc:(*time.Location)(0x6d021e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc00012d660), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00012d680)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc00012d6a0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), 
Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc00092fb80), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00012d6e0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), 
AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc00012d720), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.0", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc00012d7e0)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(0xc000d82f50), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0002c7608), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000380fc0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, 
HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc000130de8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc0002c7688)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:0, NumberReady:0, ObservedGeneration:0, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:0, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
I0328 09:39:01.845728 1 shared_informer.go:230] Caches are synced for endpoint_slice
I0328 09:39:01.846387 1 shared_informer.go:230] Caches are synced for endpoint
I0328 09:39:01.851925 1 shared_informer.go:230] Caches are synced for garbage collector
I0328 09:39:01.893238 1 shared_informer.go:230] Caches are synced for resource quota
I0328 09:39:01.893785 1 shared_informer.go:230] Caches are synced for garbage collector
I0328 09:39:01.893819 1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0328 09:39:02.298773 1 request.go:621] Throttling request took 1.049085992s, request: GET:https://172.17.0.2:8443/apis/rbac.authorization.k8s.io/v1?timeout=32s
I0328 09:39:02.900126 1 shared_informer.go:223] Waiting for caches to sync for resource quota
I0328 09:39:02.900160 1 shared_informer.go:230] Caches are synced for resource quota

==> kube-proxy [2586b1911767] <==
W0328 09:39:03.448258 1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
I0328 09:39:03.503576 1 node.go:136] Successfully retrieved node IP: 172.17.0.2
I0328 09:39:03.503631 1 server_others.go:186] Using iptables Proxier.
I0328 09:39:03.504727 1 server.go:583] Version: v1.18.0
I0328 09:39:03.505230 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0328 09:39:03.505319 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0328 09:39:03.505370 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0328 09:39:03.513398 1 config.go:133] Starting endpoints config controller
I0328 09:39:03.513417 1 shared_informer.go:223] Waiting for caches to sync for endpoints config
I0328 09:39:03.514926 1 config.go:315] Starting service config controller
I0328 09:39:03.514943 1 shared_informer.go:223] Waiting for caches to sync for service config
I0328 09:39:03.614774 1 shared_informer.go:230] Caches are synced for endpoints config
I0328 09:39:03.615772 1 shared_informer.go:230] Caches are synced for service config

==> kube-scheduler [ed551651020b] <==
I0328 09:38:36.243873 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0328 09:38:36.244008 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0328 09:38:37.231503 1 serving.go:313] Generated self-signed cert in-memory
W0328 09:38:40.828334 1 authentication.go:349] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0328 09:38:40.828417 1 authentication.go:297] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0328 09:38:40.828635 1 authentication.go:298] Continuing without authentication configuration. This may treat all requests as anonymous.
W0328 09:38:40.828696 1 authentication.go:299] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
I0328 09:38:40.845392 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
I0328 09:38:40.845895 1 registry.go:150] Registering EvenPodsSpread predicate and priority function
W0328 09:38:40.894162 1 authorization.go:47] Authorization is disabled
W0328 09:38:40.894676 1 authentication.go:40] Authentication is disabled
I0328 09:38:40.894756 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0328 09:38:40.897686 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0328 09:38:40.898054 1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0328 09:38:40.898235 1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0328 09:38:40.898395 1 tlsconfig.go:240] Starting DynamicServingCertificateController
E0328 09:38:40.903863 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0328 09:38:40.904204 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0328 09:38:40.907931 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 09:38:40.911011 1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0328 09:38:40.911580 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0328 09:38:40.911702 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0328 09:38:40.911592 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0328 09:38:40.912164 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0328 09:38:40.914913 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0328 09:38:40.915280 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0328 09:38:40.915816 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0328 09:38:40.916147 1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0328 09:38:40.916721 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0328 09:38:40.917885 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0328 09:38:40.922104 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0328 09:38:40.925482 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0328 09:38:40.928720 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0328 09:38:40.929954 1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
I0328 09:38:42.898728 1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0328 09:38:43.801875 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0328 09:38:43.821643 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Sat 2020-03-28 09:38:06 UTC, end at Sat 2020-03-28 09:41:19 UTC. --
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.430356 2272 kubelet_node_status.go:73] Successfully registered node minikube
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.529587 2272 setters.go:559] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-03-28 09:38:44.529562942 +0000 UTC m=+0.925323260 LastTransitionTime:2020-03-28 09:38:44.529562942 +0000 UTC m=+0.925323260 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
Mar 28 09:38:44 minikube kubelet[2272]: E0328 09:38:44.555198 2272 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.697420 2272 cpu_manager.go:184] [cpumanager] starting with none policy
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.697465 2272 cpu_manager.go:185] [cpumanager] reconciling every 10s
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.697491 2272 state_mem.go:36] [cpumanager] initializing new in-memory state store
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.697718 2272 state_mem.go:88] [cpumanager] updated default cpuset: ""
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.697728 2272 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.697738 2272 policy_none.go:43] [cpumanager] none policy: Start
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.700460 2272 plugin_manager.go:114] Starting Kubelet Plugin Manager
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.955720 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.958117 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.960885 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.962897 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.997813 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-usr-local-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.998150 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/45e2432c538c36239dfecde67cb91065-ca-certs") pod "kube-apiserver-minikube" (UID: "45e2432c538c36239dfecde67cb91065")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.998373 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/45e2432c538c36239dfecde67cb91065-etc-ca-certificates") pod "kube-apiserver-minikube" (UID: "45e2432c538c36239dfecde67cb91065")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.998649 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-ca-certs") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.998813 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.998973 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999215 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/ca02679f24a416493e1c288b16539a55-etcd-data") pod "etcd-minikube" (UID: "ca02679f24a416493e1c288b16539a55")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999363 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/45e2432c538c36239dfecde67cb91065-k8s-certs") pod "kube-apiserver-minikube" (UID: "45e2432c538c36239dfecde67cb91065")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999570 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-local-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/45e2432c538c36239dfecde67cb91065-usr-local-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "45e2432c538c36239dfecde67cb91065")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999727 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/ca02679f24a416493e1c288b16539a55-etcd-certs") pod "etcd-minikube" (UID: "ca02679f24a416493e1c288b16539a55")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999812 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-kubeconfig") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999844 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/5795d0c442cb997ff93c49feeb9f6386-kubeconfig") pod "kube-scheduler-minikube" (UID: "5795d0c442cb997ff93c49feeb9f6386")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999874 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/45e2432c538c36239dfecde67cb91065-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "45e2432c538c36239dfecde67cb91065")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999909 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "etc-ca-certificates" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-etc-ca-certificates") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999935 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/c92479a2ea69d7c331c16a5105dd1b8c-k8s-certs") pod "kube-controller-manager-minikube" (UID: "c92479a2ea69d7c331c16a5105dd1b8c")
Mar 28 09:38:44 minikube kubelet[2272]: I0328 09:38:44.999950 2272 reconciler.go:157] Reconciler: start to sync state
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.597134 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:39:01 minikube kubelet[2272]: E0328 09:39:01.605430 2272 reflector.go:178] object-"kube-system"/"kindnet-token-w6gbc": Failed to list *v1.Secret: secrets "kindnet-token-w6gbc" is forbidden: User "system:node:minikube" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node "minikube" and this object
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.641195 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.693902 2272 kuberuntime_manager.go:978] updating runtime config through cri with podcidr 10.244.0.0/24
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.694922 2272 docker_service.go:353] docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695410 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni-cfg" (UniqueName: "kubernetes.io/host-path/df0ed0d2-969e-4c12-bb72-404a6ae006ee-cni-cfg") pod "kindnet-zbng4" (UID: "df0ed0d2-969e-4c12-bb72-404a6ae006ee")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695474 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/df0ed0d2-969e-4c12-bb72-404a6ae006ee-xtables-lock") pod "kindnet-zbng4" (UID: "df0ed0d2-969e-4c12-bb72-404a6ae006ee")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695568 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/b1bfa933-8399-4f53-be86-778ea4871a4a-xtables-lock") pod "kube-proxy-t5bhd" (UID: "b1bfa933-8399-4f53-be86-778ea4871a4a")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695597 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-zpx5k" (UniqueName: "kubernetes.io/secret/b1bfa933-8399-4f53-be86-778ea4871a4a-kube-proxy-token-zpx5k") pod "kube-proxy-t5bhd" (UID: "b1bfa933-8399-4f53-be86-778ea4871a4a")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695618 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/df0ed0d2-969e-4c12-bb72-404a6ae006ee-lib-modules") pod "kindnet-zbng4" (UID: "df0ed0d2-969e-4c12-bb72-404a6ae006ee")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695636 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kindnet-token-w6gbc" (UniqueName: "kubernetes.io/secret/df0ed0d2-969e-4c12-bb72-404a6ae006ee-kindnet-token-w6gbc") pod "kindnet-zbng4" (UID: "df0ed0d2-969e-4c12-bb72-404a6ae006ee")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695718 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/b1bfa933-8399-4f53-be86-778ea4871a4a-lib-modules") pod "kube-proxy-t5bhd" (UID: "b1bfa933-8399-4f53-be86-778ea4871a4a")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.695738 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/b1bfa933-8399-4f53-be86-778ea4871a4a-kube-proxy") pod "kube-proxy-t5bhd" (UID: "b1bfa933-8399-4f53-be86-778ea4871a4a")
Mar 28 09:39:01 minikube kubelet[2272]: I0328 09:39:01.696398 2272 kubelet_network.go:77] Setting Pod CIDR: -> 10.244.0.0/24
Mar 28 09:39:03 minikube kubelet[2272]: I0328 09:39:03.836266 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:39:03 minikube kubelet[2272]: I0328 09:39:03.843999 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:39:03 minikube kubelet[2272]: I0328 09:39:03.915611 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-lmhfz" (UniqueName: "kubernetes.io/secret/106b0f2b-83c2-4405-8fef-aba1656f343e-coredns-token-lmhfz") pod "coredns-66bff467f8-zs5ms" (UID: "106b0f2b-83c2-4405-8fef-aba1656f343e")
Mar 28 09:39:03 minikube kubelet[2272]: I0328 09:39:03.915696 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/e22968ac-8af5-4353-afcb-d00802f3155b-config-volume") pod "coredns-66bff467f8-txjxd" (UID: "e22968ac-8af5-4353-afcb-d00802f3155b")
Mar 28 09:39:03 minikube kubelet[2272]: I0328 09:39:03.915716 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-lmhfz" (UniqueName: "kubernetes.io/secret/e22968ac-8af5-4353-afcb-d00802f3155b-coredns-token-lmhfz") pod "coredns-66bff467f8-txjxd" (UID: "e22968ac-8af5-4353-afcb-d00802f3155b")
Mar 28 09:39:03 minikube kubelet[2272]: I0328 09:39:03.915732 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/106b0f2b-83c2-4405-8fef-aba1656f343e-config-volume") pod "coredns-66bff467f8-zs5ms" (UID: "106b0f2b-83c2-4405-8fef-aba1656f343e")
Mar 28 09:39:05 minikube kubelet[2272]: W0328 09:39:05.027343 2272 pod_container_deletor.go:77] Container "7ddc004dc5f41d3e1f1203e4e7c605885ef6e16f7d9bb604d4a9eeab84256611" not found in pod's containers
Mar 28 09:39:05 minikube kubelet[2272]: W0328 09:39:05.033256 2272 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-txjxd through plugin: invalid network status for
Mar 28 09:39:05 minikube kubelet[2272]: W0328 09:39:05.118346 2272 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-zs5ms through plugin: invalid network status for
Mar 28 09:39:05 minikube kubelet[2272]: W0328 09:39:05.120140 2272 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-zs5ms through plugin: invalid network status for
Mar 28 09:39:05 minikube kubelet[2272]: W0328 09:39:05.121366 2272 pod_container_deletor.go:77] Container "0198de6532fe9fdd52523dbd5c64272b5a9e72adbd45424c05fad27584b74b5b" not found in pod's containers
Mar 28 09:39:06 minikube kubelet[2272]: W0328 09:39:06.127617 2272 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-txjxd through plugin: invalid network status for
Mar 28 09:39:06 minikube kubelet[2272]: W0328 09:39:06.131999 2272 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-66bff467f8-zs5ms through plugin: invalid network status for
Mar 28 09:39:10 minikube kubelet[2272]: I0328 09:39:10.065362 2272 topology_manager.go:233] [topologymanager] Topology Admit Handler
Mar 28 09:39:10 minikube kubelet[2272]: I0328 09:39:10.240134 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/9dfbd932-f851-468e-8db0-b89b0a244254-tmp") pod "storage-provisioner" (UID: "9dfbd932-f851-468e-8db0-b89b0a244254")
Mar 28 09:39:10 minikube kubelet[2272]: I0328 09:39:10.240563 2272 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-4qdt2" (UniqueName: "kubernetes.io/secret/9dfbd932-f851-468e-8db0-b89b0a244254-storage-provisioner-token-4qdt2") pod "storage-provisioner" (UID: "9dfbd932-f851-468e-8db0-b89b0a244254")

==> storage-provisioner [0ef5a4adebde] <==

The operating system version:

macOS 10.14.6

@cwansart
Author

I just tested it on a Fedora 31 Linux machine and it works there without any issues.

@cwansart
Author

It seems to happen with minikube service commands as well.

@tstromberg
Contributor

Thank you for the detailed report. Can you include the output of:

minikube service --alsologtostderr -v=3

I chose service because it logs a lot less than tunnel. My suspicion is that Docker may not be responding fast enough, so minikube assumes it's offline. If so, restarting Docker may help, but the logs should confirm it - #7268 will improve this issue.

--driver=hyperkit will work around this issue as well.

@tstromberg tstromberg added the kind/support Categorizes issue or PR as a support question. label Mar 28, 2020
@tstromberg tstromberg changed the title minikube tunnel fails on macOS with Minikube 1.9.0 docker: tunnel: This control plane is not running! (state=Stopped) Mar 28, 2020
@tstromberg
Contributor

Your logs show that the control plane is definitely running, which is why I suspect a race condition or timeout of some sort:

9f597719254a9 74060cea7f704 2 minutes ago Running kube-apiserver 0 bbf9f4c1f7304

@cwansart
Author

Here is the output of your recommended command:

$ minikube service --alsologtostderr -v=3 foo-service
I0329 10:51:49.872578   35366 mustload.go:51] Loading cluster: minikube
I0329 10:51:50.173006   35366 host.go:65] Checking if "minikube" exists ...
I0329 10:51:50.217810   35366 kverify.go:257] Checking apiserver status ...
I0329 10:51:50.217971   35366 kic_runner.go:91] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0329 10:51:50.344810   35366 kic_runner.go:91] Run: sudo egrep ^[0-9]+:freezer: /proc/1845/cgroup
I0329 10:51:50.491271   35366 kverify.go:273] apiserver freezer: "7:freezer:/docker/c4f203c92dc767812bb2b7df3fa140aa70e660f9dcdfd3081d9678f94e4d7f7f/kubepods/burstable/pod45e2432c538c36239dfecde67cb91065/49ec82e644e386544597b11b54e920a6965081bd1f231f604c624b9a2351b068"
I0329 10:51:50.491593   35366 kic_runner.go:91] Run: sudo cat /sys/fs/cgroup/freezer/docker/c4f203c92dc767812bb2b7df3fa140aa70e660f9dcdfd3081d9678f94e4d7f7f/kubepods/burstable/pod45e2432c538c36239dfecde67cb91065/49ec82e644e386544597b11b54e920a6965081bd1f231f604c624b9a2351b068/freezer.state
I0329 10:51:50.644465   35366 kverify.go:287] freezer state: "THAWED"
I0329 10:51:50.644537   35366 kverify.go:297] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0329 10:51:55.019930   35366 kverify.go:307] stopped: https://172.17.0.2:8443/healthz: Get "https://172.17.0.2:8443/healthz": dial tcp 172.17.0.2:8443: connect: network is unreachable
🤷  This control plane is not running! (state=Stopped)
❗  This is unusual - you may want to investigate using "minikube logs"
👉  To fix this, run: minikube start

I cannot use the hyperkit driver because networking does not work with it. I have not investigated that issue yet.

@tstromberg tstromberg changed the title docker: tunnel: This control plane is not running! (state=Stopped) docker: service & tunnel: This control plane is not running! (state=Stopped) Mar 29, 2020
@tstromberg tstromberg added this to the v1.9.1 (regressions) milestone Mar 29, 2020
@tstromberg
Contributor

tstromberg commented Mar 29, 2020

I think you may have found a serious regression in the Docker driver that is almost certainly my doing. I can confirm it on my end.

It appears that for Docker on Mac, the control plane verification is checking the incorrect URL.
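
The diagnosis above can be sketched in Go: on Docker Desktop for macOS (and Windows), the container's bridge IP such as 172.17.0.2 is not routable from the host, so the health check has to go through the port Docker publishes on 127.0.0.1. The function and parameter names below are illustrative assumptions, not minikube's actual code (the real fix landed in the linked PR):

```go
package main

import "fmt"

// healthzURL picks which apiserver health endpoint to probe.
// On Docker Desktop (darwin/windows hosts), the container's bridge IP is not
// reachable from the host, so we must use the port published on 127.0.0.1.
// On Linux, the container IP is directly routable and can be used as-is.
func healthzURL(dockerDriver bool, hostOS, containerIP string, publishedPort int) string {
	if dockerDriver && (hostOS == "darwin" || hostOS == "windows") {
		return fmt.Sprintf("https://127.0.0.1:%d/healthz", publishedPort)
	}
	return fmt.Sprintf("https://%s:8443/healthz", containerIP)
}

func main() {
	// macOS host: must go through the published port, not 172.17.0.2.
	fmt.Println(healthzURL(true, "darwin", "172.17.0.2", 32768))
	// Linux host: the container IP works directly.
	fmt.Println(healthzURL(true, "linux", "172.17.0.2", 32768))
}
```

This also matches the reports in this thread: the bug reproduces on macOS and Windows but not on Linux, where probing the container IP directly succeeds.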

@cwansart
Author

Can I do anything to help fix this issue?

@tstromberg tstromberg changed the title docker: service & tunnel: This control plane is not running! (state=Stopped) docker macOS: This control plane is not running! (state=Stopped) Mar 29, 2020
@tstromberg
Contributor

tstromberg commented Mar 29, 2020

@cwansart - Sorry, I started working on a PR before I saw your reply. You could really help me by confirming whether this binary fixes your issue:

http://storage.googleapis.com/minikube-builds/7310/minikube-darwin-amd64

My apologies for the bug. I wrote the PR without considering its implications for Docker on macOS -- and our macOS integration testing hosts are broken and require physical intervention, something not easily possible due to the quarantine.

@cwansart
Author

@tstromberg Your patch works. minikube service and minikube tunnel both work fine now.
Thanks for the quick fix.

@tstromberg tstromberg added kind/bug Categorizes issue or PR as related to a bug. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. co/docker-driver Issues related to kubernetes in container os/macos and removed kind/support Categorizes issue or PR as a support question. labels Mar 29, 2020
@jkornata

I can confirm that this binary fixes service listing issue.

@Edwin-Luijten

Edwin-Luijten commented Mar 31, 2020

I'm experiencing the same on Windows:
minikube v1.9.0 on Microsoft Windows 10 Pro N 10.0.18363 Build 18363

minikube --alsologtostderr -v=3 service redacted-account-web
I0331 14:15:10.391902 22188 mustload.go:51] Loading cluster: minikube
I0331 14:15:10.504903 22188 host.go:65] Checking if "minikube" exists ...
I0331 14:15:10.610903 22188 kverify.go:257] Checking apiserver status ...
I0331 14:15:10.644903 22188 kic_runner.go:91] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0331 14:15:10.827903 22188 kic_runner.go:91] Run: sudo egrep ^[0-9]+:freezer: /proc/1778/cgroup
I0331 14:15:10.987902 22188 kverify.go:273] apiserver freezer: "7:freezer:/docker/5ec1532f2bb2770cfbe63206923f931551e43843819bfccc7824899331ffe8ea/kubepods/burstable/pod45e2432c538c36239dfecde67cb91065/30447b34c30e1d22b26f8c6ada54c783ea1c662affb02f54d261bc53bc02292d"
I0331 14:15:11.016901 22188 kic_runner.go:91] Run: sudo cat /sys/fs/cgroup/freezer/docker/5ec1532f2bb2770cfbe63206923f931551e43843819bfccc7824899331ffe8ea/kubepods/burstable/pod45e2432c538c36239dfecde67cb91065/30447b34c30e1d22b26f8c6ada54c783ea1c662affb02f54d261bc53bc02292d/freezer.state
I0331 14:15:11.165902 22188 kverify.go:287] freezer state: "THAWED"
I0331 14:15:11.165902 22188 kverify.go:297] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0331 14:15:32.175867 22188 kverify.go:307] stopped: https://172.17.0.2:8443/healthz: Get https://172.17.0.2:8443/healthz: dial tcp 172.17.0.2:8443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

* This control plane is not running! (state=Stopped)
! This is unusual - you may want to investigate using "minikube logs"
* To fix this, run: minikube start

@jimdevops19

Hey,
I'm here because I still see this exact error (This control plane is not running! state=Stopped) when I try to run minikube dashboard.
minikube service works fine.
I downloaded the amd64 executable file today.

@ninjajazza

Yep, I'm seeing the same thing with minikube 1.9.0 on Windows 10 Enterprise 1809 build 17762.1098

C:\Users\Jarrett>minikube service webapp --alsologtostderr -v=3
W0401 14:19:56.833606   42200 root.go:248] Error reading config file at C:\Users\Jarrett\.minikube\config\config.json: open C:\Users\Jarrett\.minikube\config\config.json: The system cannot find the file specified.
I0401 14:19:56.839586   42200 mustload.go:51] Loading cluster: minikube
I0401 14:19:57.100631   42200 host.go:65] Checking if "minikube" exists ...
I0401 14:19:57.400762   42200 kverify.go:257] Checking apiserver status ...
I0401 14:19:57.499903   42200 kic_runner.go:91] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0401 14:19:57.959995   42200 kic_runner.go:91] Run: sudo egrep ^[0-9]+:freezer: /proc/1650/cgroup
I0401 14:19:58.326090   42200 kverify.go:273] apiserver freezer: "7:freezer:/docker/a9ddc9779422eb49b4ccdb9d034a994c6e4a3ca9f39321a6837caba22925621e/kubepods/burstable/pod45e2432c538c36239dfecde67cb91065/0380130f50bb09728c196a3bcb4ed8406455a63607a08ff7968ebc455d69cdf8"
I0401 14:19:58.441091   42200 kic_runner.go:91] Run: sudo cat /sys/fs/cgroup/freezer/docker/a9ddc9779422eb49b4ccdb9d034a994c6e4a3ca9f39321a6837caba22925621e/kubepods/burstable/pod45e2432c538c36239dfecde67cb91065/0380130f50bb09728c196a3bcb4ed8406455a63607a08ff7968ebc455d69cdf8/freezer.state
I0401 14:19:58.763152   42200 kverify.go:287] freezer state: "THAWED"
I0401 14:19:58.763152   42200 kverify.go:297] Checking apiserver healthz at https://172.17.0.2:8443/healthz ...
I0401 14:20:19.772228   42200 kverify.go:307] stopped: https://172.17.0.2:8443/healthz: Get https://172.17.0.2:8443/healthz: dial tcp 172.17.0.2:8443: connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.
* This control plane is not running! (state=Stopped)
! This is unusual - you may want to investigate using "minikube logs"
  - To fix this, run: minikube start

Seems to occur for other commands like minikube dashboard also.

@cwansart
Author

cwansart commented Apr 1, 2020

The minikube dashboard works with @tstromberg's new binary for macOS (#7296 (comment)); I just tested it.
But it is strange that this happens on Windows too.

@Mason-Chou

To make things easier for Windows users, see this link for the Windows version of the patch @tstromberg mentioned.

http://storage.googleapis.com/minikube-builds/7310/minikube-windows-amd64.exe

To apply the patch, just replace your minikube.exe with the patched version linked here.

@jgn-epp

jgn-epp commented Apr 3, 2020

Thank you @Mason-Chou

@tstromberg
Contributor

This should be fixed in v1.9.2.

If not - please /reopen this. It's possible that we are detecting the condition incorrectly if you run minikube as a non-administrator with the Hyper-V driver.

@muneeb-jan

I am getting the same issue on Ubuntu 20.04 LTS running in VirtualBox. I am using Docker containers.

@secullod

secullod commented Jan 7, 2021

I am having the same issue on both macOS and Ubuntu 20.04 LTS.


@coultat

coultat commented Dec 29, 2022

Same issue with Ubuntu 18.04.
