
Error restarting cluster: waiting for apiserver: timed out waiting for the condition #4366

Closed
mayankshah1607 opened this issue May 29, 2019 · 9 comments
Labels: co/apiserver (Issues relating to apiserver configuration (--extra-config)), ev/apiserver-timeout (timeout talking to the apiserver), priority/backlog (Higher priority than priority/awaiting-more-evidence)


@mayankshah1607

The exact command to reproduce the issue:

minikube start

The full output of the command that failed:

minikube v1.1.0 on linux (amd64)
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄 Restarting existing virtualbox VM for "minikube" ...
⌛ Waiting for SSH access ...
🐳 Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
🔄 Relaunching Kubernetes v1.14.2 using kubeadm ...

💣 Error restarting cluster: waiting for apiserver: timed out waiting for the condition

The output of the minikube logs command:

==> coredns <==
.:53
2019-05-29T07:20:25.355Z [INFO] CoreDNS-1.3.1
2019-05-29T07:20:25.355Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-05-29T07:20:25.355Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669

==> dmesg <==
[ +5.006574] hpet1: lost 319 rtc interrupts
[ +4.997924] hpet1: lost 318 rtc interrupts
[ +5.002167] hpet1: lost 318 rtc interrupts
[ +5.000487] hpet1: lost 318 rtc interrupts
[May29 07:18] hpet1: lost 318 rtc interrupts
[ +5.010987] hpet1: lost 319 rtc interrupts
[ +5.002877] hpet1: lost 318 rtc interrupts
[ +4.997147] hpet1: lost 318 rtc interrupts
[ +5.019053] hpet1: lost 319 rtc interrupts
[ +4.998686] hpet1: lost 319 rtc interrupts
[ +4.996221] hpet1: lost 318 rtc interrupts
[ +5.001163] hpet1: lost 318 rtc interrupts
[ +5.010704] hpet1: lost 319 rtc interrupts
[ +5.004883] hpet1: lost 318 rtc interrupts
[ +5.008001] hpet1: lost 319 rtc interrupts
[ +5.001818] hpet1: lost 318 rtc interrupts
[May29 07:19] hpet1: lost 318 rtc interrupts
[ +4.995603] hpet1: lost 318 rtc interrupts
[ +5.013366] hpet1: lost 319 rtc interrupts
[ +4.997688] hpet1: lost 318 rtc interrupts
[ +10.013516] hpet_rtc_timer_reinit: 30 callbacks suppressed
[ +0.000023] hpet1: lost 318 rtc interrupts
[ +5.012843] hpet1: lost 320 rtc interrupts
[ +5.014019] hpet1: lost 319 rtc interrupts
[ +5.004419] hpet1: lost 318 rtc interrupts
[ +4.999819] hpet1: lost 319 rtc interrupts
[ +5.003422] hpet1: lost 319 rtc interrupts
[ +5.007775] hpet1: lost 318 rtc interrupts
[May29 07:20] hpet1: lost 318 rtc interrupts
[ +5.009816] hpet1: lost 319 rtc interrupts
[ +5.017589] hpet1: lost 319 rtc interrupts
[ +3.758954] hrtimer: interrupt took 1320200 ns
[ +1.256886] hpet1: lost 320 rtc interrupts
[ +5.012180] hpet1: lost 319 rtc interrupts
[ +5.004530] hpet1: lost 318 rtc interrupts
[ +5.014112] hpet1: lost 319 rtc interrupts
[ +5.009943] hpet1: lost 319 rtc interrupts
[ +5.003496] hpet1: lost 318 rtc interrupts
[ +5.002602] hpet1: lost 318 rtc interrupts
[ +4.996349] hpet1: lost 318 rtc interrupts
[ +5.009938] hpet1: lost 319 rtc interrupts
[May29 07:21] hpet1: lost 318 rtc interrupts
[ +5.006245] hpet1: lost 318 rtc interrupts
[ +5.003826] hpet1: lost 318 rtc interrupts
[ +5.009590] hpet1: lost 319 rtc interrupts
[ +5.005521] hpet1: lost 318 rtc interrupts
[ +5.013026] hpet1: lost 319 rtc interrupts
[ +4.996601] hpet1: lost 318 rtc interrupts
[ +5.005215] hpet1: lost 318 rtc interrupts
[ +5.012784] hpet1: lost 319 rtc interrupts

==> kernel <==
07:21:45 up 7 min, 0 users, load average: 5.53, 5.42, 2.58
Linux minikube 4.15.0 #1 SMP Tue May 21 00:14:40 UTC 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
INFO: == Generated kubectl prune whitelist flags: --prune-whitelist core/v1/ConfigMap --prune-whitelist core/v1/Endpoints --prune-whitelist core/v1/Namespace --prune-whitelist core/v1/PersistentVolumeClaim --prune-whitelist core/v1/PersistentVolume --prune-whitelist core/v1/Pod --prune-whitelist core/v1/ReplicationController --prune-whitelist core/v1/Secret --prune-whitelist core/v1/Service --prune-whitelist batch/v1/Job --prune-whitelist batch/v1beta1/CronJob --prune-whitelist apps/v1/DaemonSet --prune-whitelist apps/v1/Deployment --prune-whitelist apps/v1/ReplicaSet --prune-whitelist apps/v1/StatefulSet --prune-whitelist extensions/v1beta1/Ingress ==
find: '/etc/kubernetes/admission-controls': No such file or directory
INFO: == Kubernetes addon manager started at 2019-05-29T07:17:34+00:00 with ADDON_CHECK_INTERVAL_SEC=60 ==
E0529 07:19:29.676443 35 request.go:853] Unexpected error when reading response body: http2.GoAwayError{LastStreamID:0x1, ErrCode:0x0, DebugData:""}
error: Unexpected error http2.GoAwayError{LastStreamID:0x1, ErrCode:0x0, DebugData:""} when reading response body. Please retry.
namespace/kube-system unchanged
INFO: == Successfully started /opt/namespace.yaml in namespace at 2019-05-29T07:19:12+00:00
INFO: == Default service account in the kube-system namespace has token default-token-csxpm ==
INFO: == Entering periodical apply loop at 2019-05-29T07:19:27+00:00 ==
INFO: Leader is
error: no objects passed to apply
error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "storage-provisioner", Namespace: "kube-system"
Object: &{map["kind":"ServiceAccount" "metadThe connecata":map["labetion to the server localhost:8443 was rels":map["addofused - did you specify the right host onmanager.kubernetes.io/mode":"r port?
Reconcile"] "name":"storage-provisioner" "namespace":"kube-sINFO: == Kubernetes addon ensure completed at 2019-05-ystem" "annot29T07:19:30+00:00 ==
ations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "apIiVersion":"v1"]}
NFO: == Refrom server for: "/etc/kubernetes/addons/storage-provconciling withisioner.yaml": Get https://localhost:84 deprecated la43/api/v1/namespaces/kube-sbel ==
ystem/serviceaccounts/storage-pINFO: == Reconciling with arovisioner: diddon-manager label ==
al tcp 127.0.0.1:8443: connect: connection refused
INFO: == Kubernetes addon reconcile completed at 2019-05-29T07:19:31+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-29T07:20:32+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-29T07:20:38+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-29T07:21:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-29T07:21:36+00:00 ==
error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
Object: &{map["kind":"Pod" "metadata":map["namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "integration-test":"storage-provisioner"] "name":"storage-provisioner"] "spec":map["containers":[map["command":["/storage-provisioner"] "image":"gcr.io/k8s-minikube/storage-provisioner:v1.8.1" "imagePullPolicy":"IfNotPresent" "name":"storage-provisioner" "volumeMounts":[map["name":"tmp" "mountPath":"/tmp"]]]] "hostNetwork":%!!(MISSING)q(bool=true) "serviceAccountName":"storage-provisioner" "volumes":[map["hostPath":map["path":"/tmp" "type":"Directory"] "name":"tmp"]]] "apiVersion":"v1"]}
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused
error: no objects passed to apply
error: no objects passed to apply

==> kube-apiserver <==
I0529 07:21:17.521460 1 trace.go:81] Trace[1018313360]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-05-29 07:21:16.948340095 +0000 UTC m=+91.076732293) (total time: 573.089515ms):
Trace[1018313360]: [573.048623ms] [567.104224ms] Transaction committed
I0529 07:21:17.521943 1 trace.go:81] Trace[469531806]: "Update /api/v1/namespaces/kube-system/endpoints/kube-dns" (started: 2019-05-29 07:21:16.945258327 +0000 UTC m=+91.073650542) (total time: 576.659948ms):
Trace[469531806]: [576.307806ms] [573.33055ms] Object stored in database
I0529 07:21:17.524217 1 trace.go:81] Trace[1369181329]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-05-29 07:21:16.942332703 +0000 UTC m=+91.070725000) (total time: 581.808261ms):
Trace[1369181329]: [581.754646ms] [580.611722ms] Transaction committed
I0529 07:21:17.524432 1 trace.go:81] Trace[1296896947]: "Update /api/v1/namespaces/default/endpoints/server-cluster-ip-service" (started: 2019-05-29 07:21:16.940848745 +0000 UTC m=+91.069240943) (total time: 583.561002ms):
Trace[1296896947]: [583.407573ms] [582.018239ms] Object stored in database
I0529 07:21:17.923617 1 trace.go:81] Trace[434096516]: "Get /apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy" (started: 2019-05-29 07:21:17.250785179 +0000 UTC m=+91.379177335) (total time: 672.775438ms):
Trace[434096516]: [671.971088ms] [671.955899ms] About to write a response
I0529 07:21:17.930212 1 trace.go:81] Trace[1402708644]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:17.177168729 +0000 UTC m=+91.305560921) (total time: 752.987667ms):
Trace[1402708644]: [752.681564ms] [752.587332ms] About to write a response
I0529 07:21:17.933065 1 trace.go:81] Trace[1543437771]: "List /api/v1/pods" (started: 2019-05-29 07:21:16.95856208 +0000 UTC m=+91.086954273) (total time: 974.44645ms):
Trace[1543437771]: [971.619589ms] [970.736801ms] Listing from storage done
I0529 07:21:18.599962 1 trace.go:81] Trace[1604127098]: "Get /api/v1/namespaces/default" (started: 2019-05-29 07:21:18.024455659 +0000 UTC m=+92.152847929) (total time: 575.449811ms):
Trace[1604127098]: [575.179168ms] [575.148148ms] About to write a response
I0529 07:21:18.601343 1 trace.go:81] Trace[654741462]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-05-29 07:21:17.937627268 +0000 UTC m=+92.066019530) (total time: 663.667818ms):
Trace[654741462]: [663.640835ms] [663.167039ms] Transaction committed
I0529 07:21:18.601514 1 trace.go:81] Trace[1959586799]: "Update /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:17.937281442 +0000 UTC m=+92.065673633) (total time: 664.212893ms):
Trace[1959586799]: [664.122909ms] [663.867311ms] Object stored in database
I0529 07:21:19.246695 1 trace.go:81] Trace[1752908758]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2019-05-29 07:21:18.60296445 +0000 UTC m=+92.731356586) (total time: 643.660249ms):
Trace[1752908758]: [643.615828ms] [641.350697ms] Transaction committed
I0529 07:21:26.357530 1 trace.go:81] Trace[810658518]: "GuaranteedUpdate etcd3: *coordination.Lease" (started: 2019-05-29 07:21:22.672892749 +0000 UTC m=+96.801284870) (total time: 3.684612273s):
Trace[810658518]: [3.684582428s] [3.684435028s] Transaction committed
I0529 07:21:26.357725 1 trace.go:81] Trace[1471849622]: "Update /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/minikube" (started: 2019-05-29 07:21:22.672417773 +0000 UTC m=+96.800809884) (total time: 3.685293181s):
Trace[1471849622]: [3.685146062s] [3.68470408s] Object stored in database
I0529 07:21:27.409987 1 trace.go:81] Trace[1656638992]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-05-29 07:21:23.986002259 +0000 UTC m=+98.114394451) (total time: 3.423914083s):
Trace[1656638992]: [3.422771799s] [3.422669142s] About to write a response
I0529 07:21:27.412066 1 trace.go:81] Trace[921301053]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:24.676973289 +0000 UTC m=+98.805365481) (total time: 2.735026186s):
Trace[921301053]: [2.734722751s] [2.734619893s] About to write a response
I0529 07:21:27.420665 1 trace.go:81] Trace[381747592]: "List /apis/batch/v1/jobs" (started: 2019-05-29 07:21:24.607230891 +0000 UTC m=+98.735623078) (total time: 2.81338232s):
Trace[381747592]: [2.813220313s] [2.812820234s] Listing from storage done
I0529 07:21:28.599401 1 trace.go:81] Trace[659710806]: "Get /api/v1/namespaces/default" (started: 2019-05-29 07:21:28.023884191 +0000 UTC m=+102.152276339) (total time: 575.438428ms):
Trace[659710806]: [575.070409ms] [575.05643ms] About to write a response
I0529 07:21:28.599427 1 trace.go:81] Trace[1158810893]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:27.815573329 +0000 UTC m=+101.943965462) (total time: 783.800728ms):
Trace[1158810893]: [783.362833ms] [783.351531ms] About to write a response
I0529 07:21:29.679050 1 trace.go:81] Trace[1642499952]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2019-05-29 07:21:28.625131404 +0000 UTC m=+102.753523739) (total time: 1.053559647s):
Trace[1642499952]: [1.05350998s] [1.043039473s] Transaction committed
I0529 07:21:34.473952 1 trace.go:81] Trace[231687430]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:33.810510879 +0000 UTC m=+107.938903113) (total time: 663.389626ms):
Trace[231687430]: [663.151563ms] [663.02654ms] About to write a response
I0529 07:21:35.401876 1 trace.go:81] Trace[1097604872]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-05-29 07:21:34.477924157 +0000 UTC m=+108.606316314) (total time: 923.881882ms):
Trace[1097604872]: [923.816277ms] [922.959909ms] Transaction committed
I0529 07:21:35.402460 1 trace.go:81] Trace[725793672]: "Update /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:34.477677453 +0000 UTC m=+108.606069645) (total time: 924.701688ms):
Trace[725793672]: [924.287757ms] [924.108225ms] Object stored in database
I0529 07:21:35.411942 1 trace.go:81] Trace[1579696582]: "Get /api/v1/namespaces/kube-system/pods/storage-provisioner" (started: 2019-05-29 07:21:34.481214668 +0000 UTC m=+108.609606804) (total time: 930.217245ms):
Trace[1579696582]: [929.373054ms] [929.359897ms] About to write a response
I0529 07:21:38.571879 1 trace.go:81] Trace[1384862299]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2019-05-29 07:21:38.029512373 +0000 UTC m=+112.157904526) (total time: 542.300581ms):
Trace[1384862299]: [542.252635ms] [538.612023ms] Transaction committed
I0529 07:21:44.114161 1 trace.go:81] Trace[913308298]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:43.45967806 +0000 UTC m=+117.588070255) (total time: 654.129295ms):
Trace[913308298]: [653.715989ms] [653.609974ms] About to write a response

==> kube-proxy <==
I0529 07:19:18.233364 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0529 07:19:18.411303 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0529 07:19:18.422011 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0529 07:19:18.422099 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0529 07:19:18.422932 1 config.go:102] Starting endpoints config controller
I0529 07:19:18.422992 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0529 07:19:18.423013 1 config.go:202] Starting service config controller
I0529 07:19:18.423024 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0529 07:19:18.552507 1 controller_utils.go:1034] Caches are synced for service config controller
I0529 07:19:18.552998 1 controller_utils.go:1034] Caches are synced for endpoints config controller
E0529 07:19:28.791816 1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=21996&timeout=8m30s&timeoutSeconds=510&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:28.791866 1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=21847&timeout=8m17s&timeoutSeconds=497&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:29.792889 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:29.809107 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:30.794969 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:30.811732 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:31.797705 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:31.817534 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:32.800149 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:32.820550 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:33.802652 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:33.826318 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:34.803625 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:34.832822 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:35.806637 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:35.838829 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:36.811367 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:36.839630 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:37.816418 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:37.850594 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:38.819565 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:38.852663 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:39.821014 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:39.853201 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:40.822731 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:40.854841 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:41.824209 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:41.857868 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:42.826286 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:42.862307 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:43.827342 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:43.864397 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:44.828362 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:44.864999 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:45.829278 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:45.865447 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:46.830085 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:46.866088 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:47.830883 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:47.866957 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused

==> kube-scheduler <==
I0529 07:20:00.859835 1 serving.go:319] Generated self-signed cert in-memory
W0529 07:20:01.750136 1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0529 07:20:01.750181 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0529 07:20:01.750188 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0529 07:20:01.750204 1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0529 07:20:01.750216 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0529 07:20:01.955377 1 server.go:142] Version: v1.14.2
I0529 07:20:01.955523 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0529 07:20:01.959983 1 authorization.go:47] Authorization is disabled
W0529 07:20:01.960222 1 authentication.go:55] Authentication is disabled
I0529 07:20:01.960303 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0529 07:20:01.962638 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
I0529 07:20:02.882446 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0529 07:20:02.983037 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0529 07:20:02.983700 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0529 07:20:24.901026 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Wed 2019-05-29 07:15:02 UTC, end at Wed 2019-05-29 07:21:45 UTC. --
May 29 07:19:43 minikube kubelet[3250]: E0529 07:19:43.467896 3250 kubelet_node_status.go:372] Unable to update node status: update node status exceeds retry count
May 29 07:19:43 minikube kubelet[3250]: E0529 07:19:43.600886 3250 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:43 minikube kubelet[3250]: E0529 07:19:43.608607 3250 controller.go:115] failed to ensure node lease exists, will retry in 6.4s, error: Get https://localhost:8443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/minikube?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:43 minikube kubelet[3250]: E0529 07:19:43.798733 3250 reflector.go:126] object-"kube-system"/"kube-proxy-token-dspx8": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%!D(MISSING)kube-proxy-token-dspx8&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:43 minikube kubelet[3250]: E0529 07:19:43.996773 3250 reflector.go:126] object-"kube-system"/"coredns-token-kzq4n": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%!D(MISSING)coredns-token-kzq4n&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:44 minikube kubelet[3250]: E0529 07:19:44.197187 3250 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:44 minikube kubelet[3250]: E0529 07:19:44.397028 3250 reflector.go:126] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:44 minikube kubelet[3250]: W0529 07:19:44.598759 3250 status_manager.go:485] Failed to get status for pod "coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/coredns-fb8b8dccf-rgn86: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:44 minikube kubelet[3250]: E0529 07:19:44.642665 3250 pod_workers.go:190] Error syncing pod 22546764-7c8c-11e9-8033-080027e5c6aa ("coredns-fb8b8dccf-qlx48_kube-system(22546764-7c8c-11e9-8033-080027e5c6aa)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-qlx48_kube-system(22546764-7c8c-11e9-8033-080027e5c6aa)"
May 29 07:19:44 minikube kubelet[3250]: E0529 07:19:44.797643 3250 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:44 minikube kubelet[3250]: W0529 07:19:44.918168 3250 container.go:409] Failed to create summary reader for "/kubepods/burstable/pod21f5d2f1-7c8c-11e9-8033-080027e5c6aa/d4f6ee771d7f12e0e57226d8c271e4b785ffad4a4030e0abd7e7d8c46e61ad4f": none of the resources are being tracked.
May 29 07:19:44 minikube kubelet[3250]: E0529 07:19:44.943150 3250 pod_workers.go:190] Error syncing pod 9b290132363a92652555896288ca3f88 ("kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88)"
May 29 07:19:44 minikube kubelet[3250]: E0529 07:19:44.996702 3250 reflector.go:126] object-"default"/"default-token-lnxsw": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/default/secrets?fieldSelector=metadata.name%!D(MISSING)default-token-lnxsw&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:45 minikube kubelet[3250]: E0529 07:19:45.196695 3250 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:45 minikube kubelet[3250]: E0529 07:19:45.397841 3250 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:45 minikube kubelet[3250]: E0529 07:19:45.519300 3250 event.go:200] Unable to write event: 'Patch https://localhost:8443/api/v1/namespaces/kube-system/events/kube-controller-manager-minikube.15a31635931fb913: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
May 29 07:19:45 minikube kubelet[3250]: E0529 07:19:45.597521 3250 reflector.go:126] object-"kube-system"/"storage-provisioner-token-mzpj8": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%!D(MISSING)storage-provisioner-token-mzpj8&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:45 minikube kubelet[3250]: E0529 07:19:45.796924 3250 reflector.go:126] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)coredns&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:45 minikube kubelet[3250]: W0529 07:19:45.997288 3250 status_manager.go:485] Failed to get status for pod "client-deployment-848bcddb74-ws7tb_default(0a5ac07f-7e0b-11e9-a74b-080027e5c6aa)": Get https://localhost:8443/api/v1/namespaces/default/pods/client-deployment-848bcddb74-ws7tb: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:46 minikube kubelet[3250]: E0529 07:19:46.198038 3250 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:46 minikube kubelet[3250]: E0529 07:19:46.397415 3250 reflector.go:126] object-"kube-system"/"kube-proxy-token-dspx8": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%!D(MISSING)kube-proxy-token-dspx8&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:46 minikube kubelet[3250]: E0529 07:19:46.598091 3250 reflector.go:126] object-"kube-system"/"coredns-token-kzq4n": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%!D(MISSING)coredns-token-kzq4n&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:46 minikube kubelet[3250]: W0529 07:19:46.615069 3250 pod_container_deletor.go:75] Container "7c2638d27e426d5c5f43e550ba036f324ff68f74008803992143ff47608f9e3d" not found in pod's containers
May 29 07:19:46 minikube kubelet[3250]: E0529 07:19:46.695026 3250 remote_runtime.go:321] ContainerStatus "23a92a2cf84abe7214430fb386fc8f83487387bc1cb0d89b1cdcd7c832d379df" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 23a92a2cf84abe7214430fb386fc8f83487387bc1cb0d89b1cdcd7c832d379df
May 29 07:19:46 minikube kubelet[3250]: E0529 07:19:46.695132 3250 kuberuntime_manager.go:917] getPodContainerStatuses for pod "postgres-deployment-57c594d6df-6nwld_default(989b50e2-7e17-11e9-a74b-080027e5c6aa)" failed: rpc error: code = Unknown desc = Error: No such container: 23a92a2cf84abe7214430fb386fc8f83487387bc1cb0d89b1cdcd7c832d379df
May 29 07:19:46 minikube kubelet[3250]: E0529 07:19:46.796929 3250 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:47 minikube kubelet[3250]: E0529 07:19:47.001482 3250 reflector.go:126] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:47 minikube kubelet[3250]: E0529 07:19:47.196937 3250 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:47 minikube kubelet[3250]: E0529 07:19:47.397442 3250 reflector.go:126] object-"default"/"default-token-lnxsw": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/default/secrets?fieldSelector=metadata.name%!D(MISSING)default-token-lnxsw&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:47 minikube kubelet[3250]: W0529 07:19:47.597248 3250 status_manager.go:485] Failed to get status for pod "server-deployment-f4c6f6c8f-gdf6n_default(65ebe59e-7e14-11e9-a74b-080027e5c6aa)": Get https://localhost:8443/api/v1/namespaces/default/pods/server-deployment-f4c6f6c8f-gdf6n: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:47 minikube kubelet[3250]: E0529 07:19:47.797601 3250 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:47 minikube kubelet[3250]: E0529 07:19:47.803064 3250 pod_workers.go:190] Error syncing pod 21f5d2f1-7c8c-11e9-8033-080027e5c6aa ("coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"
May 29 07:19:47 minikube kubelet[3250]: E0529 07:19:47.998002 3250 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:49 minikube kubelet[3250]: W0529 07:19:49.031551 3250 pod_container_deletor.go:75] Container "fe424629af09b5104d90ab8d6f7df0441598dc931629b622767e365bda1dd196" not found in pod's containers
May 29 07:19:49 minikube kubelet[3250]: E0529 07:19:49.032398 3250 pod_workers.go:190] Error syncing pod 21f5d2f1-7c8c-11e9-8033-080027e5c6aa ("coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"
May 29 07:19:52 minikube kubelet[3250]: W0529 07:19:52.208054 3250 pod_container_deletor.go:75] Container "f4c22016f32cc71a8a19169b78eff0eebb62913caadc4396ea6eff15fe7e0a2f" not found in pod's containers
May 29 07:19:52 minikube kubelet[3250]: E0529 07:19:52.208476 3250 pod_workers.go:190] Error syncing pod 21f5d2f1-7c8c-11e9-8033-080027e5c6aa ("coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"
May 29 07:19:53 minikube kubelet[3250]: E0529 07:19:53.854952 3250 pod_workers.go:190] Error syncing pod 21f5d2f1-7c8c-11e9-8033-080027e5c6aa ("coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"
May 29 07:19:55 minikube kubelet[3250]: E0529 07:19:55.262933 3250 remote_runtime.go:321] ContainerStatus "8c5550cff6fd52a0f3cb1d416cbd5b20d6fd1ed625b08211fd2ff016acbf86a9" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 8c5550cff6fd52a0f3cb1d416cbd5b20d6fd1ed625b08211fd2ff016acbf86a9
May 29 07:19:55 minikube kubelet[3250]: E0529 07:19:55.263333 3250 kuberuntime_manager.go:917] getPodContainerStatuses for pod "kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88)" failed: rpc error: code = Unknown desc = Error: No such container: 8c5550cff6fd52a0f3cb1d416cbd5b20d6fd1ed625b08211fd2ff016acbf86a9
May 29 07:19:55 minikube kubelet[3250]: E0529 07:19:55.268640 3250 pod_workers.go:190] Error syncing pod 36130beb-7c8c-11e9-8033-080027e5c6aa ("storage-provisioner_kube-system(36130beb-7c8c-11e9-8033-080027e5c6aa)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(36130beb-7c8c-11e9-8033-080027e5c6aa)"
May 29 07:19:57 minikube kubelet[3250]: E0529 07:19:57.362488 3250 remote_runtime.go:321] ContainerStatus "058461ba388f9caf80c9dd77c0c3c9335813c03854966d9d739f12ce17e4124b" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 058461ba388f9caf80c9dd77c0c3c9335813c03854966d9d739f12ce17e4124b
May 29 07:19:57 minikube kubelet[3250]: E0529 07:19:57.362855 3250 kuberuntime_manager.go:917] getPodContainerStatuses for pod "coredns-fb8b8dccf-qlx48_kube-system(22546764-7c8c-11e9-8033-080027e5c6aa)" failed: rpc error: code = Unknown desc = Error: No such container: 058461ba388f9caf80c9dd77c0c3c9335813c03854966d9d739f12ce17e4124b
May 29 07:20:13 minikube kubelet[3250]: E0529 07:20:13.271915 3250 remote_runtime.go:321] ContainerStatus "6e4ec24ed1970145f5064dc30d08976c87ddb6e86b808667526f22f85ab5476c" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 6e4ec24ed1970145f5064dc30d08976c87ddb6e86b808667526f22f85ab5476c
May 29 07:20:13 minikube kubelet[3250]: E0529 07:20:13.271974 3250 kuberuntime_manager.go:917] getPodContainerStatuses for pod "coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)" failed: rpc error: code = Unknown desc = Error: No such container: 6e4ec24ed1970145f5064dc30d08976c87ddb6e86b808667526f22f85ab5476c
May 29 07:20:14 minikube kubelet[3250]: E0529 07:20:14.295004 3250 remote_runtime.go:321] ContainerStatus "67980f122760a570e1929df6bf5b0a94d1058feb6552fde6bf6e992297b0d05d" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 67980f122760a570e1929df6bf5b0a94d1058feb6552fde6bf6e992297b0d05d
May 29 07:20:14 minikube kubelet[3250]: E0529 07:20:14.295370 3250 kuberuntime_manager.go:917] getPodContainerStatuses for pod "storage-provisioner_kube-system(36130beb-7c8c-11e9-8033-080027e5c6aa)" failed: rpc error: code = Unknown desc = Error: No such container: 67980f122760a570e1929df6bf5b0a94d1058feb6552fde6bf6e992297b0d05d
May 29 07:20:37 minikube kubelet[3250]: E0529 07:20:37.272727 3250 pod_workers.go:190] Error syncing pod 9c1e365bd18b5d3fc6a5d0ff10c2b125 ("kube-controller-manager-minikube_kube-system(9c1e365bd18b5d3fc6a5d0ff10c2b125)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(9c1e365bd18b5d3fc6a5d0ff10c2b125)"
May 29 07:20:48 minikube kubelet[3250]: E0529 07:20:48.863646 3250 remote_runtime.go:321] ContainerStatus "387ae889989af266e7174bf655d9972c6d69a49b2d7b0d8e5e96594f256df5df" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 387ae889989af266e7174bf655d9972c6d69a49b2d7b0d8e5e96594f256df5df
May 29 07:20:48 minikube kubelet[3250]: E0529 07:20:48.864098 3250 kuberuntime_manager.go:917] getPodContainerStatuses for pod "kube-controller-manager-minikube_kube-system(9c1e365bd18b5d3fc6a5d0ff10c2b125)" failed: rpc error: code = Unknown desc = Error: No such container: 387ae889989af266e7174bf655d9972c6d69a49b2d7b0d8e5e96594f256df5df

==> storage-provisioner <==

The operating system version:

Ubuntu 18.04

@tstromberg
Contributor

Looking at the events and the timeline, my guess is that we're not waiting long enough for the apiserver to become healthy on this host. It appears that the apiserver became healthy by 07:20:02 (about a minute and a half before the "logs" command was run), roughly 5 minutes after the VM came online.

I notice that minikube only waits 60 seconds for the apiserver to come online. I'm guessing we missed that deadline by about 15 seconds.

Is this problem repeatable when running minikube start?
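One way to check that theory from your side (a rough sketch, assuming the default "minikube" profile and the localhost:8443 apiserver address that appears in the logs above): wait a few minutes after the failure, probe the apiserver's health endpoint from inside the VM, and retry the start if it answers:

$ # ask the apiserver inside the VM for its health status ("ok" once it is up)
$ minikube ssh -- curl -sk https://localhost:8443/healthz
$ # if it responds, the apiserver was just slow to start; retrying usually succeeds
$ minikube start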

@tstromberg tstromberg self-assigned this May 29, 2019
@tstromberg tstromberg added priority/backlog Higher priority than priority/awaiting-more-evidence. co/apiserver Issues relating to apiserver configuration (--extra-config) ev/apiserver-timeout timeout talking to the apiserver labels May 29, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 27, 2019
@severino42

severino42 commented Aug 29, 2019

/remove-lifecycle stale

I am seeing this issue now, on a fresh install

$ minikube start
😄 minikube v1.3.1 on Darwin 10.13.6
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃 Using the running virtualbox "minikube" VM ...
⌛ Waiting for the host to be provisioned ...
🐳 Preparing Kubernetes v1.15.2 on Docker 18.09.8 ...
🔄 Relaunching Kubernetes using kubeadm ...

💣 Error restarting cluster: waiting for apiserver: timed out waiting for the condition

😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new/choose

$ minikube logs > minikube.logs
$ cat minikube.logs
==> Docker <==
-- Logs begin at Wed 2019-08-28 18:44:41 UTC, end at Thu 2019-08-29 15:59:16 UTC. --
Aug 28 18:46:34 minikube dockerd[2363]: time="2019-08-28T18:46:34.670760431Z" level=info msg="shim reaped" id=de22e73d82e1342ec0fc2c9d2ff587cd1056db23585f2409be15705c809996ea
Aug 28 18:46:34 minikube dockerd[2363]: time="2019-08-28T18:46:34.672921325Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 18:46:34 minikube dockerd[2363]: time="2019-08-28T18:46:34.673066784Z" level=warning msg="7523ebb763389cc0eb6b015faa4127d2d1e5ab1fd5036d5d1efebfcfc0dc0941 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/7523ebb763389cc0eb6b015faa4127d2d1e5ab1fd5036d5d1efebfcfc0dc0941/mounts/shm, flags: 0x2: no such file or directory"
Aug 28 18:46:34 minikube dockerd[2363]: time="2019-08-28T18:46:34.680796599Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 18:46:34 minikube dockerd[2363]: time="2019-08-28T18:46:34.681126912Z" level=warning msg="de22e73d82e1342ec0fc2c9d2ff587cd1056db23585f2409be15705c809996ea cleanup: failed to unmount IPC: umount /var/lib/docker/containers/de22e73d82e1342ec0fc2c9d2ff587cd1056db23585f2409be15705c809996ea/mounts/shm, flags: 0x2: no such file or directory"
Aug 28 18:46:34 minikube dockerd[2363]: time="2019-08-28T18:46:34.798943007Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bb9be3db2d154e627332a652ce98f0c5e739ecf70f44124d5ff8e76007ebb0d1/shim.sock" debug=false pid=4916
Aug 28 18:46:34 minikube dockerd[2363]: time="2019-08-28T18:46:34.802168294Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/39bfc30ce8d79611918846873629073fc39d672172556b2f8738caae3aaa2216/shim.sock" debug=false pid=4920
Aug 28 20:03:04 minikube dockerd[2363]: time="2019-08-28T20:03:04.079462000Z" level=info msg="shim reaped" id=ebe0099c2c5067a1f546d40c0660935d868f842102c86fcd9bf119535e5e584a
Aug 28 20:03:04 minikube dockerd[2363]: time="2019-08-28T20:03:04.089929887Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 20:03:04 minikube dockerd[2363]: time="2019-08-28T20:03:04.090152806Z" level=warning msg="ebe0099c2c5067a1f546d40c0660935d868f842102c86fcd9bf119535e5e584a cleanup: failed to unmount IPC: umount /var/lib/docker/containers/ebe0099c2c5067a1f546d40c0660935d868f842102c86fcd9bf119535e5e584a/mounts/shm, flags: 0x2: no such file or directory"
Aug 28 20:03:04 minikube dockerd[2363]: time="2019-08-28T20:03:04.197260051Z" level=info msg="shim reaped" id=c8a64fce6f1925981813dbb1dda0e48295686680d62a5002a384855c08e28d9e
Aug 28 20:03:04 minikube dockerd[2363]: time="2019-08-28T20:03:04.207445575Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 20:03:04 minikube dockerd[2363]: time="2019-08-28T20:03:04.368284662Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2da4e9c3f3e4f02e399eaa27ce774a2173df852f55eca9fe9e50b16e3fae78a4/shim.sock" debug=false pid=6251
Aug 28 20:03:04 minikube dockerd[2363]: time="2019-08-28T20:03:04.553541314Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/80236a5b85cba76186b65bc32297f721dc5823212594ee27f4dab3021c7dfb91/shim.sock" debug=false pid=6294
Aug 28 20:03:32 minikube dockerd[2363]: time="2019-08-28T20:03:32.741745966Z" level=info msg="Container df16bf7592a4b6e42f9a0c64682d2790106e0602a0361f36dda11b76aad8904c failed to exit within 30 seconds of signal 15 - using the force"
Aug 28 20:03:32 minikube dockerd[2363]: time="2019-08-28T20:03:32.805553774Z" level=info msg="shim reaped" id=df16bf7592a4b6e42f9a0c64682d2790106e0602a0361f36dda11b76aad8904c
Aug 28 20:03:32 minikube dockerd[2363]: time="2019-08-28T20:03:32.816552639Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 20:03:32 minikube dockerd[2363]: time="2019-08-28T20:03:32.816690501Z" level=warning msg="df16bf7592a4b6e42f9a0c64682d2790106e0602a0361f36dda11b76aad8904c cleanup: failed to unmount IPC: umount /var/lib/docker/containers/df16bf7592a4b6e42f9a0c64682d2790106e0602a0361f36dda11b76aad8904c/mounts/shm, flags: 0x2: no such file or directory"
Aug 28 20:03:32 minikube dockerd[2363]: time="2019-08-28T20:03:32.901083122Z" level=info msg="shim reaped" id=63f882cb39da362faa9a16462cdb8bf1432aa02e863e59c46d7363dec893234f
Aug 28 20:03:32 minikube dockerd[2363]: time="2019-08-28T20:03:32.911034169Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 28 20:03:33 minikube dockerd[2363]: time="2019-08-28T20:03:33.205888569Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6b036c4687981342ac68ab703b11e5ba898f4cfc0c334ba06b612ca6c58358e2/shim.sock" debug=false pid=6581
Aug 28 20:03:33 minikube dockerd[2363]: time="2019-08-28T20:03:33.379253935Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d5e2201c3c9120983a81f89b533a70fe4640fe2fd79c00cb421e73c1bb6cc924/shim.sock" debug=false pid=6624
Aug 29 15:46:52 minikube dockerd[2363]: time="2019-08-29T15:46:52.336678971Z" level=info msg="Container d5e2201c3c9120983a81f89b533a70fe4640fe2fd79c00cb421e73c1bb6cc924 failed to exit within 30 seconds of signal 15 - using the force"
Aug 29 15:46:52 minikube dockerd[2363]: time="2019-08-29T15:46:52.410140244Z" level=info msg="shim reaped" id=d5e2201c3c9120983a81f89b533a70fe4640fe2fd79c00cb421e73c1bb6cc924
Aug 29 15:46:52 minikube dockerd[2363]: time="2019-08-29T15:46:52.420928009Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 29 15:46:52 minikube dockerd[2363]: time="2019-08-29T15:46:52.421132166Z" level=warning msg="d5e2201c3c9120983a81f89b533a70fe4640fe2fd79c00cb421e73c1bb6cc924 cleanup: failed to unmount IPC: umount /var/lib/docker/containers/d5e2201c3c9120983a81f89b533a70fe4640fe2fd79c00cb421e73c1bb6cc924/mounts/shm, flags: 0x2: no such file or directory"
Aug 29 15:46:52 minikube dockerd[2363]: time="2019-08-29T15:46:52.514870995Z" level=info msg="shim reaped" id=6b036c4687981342ac68ab703b11e5ba898f4cfc0c334ba06b612ca6c58358e2
Aug 29 15:46:52 minikube dockerd[2363]: time="2019-08-29T15:46:52.524967990Z" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Aug 29 15:46:52 minikube dockerd[2363]: time="2019-08-29T15:46:52.798848291Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c5e421d15c393cfdd37c92122087438b98367957924d7f72737e6524944b8f9a/shim.sock" debug=false pid=12912
Aug 29 15:46:52 minikube dockerd[2363]: time="2019-08-29T15:46:52.975127461Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c962470dc9bfa1d3d91462e4611c603004e6b8fa5388dacbb762d912085efd64/shim.sock" debug=false pid=12951

==> container status <==
CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID
c962470dc9bfa   119701e77cbc4   12 minutes ago   Running   kube-addon-manager        2         c5e421d15c393
d5e2201c3c912   119701e77cbc4   20 hours ago     Exited    kube-addon-manager        1         6b036c4687981
80236a5b85cba   9f5df470155d4   20 hours ago     Running   kube-controller-manager   0         2da4e9c3f3e4f
bb9be3db2d154   eb516548c180f   21 hours ago     Running   coredns                   1         7c9ebc0f543a1
39bfc30ce8d79   eb516548c180f   21 hours ago     Running   coredns                   1         1fa60deaef765
9baca2e5d525d   4689081edb103   21 hours ago     Running   storage-provisioner       0         57e1a04b24fb5
de22e73d82e13   eb516548c180f   21 hours ago     Exited    coredns                   0         7c9ebc0f543a1
7523ebb763389   eb516548c180f   21 hours ago     Exited    coredns                   0         1fa60deaef765
0120b9d5ccdbb   167bbf6c93388   21 hours ago     Running   kube-proxy                0         d2920a8982819
d49a6205caa99   2c4adeb21b4ff   21 hours ago     Running   etcd                      0         b7fb92b5e6292
cf1fa90201466   88fa9cb27bd2d   21 hours ago     Running   kube-scheduler            0         f6d27f536ffa1
1363d92d3acb5   34a53be6c9a7e   21 hours ago     Running   kube-apiserver            0         8f3749b339ef4

==> coredns <==
.:53
2019-08-28T18:46:35.061Z [INFO] CoreDNS-1.3.1
2019-08-28T18:46:35.061Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-08-28T18:46:35.062Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843

==> dmesg <==
[ +5.001572] hpet1: lost 319 rtc interrupts
[ +5.000580] hpet1: lost 318 rtc interrupts
[Aug29 15:57] hpet1: lost 318 rtc interrupts
[ +5.000773] hpet1: lost 318 rtc interrupts
[ +5.001126] hpet1: lost 318 rtc interrupts
[ +5.000924] hpet1: lost 318 rtc interrupts
[ +5.000930] hpet1: lost 318 rtc interrupts
[ +5.001284] hpet1: lost 318 rtc interrupts
[ +5.001179] hpet1: lost 318 rtc interrupts
[ +5.001125] hpet1: lost 318 rtc interrupts
[ +5.001549] hpet1: lost 318 rtc interrupts
[ +5.000880] hpet1: lost 318 rtc interrupts
[ +5.001361] hpet1: lost 318 rtc interrupts
[ +5.000902] hpet1: lost 318 rtc interrupts
[Aug29 15:58] hpet1: lost 318 rtc interrupts
[ +5.001487] hpet1: lost 319 rtc interrupts
[ +5.000400] hpet1: lost 318 rtc interrupts
[ +5.001292] hpet1: lost 318 rtc interrupts
[ +5.001573] hpet1: lost 318 rtc interrupts
[ +5.000653] hpet1: lost 318 rtc interrupts
[ +5.001190] hpet1: lost 318 rtc interrupts
[ +5.000741] hpet1: lost 318 rtc interrupts
[ +5.001370] hpet1: lost 318 rtc interrupts
[ +5.001251] hpet1: lost 318 rtc interrupts
[ +5.000672] hpet1: lost 318 rtc interrupts
[ +5.001973] hpet1: lost 318 rtc interrupts
[Aug29 15:59] hpet1: lost 318 rtc interrupts
[ +5.000989] hpet1: lost 319 rtc interrupts
[ +5.001612] hpet1: lost 318 rtc interrupts
[ +5.000623] hpet1: lost 318 rtc interrupts

==> kernel <==
15:59:16 up 4:12, 0 users, load average: 0.15, 0.15, 0.16
Linux minikube 4.15.0 #1 SMP Fri Aug 2 16:17:56 PDT 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2018.05.3"

==> kube-addon-manager <==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-08-29T15:54:54+00:00 ==
INFO: Leader is minikube
error: no objects passed to apply
INFO: == Kubernetes addon ensure completed at 2019-08-29T15:55:54+00:00 ==
error: no objects passed to apply
error: no objects passed to apply
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-08-29T15:55:55+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-08-29T15:56:54+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-08-29T15:56:55+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-08-29T15:57:53+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-08-29T15:57:55+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-08-29T15:58:53+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-08-29T15:58:55+00:00 ==

==> kube-apiserver <==
I0828 18:45:53.593914 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0828 18:45:54.585059 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0828 18:45:54.812005 1 controller.go:606] quota admission added evaluator for: endpoints
I0828 18:45:54.863902 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0828 18:45:55.140620 1 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.99.100]
I0828 18:45:56.308508 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0828 18:45:56.377296 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0828 18:45:56.730249 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0828 18:46:03.085585 1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0828 18:46:03.091243 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
E0828 19:02:32.765303 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0828 19:17:44.840843 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0828 19:27:21.853439 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0828 19:41:41.894892 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0828 19:58:29.944030 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0828 20:16:21.380207 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0828 20:24:09.451571 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0828 20:33:58.533977 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0829 13:48:08.681692 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0829 13:57:00.759510 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0829 14:04:03.777337 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0829 14:19:27.821227 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0829 14:32:45.861484 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0829 14:39:57.914237 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0829 14:48:26.957100 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0829 15:04:39.977579 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0829 15:18:12.021192 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0829 15:24:43.084952 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0829 15:38:06.192397 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted
E0829 15:45:28.245269 1 watcher.go:214] watch chan error: etcdserver: mvcc: required revision has been compacted

==> kube-proxy <==
W0828 18:46:04.558313 1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I0828 18:46:04.569598 1 server_others.go:143] Using iptables Proxier.
W0828 18:46:04.570264 1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0828 18:46:04.577187 1 server.go:534] Version: v1.15.2
I0828 18:46:04.598614 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0828 18:46:04.599465 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0828 18:46:04.600175 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0828 18:46:04.605591 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0828 18:46:04.605856 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0828 18:46:04.606055 1 config.go:187] Starting service config controller
I0828 18:46:04.606123 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0828 18:46:04.606642 1 config.go:96] Starting endpoints config controller
I0828 18:46:04.606862 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0828 18:46:04.706430 1 controller_utils.go:1036] Caches are synced for service config controller
I0828 18:46:04.707232 1 controller_utils.go:1036] Caches are synced for endpoints config controller

==> kube-scheduler <==
W0828 18:45:48.979564 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0828 18:45:48.981063 1 server.go:142] Version: v1.15.2
I0828 18:45:48.981148 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0828 18:45:49.002710 1 authorization.go:47] Authorization is disabled
W0828 18:45:49.002737 1 authentication.go:55] Authentication is disabled
I0828 18:45:49.002750 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0828 18:45:49.003074 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0828 18:45:51.897413 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0828 18:45:51.897459 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0828 18:45:51.897491 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0828 18:45:51.897585 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0828 18:45:51.897624 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0828 18:45:51.897762 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0828 18:45:51.897861 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0828 18:45:51.898031 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0828 18:45:51.898162 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0828 18:45:51.897799 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0828 18:45:52.898914 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0828 18:45:52.900126 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0828 18:45:52.901390 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0828 18:45:52.903201 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0828 18:45:52.905728 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0828 18:45:52.911107 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0828 18:45:52.912329 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0828 18:45:52.918938 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0828 18:45:52.921139 1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0828 18:45:52.922116 1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
I0828 18:45:54.809370 1 leaderelection.go:235] attempting to acquire leader lease kube-system/kube-scheduler...
I0828 18:45:54.814822 1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
E0828 18:46:03.176471 1 factory.go:702] pod is already present in the activeQ

==> kubelet <==
-- Logs begin at Wed 2019-08-28 18:44:41 UTC, end at Thu 2019-08-29 15:59:16 UTC. --
Aug 28 18:46:03 minikube kubelet[3359]: I0828 18:46:03.314657 3359 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d6871a9c-940a-42a0-94b1-106dc58f0193-config-volume") pod "coredns-5c98db65d4-49d78" (UID: "d6871a9c-940a-42a0-94b1-106dc58f0193")
Aug 28 18:46:03 minikube kubelet[3359]: I0828 18:46:03.314678 3359 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-mbgx6" (UniqueName: "kubernetes.io/secret/d6871a9c-940a-42a0-94b1-106dc58f0193-coredns-token-mbgx6") pod "coredns-5c98db65d4-49d78" (UID: "d6871a9c-940a-42a0-94b1-106dc58f0193")
Aug 28 18:46:03 minikube kubelet[3359]: I0828 18:46:03.314697 3359 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/bda801db-d2a0-4d7c-a450-b908ca8c5359-config-volume") pod "coredns-5c98db65d4-m8grg" (UID: "bda801db-d2a0-4d7c-a450-b908ca8c5359")
Aug 28 18:46:05 minikube kubelet[3359]: I0828 18:46:05.127484 3359 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-8h278" (UniqueName: "kubernetes.io/secret/274d8aa6-e391-4753-84bd-ff362d50f212-storage-provisioner-token-8h278") pod "storage-provisioner" (UID: "274d8aa6-e391-4753-84bd-ff362d50f212")
Aug 28 18:46:05 minikube kubelet[3359]: I0828 18:46:05.127568 3359 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/274d8aa6-e391-4753-84bd-ff362d50f212-tmp") pod "storage-provisioner" (UID: "274d8aa6-e391-4753-84bd-ff362d50f212")
Aug 28 18:46:05 minikube kubelet[3359]: W0828 18:46:05.521019 3359 pod_container_deletor.go:75] Container "57e1a04b24fb5a85dbcd5a1a353b5bccc345ecefa7ac9865e054662195aaee51" not found in pod's containers
Aug 28 20:03:03 minikube kubelet[3359]: E0828 20:03:03.958003 3359 file.go:108] Unable to process watch event: can't process config file "/etc/kubernetes/manifests/kube-apiserver.yaml": /etc/kubernetes/manifests/kube-apiserver.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file.
Aug 28 20:03:03 minikube kubelet[3359]: W0828 20:03:03.987734 3359 kubelet.go:1632] Deleting mirror pod "kube-controller-manager-minikube_kube-system(89c6ccf1-2ad1-40f3-ad0a-215eec3ed5d4)" because it is outdated
Aug 28 20:03:04 minikube kubelet[3359]: I0828 20:03:04.082630 3359 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/0259a68e77df079c104efc084ee6046c-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "0259a68e77df079c104efc084ee6046c")
Aug 28 20:03:04 minikube kubelet[3359]: I0828 20:03:04.082683 3359 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/0259a68e77df079c104efc084ee6046c-ca-certs") pod "kube-controller-manager-minikube" (UID: "0259a68e77df079c104efc084ee6046c")
Aug 28 20:03:04 minikube kubelet[3359]: I0828 20:03:04.082703 3359 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/0259a68e77df079c104efc084ee6046c-k8s-certs") pod "kube-controller-manager-minikube" (UID: "0259a68e77df079c104efc084ee6046c")
Aug 28 20:03:04 minikube kubelet[3359]: I0828 20:03:04.082723 3359 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/0259a68e77df079c104efc084ee6046c-kubeconfig") pod "kube-controller-manager-minikube" (UID: "0259a68e77df079c104efc084ee6046c")
Aug 28 20:03:04 minikube kubelet[3359]: I0828 20:03:04.082742 3359 reconciler.go:203] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/0259a68e77df079c104efc084ee6046c-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "0259a68e77df079c104efc084ee6046c")
Aug 28 20:03:04 minikube kubelet[3359]: W0828 20:03:04.949928 3359 pod_container_deletor.go:75] Container "c8a64fce6f1925981813dbb1dda0e48295686680d62a5002a384855c08e28d9e" not found in pod's containers
Aug 28 20:03:06 minikube kubelet[3359]: I0828 20:03:06.099806 3359 reconciler.go:177] operationExecutor.UnmountVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/374ab60c7462b3f9c28c5cc2355d6d08-usr-share-ca-certificates") pod "374ab60c7462b3f9c28c5cc2355d6d08" (UID: "374ab60c7462b3f9c28c5cc2355d6d08")
Aug 28 20:03:06 minikube kubelet[3359]: I0828 20:03:06.100343 3359 reconciler.go:177] operationExecutor.UnmountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/374ab60c7462b3f9c28c5cc2355d6d08-ca-certs") pod "374ab60c7462b3f9c28c5cc2355d6d08" (UID: "374ab60c7462b3f9c28c5cc2355d6d08")
Aug 28 20:03:06 minikube kubelet[3359]: I0828 20:03:06.100420 3359 reconciler.go:177] operationExecutor.UnmountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/374ab60c7462b3f9c28c5cc2355d6d08-k8s-certs") pod "374ab60c7462b3f9c28c5cc2355d6d08" (UID: "374ab60c7462b3f9c28c5cc2355d6d08")
Aug 28 20:03:06 minikube kubelet[3359]: I0828 20:03:06.100478 3359 reconciler.go:177] operationExecutor.UnmountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/374ab60c7462b3f9c28c5cc2355d6d08-kubeconfig") pod "374ab60c7462b3f9c28c5cc2355d6d08" (UID: "374ab60c7462b3f9c28c5cc2355d6d08")
Aug 28 20:03:06 minikube kubelet[3359]: I0828 20:03:06.100577 3359 operation_generator.go:860] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374ab60c7462b3f9c28c5cc2355d6d08-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "374ab60c7462b3f9c28c5cc2355d6d08" (UID: "374ab60c7462b3f9c28c5cc2355d6d08"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 28 20:03:06 minikube kubelet[3359]: I0828 20:03:06.100292 3359 operation_generator.go:860] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374ab60c7462b3f9c28c5cc2355d6d08-usr-share-ca-certificates" (OuterVolumeSpecName: "usr-share-ca-certificates") pod "374ab60c7462b3f9c28c5cc2355d6d08" (UID: "374ab60c7462b3f9c28c5cc2355d6d08"). InnerVolumeSpecName "usr-share-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 28 20:03:06 minikube kubelet[3359]: I0828 20:03:06.100702 3359 operation_generator.go:860] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374ab60c7462b3f9c28c5cc2355d6d08-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "374ab60c7462b3f9c28c5cc2355d6d08" (UID: "374ab60c7462b3f9c28c5cc2355d6d08"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 28 20:03:06 minikube kubelet[3359]: I0828 20:03:06.100763 3359 operation_generator.go:860] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/374ab60c7462b3f9c28c5cc2355d6d08-k8s-certs" (OuterVolumeSpecName: "k8s-certs") pod "374ab60c7462b3f9c28c5cc2355d6d08" (UID: "374ab60c7462b3f9c28c5cc2355d6d08"). InnerVolumeSpecName "k8s-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 28 20:03:06 minikube kubelet[3359]: I0828 20:03:06.200727 3359 reconciler.go:297] Volume detached for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/374ab60c7462b3f9c28c5cc2355d6d08-usr-share-ca-certificates") on node "minikube" DevicePath ""
Aug 28 20:03:06 minikube kubelet[3359]: I0828 20:03:06.200770 3359 reconciler.go:297] Volume detached for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/374ab60c7462b3f9c28c5cc2355d6d08-ca-certs") on node "minikube" DevicePath ""
Aug 28 20:03:06 minikube kubelet[3359]: I0828 20:03:06.200778 3359 reconciler.go:297] Volume detached for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/374ab60c7462b3f9c28c5cc2355d6d08-k8s-certs") on node "minikube" DevicePath ""
Aug 28 20:03:06 minikube kubelet[3359]: I0828 20:03:06.200785 3359 reconciler.go:297] Volume detached for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/374ab60c7462b3f9c28c5cc2355d6d08-kubeconfig") on node "minikube" DevicePath ""
Aug 28 20:03:06 minikube kubelet[3359]: W0828 20:03:06.712408 3359 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/374ab60c7462b3f9c28c5cc2355d6d08/volumes" does not exist
Aug 28 20:03:33 minikube kubelet[3359]: W0828 20:03:33.143406 3359 pod_container_deletor.go:75] Container "63f882cb39da362faa9a16462cdb8bf1432aa02e863e59c46d7363dec893234f" not found in pod's containers
Aug 29 15:46:23 minikube kubelet[3359]: E0829 15:46:23.472396 3359 file.go:108] Unable to process watch event: can't process config file "/etc/kubernetes/manifests/kube-scheduler.yaml": /etc/kubernetes/manifests/kube-scheduler.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file.
Aug 29 15:46:52 minikube kubelet[3359]: W0829 15:46:52.690291 3359 pod_container_deletor.go:75] Container "6b036c4687981342ac68ab703b11e5ba898f4cfc0c334ba06b612ca6c58358e2" not found in pod's containers

==> storage-provisioner <==

@k8s-ci-robot removed the lifecycle/stale label Aug 29, 2019
@tstromberg
Copy link
Contributor

@author: I believe this issue is now addressed by minikube v1.4 and Kubernetes v1.16, which adjust the way the apiserver is spawned. If you still see this issue with minikube v1.4 or higher, please reopen it by commenting with /reopen - and be sure to include the output of minikube logs
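
For example, one simple way to capture that output when commenting (any equivalent redirect works):

$ minikube logs > minikube-logs.txt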

Thank you for reporting this issue!

@gpedro34
Copy link

gpedro34 commented Oct 24, 2019

/reopen

I'm a newbie to Kubernetes and was just trying my first install on Ubuntu 18.04 when I ran into this...

I've tried:

  • running the command again
  • deleting the cluster and running the command again (roughly as sketched below)
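
Concretely, the retry sequence looked roughly like this (a sketch of what I ran, with the same driver flag as in the full output below):

$ minikube delete
$ sudo minikube start --vm-driver=none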

Always the same error. Any more data I can collect for you?

Is there a way of passing a parameter or an ENV var that overrides the timeouts?
How should I proceed so I can at least test Kubernetes before deploying to the cloud?
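
In case it helps, one way to get more detail while the restart hangs is minikube's standard logging flags (a sketch; -v=8 is just a fairly verbose level):

$ sudo minikube start --vm-driver=none --alsologtostderr -v=8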

I have only 6 GB of RAM free on this machine. Could this be the reason for the long wait for the API?
If so, what's the recommended amount of RAM to run the Kubernetes stack on top of Docker (running directly on Linux)?
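
For comparison, with a VM driver the resources can be set explicitly (a sketch; note that with --vm-driver=none minikube uses the host's resources directly, so these flags wouldn't apply here):

$ minikube start --memory=4096 --cpus=2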

Thank you in advance


EDIT: Typos


EDIT2:

@gpedro34: You can't reopen an issue/PR unless you authored it or you are a collaborator.

Sorry, I didn't notice the previous comment was directed to @author.


$ sudo minikube start --vm-driver=none
😄  minikube v1.4.0 on Ubuntu 18.04
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃  Using the running none "minikube" VM ...
⌛  Waiting for the host to be provisioned ...
🐳  Preparing Kubernetes v1.16.0 on Docker 19.03.2 ...
    ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
🔄  Relaunching Kubernetes using kubeadm ... 

💣  Error restarting cluster: waiting for apiserver: apiserver process never appeared

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose



$ sudo minikube logs


==> Docker <==
-- Logs begin at Thu 2019-10-10 15:05:35 WEST, end at Thu 2019-10-24 04:06:52 WEST. --
out 24 03:22:28 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:22:28.158172315+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
out 24 03:22:32 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:22:32.785454617+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
out 24 03:22:35 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:22:35.783548380+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
out 24 03:24:18 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:24:18.976375542+01:00" level=warning msg="got error while decoding json" error="unexpected EOF" retries=0
out 24 03:24:18 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:24:18.978442266+01:00" level=warning msg="got error while decoding json" error="unexpected EOF" retries=0
out 24 03:31:12 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:31:12.749974135+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:31:12 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:31:12.750009589+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:31:35 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:31:35.614085919+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"
out 24 03:33:40 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:33:40.614827647+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:33:40 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:33:40.614858965+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:34:21 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:34:21.505413966+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"
out 24 03:36:26 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:36:26.505991990+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:36:26 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:36:26.505992153+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:37:11 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:37:11.506484238+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"
out 24 03:39:16 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:39:16.507379979+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:39:16 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:39:16.507401138+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:40:01 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:40:01.508058978+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"
out 24 03:40:41 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:40:41.323684653+01:00" level=error msg="Handler for GET /v1.25/containers/04329f075a4e9b2283dfb432cf38979f3cdf8b9c0f0c55547b67af84302a283a/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
out 24 03:40:41 BLUEMONSTER dockerd[2402]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
out 24 03:42:06 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:42:06.508905538+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:42:06 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:42:06.508981207+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:42:51 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:42:51.509427138+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"
out 24 03:44:56 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:44:56.509912275+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:44:56 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:44:56.509926380+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:45:41 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:45:41.510261676+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"
out 24 03:46:32 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:46:32.948659139+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
out 24 03:46:32 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:46:32.951772037+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
out 24 03:46:32 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:46:32.951833718+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
out 24 03:46:32 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:46:32.951879509+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
out 24 03:46:32 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:46:32.952124946+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
out 24 03:47:46 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:47:46.510717683+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:47:46 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:47:46.510732378+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:48:31 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:48:31.511215176+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"
out 24 03:49:17 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:49:17.354095126+01:00" level=error msg="Handler for GET /containers/9f1fe68749413f256345c1aafa8d8bbe64c2aa0151292d933528b55ac8ab673a/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
out 24 03:49:17 BLUEMONSTER dockerd[2402]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
out 24 03:50:36 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:50:36.511948981+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:50:36 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:50:36.512055439+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:51:11 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:51:11.811315031+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"
out 24 03:53:16 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:53:16.811975875+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:53:16 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:53:16.811989076+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:54:02 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:54:01.812487614+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"
out 24 03:55:23 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:55:23.446970134+01:00" level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
out 24 03:56:48 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:56:06.813230181+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:56:48 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:56:06.813361611+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:56:52 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:56:51.813868086+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"
out 24 03:58:56 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:58:56.814775432+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:58:56 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:58:56.814804146+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 03:59:41 BLUEMONSTER dockerd[2402]: time="2019-10-24T03:59:41.815234248+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"
out 24 04:01:46 BLUEMONSTER dockerd[2402]: time="2019-10-24T04:01:46.815716684+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 04:01:46 BLUEMONSTER dockerd[2402]: time="2019-10-24T04:01:46.815730450+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 04:01:56 BLUEMONSTER dockerd[2402]: time="2019-10-24T04:01:56.041909050+01:00" level=error msg="Handler for GET /containers/83613be4f2e7dbc3a0b18875cf5bfba72a32b11d5fb5bc22ec5e91b5b6bd6a33/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
out 24 04:01:56 BLUEMONSTER dockerd[2402]: time="2019-10-24T04:01:56.042231153+01:00" level=error msg="Handler for GET /containers/83613be4f2e7dbc3a0b18875cf5bfba72a32b11d5fb5bc22ec5e91b5b6bd6a33/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
out 24 04:01:56 BLUEMONSTER dockerd[2402]: time="2019-10-24T04:01:56.042263934+01:00" level=error msg="Handler for GET /containers/83613be4f2e7dbc3a0b18875cf5bfba72a32b11d5fb5bc22ec5e91b5b6bd6a33/json returned error: write unix /var/run/docker.sock->@: write: broken pipe"
out 24 04:01:56 BLUEMONSTER dockerd[2402]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
out 24 04:01:56 BLUEMONSTER dockerd[2402]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
out 24 04:01:56 BLUEMONSTER dockerd[2402]: http: superfluous response.WriteHeader call from github.com/docker/docker/api/server/httputils.WriteJSON (httputils_write_json.go:11)
out 24 04:02:31 BLUEMONSTER dockerd[2402]: time="2019-10-24T04:02:31.816093460+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"
out 24 04:04:36 BLUEMONSTER dockerd[2402]: time="2019-10-24T04:04:36.816796634+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 04:04:36 BLUEMONSTER dockerd[2402]: time="2019-10-24T04:04:36.816909274+01:00" level=error msg="stream copy error: reading from a closed fifo"
out 24 04:05:21 BLUEMONSTER dockerd[2402]: time="2019-10-24T04:05:21.817382482+01:00" level=warning msg="Health check for container b5f1a18339b83d2901fa7ddb3065105df496887f179222e0cd2cd57c19c740ea error: context deadline exceeded: unknown"

==> container status <==
sudo: crictl: command not found
CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS                    PORTS                                                                 NAMES
e02ef779a9a5        k8s.gcr.io/pause:3.1              "/pause"                 6 minutes ago       Created                                                                                         k8s_POD_etcd-minikube_kube-system_6e4dcdb25e0b9f2e1e4a93933525ea32_0
e34d6113ed9a        k8s.gcr.io/pause:3.1              "/pause"                 6 minutes ago       Created                                                                                         k8s_POD_kube-scheduler-minikube_kube-system_c18ee741ac4ad7b2bfda7d88116f3047_0_6b157fbc
f25741120b31        k8s.gcr.io/pause:3.1              "/pause"                 8 minutes ago       Created                                                                                         k8s_POD_kube-scheduler-minikube_kube-system_c18ee741ac4ad7b2bfda7d88116f3047_0
67b46fbbb595        k8s.gcr.io/pause:3.1              "/pause"                 12 minutes ago      Created                                                                                         k8s_POD_etcd-minikube_kube-system_6e4dcdb25e0b9f2e1e4a93933525ea32_1
64ab5a342b0a        bd12a212f9dc                      "/opt/kube-addons.sh"    12 minutes ago      Created                                                                                         k8s_kube-addon-manager_kube-addon-manager-minikube_kube-system_c3e29047da86ce6690916750ab69c40b_3_c95737d6
71b30cb467ad        k8s.gcr.io/pause:3.1              "/pause"                 12 minutes ago      Created                                                                                         k8s_POD_kube-controller-manager-minikube_kube-system_e640cf9855d72a2348a56dd1f7180a84_1_d45bc4fe
677bb88c6809        k8s.gcr.io/pause:3.1              "/pause"                 12 minutes ago      Created                                                                                         k8s_POD_kube-apiserver-minikube_kube-system_6d97e54aa68d9ea62372e79cc2821bde_2_b4063728
19a477e6f006        roverr/rtsp-stream:1-management   "supervisord --nodae…"   17 hours ago        Exited (0) 17 hours ago                                                                         frosty_villani
48894c59cf35        sonarqube:lts                     "./bin/run.sh"           31 hours ago        Up 31 hours               0.0.0.0:9000-9001->9000-9001/tcp                                      sonarqube
ddb945810951        gpedro/jenkins:1.0.0              "/sbin/tini -- /usr/…"   31 hours ago        Up 31 hours               0.0.0.0:8080->8080/tcp, 0.0.0.0:50000->50000/tcp                      jenkins
55175b826816        google/cadvisor:v0.30.0           "/usr/bin/cadvisor -…"   31 hours ago        Up 31 hours               0.0.0.0:8081->8080/tcp                                                cadvisor
b5f1a18339b8        datadog/agent:latest              "/init"                  31 hours ago        Up 31 hours (unhealthy)   8125/udp, 0.0.0.0:8126->8126/tcp                                      datadog-agent
5df744e3c405        postgres:12-alpine                "docker-entrypoint.s…"   31 hours ago        Up 31 hours               0.0.0.0:5432->5432/tcp                                                sonarqube-db
d770a333b973        portainer/portainer               "/portainer"             31 hours ago        Up 31 hours               0.0.0.0:9900->9000/tcp                                                portainer
ecd0405457b9        jlesage/nginx-proxy-manager       "/init"                  31 hours ago        Up 31 hours               0.0.0.0:8181->8181/tcp, 0.0.0.0:443->4443/tcp, 0.0.0.0:80->8080/tcp   nginx-proxy-manager
bab4d0e8bba0        uroni/urbackup-server             "/usr/bin/start run"     3 months ago        Up 31 hours               0.0.0.0:35623->35623/udp, 0.0.0.0:55413-55415->55413-55415/tcp        urbackup-server-1

==> dmesg <==
[  +0,000002] RBP: 000000c000e84e30 R08: 0000000000000000 R09: 0000000000000000
[  +0,000001] R10: 0000000000000000 R11: 0000000000000212 R12: ffffffffffffffff
[  +0,000001] R13: 0000000000000004 R14: 0000000000000003 R15: 0000000000000049
[  +0,000006] INFO: task dockerd:2980 blocked for more than 120 seconds.
[  +0,000003]       Not tainted 5.0.0-31-generic #33~18.04.1-Ubuntu
[  +0,000002] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0,000004] Call Trace:
[  +0,000004]  __schedule+0x2c0/0x870
[  +0,000003]  schedule+0x2c/0x70
[  +0,000003]  rwsem_down_read_failed+0xe8/0x180
[  +0,000002]  ? destroy_inode+0x3e/0x60
[  +0,000004]  call_rwsem_down_read_failed+0x18/0x30
[  +0,000003]  ? call_rwsem_down_read_failed+0x18/0x30
[  +0,000003]  ? kswapd_cpu_online+0x90/0xd0
[  +0,000003]  down_read+0x20/0x40
[  +0,000005]  ovl_sync_fs+0x37/0x60 [overlay]
[  +0,000003]  __sync_filesystem+0x33/0x60
[  +0,000002]  sync_filesystem+0x3c/0x50
[  +0,000003]  generic_shutdown_super+0x27/0x120
[  +0,000002]  kill_anon_super+0x12/0x30
[  +0,000002]  deactivate_locked_super+0x48/0x80
[  +0,000003]  deactivate_super+0x40/0x60
[  +0,000003]  cleanup_mnt+0x3f/0x90
[  +0,000003]  __cleanup_mnt+0x12/0x20
[  +0,000003]  task_work_run+0x9d/0xc0
[  +0,000004]  exit_to_usermode_loop+0xf2/0x100
[  +0,000003]  do_syscall_64+0x107/0x120
[  +0,000004]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  +0,000002] RIP: 0033:0x557423d1e0f0
[  +0,000003] Code: Bad RIP value.
[  +0,000002] RSP: 002b:000000c002112eb0 EFLAGS: 00000202 ORIG_RAX: 00000000000000a6
[  +0,000002] RAX: 0000000000000000 RBX: 000000c00005c000 RCX: 0000557423d1e0f0
[  +0,000001] RDX: 0000000000000000 RSI: 0000000000000002 RDI: 000000c002217d50
[  +0,000001] RBP: 000000c002112f08 R08: 0000000000000000 R09: 0000000000000000
[  +0,000002] R10: 0000000000000000 R11: 0000000000000202 R12: ffffffffffffffff
[  +0,000001] R13: 0000000000000044 R14: 0000000000000043 R15: 0000000000000049
[  +0,000549] INFO: task exe:27573 blocked for more than 120 seconds.
[  +0,000002]       Not tainted 5.0.0-31-generic #33~18.04.1-Ubuntu
[  +0,000002] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[  +0,000004] Call Trace:
[  +0,000005]  __schedule+0x2c0/0x870
[  +0,000003]  ? putname+0x4c/0x60
[  +0,000003]  schedule+0x2c/0x70
[  +0,000003]  rwsem_down_write_failed+0x157/0x350
[  +0,000005]  call_rwsem_down_write_failed+0x17/0x30
[  +0,000003]  ? call_rwsem_down_write_failed+0x17/0x30
[  +0,000003]  down_write+0x2d/0x40
[  +0,000004]  do_mount+0x50a/0xd70
[  +0,000003]  ksys_mount+0x98/0xe0
[  +0,000002]  __x64_sys_mount+0x25/0x30
[  +0,000004]  do_syscall_64+0x5a/0x120
[  +0,000004]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[  +0,000001] RIP: 0033:0x7f3b93bec3ca
[  +0,000004] Code: Bad RIP value.
[  +0,000002] RSP: 002b:00007ffd29fb6888 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
[  +0,000002] RAX: ffffffffffffffda RBX: 00007ffd29fb6890 RCX: 00007f3b93bec3ca
[  +0,000001] RDX: 00005653f9a437d9 RSI: 00007ffd29fb6890 RDI: 00005653f9a437d9
[  +0,000001] RBP: 00007ffd29fb78e0 R08: 00005653f9a437d9 R09: 00007ffd29fe6080
[  +0,000002] R10: 0000000000001021 R11: 0000000000000246 R12: 00005653fa9e226a
[  +0,000001] R13: 0000000000000010 R14: 00005653fa9e226a R15: 00005653fa9e2280

==> kernel <==
 04:06:52 up 1 day,  7:30,  1 user,  load average: 17,08, 16,66, 15,37
Linux BLUEMONSTER 5.0.0-31-generic #33~18.04.1-Ubuntu SMP Tue Oct 1 10:20:39 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 18.04.3 LTS"

==> kube-addon-manager [64ab5a342b0a] <==
E1024 04:06:52.305275   32139 logs.go:135] failed: running command: docker logs --tail 60 64ab5a342b0a
.: exit status 1

==> kubelet <==
-- Logs begin at Thu 2019-10-10 15:05:35 WEST, end at Thu 2019-10-24 04:06:52 WEST. --
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.185031   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "e02ef779a9a58c73a3fad330957e0561d60a0f37d39575dae7a884e1a8c614eb" for pod "etcd-minikube_kube-system(6e4dcdb25e0b9f2e1e4a93933525ea32)" error: rpc error: code = Unknown desc = Error: No such container: e02ef779a9a58c73a3fad330957e0561d60a0f37d39575dae7a884e1a8c614eb
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.186918   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "e34d6113ed9a3f6f3dc604e812be7d0d502deff12d820e0cb07efd41868e84c1" for pod "kube-scheduler-minikube_kube-system(c18ee741ac4ad7b2bfda7d88116f3047)" error: rpc error: code = Unknown desc = Error: No such container: e34d6113ed9a3f6f3dc604e812be7d0d502deff12d820e0cb07efd41868e84c1
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.188673   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "71b30cb467adbb104558d78077dc768a1b9256ba555351fc91abf82df5367ffe" for pod "kube-controller-manager-minikube_kube-system(e640cf9855d72a2348a56dd1f7180a84)" error: rpc error: code = Unknown desc = Error: No such container: 71b30cb467adbb104558d78077dc768a1b9256ba555351fc91abf82df5367ffe
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.232634   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.332046   22956 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.332918   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.433079   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.532472   22956 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.533204   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.633403   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.733191   22956 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.733552   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.833755   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.933000   22956 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.933921   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:49 BLUEMONSTER kubelet[22956]: E1024 04:06:49.938930   22956 pod_workers.go:191] Error syncing pod c3e29047da86ce6690916750ab69c40b ("kube-addon-manager-minikube_kube-system(c3e29047da86ce6690916750ab69c40b)"), skipping: rpc error: code = Unknown desc = Error: No such container: 64ab5a342b0a5d05b9156ab1a59c910253ddf692c4065d3d8b16c1728ede355e
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.034081   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.132712   22956 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.134235   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.197792   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "e34d6113ed9a3f6f3dc604e812be7d0d502deff12d820e0cb07efd41868e84c1" for pod "kube-scheduler-minikube_kube-system(c18ee741ac4ad7b2bfda7d88116f3047)" error: rpc error: code = Unknown desc = Error: No such container: e34d6113ed9a3f6f3dc604e812be7d0d502deff12d820e0cb07efd41868e84c1
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.199689   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "71b30cb467adbb104558d78077dc768a1b9256ba555351fc91abf82df5367ffe" for pod "kube-controller-manager-minikube_kube-system(e640cf9855d72a2348a56dd1f7180a84)" error: rpc error: code = Unknown desc = Error: No such container: 71b30cb467adbb104558d78077dc768a1b9256ba555351fc91abf82df5367ffe
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.204119   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "e02ef779a9a58c73a3fad330957e0561d60a0f37d39575dae7a884e1a8c614eb" for pod "etcd-minikube_kube-system(6e4dcdb25e0b9f2e1e4a93933525ea32)" error: rpc error: code = Unknown desc = Error: No such container: e02ef779a9a58c73a3fad330957e0561d60a0f37d39575dae7a884e1a8c614eb
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.206109   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "e02ef779a9a58c73a3fad330957e0561d60a0f37d39575dae7a884e1a8c614eb" for pod "etcd-minikube_kube-system(6e4dcdb25e0b9f2e1e4a93933525ea32)" error: rpc error: code = Unknown desc = Error: No such container: e02ef779a9a58c73a3fad330957e0561d60a0f37d39575dae7a884e1a8c614eb
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.208043   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "e34d6113ed9a3f6f3dc604e812be7d0d502deff12d820e0cb07efd41868e84c1" for pod "kube-scheduler-minikube_kube-system(c18ee741ac4ad7b2bfda7d88116f3047)" error: rpc error: code = Unknown desc = Error: No such container: e34d6113ed9a3f6f3dc604e812be7d0d502deff12d820e0cb07efd41868e84c1
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.209814   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "71b30cb467adbb104558d78077dc768a1b9256ba555351fc91abf82df5367ffe" for pod "kube-controller-manager-minikube_kube-system(e640cf9855d72a2348a56dd1f7180a84)" error: rpc error: code = Unknown desc = Error: No such container: 71b30cb467adbb104558d78077dc768a1b9256ba555351fc91abf82df5367ffe
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.234462   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.333056   22956 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.334636   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.434877   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.533600   22956 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.535053   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:50 BLUEMONSTER kubelet[22956]: E1024 04:06:50.635345   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:50.734086   22956 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:50.735491   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:50.835738   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:50.934029   22956 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:50.935951   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.036217   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.133815   22956 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:459: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.136432   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.218532   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "71b30cb467adbb104558d78077dc768a1b9256ba555351fc91abf82df5367ffe" for pod "kube-controller-manager-minikube_kube-system(e640cf9855d72a2348a56dd1f7180a84)" error: rpc error: code = Unknown desc = Error: No such container: 71b30cb467adbb104558d78077dc768a1b9256ba555351fc91abf82df5367ffe
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.222156   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "e02ef779a9a58c73a3fad330957e0561d60a0f37d39575dae7a884e1a8c614eb" for pod "etcd-minikube_kube-system(6e4dcdb25e0b9f2e1e4a93933525ea32)" error: rpc error: code = Unknown desc = Error: No such container: e02ef779a9a58c73a3fad330957e0561d60a0f37d39575dae7a884e1a8c614eb
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.223655   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "e34d6113ed9a3f6f3dc604e812be7d0d502deff12d820e0cb07efd41868e84c1" for pod "kube-scheduler-minikube_kube-system(c18ee741ac4ad7b2bfda7d88116f3047)" error: rpc error: code = Unknown desc = Error: No such container: e34d6113ed9a3f6f3dc604e812be7d0d502deff12d820e0cb07efd41868e84c1
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.225261   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "e34d6113ed9a3f6f3dc604e812be7d0d502deff12d820e0cb07efd41868e84c1" for pod "kube-scheduler-minikube_kube-system(c18ee741ac4ad7b2bfda7d88116f3047)" error: rpc error: code = Unknown desc = Error: No such container: e34d6113ed9a3f6f3dc604e812be7d0d502deff12d820e0cb07efd41868e84c1
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.226252   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "71b30cb467adbb104558d78077dc768a1b9256ba555351fc91abf82df5367ffe" for pod "kube-controller-manager-minikube_kube-system(e640cf9855d72a2348a56dd1f7180a84)" error: rpc error: code = Unknown desc = Error: No such container: 71b30cb467adbb104558d78077dc768a1b9256ba555351fc91abf82df5367ffe
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.230292   22956 kuberuntime_manager.go:920] PodSandboxStatus of sandbox "e02ef779a9a58c73a3fad330957e0561d60a0f37d39575dae7a884e1a8c614eb" for pod "etcd-minikube_kube-system(6e4dcdb25e0b9f2e1e4a93933525ea32)" error: rpc error: code = Unknown desc = Error: No such container: e02ef779a9a58c73a3fad330957e0561d60a0f37d39575dae7a884e1a8c614eb
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.236577   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.333964   22956 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.336770   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.437083   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.534631   22956 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.537279   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.637516   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.735020   22956 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/kubelet.go:450: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.737683   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.837878   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.934631   22956 reflector.go:123] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.938025   22956 kubelet.go:2267] node "minikube" not found
out 24 04:06:51 BLUEMONSTER kubelet[22956]: E1024 04:06:51.938803   22956 pod_workers.go:191] Error syncing pod 6e4dcdb25e0b9f2e1e4a93933525ea32 ("etcd-minikube_kube-system(6e4dcdb25e0b9f2e1e4a93933525ea32)"), skipping: rpc error: code = Unknown desc = Error: No such container: e02ef779a9a58c73a3fad330957e0561d60a0f37d39575dae7a884e1a8c614eb
out 24 04:06:52 BLUEMONSTER kubelet[22956]: E1024 04:06:52.038178   22956 kubelet.go:2267] node "minikube" not found

💣  Error getting machine logs: unable to fetch logs for: kube-addon-manager [64ab5a342b0a]

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose
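
Every kubelet error in the tail above reduces to the same symptom: nothing is answering on 127.0.0.1:8443, so the apiserver never came up (the "No such container" sandbox errors point the same way). A minimal diagnostic sketch, assuming the Docker runtime minikube is using here; <apiserver-container-id> is a placeholder, and the --problems flag only exists in newer minikube releases:

$ docker ps -a --filter name=kube-apiserver   # was an apiserver container created at all, and has it exited?
$ docker logs <apiserver-container-id>        # if one exists, its own log usually names the real failure
$ curl -k https://localhost:8443/healthz      # the endpoint the kubelet keeps failing to reach
$ minikube logs --problems                    # newer releases can filter the log dump down to detected problems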

@k8s-ci-robot
Contributor

@gpedro34: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

I'm a newbie to Kubernetes and was just trying my first install on Ubuntu 18.04 when I stumbled into this...
Is there any more data I can collect for you?

Is there a way to pass some parameter or ENV var that overrides the timeouts?
How should I proceed so I can at least test Kubernetes before deploying to the cloud?

I have only 6 GB of RAM free on this machine. Could that be why the API takes so long to come up?
If so, what's the recommended amount of RAM to run the Kubernetes stack on top of Docker (running directly on Linux)?

Thank you in advance

$ sudo minikube start --vm-driver=none
😄  minikube v1.4.0 on Ubuntu 18.04
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🏃  Using the running none "minikube" VM ...
⌛  Waiting for the host to be provisioned ...
🐳  Preparing Kubernetes v1.16.0 on Docker 19.03.2 ...
   ▪ kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
🔄  Relaunching Kubernetes using kubeadm ... 

💣  Error restarting cluster: waiting for apiserver: apiserver process never appeared

😿  Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉  https://github.com/kubernetes/minikube/issues/new/choose




Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
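
On the quoted questions about timeouts and RAM: kubeadm's documented minimum is about 2 GB of RAM and 2 CPUs, so 6 GB free is normally enough, but with --vm-driver=none there is no VM and the control plane competes directly with whatever else the host is running. A hedged sketch of the knobs involved; --wait-timeout is only present in newer minikube releases, and with the none driver the --memory/--cpus sizing flags do not carve resources out of the host:

$ minikube delete                         # clear the wedged cluster state before retrying
$ sudo minikube start --vm-driver=none    # none driver: free host RAM and load matter more than flags
$ minikube start --memory=4096 --cpus=2   # with a VM driver you can size the cluster explicitly
$ minikube start --wait-timeout=10m       # newer releases accept a larger wait budget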

@anhpham2511

I am experiencing the same problem with minikube 1.6.2.


@magnologan

Same issue here with minikube 1.6.2 on CentOS 7.7.1908

W0121 15:06:55.596066   81837 exit.go:101] Error starting cluster: apiserver healthz: apiserver healthz never reported healthy

💣  Error starting cluster: apiserver healthz: apiserver healthz never reported healthy

😿  minikube is exiting due to an error. If the above message is not useful, open an issue:
👉  https://github.com/kubernetes/minikube/issues/new/choose
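
"apiserver healthz never reported healthy" means minikube polled the apiserver's /healthz endpoint until its deadline without ever getting a healthy response. A quick manual probe of the same endpoint can tell you whether the process is slow or simply absent (a sketch, assuming the default port 8443):

$ curl -k https://$(minikube ip):8443/healthz   # VM drivers; with --vm-driver=none try localhost instead
$ kubectl get --raw /healthz                    # works once kubectl is pointed at the cluster
$ docker ps --filter name=kube-apiserver        # if nothing answers, check whether the container exists at all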
