Error restarting cluster: waiting for apiserver : timed out waiting for the condition #4366
Comments
Looking at the events and the timeline, my guess is that we're not waiting long enough for the apiserver to become healthy on this host. It appears that the apiserver became healthy by 07:20:02 (about a minute and a half before the "logs" command was run), roughly 5 minutes after the VM came online. I notice that minikube only waits 60 seconds for the apiserver to come online. I'm guessing we missed that deadline by about 15 seconds. Is this problem repeatable?
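If it does recur, one quick way to confirm that the apiserver is merely slow (rather than never coming up) is to poll its health endpoint after the start command fails. This is only a diagnostic sketch; it assumes the default localhost:8443 apiserver address seen in the logs above:

$ minikube ssh
# inside the VM: ask the apiserver for its health status (-k skips cert verification)
$ curl -k https://localhost:8443/healthz
# or, from the host once kubectl can reach the cluster:
$ kubectl get --raw /healthz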
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
I am seeing this issue now, on a fresh install:
$ minikube start
💣 Error restarting cluster: waiting for apiserver: timed out waiting for the condition
😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
$ minikube logs > minikube.logs
==> container status <==
==> coredns <==
==> dmesg <==
==> kernel <==
==> kube-addon-manager <==
==> kube-apiserver <==
==> kube-proxy <==
==> kube-scheduler <==
==> kubelet <==
==> storage-provisioner <==
@author: I believe this issue is now addressed by minikube v1.4 & Kubernetes v1.16, as it adjusts the way that the apiserver is spawned. If you still see this issue with minikube v1.4 or higher, please reopen this issue by commenting with /reopen. Thank you for reporting this issue!
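For anyone still on an older release, the quickest way to pick up that change is usually to recreate the cluster on a newer minikube. A minimal sketch (the Kubernetes version shown is just an example of a v1.16 release, not something this thread prescribes):

$ minikube delete                              # discard the old VM and cluster state
$ minikube start --kubernetes-version=v1.16.0  # recreate with a newer Kubernetes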
I'm a newbie in Kubernetes and was just trying my first install on Ubuntu 18.04 when I stumbled into this... I've tried:
Always the same error. Any more data I can collect for you? Is there a way of passing some parameters or an ENV var that overrides the timeouts? I have only 6 GB of RAM free on this machine. Could that be the reason for the long wait for the API? Thank you in advance.
EDIT: Typos
EDIT2:
Sorry. Didn't notice the previous comment was directed to @author.
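On the question of overriding the timeouts above: there may not be a simple knob for the apiserver wait in these releases, but giving the VM more resources sometimes avoids hitting the deadline at all. A sketch only, assuming the VirtualBox driver used in this report; the 4096 MB / 2 CPU values are arbitrary:

$ minikube delete
$ minikube start --vm-driver=virtualbox --memory=4096 --cpus=2   # larger VM; values are illustrative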
@gpedro34: You can't reopen an issue/PR unless you authored it or you are a collaborator. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Same issue here with minikube 1.6.2 on CentOS 7.7.1908
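In case it helps triage the 1.6.2 report, the details maintainers usually ask for first can be collected with the standard commands below (nothing here is specific to this environment):

$ minikube status                  # driver, host, kubelet and apiserver state
$ minikube logs > minikube.logs    # full component logs, as attached above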
The exact command to reproduce the issue:
minikube start
The full output of the command that failed:
minikube v1.1.0 on linux (amd64)
💡 Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
🔄 Restarting existing virtualbox VM for "minikube" ...
⌛ Waiting for SSH access ...
🐳 Configuring environment for Kubernetes v1.14.2 on Docker 18.09.6
🔄 Relaunching Kubernetes v1.14.2 using kubeadm ...
💣 Error restarting cluster: waiting for apiserver: timed out waiting for the condition
The output of the minikube logs command:
==> coredns <==
.:53
2019-05-29T07:20:25.355Z [INFO] CoreDNS-1.3.1
2019-05-29T07:20:25.355Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-05-29T07:20:25.355Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
==> dmesg <==
[ +5.006574] hpet1: lost 319 rtc interrupts
[ +4.997924] hpet1: lost 318 rtc interrupts
[ +5.002167] hpet1: lost 318 rtc interrupts
[ +5.000487] hpet1: lost 318 rtc interrupts
[May29 07:18] hpet1: lost 318 rtc interrupts
[ +5.010987] hpet1: lost 319 rtc interrupts
[ +5.002877] hpet1: lost 318 rtc interrupts
[ +4.997147] hpet1: lost 318 rtc interrupts
[ +5.019053] hpet1: lost 319 rtc interrupts
[ +4.998686] hpet1: lost 319 rtc interrupts
[ +4.996221] hpet1: lost 318 rtc interrupts
[ +5.001163] hpet1: lost 318 rtc interrupts
[ +5.010704] hpet1: lost 319 rtc interrupts
[ +5.004883] hpet1: lost 318 rtc interrupts
[ +5.008001] hpet1: lost 319 rtc interrupts
[ +5.001818] hpet1: lost 318 rtc interrupts
[May29 07:19] hpet1: lost 318 rtc interrupts
[ +4.995603] hpet1: lost 318 rtc interrupts
[ +5.013366] hpet1: lost 319 rtc interrupts
[ +4.997688] hpet1: lost 318 rtc interrupts
[ +10.013516] hpet_rtc_timer_reinit: 30 callbacks suppressed
[ +0.000023] hpet1: lost 318 rtc interrupts
[ +5.012843] hpet1: lost 320 rtc interrupts
[ +5.014019] hpet1: lost 319 rtc interrupts
[ +5.004419] hpet1: lost 318 rtc interrupts
[ +4.999819] hpet1: lost 319 rtc interrupts
[ +5.003422] hpet1: lost 319 rtc interrupts
[ +5.007775] hpet1: lost 318 rtc interrupts
[May29 07:20] hpet1: lost 318 rtc interrupts
[ +5.009816] hpet1: lost 319 rtc interrupts
[ +5.017589] hpet1: lost 319 rtc interrupts
[ +3.758954] hrtimer: interrupt took 1320200 ns
[ +1.256886] hpet1: lost 320 rtc interrupts
[ +5.012180] hpet1: lost 319 rtc interrupts
[ +5.004530] hpet1: lost 318 rtc interrupts
[ +5.014112] hpet1: lost 319 rtc interrupts
[ +5.009943] hpet1: lost 319 rtc interrupts
[ +5.003496] hpet1: lost 318 rtc interrupts
[ +5.002602] hpet1: lost 318 rtc interrupts
[ +4.996349] hpet1: lost 318 rtc interrupts
[ +5.009938] hpet1: lost 319 rtc interrupts
[May29 07:21] hpet1: lost 318 rtc interrupts
[ +5.006245] hpet1: lost 318 rtc interrupts
[ +5.003826] hpet1: lost 318 rtc interrupts
[ +5.009590] hpet1: lost 319 rtc interrupts
[ +5.005521] hpet1: lost 318 rtc interrupts
[ +5.013026] hpet1: lost 319 rtc interrupts
[ +4.996601] hpet1: lost 318 rtc interrupts
[ +5.005215] hpet1: lost 318 rtc interrupts
[ +5.012784] hpet1: lost 319 rtc interrupts
==> kernel <==
07:21:45 up 7 min, 0 users, load average: 5.53, 5.42, 2.58
Linux minikube 4.15.0 #1 SMP Tue May 21 00:14:40 UTC 2019 x86_64 GNU/Linux
==> kube-addon-manager <==
INFO: == Generated kubectl prune whitelist flags: --prune-whitelist core/v1/ConfigMap --prune-whitelist core/v1/Endpoints --prune-whitelist core/v1/Namespace --prune-whitelist core/v1/PersistentVolumeClaim --prune-whitelist core/v1/PersistentVolume --prune-whitelist core/v1/Pod --prune-whitelist core/v1/ReplicationController --prune-whitelist core/v1/Secret --prune-whitelist core/v1/Service --prune-whitelist batch/v1/Job --prune-whitelist batch/v1beta1/CronJob --prune-whitelist apps/v1/DaemonSet --prune-whitelist apps/v1/Deployment --prune-whitelist apps/v1/ReplicaSet --prune-whitelist apps/v1/StatefulSet --prune-whitelist extensions/v1beta1/Ingress ==
find: '/etc/kubernetes/admission-controls': No such file or directory
INFO: == Kubernetes addon manager started at 2019-05-29T07:17:34+00:00 with ADDON_CHECK_INTERVAL_SEC=60 ==
E0529 07:19:29.676443 35 request.go:853] Unexpected error when reading response body: http2.GoAwayError{LastStreamID:0x1, ErrCode:0x0, DebugData:""}
error: Unexpected error http2.GoAwayError{LastStreamID:0x1, ErrCode:0x0, DebugData:""} when reading response body. Please retry.
namespace/kube-system unchanged
INFO: == Successfully started /opt/namespace.yaml in namespace at 2019-05-29T07:19:12+00:00
INFO: == Default service account in the kube-system namespace has token default-token-csxpm ==
INFO: == Entering periodical apply loop at 2019-05-29T07:19:27+00:00 ==
INFO: Leader is
error: no objects passed to apply
error when retrieving current configuration of:
Resource: "/v1, Resource=serviceaccounts", GroupVersionKind: "/v1, Kind=ServiceAccount"
Name: "storage-provisioner", Namespace: "kube-system"
Object: &{map["kind":"ServiceAccount" "metadata":map["labels":map["addonmanager.kubernetes.io/mode":"Reconcile"] "name":"storage-provisioner" "namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""]] "apiVersion":"v1"]}
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/serviceaccounts/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused
The connection to the server localhost:8443 was refused - did you specify the right host or port?
INFO: == Kubernetes addon ensure completed at 2019-05-29T07:19:30+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
INFO: == Kubernetes addon reconcile completed at 2019-05-29T07:19:31+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-29T07:20:32+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-29T07:20:38+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-05-29T07:21:31+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-05-29T07:21:36+00:00 ==
error when retrieving current configuration of:
Resource: "/v1, Resource=pods", GroupVersionKind: "/v1, Kind=Pod"
Name: "storage-provisioner", Namespace: "kube-system"
Object: &{map["kind":"Pod" "metadata":map["namespace":"kube-system" "annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "labels":map["addonmanager.kubernetes.io/mode":"Reconcile" "integration-test":"storage-provisioner"] "name":"storage-provisioner"] "spec":map["containers":[map["command":["/storage-provisioner"] "image":"gcr.io/k8s-minikube/storage-provisioner:v1.8.1" "imagePullPolicy":"IfNotPresent" "name":"storage-provisioner" "volumeMounts":[map["name":"tmp" "mountPath":"/tmp"]]]] "hostNetwork":%!!(MISSING)q(bool=true) "serviceAccountName":"storage-provisioner" "volumes":[map["hostPath":map["path":"/tmp" "type":"Directory"] "name":"tmp"]]] "apiVersion":"v1"]}
from server for: "/etc/kubernetes/addons/storage-provisioner.yaml": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/storage-provisioner: dial tcp 127.0.0.1:8443: connect: connection refused
error: no objects passed to apply
error: no objects passed to apply
==> kube-apiserver <==
I0529 07:21:17.521460 1 trace.go:81] Trace[1018313360]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-05-29 07:21:16.948340095 +0000 UTC m=+91.076732293) (total time: 573.089515ms):
Trace[1018313360]: [573.048623ms] [567.104224ms] Transaction committed
I0529 07:21:17.521943 1 trace.go:81] Trace[469531806]: "Update /api/v1/namespaces/kube-system/endpoints/kube-dns" (started: 2019-05-29 07:21:16.945258327 +0000 UTC m=+91.073650542) (total time: 576.659948ms):
Trace[469531806]: [576.307806ms] [573.33055ms] Object stored in database
I0529 07:21:17.524217 1 trace.go:81] Trace[1369181329]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-05-29 07:21:16.942332703 +0000 UTC m=+91.070725000) (total time: 581.808261ms):
Trace[1369181329]: [581.754646ms] [580.611722ms] Transaction committed
I0529 07:21:17.524432 1 trace.go:81] Trace[1296896947]: "Update /api/v1/namespaces/default/endpoints/server-cluster-ip-service" (started: 2019-05-29 07:21:16.940848745 +0000 UTC m=+91.069240943) (total time: 583.561002ms):
Trace[1296896947]: [583.407573ms] [582.018239ms] Object stored in database
I0529 07:21:17.923617 1 trace.go:81] Trace[434096516]: "Get /apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy" (started: 2019-05-29 07:21:17.250785179 +0000 UTC m=+91.379177335) (total time: 672.775438ms):
Trace[434096516]: [671.971088ms] [671.955899ms] About to write a response
I0529 07:21:17.930212 1 trace.go:81] Trace[1402708644]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:17.177168729 +0000 UTC m=+91.305560921) (total time: 752.987667ms):
Trace[1402708644]: [752.681564ms] [752.587332ms] About to write a response
I0529 07:21:17.933065 1 trace.go:81] Trace[1543437771]: "List /api/v1/pods" (started: 2019-05-29 07:21:16.95856208 +0000 UTC m=+91.086954273) (total time: 974.44645ms):
Trace[1543437771]: [971.619589ms] [970.736801ms] Listing from storage done
I0529 07:21:18.599962 1 trace.go:81] Trace[1604127098]: "Get /api/v1/namespaces/default" (started: 2019-05-29 07:21:18.024455659 +0000 UTC m=+92.152847929) (total time: 575.449811ms):
Trace[1604127098]: [575.179168ms] [575.148148ms] About to write a response
I0529 07:21:18.601343 1 trace.go:81] Trace[654741462]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-05-29 07:21:17.937627268 +0000 UTC m=+92.066019530) (total time: 663.667818ms):
Trace[654741462]: [663.640835ms] [663.167039ms] Transaction committed
I0529 07:21:18.601514 1 trace.go:81] Trace[1959586799]: "Update /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:17.937281442 +0000 UTC m=+92.065673633) (total time: 664.212893ms):
Trace[1959586799]: [664.122909ms] [663.867311ms] Object stored in database
I0529 07:21:19.246695 1 trace.go:81] Trace[1752908758]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2019-05-29 07:21:18.60296445 +0000 UTC m=+92.731356586) (total time: 643.660249ms):
Trace[1752908758]: [643.615828ms] [641.350697ms] Transaction committed
I0529 07:21:26.357530 1 trace.go:81] Trace[810658518]: "GuaranteedUpdate etcd3: *coordination.Lease" (started: 2019-05-29 07:21:22.672892749 +0000 UTC m=+96.801284870) (total time: 3.684612273s):
Trace[810658518]: [3.684582428s] [3.684435028s] Transaction committed
I0529 07:21:26.357725 1 trace.go:81] Trace[1471849622]: "Update /apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/minikube" (started: 2019-05-29 07:21:22.672417773 +0000 UTC m=+96.800809884) (total time: 3.685293181s):
Trace[1471849622]: [3.685146062s] [3.68470408s] Object stored in database
I0529 07:21:27.409987 1 trace.go:81] Trace[1656638992]: "Get /api/v1/namespaces/kube-system/endpoints/kube-scheduler" (started: 2019-05-29 07:21:23.986002259 +0000 UTC m=+98.114394451) (total time: 3.423914083s):
Trace[1656638992]: [3.422771799s] [3.422669142s] About to write a response
I0529 07:21:27.412066 1 trace.go:81] Trace[921301053]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:24.676973289 +0000 UTC m=+98.805365481) (total time: 2.735026186s):
Trace[921301053]: [2.734722751s] [2.734619893s] About to write a response
I0529 07:21:27.420665 1 trace.go:81] Trace[381747592]: "List /apis/batch/v1/jobs" (started: 2019-05-29 07:21:24.607230891 +0000 UTC m=+98.735623078) (total time: 2.81338232s):
Trace[381747592]: [2.813220313s] [2.812820234s] Listing from storage done
I0529 07:21:28.599401 1 trace.go:81] Trace[659710806]: "Get /api/v1/namespaces/default" (started: 2019-05-29 07:21:28.023884191 +0000 UTC m=+102.152276339) (total time: 575.438428ms):
Trace[659710806]: [575.070409ms] [575.05643ms] About to write a response
I0529 07:21:28.599427 1 trace.go:81] Trace[1158810893]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:27.815573329 +0000 UTC m=+101.943965462) (total time: 783.800728ms):
Trace[1158810893]: [783.362833ms] [783.351531ms] About to write a response
I0529 07:21:29.679050 1 trace.go:81] Trace[1642499952]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2019-05-29 07:21:28.625131404 +0000 UTC m=+102.753523739) (total time: 1.053559647s):
Trace[1642499952]: [1.05350998s] [1.043039473s] Transaction committed
I0529 07:21:34.473952 1 trace.go:81] Trace[231687430]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:33.810510879 +0000 UTC m=+107.938903113) (total time: 663.389626ms):
Trace[231687430]: [663.151563ms] [663.02654ms] About to write a response
I0529 07:21:35.401876 1 trace.go:81] Trace[1097604872]: "GuaranteedUpdate etcd3: *core.Endpoints" (started: 2019-05-29 07:21:34.477924157 +0000 UTC m=+108.606316314) (total time: 923.881882ms):
Trace[1097604872]: [923.816277ms] [922.959909ms] Transaction committed
I0529 07:21:35.402460 1 trace.go:81] Trace[725793672]: "Update /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:34.477677453 +0000 UTC m=+108.606069645) (total time: 924.701688ms):
Trace[725793672]: [924.287757ms] [924.108225ms] Object stored in database
I0529 07:21:35.411942 1 trace.go:81] Trace[1579696582]: "Get /api/v1/namespaces/kube-system/pods/storage-provisioner" (started: 2019-05-29 07:21:34.481214668 +0000 UTC m=+108.609606804) (total time: 930.217245ms):
Trace[1579696582]: [929.373054ms] [929.359897ms] About to write a response
I0529 07:21:38.571879 1 trace.go:81] Trace[1384862299]: "GuaranteedUpdate etcd3: *v1.Endpoints" (started: 2019-05-29 07:21:38.029512373 +0000 UTC m=+112.157904526) (total time: 542.300581ms):
Trace[1384862299]: [542.252635ms] [538.612023ms] Transaction committed
I0529 07:21:44.114161 1 trace.go:81] Trace[913308298]: "Get /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" (started: 2019-05-29 07:21:43.45967806 +0000 UTC m=+117.588070255) (total time: 654.129295ms):
Trace[913308298]: [653.715989ms] [653.609974ms] About to write a response
==> kube-proxy <==
I0529 07:19:18.233364 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0529 07:19:18.411303 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0529 07:19:18.422011 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0529 07:19:18.422099 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0529 07:19:18.422932 1 config.go:102] Starting endpoints config controller
I0529 07:19:18.422992 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0529 07:19:18.423013 1 config.go:202] Starting service config controller
I0529 07:19:18.423024 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0529 07:19:18.552507 1 controller_utils.go:1034] Caches are synced for service config controller
I0529 07:19:18.552998 1 controller_utils.go:1034] Caches are synced for endpoints config controller
E0529 07:19:28.791816 1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=21996&timeout=8m30s&timeoutSeconds=510&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:28.791866 1 reflector.go:270] k8s.io/client-go/informers/factory.go:133: Failed to watch *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&resourceVersion=21847&timeout=8m17s&timeoutSeconds=497&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:29.792889 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:29.809107 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:30.794969 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:30.811732 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:31.797705 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:31.817534 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:32.800149 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:32.820550 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:33.802652 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:33.826318 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:34.803625 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:34.832822 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:35.806637 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:35.838829 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:36.811367 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:36.839630 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:37.816418 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:37.850594 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:38.819565 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:38.852663 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:39.821014 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:39.853201 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:40.822731 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:40.854841 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:41.824209 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:41.857868 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:42.826286 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:42.862307 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:43.827342 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:43.864397 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:44.828362 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:44.864999 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:45.829278 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:45.865447 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:46.830085 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:46.866088 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:47.830883 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Endpoints: Get https://localhost:8443/api/v1/endpoints?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
E0529 07:19:47.866957 1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?labelSelector=%!s(MISSING)ervice.kubernetes.io%!F(MISSING)service-proxy-name&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
==> kube-scheduler <==
I0529 07:20:00.859835 1 serving.go:319] Generated self-signed cert in-memory
W0529 07:20:01.750136 1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0529 07:20:01.750181 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0529 07:20:01.750188 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0529 07:20:01.750204 1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0529 07:20:01.750216 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0529 07:20:01.955377 1 server.go:142] Version: v1.14.2
I0529 07:20:01.955523 1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0529 07:20:01.959983 1 authorization.go:47] Authorization is disabled
W0529 07:20:01.960222 1 authentication.go:55] Authentication is disabled
I0529 07:20:01.960303 1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0529 07:20:01.962638 1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
I0529 07:20:02.882446 1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0529 07:20:02.983037 1 controller_utils.go:1034] Caches are synced for scheduler controller
I0529 07:20:02.983700 1 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
I0529 07:20:24.901026 1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler
==> kubelet <==
-- Logs begin at Wed 2019-05-29 07:15:02 UTC, end at Wed 2019-05-29 07:21:45 UTC. --
May 29 07:19:43 minikube kubelet[3250]: E0529 07:19:43.467896 3250 kubelet_node_status.go:372] Unable to update node status: update node status exceeds retry count
May 29 07:19:43 minikube kubelet[3250]: E0529 07:19:43.600886 3250 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:43 minikube kubelet[3250]: E0529 07:19:43.608607 3250 controller.go:115] failed to ensure node lease exists, will retry in 6.4s, error: Get https://localhost:8443/apis/coordination.k8s.io/v1beta1/namespaces/kube-node-lease/leases/minikube?timeout=10s: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:43 minikube kubelet[3250]: E0529 07:19:43.798733 3250 reflector.go:126] object-"kube-system"/"kube-proxy-token-dspx8": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%!D(MISSING)kube-proxy-token-dspx8&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:43 minikube kubelet[3250]: E0529 07:19:43.996773 3250 reflector.go:126] object-"kube-system"/"coredns-token-kzq4n": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%!D(MISSING)coredns-token-kzq4n&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:44 minikube kubelet[3250]: E0529 07:19:44.197187 3250 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:44 minikube kubelet[3250]: E0529 07:19:44.397028 3250 reflector.go:126] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:44 minikube kubelet[3250]: W0529 07:19:44.598759 3250 status_manager.go:485] Failed to get status for pod "coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)": Get https://localhost:8443/api/v1/namespaces/kube-system/pods/coredns-fb8b8dccf-rgn86: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:44 minikube kubelet[3250]: E0529 07:19:44.642665 3250 pod_workers.go:190] Error syncing pod 22546764-7c8c-11e9-8033-080027e5c6aa ("coredns-fb8b8dccf-qlx48_kube-system(22546764-7c8c-11e9-8033-080027e5c6aa)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-qlx48_kube-system(22546764-7c8c-11e9-8033-080027e5c6aa)"
May 29 07:19:44 minikube kubelet[3250]: E0529 07:19:44.797643 3250 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:44 minikube kubelet[3250]: W0529 07:19:44.918168 3250 container.go:409] Failed to create summary reader for "/kubepods/burstable/pod21f5d2f1-7c8c-11e9-8033-080027e5c6aa/d4f6ee771d7f12e0e57226d8c271e4b785ffad4a4030e0abd7e7d8c46e61ad4f": none of the resources are being tracked.
May 29 07:19:44 minikube kubelet[3250]: E0529 07:19:44.943150 3250 pod_workers.go:190] Error syncing pod 9b290132363a92652555896288ca3f88 ("kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88)"), skipping: failed to "StartContainer" for "kube-scheduler" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-scheduler pod=kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88)"
May 29 07:19:44 minikube kubelet[3250]: E0529 07:19:44.996702 3250 reflector.go:126] object-"default"/"default-token-lnxsw": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/default/secrets?fieldSelector=metadata.name%!D(MISSING)default-token-lnxsw&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:45 minikube kubelet[3250]: E0529 07:19:45.196695 3250 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:45 minikube kubelet[3250]: E0529 07:19:45.397841 3250 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:45 minikube kubelet[3250]: E0529 07:19:45.519300 3250 event.go:200] Unable to write event: 'Patch https://localhost:8443/api/v1/namespaces/kube-system/events/kube-controller-manager-minikube.15a31635931fb913: dial tcp 127.0.0.1:8443: connect: connection refused' (may retry after sleeping)
May 29 07:19:45 minikube kubelet[3250]: E0529 07:19:45.597521 3250 reflector.go:126] object-"kube-system"/"storage-provisioner-token-mzpj8": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%!D(MISSING)storage-provisioner-token-mzpj8&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:45 minikube kubelet[3250]: E0529 07:19:45.796924 3250 reflector.go:126] object-"kube-system"/"coredns": Failed to list *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)coredns&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:45 minikube kubelet[3250]: W0529 07:19:45.997288 3250 status_manager.go:485] Failed to get status for pod "client-deployment-848bcddb74-ws7tb_default(0a5ac07f-7e0b-11e9-a74b-080027e5c6aa)": Get https://localhost:8443/api/v1/namespaces/default/pods/client-deployment-848bcddb74-ws7tb: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:46 minikube kubelet[3250]: E0529 07:19:46.198038 3250 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Get https://localhost:8443/apis/storage.k8s.io/v1beta1/csidrivers?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:46 minikube kubelet[3250]: E0529 07:19:46.397415 3250 reflector.go:126] object-"kube-system"/"kube-proxy-token-dspx8": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%!D(MISSING)kube-proxy-token-dspx8&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:46 minikube kubelet[3250]: E0529 07:19:46.598091 3250 reflector.go:126] object-"kube-system"/"coredns-token-kzq4n": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/kube-system/secrets?fieldSelector=metadata.name%!D(MISSING)coredns-token-kzq4n&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:46 minikube kubelet[3250]: W0529 07:19:46.615069 3250 pod_container_deletor.go:75] Container "7c2638d27e426d5c5f43e550ba036f324ff68f74008803992143ff47608f9e3d" not found in pod's containers
May 29 07:19:46 minikube kubelet[3250]: E0529 07:19:46.695026 3250 remote_runtime.go:321] ContainerStatus "23a92a2cf84abe7214430fb386fc8f83487387bc1cb0d89b1cdcd7c832d379df" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 23a92a2cf84abe7214430fb386fc8f83487387bc1cb0d89b1cdcd7c832d379df
May 29 07:19:46 minikube kubelet[3250]: E0529 07:19:46.695132 3250 kuberuntime_manager.go:917] getPodContainerStatuses for pod "postgres-deployment-57c594d6df-6nwld_default(989b50e2-7e17-11e9-a74b-080027e5c6aa)" failed: rpc error: code = Unknown desc = Error: No such container: 23a92a2cf84abe7214430fb386fc8f83487387bc1cb0d89b1cdcd7c832d379df
May 29 07:19:46 minikube kubelet[3250]: E0529 07:19:46.796929 3250 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Get https://localhost:8443/api/v1/services?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:47 minikube kubelet[3250]: E0529 07:19:47.001482 3250 reflector.go:126] object-"kube-system"/"kube-proxy": Failed to list *v1.ConfigMap: Get https://localhost:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)kube-proxy&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:47 minikube kubelet[3250]: E0529 07:19:47.196937 3250 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:47 minikube kubelet[3250]: E0529 07:19:47.397442 3250 reflector.go:126] object-"default"/"default-token-lnxsw": Failed to list *v1.Secret: Get https://localhost:8443/api/v1/namespaces/default/secrets?fieldSelector=metadata.name%!D(MISSING)default-token-lnxsw&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:47 minikube kubelet[3250]: W0529 07:19:47.597248 3250 status_manager.go:485] Failed to get status for pod "server-deployment-f4c6f6c8f-gdf6n_default(65ebe59e-7e14-11e9-a74b-080027e5c6aa)": Get https://localhost:8443/api/v1/namespaces/default/pods/server-deployment-f4c6f6c8f-gdf6n: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:47 minikube kubelet[3250]: E0529 07:19:47.797601 3250 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)minikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:47 minikube kubelet[3250]: E0529 07:19:47.803064 3250 pod_workers.go:190] Error syncing pod 21f5d2f1-7c8c-11e9-8033-080027e5c6aa ("coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"
May 29 07:19:47 minikube kubelet[3250]: E0529 07:19:47.998002 3250 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: Get https://localhost:8443/apis/node.k8s.io/v1beta1/runtimeclasses?limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
May 29 07:19:49 minikube kubelet[3250]: W0529 07:19:49.031551 3250 pod_container_deletor.go:75] Container "fe424629af09b5104d90ab8d6f7df0441598dc931629b622767e365bda1dd196" not found in pod's containers
May 29 07:19:49 minikube kubelet[3250]: E0529 07:19:49.032398 3250 pod_workers.go:190] Error syncing pod 21f5d2f1-7c8c-11e9-8033-080027e5c6aa ("coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"
May 29 07:19:52 minikube kubelet[3250]: W0529 07:19:52.208054 3250 pod_container_deletor.go:75] Container "f4c22016f32cc71a8a19169b78eff0eebb62913caadc4396ea6eff15fe7e0a2f" not found in pod's containers
May 29 07:19:52 minikube kubelet[3250]: E0529 07:19:52.208476 3250 pod_workers.go:190] Error syncing pod 21f5d2f1-7c8c-11e9-8033-080027e5c6aa ("coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"
May 29 07:19:53 minikube kubelet[3250]: E0529 07:19:53.854952 3250 pod_workers.go:190] Error syncing pod 21f5d2f1-7c8c-11e9-8033-080027e5c6aa ("coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 20s restarting failed container=coredns pod=coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)"
May 29 07:19:55 minikube kubelet[3250]: E0529 07:19:55.262933 3250 remote_runtime.go:321] ContainerStatus "8c5550cff6fd52a0f3cb1d416cbd5b20d6fd1ed625b08211fd2ff016acbf86a9" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 8c5550cff6fd52a0f3cb1d416cbd5b20d6fd1ed625b08211fd2ff016acbf86a9
May 29 07:19:55 minikube kubelet[3250]: E0529 07:19:55.263333 3250 kuberuntime_manager.go:917] getPodContainerStatuses for pod "kube-scheduler-minikube_kube-system(9b290132363a92652555896288ca3f88)" failed: rpc error: code = Unknown desc = Error: No such container: 8c5550cff6fd52a0f3cb1d416cbd5b20d6fd1ed625b08211fd2ff016acbf86a9
May 29 07:19:55 minikube kubelet[3250]: E0529 07:19:55.268640 3250 pod_workers.go:190] Error syncing pod 36130beb-7c8c-11e9-8033-080027e5c6aa ("storage-provisioner_kube-system(36130beb-7c8c-11e9-8033-080027e5c6aa)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(36130beb-7c8c-11e9-8033-080027e5c6aa)"
May 29 07:19:57 minikube kubelet[3250]: E0529 07:19:57.362488 3250 remote_runtime.go:321] ContainerStatus "058461ba388f9caf80c9dd77c0c3c9335813c03854966d9d739f12ce17e4124b" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 058461ba388f9caf80c9dd77c0c3c9335813c03854966d9d739f12ce17e4124b
May 29 07:19:57 minikube kubelet[3250]: E0529 07:19:57.362855 3250 kuberuntime_manager.go:917] getPodContainerStatuses for pod "coredns-fb8b8dccf-qlx48_kube-system(22546764-7c8c-11e9-8033-080027e5c6aa)" failed: rpc error: code = Unknown desc = Error: No such container: 058461ba388f9caf80c9dd77c0c3c9335813c03854966d9d739f12ce17e4124b
May 29 07:20:13 minikube kubelet[3250]: E0529 07:20:13.271915 3250 remote_runtime.go:321] ContainerStatus "6e4ec24ed1970145f5064dc30d08976c87ddb6e86b808667526f22f85ab5476c" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 6e4ec24ed1970145f5064dc30d08976c87ddb6e86b808667526f22f85ab5476c
May 29 07:20:13 minikube kubelet[3250]: E0529 07:20:13.271974 3250 kuberuntime_manager.go:917] getPodContainerStatuses for pod "coredns-fb8b8dccf-rgn86_kube-system(21f5d2f1-7c8c-11e9-8033-080027e5c6aa)" failed: rpc error: code = Unknown desc = Error: No such container: 6e4ec24ed1970145f5064dc30d08976c87ddb6e86b808667526f22f85ab5476c
May 29 07:20:14 minikube kubelet[3250]: E0529 07:20:14.295004 3250 remote_runtime.go:321] ContainerStatus "67980f122760a570e1929df6bf5b0a94d1058feb6552fde6bf6e992297b0d05d" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 67980f122760a570e1929df6bf5b0a94d1058feb6552fde6bf6e992297b0d05d
May 29 07:20:14 minikube kubelet[3250]: E0529 07:20:14.295370 3250 kuberuntime_manager.go:917] getPodContainerStatuses for pod "storage-provisioner_kube-system(36130beb-7c8c-11e9-8033-080027e5c6aa)" failed: rpc error: code = Unknown desc = Error: No such container: 67980f122760a570e1929df6bf5b0a94d1058feb6552fde6bf6e992297b0d05d
May 29 07:20:37 minikube kubelet[3250]: E0529 07:20:37.272727 3250 pod_workers.go:190] Error syncing pod 9c1e365bd18b5d3fc6a5d0ff10c2b125 ("kube-controller-manager-minikube_kube-system(9c1e365bd18b5d3fc6a5d0ff10c2b125)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(9c1e365bd18b5d3fc6a5d0ff10c2b125)"
May 29 07:20:48 minikube kubelet[3250]: E0529 07:20:48.863646 3250 remote_runtime.go:321] ContainerStatus "387ae889989af266e7174bf655d9972c6d69a49b2d7b0d8e5e96594f256df5df" from runtime service failed: rpc error: code = Unknown desc = Error: No such container: 387ae889989af266e7174bf655d9972c6d69a49b2d7b0d8e5e96594f256df5df
May 29 07:20:48 minikube kubelet[3250]: E0529 07:20:48.864098 3250 kuberuntime_manager.go:917] getPodContainerStatuses for pod "kube-controller-manager-minikube_kube-system(9c1e365bd18b5d3fc6a5d0ff10c2b125)" failed: rpc error: code = Unknown desc = Error: No such container: 387ae889989af266e7174bf655d9972c6d69a49b2d7b0d8e5e96594f256df5df
==> storage-provisioner <==
The operating system version:
Ubuntu 18.04