==> kube-apiserver ["26fdd82b664e"] <==
W0111 04:31:40.544220 1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0111 04:31:40.554934 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0111 04:31:40.574913 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 04:31:40.578432 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 04:31:40.619134 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 04:31:40.679924 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0111 04:31:40.679954 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0111 04:31:40.688501 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0111 04:31:40.688527 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0111 04:31:40.690139 1 client.go:361] parsed scheme: "endpoint"
I0111 04:31:40.690167 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0111 04:31:40.700310 1 client.go:361] parsed scheme: "endpoint"
I0111 04:31:40.700339 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0111 04:31:42.643968 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0111 04:31:42.644134 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0111 04:31:42.644852 1 secure_serving.go:178] Serving securely on [::]:8443
I0111 04:31:42.645782 1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0111 04:31:42.645820 1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0111 04:31:42.646578 1 available_controller.go:386] Starting AvailableConditionController
I0111 04:31:42.646636 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0111 04:31:42.646714 1 crd_finalizer.go:263] Starting CRDFinalizer
I0111 04:31:42.646965 1 autoregister_controller.go:140] Starting autoregister controller
I0111 04:31:42.647005 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0111 04:31:42.647502 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0111 04:31:42.647628 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0111 04:31:42.647753 1 controller.go:81] Starting OpenAPI AggregationController
I0111 04:31:42.648436 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0111 04:31:42.648490 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I0111 04:31:42.650577 1 controller.go:85] Starting OpenAPI controller
I0111 04:31:42.650747 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0111 04:31:42.650878 1 naming_controller.go:288] Starting NamingConditionController
I0111 04:31:42.650956 1 establishing_controller.go:73] Starting EstablishingController
I0111 04:31:42.651036 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0111 04:31:42.651111 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0111 04:31:42.651194 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0111 04:31:42.651252 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0111 04:31:42.651370 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0111 04:31:42.651555 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
E0111 04:31:42.682991 1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.107, ResourceVersion: 0, AdditionalErrorMsg:
I0111 04:31:42.811826 1 shared_informer.go:204] Caches are synced for crd-autoregister
I0111 04:31:42.812410 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0111 04:31:42.812689 1 cache.go:39] Caches are synced for autoregister controller
I0111 04:31:42.814456 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0111 04:31:42.814590 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
I0111 04:31:43.645908 1 controller.go:107] OpenAPI AggregationController: Processing item
I0111 04:31:43.646188 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0111 04:31:43.646480 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0111 04:31:43.665490 1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0111 04:31:43.687424 1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0111 04:31:43.688280 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0111 04:31:44.244676 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0111 04:31:44.295176 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0111 04:31:44.410199 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.99.107]
I0111 04:31:44.411144 1 controller.go:606] quota admission added evaluator for: endpoints
I0111 04:31:44.997177 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0111 04:31:45.752332 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0111 04:31:45.771103 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0111 04:31:45.940302 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0111 04:31:53.895362 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0111 04:31:53.905447 1 controller.go:606] quota admission added evaluator for: replicasets.apps
==> kube-controller-manager ["524cb0e7f9b6"] <==
I0111 04:31:52.608976 1 node_lifecycle_controller.go:520] Controller will reconcile labels.
I0111 04:31:52.609263 1 controllermanager.go:533] Started "nodelifecycle"
I0111 04:31:52.610681 1 node_lifecycle_controller.go:554] Starting node controller
I0111 04:31:52.610734 1 shared_informer.go:197] Waiting for caches to sync for taint
I0111 04:31:52.844604 1 controllermanager.go:533] Started "attachdetach"
I0111 04:31:52.844698 1 attach_detach_controller.go:342] Starting attach detach controller
I0111 04:31:52.844706 1 shared_informer.go:197] Waiting for caches to sync for attach detach
I0111 04:31:53.541647 1 controllermanager.go:533] Started "horizontalpodautoscaling"
I0111 04:31:53.541842 1 horizontal.go:156] Starting HPA controller
I0111 04:31:53.541860 1 shared_informer.go:197] Waiting for caches to sync for HPA
I0111 04:31:53.805107 1 controllermanager.go:533] Started "persistentvolume-binder"
I0111 04:31:53.807509 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0111 04:31:53.808531 1 pv_controller_base.go:294] Starting persistent volume controller
I0111 04:31:53.808545 1 shared_informer.go:197] Waiting for caches to sync for persistent volume
I0111 04:31:53.824086 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0111 04:31:53.837446 1 shared_informer.go:204] Caches are synced for GC
W0111 04:31:53.837695 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0111 04:31:53.842039 1 shared_informer.go:204] Caches are synced for ReplicationController
I0111 04:31:53.842659 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
I0111 04:31:53.848822 1 shared_informer.go:204] Caches are synced for HPA
I0111 04:31:53.855266 1 shared_informer.go:204] Caches are synced for TTL
I0111 04:31:53.873704 1 shared_informer.go:204] Caches are synced for PVC protection
I0111 04:31:53.888626 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0111 04:31:53.891236 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
I0111 04:31:53.891295 1 shared_informer.go:204] Caches are synced for daemon sets
I0111 04:31:53.892414 1 shared_informer.go:204] Caches are synced for PV protection
I0111 04:31:53.893052 1 shared_informer.go:204] Caches are synced for stateful set
I0111 04:31:53.893845 1 shared_informer.go:204] Caches are synced for ReplicaSet
I0111 04:31:53.903875 1 shared_informer.go:204] Caches are synced for deployment
I0111 04:31:53.908740 1 shared_informer.go:204] Caches are synced for persistent volume
I0111 04:31:53.911740 1 shared_informer.go:204] Caches are synced for taint
I0111 04:31:53.911994 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0111 04:31:53.912351 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0111 04:31:53.912420 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0111 04:31:53.912819 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0111 04:31:53.914426 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"31cde5ea-e6fd-4264-9124-e726847a7c4e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0111 04:31:53.915134 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"d84b7e63-92c2-488e-8b28-387d05bae8ea", APIVersion:"apps/v1", ResourceVersion:"179", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
I0111 04:31:53.920838 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"eaeb89e0-2410-4d44-8954-3bca5b100218", APIVersion:"apps/v1", ResourceVersion:"184", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-q557h
I0111 04:31:53.933686 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"79be41ed-12fd-48b6-be46-f51ed0408b90", APIVersion:"apps/v1", ResourceVersion:"300", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-tpld8
I0111 04:31:53.940553 1 shared_informer.go:204] Caches are synced for expand
I0111 04:31:53.940735 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I0111 04:31:53.942069 1 shared_informer.go:204] Caches are synced for disruption
I0111 04:31:53.942162 1 disruption.go:338] Sending events to api server.
I0111 04:31:53.945213 1 shared_informer.go:204] Caches are synced for attach detach
I0111 04:31:53.955228 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"79be41ed-12fd-48b6-be46-f51ed0408b90", APIVersion:"apps/v1", ResourceVersion:"300", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-zs2zp
I0111 04:31:54.219462 1 shared_informer.go:204] Caches are synced for endpoint
I0111 04:31:54.227547 1 shared_informer.go:204] Caches are synced for job
I0111 04:31:54.273675 1 shared_informer.go:204] Caches are synced for namespace
I0111 04:31:54.341856 1 shared_informer.go:204] Caches are synced for service account
I0111 04:31:54.378676 1 shared_informer.go:204] Caches are synced for garbage collector
I0111 04:31:54.378722 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0111 04:31:54.379468 1 shared_informer.go:204] Caches are synced for resource quota
I0111 04:31:54.407979 1 shared_informer.go:204] Caches are synced for resource quota
I0111 04:31:54.424536 1 shared_informer.go:204] Caches are synced for garbage collector
I0111 04:32:08.915664 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0111 04:32:18.917650 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0111 04:32:25.028218 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"6ae21cb2-c50f-4fe6-beef-854e4eb76b1e", APIVersion:"apps/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-7b64584c5c to 1
I0111 04:32:25.043810 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-7b64584c5c", UID:"61ceb262-a0b5-4b7a-aa59-d088c985a6ef", APIVersion:"apps/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-7b64584c5c-mwgv4
I0111 04:32:25.084781 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"e5f9fd6c-5e39-49ff-baea-f744f8105345", APIVersion:"apps/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-79d9cd965 to 1
I0111 04:32:25.097302 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-79d9cd965", UID:"0a898f4d-a42e-44ef-bcd6-7ebb4083f56c", APIVersion:"apps/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-79d9cd965-zgd79
==> kube-proxy ["34e1d42b0fe3"] <==
W0111 04:32:08.055935 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0111 04:32:08.064886 1 node.go:135] Successfully retrieved node IP: 192.168.99.107
I0111 04:32:08.065069 1 server_others.go:145] Using iptables Proxier.
W0111 04:32:08.065202 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0111 04:32:08.066091 1 server.go:571] Version: v1.17.0
I0111 04:32:08.066916 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0111 04:32:08.066953 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0111 04:32:08.067559 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0111 04:32:08.073107 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0111 04:32:08.073242 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0111 04:32:08.073837 1 config.go:131] Starting endpoints config controller
I0111 04:32:08.073866 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0111 04:32:08.073945 1 config.go:313] Starting service config controller
I0111 04:32:08.073960 1 shared_informer.go:197] Waiting for caches to sync for service config
I0111 04:32:08.174924 1 shared_informer.go:204] Caches are synced for service config
I0111 04:32:08.174989 1 shared_informer.go:204] Caches are synced for endpoints config
==> kube-scheduler ["36eb1c4d020d"] <==
I0111 04:31:39.456331 1 serving.go:312] Generated self-signed cert in-memory
W0111 04:31:40.757383 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0111 04:31:40.757499 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0111 04:31:42.806473 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0111 04:31:42.806729 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0111 04:31:42.806865 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W0111 04:31:42.806976 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W0111 04:31:42.853002 1 authorization.go:47] Authorization is disabled
W0111 04:31:42.853193 1 authentication.go:92] Authentication is disabled
I0111 04:31:42.853380 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0111 04:31:42.860205 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0111 04:31:42.861571 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0111 04:31:42.862034 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0111 04:31:42.862201 1 tlsconfig.go:219] Starting DynamicServingCertificateController
E0111 04:31:42.871298 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0111 04:31:42.871593 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0111 04:31:42.871816 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0111 04:31:42.871902 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0111 04:31:42.872036 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0111 04:31:42.872212 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0111 04:31:42.872295 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0111 04:31:42.872568 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0111 04:31:42.872847 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0111 04:31:42.872873 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0111 04:31:42.873095 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0111 04:31:42.874218 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0111 04:31:43.872569 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0111 04:31:43.874467 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0111 04:31:43.875441 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0111 04:31:43.876477 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0111 04:31:43.879443 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0111 04:31:43.880753 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0111 04:31:43.881925 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0111 04:31:43.882198 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0111 04:31:43.891034 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0111 04:31:43.891260 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0111 04:31:43.896273 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0111 04:31:43.896571 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0111 04:31:44.975707 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0111 04:31:44.978908 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0111 04:31:45.002387 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
==> kubelet <==
-- Logs begin at Sat 2020-01-11 04:30:09 UTC, end at Sat 2020-01-11 04:36:28 UTC. --
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.428808 4953 server.go:143] Starting to listen on 0.0.0.0:10250
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.430329 4953 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.430651 4953 server.go:354] Adding debug handlers to kubelet server.
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.440469 4953 volume_manager.go:265] Starting Kubelet Volume Manager
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.451432 4953 desired_state_of_world_populator.go:138] Desired state populator starts to run
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.477466 4953 status_manager.go:157] Starting to sync pod status with apiserver
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.477541 4953 kubelet.go:1820] Starting kubelet main sync loop.
Jan 11 04:32:06 minikube kubelet[4953]: E0111 04:32:06.477612 4953 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.553386 4953 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Jan 11 04:32:06 minikube kubelet[4953]: E0111 04:32:06.585770 4953 kubelet.go:1844] skipping pod synchronization - container runtime status check may not have completed yet
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.598863 4953 kubelet_node_status.go:70] Attempting to register node minikube
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.611751 4953 kubelet_node_status.go:112] Node minikube was previously registered
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.612271 4953 kubelet_node_status.go:73] Successfully registered node minikube
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.678562 4953 setters.go:535] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-01-11 04:32:06.678543783 +0000 UTC m=+20.942659931 LastTransitionTime:2020-01-11 04:32:06.678543783 +0000 UTC m=+20.942659931 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.692737 4953 cpu_manager.go:173] [cpumanager] starting with none policy
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.692838 4953 cpu_manager.go:174] [cpumanager] reconciling every 10s
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.692903 4953 policy_none.go:43] [cpumanager] none policy: Start
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.707011 4953 plugin_manager.go:114] Starting Kubelet Plugin Manager
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.864867 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/c3e29047da86ce6690916750ab69c40b-addons") pod "kube-addon-manager-minikube" (UID: "c3e29047da86ce6690916750ab69c40b")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.864936 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/d0fcec0112ed9b5ad5e26312bb465ed4-etcd-certs") pod "etcd-minikube" (UID: "d0fcec0112ed9b5ad5e26312bb465ed4")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.864964 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-k8s-certs") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.864988 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-kubeconfig") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865019 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865046 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f64d1dee-cdf7-4407-bf7a-38e53b80c837-config-volume") pod "coredns-6955765f44-zs2zp" (UID: "f64d1dee-cdf7-4407-bf7a-38e53b80c837")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865067 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c3e29047da86ce6690916750ab69c40b-kubeconfig") pod "kube-addon-manager-minikube" (UID: "c3e29047da86ce6690916750ab69c40b")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865090 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/d1f1f371c8ef746141a53b09f08d43c6-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "d1f1f371c8ef746141a53b09f08d43c6")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865117 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-wwnt6" (UniqueName: "kubernetes.io/secret/710408c3-61c1-4bc3-96ba-3635451b70df-storage-provisioner-token-wwnt6") pod "storage-provisioner" (UID: "710408c3-61c1-4bc3-96ba-3635451b70df")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865138 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/c5927ac6-2872-4fde-a910-1e735bccf412-xtables-lock") pod "kube-proxy-q557h" (UID: "c5927ac6-2872-4fde-a910-1e735bccf412")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865159 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/c5927ac6-2872-4fde-a910-1e735bccf412-lib-modules") pod "kube-proxy-q557h" (UID: "c5927ac6-2872-4fde-a910-1e735bccf412")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865181 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/627db42b-a209-4a53-843e-582f9f6ded2d-config-volume") pod "coredns-6955765f44-tpld8" (UID: "627db42b-a209-4a53-843e-582f9f6ded2d")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865202 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-ca-certs") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865224 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c5927ac6-2872-4fde-a910-1e735bccf412-kube-proxy") pod "kube-proxy-q557h" (UID: "c5927ac6-2872-4fde-a910-1e735bccf412")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865248 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-8hnsq" (UniqueName: "kubernetes.io/secret/f64d1dee-cdf7-4407-bf7a-38e53b80c837-coredns-token-8hnsq") pod "coredns-6955765f44-zs2zp" (UID: "f64d1dee-cdf7-4407-bf7a-38e53b80c837")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865268 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/d1f1f371c8ef746141a53b09f08d43c6-k8s-certs") pod "kube-apiserver-minikube" (UID: "d1f1f371c8ef746141a53b09f08d43c6")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865289 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ff67867321338ffd885039e188f6b424-kubeconfig") pod "kube-scheduler-minikube" (UID: "ff67867321338ffd885039e188f6b424")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865311 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/d0fcec0112ed9b5ad5e26312bb465ed4-etcd-data") pod "etcd-minikube" (UID: "d0fcec0112ed9b5ad5e26312bb465ed4")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865333 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/d1f1f371c8ef746141a53b09f08d43c6-ca-certs") pod "kube-apiserver-minikube" (UID: "d1f1f371c8ef746141a53b09f08d43c6")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865410 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/710408c3-61c1-4bc3-96ba-3635451b70df-tmp") pod "storage-provisioner" (UID: "710408c3-61c1-4bc3-96ba-3635451b70df")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865437 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865462 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-fkllm" (UniqueName: "kubernetes.io/secret/c5927ac6-2872-4fde-a910-1e735bccf412-kube-proxy-token-fkllm") pod "kube-proxy-q557h" (UID: "c5927ac6-2872-4fde-a910-1e735bccf412")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865484 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-8hnsq" (UniqueName: "kubernetes.io/secret/627db42b-a209-4a53-843e-582f9f6ded2d-coredns-token-8hnsq") pod "coredns-6955765f44-tpld8" (UID: "627db42b-a209-4a53-843e-582f9f6ded2d")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865491 4953 reconciler.go:156] Reconciler: start to sync state
Jan 11 04:32:07 minikube kubelet[4953]: E0111 04:32:07.978192 4953 secret.go:195] Couldn't get secret kube-system/storage-provisioner-token-wwnt6: failed to sync secret cache: timed out waiting for the condition
Jan 11 04:32:07 minikube kubelet[4953]: E0111 04:32:07.981468 4953 nestedpendingoperations.go:270] Operation for ""kubernetes.io/secret/710408c3-61c1-4bc3-96ba-3635451b70df-storage-provisioner-token-wwnt6" ("710408c3-61c1-4bc3-96ba-3635451b70df")" failed. No retries permitted until 2020-01-11 04:32:08.48143603 +0000 UTC m=+22.745552186 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "storage-provisioner-token-wwnt6" (UniqueName: "kubernetes.io/secret/710408c3-61c1-4bc3-96ba-3635451b70df-storage-provisioner-token-wwnt6") pod "storage-provisioner" (UID: "710408c3-61c1-4bc3-96ba-3635451b70df") : failed to sync secret cache: timed out waiting for the condition"
Jan 11 04:32:08 minikube kubelet[4953]: W0111 04:32:08.524914 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-tpld8 through plugin: invalid network status for
Jan 11 04:32:08 minikube kubelet[4953]: W0111 04:32:08.578510 4953 pod_container_deletor.go:75] Container "08458da66447590044d2986803f5ed99fe52d6553c78f4911b77273a3041e8b9" not found in pod's containers
Jan 11 04:32:08 minikube kubelet[4953]: W0111 04:32:08.580356 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-tpld8 through plugin: invalid network status for
Jan 11 04:32:08 minikube kubelet[4953]: W0111 04:32:08.582810 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-zs2zp through plugin: invalid network status for
Jan 11 04:32:09 minikube kubelet[4953]: W0111 04:32:09.914494 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-zs2zp through plugin: invalid network status for
Jan 11 04:32:09 minikube kubelet[4953]: W0111 04:32:09.924127 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-tpld8 through plugin: invalid network status for
Jan 11 04:32:25 minikube kubelet[4953]: I0111 04:32:25.159211 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/dd8acce0-adaf-441d-ad99-ed9ff4395434-tmp-volume") pod "dashboard-metrics-scraper-7b64584c5c-mwgv4" (UID: "dd8acce0-adaf-441d-ad99-ed9ff4395434")
Jan 11 04:32:25 minikube kubelet[4953]: I0111 04:32:25.159869 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-zhpxd" (UniqueName: "kubernetes.io/secret/dd8acce0-adaf-441d-ad99-ed9ff4395434-kubernetes-dashboard-token-zhpxd") pod "dashboard-metrics-scraper-7b64584c5c-mwgv4" (UID: "dd8acce0-adaf-441d-ad99-ed9ff4395434")
Jan 11 04:32:25 minikube kubelet[4953]: I0111 04:32:25.260311 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-zhpxd" (UniqueName: "kubernetes.io/secret/98f11ecc-e7a4-464f-9ed3-a93e51a11f1a-kubernetes-dashboard-token-zhpxd") pod "kubernetes-dashboard-79d9cd965-zgd79" (UID: "98f11ecc-e7a4-464f-9ed3-a93e51a11f1a")
Jan 11 04:32:25 minikube kubelet[4953]: I0111 04:32:25.260346 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/98f11ecc-e7a4-464f-9ed3-a93e51a11f1a-tmp-volume") pod "kubernetes-dashboard-79d9cd965-zgd79" (UID: "98f11ecc-e7a4-464f-9ed3-a93e51a11f1a")
Jan 11 04:32:26 minikube kubelet[4953]: W0111 04:32:26.176712 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-zgd79 through plugin: invalid network status for
Jan 11 04:32:26 minikube kubelet[4953]: W0111 04:32:26.203388 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-mwgv4 through plugin: invalid network status for
Jan 11 04:32:26 minikube kubelet[4953]: W0111 04:32:26.434893 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-zgd79 through plugin: invalid network status for
Jan 11 04:32:26 minikube kubelet[4953]: W0111 04:32:26.458413 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-mwgv4 through plugin: invalid network status for
Jan 11 04:32:27 minikube kubelet[4953]: W0111 04:32:27.750255 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-zgd79 through plugin: invalid network status for
Jan 11 04:32:27 minikube kubelet[4953]: W0111 04:32:27.792151 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-mwgv4 through plugin: invalid network status for
==> kubernetes-dashboard ["dd4279cbb42a"] <==
2020/01/11 04:32:26 Starting overwatch
2020/01/11 04:32:26 Using namespace: kubernetes-dashboard
2020/01/11 04:32:26 Using in-cluster config to connect to apiserver
2020/01/11 04:32:26 Using secret token for csrf signing
2020/01/11 04:32:26 Initializing csrf token from kubernetes-dashboard-csrf secret
2020/01/11 04:32:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2020/01/11 04:32:26 Successful initial request to the apiserver, version: v1.17.0
2020/01/11 04:32:26 Generating JWE encryption key
2020/01/11 04:32:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2020/01/11 04:32:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2020/01/11 04:32:26 Initializing JWE encryption key from synchronized object
2020/01/11 04:32:26 Creating in-cluster Sidecar client
2020/01/11 04:32:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2020/01/11 04:32:26 Serving insecurely on HTTP port: 9090
2020/01/11 04:32:56 Successful request to sidecar
==> storage-provisioner ["85a52a6ea02b"] <==
The operating system version:
➜ ~ cat /etc/os-release
NAME="Linux Mint"
VERSION="19.3 (Tricia)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 19.3"
VERSION_ID="19.3"
HOME_URL="https://www.linuxmint.com/"
SUPPORT_URL="https://forums.ubuntu.com/"
BUG_REPORT_URL="http://linuxmint-troubleshooting-guide.readthedocs.io/en/latest/"
PRIVACY_POLICY_URL="https://www.linuxmint.com/"
VERSION_CODENAME=tricia
UBUNTU_CODENAME=bionic
➜ ~ lsb_release -a
No LSB modules are available.
Distributor ID: LinuxMint
Description: Linux Mint 19.3 Tricia
Release: 19.3
Codename: tricia
➜ ~ uname -r
5.0.0-37-generic
I am closing this issue because it was caused by a misconfiguration of my Kubernetes cluster. I was able to resolve it by uninstalling kubelet, kubeadm, and kube-proxy, and installing only minikube.
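The cleanup described above amounts to roughly the following sketch. The package names and the minikube install method are assumptions (a Debian/Ubuntu-style install is assumed, since Linux Mint 19.3 is Ubuntu-based); adjust to however the tools were originally installed:

```shell
# Remove the host-level Kubernetes components that conflicted with minikube
# (assumed to have been installed as apt packages).
sudo apt-get purge -y kubelet kubeadm kube-proxy
sudo apt-get autoremove -y

# Reinstall minikube from the official release binary.
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
```

After this, only minikube manages the cluster components inside its VM, so stale host-level kubelet/kubeadm configuration can no longer interfere.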
The minikube dashboard fails after the minikube start command when using VirtualBox 6.1.
The exact command to reproduce the issue:
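(The command section was left blank in the report; based on the description, the sequence was presumably something like the following hypothetical reconstruction, using the VirtualBox driver mentioned above.)

```shell
# Start minikube on the VirtualBox driver, then open the dashboard.
minikube start --vm-driver=virtualbox
minikube dashboard
```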
The full output of the command that failed:
🔌 Enabling dashboard ...
🤔 Verifying dashboard health ...
🚀 Launching proxy ...
💣 kubectl proxy: readByteWithTimeout: EOF
😿 minikube is exiting due to an error. If the above message is not useful, open an issue:
👉 https://github.com/kubernetes/minikube/issues/new/choose
The output of the minikube logs command:
==> Docker <==
-- Logs begin at Sat 2020-01-11 04:30:09 UTC, end at Sat 2020-01-11 04:36:28 UTC. --
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.042993621Z" level=info msg="loading plugin "io.containerd.grpc.v1.containers"..." type=io.containerd.grpc.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043043648Z" level=info msg="loading plugin "io.containerd.grpc.v1.content"..." type=io.containerd.grpc.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043090111Z" level=info msg="loading plugin "io.containerd.grpc.v1.diff"..." type=io.containerd.grpc.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043138017Z" level=info msg="loading plugin "io.containerd.grpc.v1.events"..." type=io.containerd.grpc.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043186443Z" level=info msg="loading plugin "io.containerd.grpc.v1.healthcheck"..." type=io.containerd.grpc.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043233358Z" level=info msg="loading plugin "io.containerd.grpc.v1.images"..." type=io.containerd.grpc.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043282348Z" level=info msg="loading plugin "io.containerd.grpc.v1.leases"..." type=io.containerd.grpc.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043328009Z" level=info msg="loading plugin "io.containerd.grpc.v1.namespaces"..." type=io.containerd.grpc.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043418690Z" level=info msg="loading plugin "io.containerd.internal.v1.opt"..." type=io.containerd.internal.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043510508Z" level=info msg="loading plugin "io.containerd.grpc.v1.snapshots"..." type=io.containerd.grpc.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043561467Z" level=info msg="loading plugin "io.containerd.grpc.v1.tasks"..." type=io.containerd.grpc.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043612512Z" level=info msg="loading plugin "io.containerd.grpc.v1.version"..." type=io.containerd.grpc.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043659189Z" level=info msg="loading plugin "io.containerd.grpc.v1.introspection"..." type=io.containerd.grpc.v1
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043805648Z" level=info msg=serving... address="/var/run/docker/containerd/containerd-debug.sock"
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043883869Z" level=info msg=serving... address="/var/run/docker/containerd/containerd.sock"
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.043970628Z" level=info msg="containerd successfully booted in 0.008194s"
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.052441119Z" level=info msg="parsed scheme: "unix"" module=grpc
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.052694170Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.052773867Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.052824843Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.053870793Z" level=info msg="parsed scheme: "unix"" module=grpc
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.053942343Z" level=info msg="scheme "unix" not registered, fallback to default scheme" module=grpc
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.053998116Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock 0 }] }" module=grpc
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.054048426Z" level=info msg="ClientConn switching balancer to "pick_first"" module=grpc
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.069422959Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.069456972Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.069464300Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_bps_device"
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.069469846Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_bps_device"
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.069477882Z" level=warning msg="Your kernel does not support cgroup blkio throttle.read_iops_device"
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.069483188Z" level=warning msg="Your kernel does not support cgroup blkio throttle.write_iops_device"
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.069630243Z" level=info msg="Loading containers: start."
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.132532996Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.178078199Z" level=info msg="Loading containers: done."
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.200983304Z" level=info msg="Docker daemon" commit=633a0ea838 graphdriver(s)=overlay2 version=19.03.5
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.201152713Z" level=info msg="Daemon has completed initialization"
Jan 11 04:30:25 minikube systemd[1]: Started Docker Application Container Engine.
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.222286112Z" level=info msg="API listen on /var/run/docker.sock"
Jan 11 04:30:25 minikube dockerd[2461]: time="2020-01-11T04:30:25.222372110Z" level=info msg="API listen on [::]:2376"
Jan 11 04:31:37 minikube dockerd[2461]: time="2020-01-11T04:31:37.101438913Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/2b2f8b76ace50d1c462c2f260c646047288f6aa10e41e0b246749d9344596262/shim.sock" debug=false pid=4107
Jan 11 04:31:37 minikube dockerd[2461]: time="2020-01-11T04:31:37.113900551Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a8b121610b7cd1c8eb18b320a2f180a89840091210cdfff333fd28abf01c712d/shim.sock" debug=false pid=4103
Jan 11 04:31:37 minikube dockerd[2461]: time="2020-01-11T04:31:37.129259429Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9756e2449fdcbf8f56a65b8cd3a53e60c965f0fe53d64dcb336eee68239ffd4f/shim.sock" debug=false pid=4130
Jan 11 04:31:37 minikube dockerd[2461]: time="2020-01-11T04:31:37.157206560Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/55cae2e9d54e11b6e19b90088bdc6cb03abe6efbed183f3acd1bac6bcd8023b7/shim.sock" debug=false pid=4155
Jan 11 04:31:37 minikube dockerd[2461]: time="2020-01-11T04:31:37.173713160Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/34c6370aacc590d238855f93780f5fee2899a8d0e9d99e8b492dde3a954caf8d/shim.sock" debug=false pid=4158
Jan 11 04:31:37 minikube dockerd[2461]: time="2020-01-11T04:31:37.465899332Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f299a6ea5a61243ff4c65b967199b76f4ba3baf805a2125da2b376a75b12ada5/shim.sock" debug=false pid=4336
Jan 11 04:31:37 minikube dockerd[2461]: time="2020-01-11T04:31:37.495588314Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/36eb1c4d020dcfbd7cc52f773aab99190e421b5a78a045a6427e5935927316b6/shim.sock" debug=false pid=4349
Jan 11 04:31:37 minikube dockerd[2461]: time="2020-01-11T04:31:37.499321765Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/26fdd82b664e00e2348e73d8c73644c4b2b244c9af3687d0d5b0063103a58420/shim.sock" debug=false pid=4357
Jan 11 04:31:37 minikube dockerd[2461]: time="2020-01-11T04:31:37.619888251Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/524cb0e7f9b66d3e56f45d003b47441d51608b2382863d8d82b5cd0561d5467f/shim.sock" debug=false pid=4403
Jan 11 04:31:37 minikube dockerd[2461]: time="2020-01-11T04:31:37.622055753Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/568b530b7298b13412d5a24ae11271b810b2307491db3c11826552fd2040a577/shim.sock" debug=false pid=4412
Jan 11 04:32:07 minikube dockerd[2461]: time="2020-01-11T04:32:07.621057601Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9effde53c87bb9229fcc7b029925a68cce22db086a9a4a4d9dfb70c4ec939e7d/shim.sock" debug=false pid=5367
Jan 11 04:32:07 minikube dockerd[2461]: time="2020-01-11T04:32:07.857148687Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/34e1d42b0fe3d05232456e9c5e807ded8404ebec5056fe00461dcec97f238915/shim.sock" debug=false pid=5418
Jan 11 04:32:08 minikube dockerd[2461]: time="2020-01-11T04:32:08.231468196Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/5566da407a9a9d7243febed8aa1376adf0a333d46c91e20eefaf8831d45da056/shim.sock" debug=false pid=5528
Jan 11 04:32:08 minikube dockerd[2461]: time="2020-01-11T04:32:08.231891611Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/08458da66447590044d2986803f5ed99fe52d6553c78f4911b77273a3041e8b9/shim.sock" debug=false pid=5530
Jan 11 04:32:08 minikube dockerd[2461]: time="2020-01-11T04:32:08.632510747Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d0ec785249051863c697f36a7dda4a7b9900338667e7849acc9c787935768446/shim.sock" debug=false pid=5674
Jan 11 04:32:08 minikube dockerd[2461]: time="2020-01-11T04:32:08.661744634Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/1ed172e3e5e7a4fa9746203d7cd951234291e11c8045fb5c4ffb5b10c7f6f8cc/shim.sock" debug=false pid=5691
Jan 11 04:32:08 minikube dockerd[2461]: time="2020-01-11T04:32:08.765064598Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ab6f68d7f30e4b473cdadc5ccf71c6a6bb6cc92100efda94a9073e8c6caeeffb/shim.sock" debug=false pid=5723
Jan 11 04:32:09 minikube dockerd[2461]: time="2020-01-11T04:32:09.006265076Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/85a52a6ea02b06376a69e876e1602f73c55c4c6e87e6a106d7e7c2e415280638/shim.sock" debug=false pid=5818
Jan 11 04:32:25 minikube dockerd[2461]: time="2020-01-11T04:32:25.533131193Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/de1a316b510ac712d358077a68da469a6bbb17d763a989610d874221aeced513/shim.sock" debug=false pid=6167
Jan 11 04:32:25 minikube dockerd[2461]: time="2020-01-11T04:32:25.865910711Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/6920060f7fdf74d6f6532c88a098bed1c57ba6a47a5500d75ebdf4aa188f565f/shim.sock" debug=false pid=6219
Jan 11 04:32:26 minikube dockerd[2461]: time="2020-01-11T04:32:26.252083015Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/dd4279cbb42a2ed915ad66855839c6eb3392d0704529594b576cebcd3793d8d4/shim.sock" debug=false pid=6302
Jan 11 04:32:26 minikube dockerd[2461]: time="2020-01-11T04:32:26.277791671Z" level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f0aa1a10d8879c031877fbf425c1fdb381d2364c072c67c564a25b6d2d493964/shim.sock" debug=false pid=6318
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
f0aa1a10d8879 3b08661dc379d 4 minutes ago Running dashboard-metrics-scraper 0 de1a316b510ac
dd4279cbb42a2 eb51a35975256 4 minutes ago Running kubernetes-dashboard 0 6920060f7fdf7
85a52a6ea02b0 4689081edb103 4 minutes ago Running storage-provisioner 0 ab6f68d7f30e4
1ed172e3e5e7a 70f311871ae12 4 minutes ago Running coredns 0 08458da664475
d0ec785249051 70f311871ae12 4 minutes ago Running coredns 0 5566da407a9a9
34e1d42b0fe3d 7d54289267dc5 4 minutes ago Running kube-proxy 0 9effde53c87bb
568b530b7298b 303ce5db0e90d 4 minutes ago Running etcd 0 55cae2e9d54e1
524cb0e7f9b66 5eb3b74868724 4 minutes ago Running kube-controller-manager 0 a8b121610b7cd
36eb1c4d020dc 78c190f736b11 4 minutes ago Running kube-scheduler 0 34c6370aacc59
26fdd82b664e0 0cae8d5cc64c7 4 minutes ago Running kube-apiserver 0 9756e2449fdcb
f299a6ea5a612 bd12a212f9dcb 4 minutes ago Running kube-addon-manager 0 2b2f8b76ace50
==> coredns ["1ed172e3e5e7"] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
==> coredns ["d0ec78524905"] <==
.:53
[INFO] plugin/reload: Running configuration MD5 = 4e235fcc3696966e76816bcd9034ebc7
CoreDNS-1.6.5
linux/amd64, go1.13.4, c2fd1b2
==> dmesg <==
[ +5.008795] hpet1: lost 318 rtc interrupts
[ +5.016640] hpet1: lost 319 rtc interrupts
[Jan11 04:32] hpet1: lost 319 rtc interrupts
[ +5.002849] hpet_rtc_timer_reinit: 45 callbacks suppressed
[ +0.000001] hpet1: lost 318 rtc interrupts
[ +2.990018] NFSD: Unable to end grace period: -110
[ +2.031899] hpet1: lost 319 rtc interrupts
[ +5.004674] hpet_rtc_timer_reinit: 3 callbacks suppressed
[ +0.000001] hpet1: lost 319 rtc interrupts
[ +5.016912] hpet_rtc_timer_reinit: 3 callbacks suppressed
[ +0.000004] hpet1: lost 319 rtc interrupts
[ +9.995784] hpet_rtc_timer_reinit: 10 callbacks suppressed
[ +0.000020] hpet1: lost 319 rtc interrupts
[ +5.002908] hpet1: lost 319 rtc interrupts
[ +5.002206] hpet1: lost 318 rtc interrupts
[ +5.016059] hpet1: lost 319 rtc interrupts
[ +5.010439] hpet1: lost 319 rtc interrupts
[ +4.985983] hpet1: lost 317 rtc interrupts
[Jan11 04:33] hpet1: lost 319 rtc interrupts
[ +5.003411] hpet1: lost 318 rtc interrupts
[ +5.008360] hpet1: lost 318 rtc interrupts
[ +4.991916] hpet1: lost 318 rtc interrupts
[ +5.016389] hpet1: lost 319 rtc interrupts
[ +5.001433] hpet1: lost 318 rtc interrupts
[ +4.992054] hpet1: lost 318 rtc interrupts
[ +5.014427] hpet1: lost 319 rtc interrupts
[ +4.993439] hpet1: lost 317 rtc interrupts
[ +5.015132] hpet1: lost 319 rtc interrupts
[ +4.998965] hpet1: lost 319 rtc interrupts
[ +1.806442] hrtimer: interrupt took 3021827 ns
[ +3.199408] hpet1: lost 318 rtc interrupts
[Jan11 04:34] hpet1: lost 321 rtc interrupts
[ +5.006489] hpet1: lost 318 rtc interrupts
[ +5.012946] hpet1: lost 319 rtc interrupts
[ +5.008525] hpet1: lost 318 rtc interrupts
[ +5.016191] hpet1: lost 319 rtc interrupts
[ +5.025624] hpet1: lost 320 rtc interrupts
[ +5.006808] hpet1: lost 320 rtc interrupts
[ +5.003103] hpet1: lost 318 rtc interrupts
[ +5.001840] hpet1: lost 318 rtc interrupts
[ +5.002045] hpet1: lost 318 rtc interrupts
[ +4.999440] hpet1: lost 318 rtc interrupts
[ +5.002399] hpet1: lost 318 rtc interrupts
[Jan11 04:35] hpet1: lost 318 rtc interrupts
[ +5.000847] hpet1: lost 318 rtc interrupts
[ +5.002708] hpet1: lost 318 rtc interrupts
[ +5.003101] hpet1: lost 319 rtc interrupts
[ +5.018656] hpet1: lost 319 rtc interrupts
[ +5.003076] hpet1: lost 318 rtc interrupts
[ +5.011010] hpet1: lost 319 rtc interrupts
[ +4.997305] hpet1: lost 318 rtc interrupts
[ +5.019074] hpet1: lost 319 rtc interrupts
[ +4.987125] hpet1: lost 317 rtc interrupts
[ +5.038764] hpet1: lost 320 rtc interrupts
[ +4.994094] hpet1: lost 318 rtc interrupts
[Jan11 04:36] hpet1: lost 318 rtc interrupts
[ +4.996605] hpet1: lost 318 rtc interrupts
[ +5.017340] hpet1: lost 319 rtc interrupts
[ +4.983628] hpet1: lost 317 rtc interrupts
[ +5.019417] hpet1: lost 320 rtc interrupts
==> kernel <==
04:36:28 up 6 min, 0 users, load average: 1.38, 1.22, 0.62
Linux minikube 4.19.81 #1 SMP Tue Dec 10 16:09:50 PST 2019 x86_64 GNU/Linux
PRETTY_NAME="Buildroot 2019.02.7"
==> kube-addon-manager ["f299a6ea5a61"] <==
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-11T04:36:16+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-11T04:36:16+00:00 ==
INFO: == Reconciling with deprecated label ==
error: no objects passed to apply
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-11T04:36:20+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-11T04:36:22+00:00 ==
error: no objects passed to apply
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
configmap/kubernetes-dashboard-settings unchanged
deployment.apps/dashboard-metrics-scraper unchanged
deployment.apps/kubernetes-dashboard unchanged
namespace/kubernetes-dashboard unchanged
role.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard unchanged
serviceaccount/kubernetes-dashboard unchanged
secret/kubernetes-dashboard-certs unchanged
secret/kubernetes-dashboard-csrf unchanged
secret/kubernetes-dashboard-key-holder unchanged
service/kubernetes-dashboard unchanged
service/dashboard-metrics-scraper unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2020-01-11T04:36:26+00:00 ==
INFO: Leader election disabled.
INFO: == Kubernetes addon ensure completed at 2020-01-11T04:36:26+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
error: no objects passed to apply
==> kube-apiserver ["26fdd82b664e"] <==
W0111 04:31:40.544220 1 genericapiserver.go:404] Skipping API discovery.k8s.io/v1alpha1 because it has no resources.
W0111 04:31:40.554934 1 genericapiserver.go:404] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0111 04:31:40.574913 1 genericapiserver.go:404] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0111 04:31:40.578432 1 genericapiserver.go:404] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0111 04:31:40.619134 1 genericapiserver.go:404] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0111 04:31:40.679924 1 genericapiserver.go:404] Skipping API apps/v1beta2 because it has no resources.
W0111 04:31:40.679954 1 genericapiserver.go:404] Skipping API apps/v1beta1 because it has no resources.
I0111 04:31:40.688501 1 plugins.go:158] Loaded 11 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,MutatingAdmissionWebhook,RuntimeClass.
I0111 04:31:40.688527 1 plugins.go:161] Loaded 7 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,RuntimeClass,ResourceQuota.
I0111 04:31:40.690139 1 client.go:361] parsed scheme: "endpoint"
I0111 04:31:40.690167 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0111 04:31:40.700310 1 client.go:361] parsed scheme: "endpoint"
I0111 04:31:40.700339 1 endpoint.go:68] ccResolverWrapper: sending new addresses to cc: [{https://127.0.0.1:2379 0 }]
I0111 04:31:42.643968 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
I0111 04:31:42.644134 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0111 04:31:42.644852 1 secure_serving.go:178] Serving securely on [::]:8443
I0111 04:31:42.645782 1 dynamic_serving_content.go:129] Starting serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key
I0111 04:31:42.645820 1 tlsconfig.go:219] Starting DynamicServingCertificateController
I0111 04:31:42.646578 1 available_controller.go:386] Starting AvailableConditionController
I0111 04:31:42.646636 1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0111 04:31:42.646714 1 crd_finalizer.go:263] Starting CRDFinalizer
I0111 04:31:42.646965 1 autoregister_controller.go:140] Starting autoregister controller
I0111 04:31:42.647005 1 cache.go:32] Waiting for caches to sync for autoregister controller
I0111 04:31:42.647502 1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0111 04:31:42.647628 1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0111 04:31:42.647753 1 controller.go:81] Starting OpenAPI AggregationController
I0111 04:31:42.648436 1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
I0111 04:31:42.648490 1 shared_informer.go:197] Waiting for caches to sync for cluster_authentication_trust_controller
I0111 04:31:42.650577 1 controller.go:85] Starting OpenAPI controller
I0111 04:31:42.650747 1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0111 04:31:42.650878 1 naming_controller.go:288] Starting NamingConditionController
I0111 04:31:42.650956 1 establishing_controller.go:73] Starting EstablishingController
I0111 04:31:42.651036 1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0111 04:31:42.651111 1 apiapproval_controller.go:185] Starting KubernetesAPIApprovalPolicyConformantConditionController
I0111 04:31:42.651194 1 crdregistration_controller.go:111] Starting crd-autoregister controller
I0111 04:31:42.651252 1 shared_informer.go:197] Waiting for caches to sync for crd-autoregister
I0111 04:31:42.651370 1 dynamic_cafile_content.go:166] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
I0111 04:31:42.651555 1 dynamic_cafile_content.go:166] Starting request-header::/var/lib/minikube/certs/front-proxy-ca.crt
E0111 04:31:42.682991 1 controller.go:151] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.99.107, ResourceVersion: 0, AdditionalErrorMsg:
I0111 04:31:42.811826 1 shared_informer.go:204] Caches are synced for crd-autoregister
I0111 04:31:42.812410 1 cache.go:39] Caches are synced for AvailableConditionController controller
I0111 04:31:42.812689 1 cache.go:39] Caches are synced for autoregister controller
I0111 04:31:42.814456 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0111 04:31:42.814590 1 shared_informer.go:204] Caches are synced for cluster_authentication_trust_controller
I0111 04:31:43.645908 1 controller.go:107] OpenAPI AggregationController: Processing item
I0111 04:31:43.646188 1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0111 04:31:43.646480 1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0111 04:31:43.665490 1 storage_scheduling.go:133] created PriorityClass system-node-critical with value 2000001000
I0111 04:31:43.687424 1 storage_scheduling.go:133] created PriorityClass system-cluster-critical with value 2000000000
I0111 04:31:43.688280 1 storage_scheduling.go:142] all system priority classes are created successfully or already exist.
I0111 04:31:44.244676 1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0111 04:31:44.295176 1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0111 04:31:44.410199 1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.99.107]
I0111 04:31:44.411144 1 controller.go:606] quota admission added evaluator for: endpoints
I0111 04:31:44.997177 1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0111 04:31:45.752332 1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0111 04:31:45.771103 1 controller.go:606] quota admission added evaluator for: deployments.apps
I0111 04:31:45.940302 1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0111 04:31:53.895362 1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0111 04:31:53.905447 1 controller.go:606] quota admission added evaluator for: replicasets.apps
==> kube-controller-manager ["524cb0e7f9b6"] <==
I0111 04:31:52.608976 1 node_lifecycle_controller.go:520] Controller will reconcile labels.
I0111 04:31:52.609263 1 controllermanager.go:533] Started "nodelifecycle"
I0111 04:31:52.610681 1 node_lifecycle_controller.go:554] Starting node controller
I0111 04:31:52.610734 1 shared_informer.go:197] Waiting for caches to sync for taint
I0111 04:31:52.844604 1 controllermanager.go:533] Started "attachdetach"
I0111 04:31:52.844698 1 attach_detach_controller.go:342] Starting attach detach controller
I0111 04:31:52.844706 1 shared_informer.go:197] Waiting for caches to sync for attach detach
I0111 04:31:53.541647 1 controllermanager.go:533] Started "horizontalpodautoscaling"
I0111 04:31:53.541842 1 horizontal.go:156] Starting HPA controller
I0111 04:31:53.541860 1 shared_informer.go:197] Waiting for caches to sync for HPA
I0111 04:31:53.805107 1 controllermanager.go:533] Started "persistentvolume-binder"
I0111 04:31:53.807509 1 shared_informer.go:197] Waiting for caches to sync for resource quota
I0111 04:31:53.808531 1 pv_controller_base.go:294] Starting persistent volume controller
I0111 04:31:53.808545 1 shared_informer.go:197] Waiting for caches to sync for persistent volume
I0111 04:31:53.824086 1 shared_informer.go:197] Waiting for caches to sync for garbage collector
I0111 04:31:53.837446 1 shared_informer.go:204] Caches are synced for GC
W0111 04:31:53.837695 1 actual_state_of_world.go:506] Failed to update statusUpdateNeeded field in actual state of world: Failed to set statusUpdateNeeded to needed true, because nodeName="minikube" does not exist
I0111 04:31:53.842039 1 shared_informer.go:204] Caches are synced for ReplicationController
I0111 04:31:53.842659 1 shared_informer.go:204] Caches are synced for certificate-csrsigning
I0111 04:31:53.848822 1 shared_informer.go:204] Caches are synced for HPA
I0111 04:31:53.855266 1 shared_informer.go:204] Caches are synced for TTL
I0111 04:31:53.873704 1 shared_informer.go:204] Caches are synced for PVC protection
I0111 04:31:53.888626 1 shared_informer.go:204] Caches are synced for ClusterRoleAggregator
I0111 04:31:53.891236 1 shared_informer.go:204] Caches are synced for certificate-csrapproving
I0111 04:31:53.891295 1 shared_informer.go:204] Caches are synced for daemon sets
I0111 04:31:53.892414 1 shared_informer.go:204] Caches are synced for PV protection
I0111 04:31:53.893052 1 shared_informer.go:204] Caches are synced for stateful set
I0111 04:31:53.893845 1 shared_informer.go:204] Caches are synced for ReplicaSet
I0111 04:31:53.903875 1 shared_informer.go:204] Caches are synced for deployment
I0111 04:31:53.908740 1 shared_informer.go:204] Caches are synced for persistent volume
I0111 04:31:53.911740 1 shared_informer.go:204] Caches are synced for taint
I0111 04:31:53.911994 1 node_lifecycle_controller.go:1443] Initializing eviction metric for zone:
W0111 04:31:53.912351 1 node_lifecycle_controller.go:1058] Missing timestamp for Node minikube. Assuming now as a timestamp.
I0111 04:31:53.912420 1 node_lifecycle_controller.go:1259] Controller detected that zone is now in state Normal.
I0111 04:31:53.912819 1 taint_manager.go:186] Starting NoExecuteTaintManager
I0111 04:31:53.914426 1 event.go:281] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"minikube", UID:"31cde5ea-e6fd-4264-9124-e726847a7c4e", APIVersion:"", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node minikube event: Registered Node minikube in Controller
I0111 04:31:53.915134 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"coredns", UID:"d84b7e63-92c2-488e-8b28-387d05bae8ea", APIVersion:"apps/v1", ResourceVersion:"179", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set coredns-6955765f44 to 2
I0111 04:31:53.920838 1 event.go:281] Event(v1.ObjectReference{Kind:"DaemonSet", Namespace:"kube-system", Name:"kube-proxy", UID:"eaeb89e0-2410-4d44-8954-3bca5b100218", APIVersion:"apps/v1", ResourceVersion:"184", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-proxy-q557h
I0111 04:31:53.933686 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"79be41ed-12fd-48b6-be46-f51ed0408b90", APIVersion:"apps/v1", ResourceVersion:"300", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-tpld8
I0111 04:31:53.940553 1 shared_informer.go:204] Caches are synced for expand
I0111 04:31:53.940735 1 shared_informer.go:204] Caches are synced for bootstrap_signer
I0111 04:31:53.942069 1 shared_informer.go:204] Caches are synced for disruption
I0111 04:31:53.942162 1 disruption.go:338] Sending events to api server.
I0111 04:31:53.945213 1 shared_informer.go:204] Caches are synced for attach detach
I0111 04:31:53.955228 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"coredns-6955765f44", UID:"79be41ed-12fd-48b6-be46-f51ed0408b90", APIVersion:"apps/v1", ResourceVersion:"300", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: coredns-6955765f44-zs2zp
I0111 04:31:54.219462 1 shared_informer.go:204] Caches are synced for endpoint
I0111 04:31:54.227547 1 shared_informer.go:204] Caches are synced for job
I0111 04:31:54.273675 1 shared_informer.go:204] Caches are synced for namespace
I0111 04:31:54.341856 1 shared_informer.go:204] Caches are synced for service account
I0111 04:31:54.378676 1 shared_informer.go:204] Caches are synced for garbage collector
I0111 04:31:54.378722 1 garbagecollector.go:138] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
I0111 04:31:54.379468 1 shared_informer.go:204] Caches are synced for resource quota
I0111 04:31:54.407979 1 shared_informer.go:204] Caches are synced for resource quota
I0111 04:31:54.424536 1 shared_informer.go:204] Caches are synced for garbage collector
I0111 04:32:08.915664 1 node_lifecycle_controller.go:1209] Controller detected that all Nodes are not-Ready. Entering master disruption mode.
I0111 04:32:18.917650 1 node_lifecycle_controller.go:1236] Controller detected that some Nodes are Ready. Exiting master disruption mode.
I0111 04:32:25.028218 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper", UID:"6ae21cb2-c50f-4fe6-beef-854e4eb76b1e", APIVersion:"apps/v1", ResourceVersion:"488", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set dashboard-metrics-scraper-7b64584c5c to 1
I0111 04:32:25.043810 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-7b64584c5c", UID:"61ceb262-a0b5-4b7a-aa59-d088c985a6ef", APIVersion:"apps/v1", ResourceVersion:"489", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-7b64584c5c-mwgv4
I0111 04:32:25.084781 1 event.go:281] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"e5f9fd6c-5e39-49ff-baea-f744f8105345", APIVersion:"apps/v1", ResourceVersion:"493", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-79d9cd965 to 1
I0111 04:32:25.097302 1 event.go:281] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-79d9cd965", UID:"0a898f4d-a42e-44ef-bcd6-7ebb4083f56c", APIVersion:"apps/v1", ResourceVersion:"498", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-79d9cd965-zgd79
==> kube-proxy ["34e1d42b0fe3"] <==
W0111 04:32:08.055935 1 server_others.go:323] Unknown proxy mode "", assuming iptables proxy
I0111 04:32:08.064886 1 node.go:135] Successfully retrieved node IP: 192.168.99.107
I0111 04:32:08.065069 1 server_others.go:145] Using iptables Proxier.
W0111 04:32:08.065202 1 proxier.go:286] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0111 04:32:08.066091 1 server.go:571] Version: v1.17.0
I0111 04:32:08.066916 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0111 04:32:08.066953 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0111 04:32:08.067559 1 conntrack.go:83] Setting conntrack hashsize to 32768
I0111 04:32:08.073107 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0111 04:32:08.073242 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0111 04:32:08.073837 1 config.go:131] Starting endpoints config controller
I0111 04:32:08.073866 1 shared_informer.go:197] Waiting for caches to sync for endpoints config
I0111 04:32:08.073945 1 config.go:313] Starting service config controller
I0111 04:32:08.073960 1 shared_informer.go:197] Waiting for caches to sync for service config
I0111 04:32:08.174924 1 shared_informer.go:204] Caches are synced for service config
I0111 04:32:08.174989 1 shared_informer.go:204] Caches are synced for endpoints config
==> kube-scheduler ["36eb1c4d020d"] <==
I0111 04:31:39.456331 1 serving.go:312] Generated self-signed cert in-memory
W0111 04:31:40.757383 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0111 04:31:40.757499 1 configmap_cafile_content.go:102] unable to load initial CA bundle for: "client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file" due to: configmap "extension-apiserver-authentication" not found
W0111 04:31:42.806473 1 authentication.go:348] Unable to get configmap/extension-apiserver-authentication in kube-system. Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
W0111 04:31:42.806729 1 authentication.go:296] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
W0111 04:31:42.806865 1 authentication.go:297] Continuing without authentication configuration. This may treat all requests as anonymous.
W0111 04:31:42.806976 1 authentication.go:298] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
W0111 04:31:42.853002 1 authorization.go:47] Authorization is disabled
W0111 04:31:42.853193 1 authentication.go:92] Authentication is disabled
I0111 04:31:42.853380 1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0111 04:31:42.860205 1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
I0111 04:31:42.861571 1 configmap_cafile_content.go:205] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0111 04:31:42.862034 1 shared_informer.go:197] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0111 04:31:42.862201 1 tlsconfig.go:219] Starting DynamicServingCertificateController
E0111 04:31:42.871298 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0111 04:31:42.871593 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0111 04:31:42.871816 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0111 04:31:42.871902 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0111 04:31:42.872036 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0111 04:31:42.872212 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0111 04:31:42.872295 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0111 04:31:42.872568 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0111 04:31:42.872847 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0111 04:31:42.872873 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0111 04:31:42.873095 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0111 04:31:42.874218 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0111 04:31:43.872569 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0111 04:31:43.874467 1 reflector.go:156] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:246: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0111 04:31:43.875441 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0111 04:31:43.876477 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0111 04:31:43.879443 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0111 04:31:43.880753 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0111 04:31:43.881925 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0111 04:31:43.882198 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0111 04:31:43.891034 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0111 04:31:43.891260 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0111 04:31:43.896273 1 reflector.go:156] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0111 04:31:43.896571 1 reflector.go:156] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:209: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0111 04:31:44.975707 1 shared_informer.go:204] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
I0111 04:31:44.978908 1 leaderelection.go:242] attempting to acquire leader lease kube-system/kube-scheduler...
I0111 04:31:45.002387 1 leaderelection.go:252] successfully acquired lease kube-system/kube-scheduler
==> kubelet <==
-- Logs begin at Sat 2020-01-11 04:30:09 UTC, end at Sat 2020-01-11 04:36:28 UTC. --
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.428808 4953 server.go:143] Starting to listen on 0.0.0.0:10250
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.430329 4953 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.430651 4953 server.go:354] Adding debug handlers to kubelet server.
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.440469 4953 volume_manager.go:265] Starting Kubelet Volume Manager
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.451432 4953 desired_state_of_world_populator.go:138] Desired state populator starts to run
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.477466 4953 status_manager.go:157] Starting to sync pod status with apiserver
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.477541 4953 kubelet.go:1820] Starting kubelet main sync loop.
Jan 11 04:32:06 minikube kubelet[4953]: E0111 04:32:06.477612 4953 kubelet.go:1844] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.553386 4953 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Jan 11 04:32:06 minikube kubelet[4953]: E0111 04:32:06.585770 4953 kubelet.go:1844] skipping pod synchronization - container runtime status check may not have completed yet
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.598863 4953 kubelet_node_status.go:70] Attempting to register node minikube
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.611751 4953 kubelet_node_status.go:112] Node minikube was previously registered
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.612271 4953 kubelet_node_status.go:73] Successfully registered node minikube
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.678562 4953 setters.go:535] Node became not ready: {Type:Ready Status:False LastHeartbeatTime:2020-01-11 04:32:06.678543783 +0000 UTC m=+20.942659931 LastTransitionTime:2020-01-11 04:32:06.678543783 +0000 UTC m=+20.942659931 Reason:KubeletNotReady Message:container runtime status check may not have completed yet}
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.692737 4953 cpu_manager.go:173] [cpumanager] starting with none policy
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.692838 4953 cpu_manager.go:174] [cpumanager] reconciling every 10s
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.692903 4953 policy_none.go:43] [cpumanager] none policy: Start
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.707011 4953 plugin_manager.go:114] Starting Kubelet Plugin Manager
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.864867 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "addons" (UniqueName: "kubernetes.io/host-path/c3e29047da86ce6690916750ab69c40b-addons") pod "kube-addon-manager-minikube" (UID: "c3e29047da86ce6690916750ab69c40b")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.864936 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-certs" (UniqueName: "kubernetes.io/host-path/d0fcec0112ed9b5ad5e26312bb465ed4-etcd-certs") pod "etcd-minikube" (UID: "d0fcec0112ed9b5ad5e26312bb465ed4")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.864964 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-k8s-certs") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.864988 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-kubeconfig") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865019 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865046 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f64d1dee-cdf7-4407-bf7a-38e53b80c837-config-volume") pod "coredns-6955765f44-zs2zp" (UID: "f64d1dee-cdf7-4407-bf7a-38e53b80c837")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865067 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/c3e29047da86ce6690916750ab69c40b-kubeconfig") pod "kube-addon-manager-minikube" (UID: "c3e29047da86ce6690916750ab69c40b")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865090 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/d1f1f371c8ef746141a53b09f08d43c6-usr-share-ca-certificates") pod "kube-apiserver-minikube" (UID: "d1f1f371c8ef746141a53b09f08d43c6")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865117 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "storage-provisioner-token-wwnt6" (UniqueName: "kubernetes.io/secret/710408c3-61c1-4bc3-96ba-3635451b70df-storage-provisioner-token-wwnt6") pod "storage-provisioner" (UID: "710408c3-61c1-4bc3-96ba-3635451b70df")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865138 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/c5927ac6-2872-4fde-a910-1e735bccf412-xtables-lock") pod "kube-proxy-q557h" (UID: "c5927ac6-2872-4fde-a910-1e735bccf412")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865159 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/c5927ac6-2872-4fde-a910-1e735bccf412-lib-modules") pod "kube-proxy-q557h" (UID: "c5927ac6-2872-4fde-a910-1e735bccf412")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865181 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/627db42b-a209-4a53-843e-582f9f6ded2d-config-volume") pod "coredns-6955765f44-tpld8" (UID: "627db42b-a209-4a53-843e-582f9f6ded2d")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865202 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-ca-certs") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865224 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c5927ac6-2872-4fde-a910-1e735bccf412-kube-proxy") pod "kube-proxy-q557h" (UID: "c5927ac6-2872-4fde-a910-1e735bccf412")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865248 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-8hnsq" (UniqueName: "kubernetes.io/secret/f64d1dee-cdf7-4407-bf7a-38e53b80c837-coredns-token-8hnsq") pod "coredns-6955765f44-zs2zp" (UID: "f64d1dee-cdf7-4407-bf7a-38e53b80c837")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865268 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/d1f1f371c8ef746141a53b09f08d43c6-k8s-certs") pod "kube-apiserver-minikube" (UID: "d1f1f371c8ef746141a53b09f08d43c6")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865289 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/ff67867321338ffd885039e188f6b424-kubeconfig") pod "kube-scheduler-minikube" (UID: "ff67867321338ffd885039e188f6b424")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865311 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "etcd-data" (UniqueName: "kubernetes.io/host-path/d0fcec0112ed9b5ad5e26312bb465ed4-etcd-data") pod "etcd-minikube" (UID: "d0fcec0112ed9b5ad5e26312bb465ed4")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865333 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/d1f1f371c8ef746141a53b09f08d43c6-ca-certs") pod "kube-apiserver-minikube" (UID: "d1f1f371c8ef746141a53b09f08d43c6")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865410 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp" (UniqueName: "kubernetes.io/host-path/710408c3-61c1-4bc3-96ba-3635451b70df-tmp") pod "storage-provisioner" (UID: "710408c3-61c1-4bc3-96ba-3635451b70df")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865437 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/e7ce3a6ee9fa0ec547ac7b4b17af0dcb-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "e7ce3a6ee9fa0ec547ac7b4b17af0dcb")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865462 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-fkllm" (UniqueName: "kubernetes.io/secret/c5927ac6-2872-4fde-a910-1e735bccf412-kube-proxy-token-fkllm") pod "kube-proxy-q557h" (UID: "c5927ac6-2872-4fde-a910-1e735bccf412")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865484 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "coredns-token-8hnsq" (UniqueName: "kubernetes.io/secret/627db42b-a209-4a53-843e-582f9f6ded2d-coredns-token-8hnsq") pod "coredns-6955765f44-tpld8" (UID: "627db42b-a209-4a53-843e-582f9f6ded2d")
Jan 11 04:32:06 minikube kubelet[4953]: I0111 04:32:06.865491 4953 reconciler.go:156] Reconciler: start to sync state
Jan 11 04:32:07 minikube kubelet[4953]: E0111 04:32:07.978192 4953 secret.go:195] Couldn't get secret kube-system/storage-provisioner-token-wwnt6: failed to sync secret cache: timed out waiting for the condition
Jan 11 04:32:07 minikube kubelet[4953]: E0111 04:32:07.981468 4953 nestedpendingoperations.go:270] Operation for ""kubernetes.io/secret/710408c3-61c1-4bc3-96ba-3635451b70df-storage-provisioner-token-wwnt6" ("710408c3-61c1-4bc3-96ba-3635451b70df")" failed. No retries permitted until 2020-01-11 04:32:08.48143603 +0000 UTC m=+22.745552186 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume "storage-provisioner-token-wwnt6" (UniqueName: "kubernetes.io/secret/710408c3-61c1-4bc3-96ba-3635451b70df-storage-provisioner-token-wwnt6") pod "storage-provisioner" (UID: "710408c3-61c1-4bc3-96ba-3635451b70df") : failed to sync secret cache: timed out waiting for the condition"
Jan 11 04:32:08 minikube kubelet[4953]: W0111 04:32:08.524914 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-tpld8 through plugin: invalid network status for
Jan 11 04:32:08 minikube kubelet[4953]: W0111 04:32:08.578510 4953 pod_container_deletor.go:75] Container "08458da66447590044d2986803f5ed99fe52d6553c78f4911b77273a3041e8b9" not found in pod's containers
Jan 11 04:32:08 minikube kubelet[4953]: W0111 04:32:08.580356 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-tpld8 through plugin: invalid network status for
Jan 11 04:32:08 minikube kubelet[4953]: W0111 04:32:08.582810 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-zs2zp through plugin: invalid network status for
Jan 11 04:32:09 minikube kubelet[4953]: W0111 04:32:09.914494 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-zs2zp through plugin: invalid network status for
Jan 11 04:32:09 minikube kubelet[4953]: W0111 04:32:09.924127 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/coredns-6955765f44-tpld8 through plugin: invalid network status for
Jan 11 04:32:25 minikube kubelet[4953]: I0111 04:32:25.159211 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/dd8acce0-adaf-441d-ad99-ed9ff4395434-tmp-volume") pod "dashboard-metrics-scraper-7b64584c5c-mwgv4" (UID: "dd8acce0-adaf-441d-ad99-ed9ff4395434")
Jan 11 04:32:25 minikube kubelet[4953]: I0111 04:32:25.159869 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-zhpxd" (UniqueName: "kubernetes.io/secret/dd8acce0-adaf-441d-ad99-ed9ff4395434-kubernetes-dashboard-token-zhpxd") pod "dashboard-metrics-scraper-7b64584c5c-mwgv4" (UID: "dd8acce0-adaf-441d-ad99-ed9ff4395434")
Jan 11 04:32:25 minikube kubelet[4953]: I0111 04:32:25.260311 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "kubernetes-dashboard-token-zhpxd" (UniqueName: "kubernetes.io/secret/98f11ecc-e7a4-464f-9ed3-a93e51a11f1a-kubernetes-dashboard-token-zhpxd") pod "kubernetes-dashboard-79d9cd965-zgd79" (UID: "98f11ecc-e7a4-464f-9ed3-a93e51a11f1a")
Jan 11 04:32:25 minikube kubelet[4953]: I0111 04:32:25.260346 4953 reconciler.go:209] operationExecutor.VerifyControllerAttachedVolume started for volume "tmp-volume" (UniqueName: "kubernetes.io/empty-dir/98f11ecc-e7a4-464f-9ed3-a93e51a11f1a-tmp-volume") pod "kubernetes-dashboard-79d9cd965-zgd79" (UID: "98f11ecc-e7a4-464f-9ed3-a93e51a11f1a")
Jan 11 04:32:26 minikube kubelet[4953]: W0111 04:32:26.176712 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-zgd79 through plugin: invalid network status for
Jan 11 04:32:26 minikube kubelet[4953]: W0111 04:32:26.203388 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-mwgv4 through plugin: invalid network status for
Jan 11 04:32:26 minikube kubelet[4953]: W0111 04:32:26.434893 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-zgd79 through plugin: invalid network status for
Jan 11 04:32:26 minikube kubelet[4953]: W0111 04:32:26.458413 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-mwgv4 through plugin: invalid network status for
Jan 11 04:32:27 minikube kubelet[4953]: W0111 04:32:27.750255 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/kubernetes-dashboard-79d9cd965-zgd79 through plugin: invalid network status for
Jan 11 04:32:27 minikube kubelet[4953]: W0111 04:32:27.792151 4953 docker_sandbox.go:394] failed to read pod IP from plugin/docker: Couldn't find network status for kubernetes-dashboard/dashboard-metrics-scraper-7b64584c5c-mwgv4 through plugin: invalid network status for
==> kubernetes-dashboard ["dd4279cbb42a"] <==
2020/01/11 04:32:26 Starting overwatch
2020/01/11 04:32:26 Using namespace: kubernetes-dashboard
2020/01/11 04:32:26 Using in-cluster config to connect to apiserver
2020/01/11 04:32:26 Using secret token for csrf signing
2020/01/11 04:32:26 Initializing csrf token from kubernetes-dashboard-csrf secret
2020/01/11 04:32:26 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2020/01/11 04:32:26 Successful initial request to the apiserver, version: v1.17.0
2020/01/11 04:32:26 Generating JWE encryption key
2020/01/11 04:32:26 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2020/01/11 04:32:26 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2020/01/11 04:32:26 Initializing JWE encryption key from synchronized object
2020/01/11 04:32:26 Creating in-cluster Sidecar client
2020/01/11 04:32:26 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2020/01/11 04:32:26 Serving insecurely on HTTP port: 9090
2020/01/11 04:32:56 Successful request to sidecar
==> storage-provisioner ["85a52a6ea02b"] <==
The operating system version:
➜ ~ cat /etc/os-release
NAME="Linux Mint"
VERSION="19.3 (Tricia)"
ID=linuxmint
ID_LIKE=ubuntu
PRETTY_NAME="Linux Mint 19.3"
VERSION_ID="19.3"
HOME_URL="https://www.linuxmint.com/"
SUPPORT_URL="https://forums.ubuntu.com/"
BUG_REPORT_URL="http://linuxmint-troubleshooting-guide.readthedocs.io/en/latest/"
PRIVACY_POLICY_URL="https://www.linuxmint.com/"
VERSION_CODENAME=tricia
UBUNTU_CODENAME=bionic
➜ ~ lsb_release -a
No LSB modules are available.
Distributor ID: LinuxMint
Description: Linux Mint 19.3 Tricia
Release: 19.3
Codename: tricia
➜ ~ uname -r
5.0.0-37-generic