
hyperkit + docker proxy: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection #4589

Closed
dfang opened this issue Jun 25, 2019 · 44 comments

Labels: cause/firewall-or-proxy (when firewalls or proxies seem to be interfering), kind/support (categorizes issue or PR as a support question), triage/needs-information (indicates an issue needs more information in order to work on it)

dfang (Contributor) commented Jun 25, 2019

I switched to Docker for Mac long ago. Since the new minikube release supports LoadBalancer services, I tried minikube again today, but I can't pull any images.

The exact command to reproduce the issue:

minikube start --vm-driver hyperkit --insecure-registry raspberrypi.local --registry-mirror https://registry.docker-cn.com --docker-env HTTP_PROXY=http://`ipconfig getifaddr en0`:8118 --docker-env HTTPS_PROXY=http://`ipconfig getifaddr en0`:8118 --logtostderr --v=3
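For context, the backtick command substitutions embed the Mac's en0 address into the --docker-env proxy settings, so the Docker daemon inside the VM is told to route pulls back through a proxy running on the host. The address below is just an example from this machine:

```shell
# Resolves to the host's primary interface address (e.g. 192.168.66.182 here),
# so the daemon inside the VM ends up with HTTP_PROXY=http://192.168.66.182:8118
ipconfig getifaddr en0
```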

The full output of the command that failed:
After startup, I switched to the minikube context and ran:

kubectl run alpine --image=alpine

kubectl get pod -w
NAME                      READY   STATUS         RESTARTS   AGE
alpine-65cfc795bb-jqqmk   0/1     ErrImagePull   0          39s
alpine-65cfc795bb-jqqmk   0/1     ImagePullBackOff   0          45s
alpine-65cfc795bb-jqqmk   0/1     ErrImagePull       0          87s
alpine-65cfc795bb-jqqmk   0/1     ImagePullBackOff   0          101s
alpine-65cfc795bb-jqqmk   0/1     ErrImagePull       0          2m24s
alpine-65cfc795bb-jqqmk   0/1     ImagePullBackOff   0          2m38s
alpine-65cfc795bb-jqqmk   0/1     ErrImagePull       0          3m47s
alpine-65cfc795bb-jqqmk   0/1     ImagePullBackOff   0          4m2s
alpine-65cfc795bb-jqqmk   0/1     ErrImagePull       0          5m41s
alpine-65cfc795bb-jqqmk   0/1     ImagePullBackOff   0          5m54s
alpine-65cfc795bb-jqqmk   0/1     ErrImagePull       0          8m57s
alpine-65cfc795bb-jqqmk   0/1     ImagePullBackOff   0          9m10s

minikube addons list | grep dns shows nothing. Is that the problem? Why were kube-dns and coredns removed from the addons list?

docker info shows that HTTP_PROXY and HTTPS_PROXY are correctly configured:

 minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ docker info
Containers: 22
 Running: 22
 Paused: 0
 Stopped: 0
Images: 13
Server Version: 18.09.6
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: N/A
init version: N/A (expected: )
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.15.0
Operating System: Buildroot 2018.05.3
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.944GiB
Name: minikube
ID: XED3:NANC:4SSU:7Z7H:NQPJ:N6K5:K4KP:VSEW:3XI5:7UCC:OQZD:ZQSL
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTP Proxy: http://192.168.66.182:8118
HTTPS Proxy: http://192.168.66.182:8118
No Proxy: 192.168.99.0/24
Registry: https://index.docker.io/v1/
Labels:
 provider=hyperkit
Experimental: false
Insecure Registries:
 10.96.0.0/12
 127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine

minikube ssh
docker pull alpine
Using default tag: latest
Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
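The timeout suggests the daemon cannot actually reach the proxy from inside the VM. A quick diagnostic sketch from within `minikube ssh` (hypothetical commands; adjust the proxy address to match the one shown in `docker info`, and note the minikube ISO's busybox `nc` may lack some flags):

```shell
# Is the proxy port reachable from the VM at all?
nc -vz 192.168.66.182 8118

# Does an HTTPS request routed through the proxy reach Docker Hub?
curl -x http://192.168.66.182:8118 --max-time 10 -sS -o /dev/null \
     -w 'HTTP %{http_code}\n' https://registry-1.docker.io/v2/
```

If the first command fails, the proxy on the Mac is either not listening on that interface or is firewalled off from the hyperkit network.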

Where is the Docker daemon configuration inside the minikube VM (via minikube ssh)?
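For reference, on the Buildroot-based minikube ISO the proxy variables are injected into the Docker systemd unit rather than written to a standalone config file (an assumption; paths may differ by minikube version). Inside `minikube ssh`:

```shell
# Show the environment minikube passed to the Docker daemon
# (should include HTTP_PROXY and HTTPS_PROXY)
systemctl show docker --property=Environment

# Daemon flags (insecure registries, registry mirrors) live in the
# docker-machine provisioner drop-in
cat /etc/systemd/system/docker.service.d/10-machine.conf
```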

The output of the minikube logs command:

 minikube logs
==> coredns <==
.:53
2019-06-25T16:07:23.040Z [INFO] CoreDNS-1.3.1
2019-06-25T16:07:23.040Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-06-25T16:07:23.040Z [INFO] plugin/reload: Running configuration MD5 = 5d5369fbc12f985709b924e721217843
2019-06-25T16:07:25.095Z [ERROR] plugin/errors: 2 3854395251452095566.3592777180853197066. HINFO: read udp 172.17.0.5:41439->192.168.64.1:53: read: connection refused
2019-06-25T16:07:26.045Z [ERROR] plugin/errors: 2 3854395251452095566.3592777180853197066. HINFO: read udp 172.17.0.5:43577->192.168.64.1:53: read: connection refused

==> dmesg <==
[Jun25 16:05] ERROR: earlyprintk= earlyser already used
[  +0.000000] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xC0, should be 0x1D (20170831/tbprint-211)
[  +0.000000] ACPI Error: Could not enable RealTimeClock event (20170831/evxfevnt-218)
[  +0.000003] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20170831/evxface-654)
[  +0.011808] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.345055] systemd-fstab-generator[1040]: Ignoring "noauto" for root device
[  +0.010461] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000002] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.553504] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +1.058281] vboxguest: loading out-of-tree module taints kernel.
[  +0.003743] vboxguest: PCI device not found, probably running on physical hardware.
[  +3.333130] systemd-fstab-generator[1843]: Ignoring "noauto" for root device
[Jun25 16:06] systemd-fstab-generator[2550]: Ignoring "noauto" for root device
[ +15.855302] systemd-fstab-generator[2734]: Ignoring "noauto" for root device
[Jun25 16:07] kauditd_printk_skb: 68 callbacks suppressed
[ +12.126576] tee (3401): /proc/3123/oom_adj is deprecated, please use /proc/3123/oom_score_adj instead.
[  +6.266815] kauditd_printk_skb: 20 callbacks suppressed
[  +7.416826] kauditd_printk_skb: 71 callbacks suppressed
[ +24.097318] NFSD: Unable to end grace period: -110
[  +9.066804] kauditd_printk_skb: 2 callbacks suppressed

==> kernel <==
 16:15:06 up 9 min,  0 users,  load average: 0.08, 0.25, 0.17
Linux minikube 4.15.0 #1 SMP Sun Jun 23 23:02:01 PDT 2019 x86_64 GNU/Linux

==> kube-addon-manager <==
error: no objects passed to apply
error: no objects passed to apply
error: no objects passed to apply
serviceaccount/storage-provisioner unchanged
error: no objects passed to apply
INFO: == Kubernetes addon reconcile completed at 2019-06-25T16:10:21+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-25T16:11:19+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
daemonset.extensions/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-25T16:11:21+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-25T16:12:19+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
daemonset.extensions/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-25T16:12:21+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-25T16:13:19+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
daemonset.extensions/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-25T16:13:21+00:00 ==
INFO: Leader is minikube
INFO: == Kubernetes addon ensure completed at 2019-06-25T16:14:19+00:00 ==
INFO: == Reconciling with deprecated label ==
INFO: == Reconciling with addon-manager label ==
deployment.apps/kubernetes-dashboard unchanged
service/kubernetes-dashboard unchanged
daemonset.extensions/registry-proxy unchanged
replicationcontroller/registry unchanged
service/registry unchanged
serviceaccount/storage-provisioner unchanged
INFO: == Kubernetes addon reconcile completed at 2019-06-25T16:14:21+00:00 ==

==> kube-apiserver <==
I0625 16:07:07.021556       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0625 16:07:07.023514       1 client.go:354] parsed scheme: ""
I0625 16:07:07.023577       1 client.go:354] scheme "" not registered, fallback to default scheme
I0625 16:07:07.023672       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0625 16:07:07.023752       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0625 16:07:07.033132       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0625 16:07:07.033834       1 client.go:354] parsed scheme: ""
I0625 16:07:07.033868       1 client.go:354] scheme "" not registered, fallback to default scheme
I0625 16:07:07.033910       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0625 16:07:07.033986       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0625 16:07:07.047767       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0625 16:07:08.807771       1 secure_serving.go:116] Serving securely on [::]:8443
I0625 16:07:08.808074       1 autoregister_controller.go:140] Starting autoregister controller
I0625 16:07:08.808247       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0625 16:07:08.808368       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0625 16:07:08.808448       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0625 16:07:08.808644       1 crd_finalizer.go:255] Starting CRDFinalizer
I0625 16:07:08.809591       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0625 16:07:08.809644       1 controller_utils.go:1029] Waiting for caches to sync for crd-autoregister controller
I0625 16:07:08.809672       1 controller.go:83] Starting OpenAPI controller
I0625 16:07:08.809771       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0625 16:07:08.809799       1 naming_controller.go:288] Starting NamingConditionController
I0625 16:07:08.809903       1 establishing_controller.go:73] Starting EstablishingController
I0625 16:07:08.809966       1 nonstructuralschema_controller.go:191] Starting NonStructuralSchemaConditionController
I0625 16:07:08.813910       1 controller.go:81] Starting OpenAPI AggregationController
I0625 16:07:08.819705       1 available_controller.go:374] Starting AvailableConditionController
I0625 16:07:08.819901       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
E0625 16:07:08.821016       1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.64.95, ResourceVersion: 0, AdditionalErrorMsg:
I0625 16:07:09.007539       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0625 16:07:09.008829       1 cache.go:39] Caches are synced for autoregister controller
I0625 16:07:09.009392       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0625 16:07:09.010152       1 controller_utils.go:1036] Caches are synced for crd-autoregister controller
I0625 16:07:09.199534       1 controller.go:606] quota admission added evaluator for: namespaces
I0625 16:07:09.810498       1 controller.go:107] OpenAPI AggregationController: Processing item
I0625 16:07:09.811135       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0625 16:07:09.811504       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0625 16:07:09.819537       1 storage_scheduling.go:119] created PriorityClass system-node-critical with value 2000001000
I0625 16:07:09.833179       1 storage_scheduling.go:119] created PriorityClass system-cluster-critical with value 2000000000
I0625 16:07:09.833217       1 storage_scheduling.go:128] all system priority classes are created successfully or already exist.
I0625 16:07:11.100994       1 controller.go:606] quota admission added evaluator for: endpoints
I0625 16:07:11.591743       1 controller.go:606] quota admission added evaluator for: roles.rbac.authorization.k8s.io
I0625 16:07:11.870933       1 controller.go:606] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
W0625 16:07:12.150110       1 lease.go:223] Resetting endpoints for master service "kubernetes" to [192.168.64.95]
I0625 16:07:12.262562       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0625 16:07:12.334372       1 controller.go:606] quota admission added evaluator for: serviceaccounts
I0625 16:07:13.621979       1 controller.go:606] quota admission added evaluator for: deployments.apps
I0625 16:07:13.940508       1 controller.go:606] quota admission added evaluator for: daemonsets.apps
I0625 16:07:19.074837       1 controller.go:606] quota admission added evaluator for: replicasets.apps
I0625 16:07:19.173246       1 controller.go:606] quota admission added evaluator for: controllerrevisions.apps
I0625 16:07:20.745608       1 controller.go:606] quota admission added evaluator for: daemonsets.extensions

==> kube-proxy <==
W0625 16:07:20.130262       1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I0625 16:07:20.148366       1 server_others.go:143] Using iptables Proxier.
W0625 16:07:20.148694       1 proxier.go:321] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0625 16:07:20.149329       1 server.go:534] Version: v1.15.0
I0625 16:07:20.167260       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0625 16:07:20.167316       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0625 16:07:20.168013       1 conntrack.go:83] Setting conntrack hashsize to 32768
I0625 16:07:20.171224       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0625 16:07:20.171496       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0625 16:07:20.172500       1 config.go:96] Starting endpoints config controller
I0625 16:07:20.172547       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0625 16:07:20.172729       1 config.go:187] Starting service config controller
I0625 16:07:20.172767       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0625 16:07:20.274256       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0625 16:07:20.276220       1 controller_utils.go:1036] Caches are synced for service config controller

==> kube-scheduler <==
I0625 16:07:04.308895       1 serving.go:319] Generated self-signed cert in-memory
W0625 16:07:04.784446       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0625 16:07:04.784692       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0625 16:07:04.784764       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0625 16:07:04.787458       1 server.go:142] Version: v1.15.0
I0625 16:07:04.787614       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0625 16:07:04.789443       1 authorization.go:47] Authorization is disabled
W0625 16:07:04.789564       1 authentication.go:55] Authentication is disabled
I0625 16:07:04.789758       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
I0625 16:07:04.790404       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0625 16:07:08.951988       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0625 16:07:08.952587       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0625 16:07:08.952659       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0625 16:07:08.952750       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0625 16:07:08.952839       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0625 16:07:08.952872       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0625 16:07:08.952912       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0625 16:07:08.954754       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0625 16:07:08.954938       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0625 16:07:08.955179       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0625 16:07:09.954405       1 reflector.go:125] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:226: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0625 16:07:09.957532       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0625 16:07:09.958089       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0625 16:07:09.959584       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0625 16:07:09.960675       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0625 16:07:09.961523       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0625 16:07:09.963177       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0625 16:07:09.963834       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0625 16:07:09.966751       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0625 16:07:09.968573       1 reflector.go:125] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
I0625 16:07:11.795725       1 leaderelection.go:235] attempting to acquire leader lease  kube-system/kube-scheduler...
I0625 16:07:11.802687       1 leaderelection.go:245] successfully acquired lease kube-system/kube-scheduler
E0625 16:07:19.171550       1 factory.go:702] pod is already present in the activeQ

==> kubelet <==
-- Logs begin at Tue 2019-06-25 16:05:50 UTC, end at Tue 2019-06-25 16:15:06 UTC. --
Jun 25 16:10:34 minikube kubelet[2752]: E0625 16:10:34.884629    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:10:38 minikube kubelet[2752]: E0625 16:10:38.884625    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:10:45 minikube kubelet[2752]: E0625 16:10:45.882865    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:10:49 minikube kubelet[2752]: E0625 16:10:49.883455    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:10:59 minikube kubelet[2752]: E0625 16:10:59.883769    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:11:02 minikube kubelet[2752]: E0625 16:11:02.881641    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:11:10 minikube kubelet[2752]: E0625 16:11:10.888356    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:11:25 minikube kubelet[2752]: E0625 16:11:25.885850    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:11:32 minikube kubelet[2752]: E0625 16:11:32.896392    2752 remote_image.go:113] PullImage "registry.hub.docker.com/library/registry:2.6.1" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry.hub.docker.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jun 25 16:11:32 minikube kubelet[2752]: E0625 16:11:32.896472    2752 kuberuntime_image.go:51] Pull image "registry.hub.docker.com/library/registry:2.6.1" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry.hub.docker.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jun 25 16:11:32 minikube kubelet[2752]: E0625 16:11:32.896592    2752 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://registry.hub.docker.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jun 25 16:11:32 minikube kubelet[2752]: E0625 16:11:32.896634    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://registry.hub.docker.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jun 25 16:11:44 minikube kubelet[2752]: E0625 16:11:44.883803    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:11:52 minikube kubelet[2752]: E0625 16:11:52.888993    2752 remote_image.go:113] PullImage "gcr.io/google_containers/kube-registry-proxy:0.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jun 25 16:11:52 minikube kubelet[2752]: E0625 16:11:52.889205    2752 kuberuntime_image.go:51] Pull image "gcr.io/google_containers/kube-registry-proxy:0.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jun 25 16:11:52 minikube kubelet[2752]: E0625 16:11:52.889970    2752 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jun 25 16:11:52 minikube kubelet[2752]: E0625 16:11:52.890480    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jun 25 16:11:56 minikube kubelet[2752]: E0625 16:11:56.883090    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:12:07 minikube kubelet[2752]: E0625 16:12:07.899325    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:12:08 minikube kubelet[2752]: E0625 16:12:08.883020    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:12:19 minikube kubelet[2752]: E0625 16:12:19.886568    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:12:20 minikube kubelet[2752]: E0625 16:12:20.882142    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:12:31 minikube kubelet[2752]: E0625 16:12:31.885875    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:12:31 minikube kubelet[2752]: E0625 16:12:31.887906    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:12:45 minikube kubelet[2752]: E0625 16:12:45.886825    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:12:45 minikube kubelet[2752]: E0625 16:12:45.887110    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:12:57 minikube kubelet[2752]: E0625 16:12:57.882893    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:13:00 minikube kubelet[2752]: E0625 16:13:00.882144    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:13:11 minikube kubelet[2752]: E0625 16:13:11.883928    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:13:11 minikube kubelet[2752]: E0625 16:13:11.884711    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:13:23 minikube kubelet[2752]: E0625 16:13:23.886667    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:13:26 minikube kubelet[2752]: E0625 16:13:26.883180    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:13:37 minikube kubelet[2752]: E0625 16:13:37.884723    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:13:38 minikube kubelet[2752]: E0625 16:13:38.882416    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:13:49 minikube kubelet[2752]: E0625 16:13:49.885667    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:13:52 minikube kubelet[2752]: E0625 16:13:52.883849    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:14:01 minikube kubelet[2752]: E0625 16:14:01.885601    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:14:05 minikube kubelet[2752]: E0625 16:14:05.884446    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:14:18 minikube kubelet[2752]: E0625 16:14:18.882463    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:14:28 minikube kubelet[2752]: E0625 16:14:28.888739    2752 remote_image.go:113] PullImage "registry.hub.docker.com/library/registry:2.6.1" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry.hub.docker.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jun 25 16:14:28 minikube kubelet[2752]: E0625 16:14:28.888855    2752 kuberuntime_image.go:51] Pull image "registry.hub.docker.com/library/registry:2.6.1" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://registry.hub.docker.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jun 25 16:14:28 minikube kubelet[2752]: E0625 16:14:28.888928    2752 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://registry.hub.docker.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jun 25 16:14:28 minikube kubelet[2752]: E0625 16:14:28.888965    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://registry.hub.docker.com/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jun 25 16:14:32 minikube kubelet[2752]: E0625 16:14:32.883929    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kube-registry-proxy:0.4\""
Jun 25 16:14:40 minikube kubelet[2752]: E0625 16:14:40.882548    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:14:54 minikube kubelet[2752]: E0625 16:14:54.882891    2752 pod_workers.go:190] Error syncing pod 859f0f0b-4d03-4563-a3bf-018addae20bb ("registry-k8jdk_kube-system(859f0f0b-4d03-4563-a3bf-018addae20bb)"), skipping: failed to "StartContainer" for "registry" with ImagePullBackOff: "Back-off pulling image \"registry.hub.docker.com/library/registry:2.6.1\""
Jun 25 16:14:58 minikube kubelet[2752]: E0625 16:14:58.890309    2752 remote_image.go:113] PullImage "gcr.io/google_containers/kube-registry-proxy:0.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jun 25 16:14:58 minikube kubelet[2752]: E0625 16:14:58.890358    2752 kuberuntime_image.go:51] Pull image "gcr.io/google_containers/kube-registry-proxy:0.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jun 25 16:14:58 minikube kubelet[2752]: E0625 16:14:58.890426    2752 kuberuntime_manager.go:775] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Jun 25 16:14:58 minikube kubelet[2752]: E0625 16:14:58.890463    2752 pod_workers.go:190] Error syncing pod 64a29457-5493-4036-8dd1-6da4bc8a9a8d ("registry-proxy-msssn_kube-system(64a29457-5493-4036-8dd1-6da4bc8a9a8d)"), skipping: failed to "StartContainer" for "registry-proxy" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"

==> kubernetes-dashboard <==
2019/06/25 16:07:22 Starting overwatch
2019/06/25 16:07:22 Using in-cluster config to connect to apiserver
2019/06/25 16:07:22 Using service account token for csrf signing
2019/06/25 16:07:22 Successful initial request to the apiserver, version: v1.15.0
2019/06/25 16:07:22 Generating JWE encryption key
2019/06/25 16:07:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/06/25 16:07:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/06/25 16:07:22 Storing encryption key in a secret
2019/06/25 16:07:22 Creating in-cluster Heapster client
2019/06/25 16:07:22 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:07:22 Serving insecurely on HTTP port: 9090
2019/06/25 16:07:52 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:08:22 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:08:53 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:09:23 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:09:53 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:10:23 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:10:53 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:11:23 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:11:53 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:12:23 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:12:53 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:13:23 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:13:53 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:14:23 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 16:14:53 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.

==> storage-provisioner <==

The operating system version:
macOS 10.14.5

@medyagh
Member

medyagh commented Jun 25, 2019

Does this issue happen on restarting minikube, or on first start?

@medyagh
Member

medyagh commented Jun 25, 2019

Have you verified that you can access Docker Hub through the proxy?

@medyagh medyagh added the triage/needs-information Indicates an issue needs more information in order to work on it. label Jun 25, 2019
@dfang
Contributor Author

dfang commented Jun 26, 2019

@medyagh

  1. Tried restarting, even minikube stop && minikube delete && minikube start.
  2. Tried docker pull inside minikube ssh; the pull failed. Also tried curl -I baidu.com and curl -I google.com; neither host could be resolved.

@dfang
Contributor Author

dfang commented Jun 26, 2019

@medyagh
Unset any HTTP_PROXY and HTTPS_PROXY, then ran minikube start without any proxy settings.

in minikube ssh,

$ docker pull alpine
Using default tag: latest
Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.95:47445->192.168.64.1:53: read: connection refused


$ curl -I docker.io
curl: (6) Could not resolve host: docker.io


$ ps -afe | grep dns
root      2509     1  3 00:19 ?        00:00:18 /usr/bin/kubelet --authorization-mode=Webhook --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroup-driver=cgroupfs --client-ca-file=/var/lib/minikube/certs/ca.crt --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --container-runtime=docker --fail-swap-on=false --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests
root      3935  3908  0 00:19 ?        00:00:02 /coredns -conf /etc/coredns/Corefile
root      3960  3933  0 00:19 ?        00:00:02 /coredns -conf /etc/coredns/Corefile
docker    7626  4561  0 00:27 pts/0    00:00:00 grep dns


$ ls /etc/coredns/Corefile
ls: cannot access '/etc/coredns/Corefile': No such file or directory
$ sudo cat /etc/coredns/Corefile
cat: /etc/coredns/Corefile: No such file or directory
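(A side note on the two "No such file" results above: that is expected. The Corefile lives in the coredns ConfigMap in kube-system and is mounted only inside the CoreDNS containers, so ps shows the -conf path even though the VM's root filesystem has no such file. The pull failure is about the VM's own resolver: /etc/resolv.conf points at 192.168.64.1:53, which refused the lookup. A minimal sketch for extracting the nameserver a resolv.conf-style file designates follows; the file name and sample contents are illustrative, not taken from the VM.)

```shell
# Hedged sketch: print the first nameserver from a resolv.conf-style file.
# Inside the minikube VM you would point it at /etc/resolv.conf; here it
# reads an illustrative sample so it can be tried anywhere.
first_nameserver() {
  awk '/^nameserver/ { print $2; exit }' "$1"
}

cat > /tmp/resolv.conf.sample <<'EOF'
# Generated by the VM's DHCP client (sample)
nameserver 192.168.64.1
EOF

first_nameserver /tmp/resolv.conf.sample   # prints: 192.168.64.1
```

If this prints the hyperkit gateway (192.168.64.1 here) and that address refuses DNS queries, every pull inside the VM will fail exactly as in the logs above.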

@dfang dfang closed this as completed Jun 28, 2019
@nisiyong

@dfang Could you share the final solution here? I ran into the same problem...

@eltonkevani

For me, the hyperkit driver on Mac had the same issue. Switching to VMware Fusion did the trick, though that's not a solution if you want to use hyperkit.

@vikas86

vikas86 commented Dec 23, 2019

Hi,
I'm facing the same issue while starting minikube on Windows.
Is there a specific solution or workaround for this?

minikube start --vm-driver=hyperv

  • minikube v1.6.1 on Microsoft Windows 10 Pro 10.0.17763 Build 17763
  • Selecting 'hyperv' driver from user configuration (alternates: [])
  • Creating hyperv VM (CPUs=2, Memory=2000MB, Disk=20000MB) ...
    ! VM is unable to access k8s.gcr.io, you may need to configure a proxy or set --image-repository
  • Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
  • Pulling images ...
  • Unable to pull images, which may be OK: running cmd: "/bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml"": /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.17.0:$PATH kubeadm config images pull --config /var/tmp/minikube/kubeadm.yaml": Process exited with status 1
    stdout:

stderr:
W1223 11:14:47.619633 3662 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1223 11:14:47.620481 3662 common.go:77] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta1". Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
W1223 11:14:47.622864 3662 validation.go:28] Cannot validate kube-proxy config - no validator is available
W1223 11:14:47.622906 3662 validation.go:28] Cannot validate kubelet config - no validator is available
failed to pull image "k8s.gcr.io/kube-apiserver:v1.17.0": output: Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
, error: exit status 1
To see the stack trace of this error execute with --v=5 or higher

  • Launching Kubernetes ...
  • Waiting for cluster to come online ...
  • Done! kubectl is now configured to use "minikube"

@aleksas

aleksas commented Dec 27, 2019

Had the same issue on windows+virtualbox+minikube.

The problem was DNS. Adding nameserver 8.8.8.8 to /etc/resolv.conf allowed Kubernetes to pull the images. Interestingly enough, resolv.conf shortly reverted to having just the original entry...
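The transient state aleksas describes looks something like this inside the VM (an illustrative fragment, not copied from an actual VM; the first entry is whatever DHCP wrote, e.g. the hyperkit host-side gateway). The edit reverts because /etc/resolv.conf in the VM is regenerated by its network management stack, which is why the systemd-networkd-based fix further down in this thread persists where a direct edit does not:

```
# /etc/resolv.conf inside the minikube VM (illustrative)
nameserver 192.168.64.1   # original entry written by DHCP
nameserver 8.8.8.8        # manually added; lost when the file is regenerated
```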

@vikas86

vikas86 commented Dec 30, 2019

Had the same issue on windows+virtualbox+minikube.
The problem was DNS. Adding nameserver 8.8.8.8 to /etc/resolv.conf allowed Kubernetes to pull the images. Interestingly enough, resolv.conf shortly reverted to having just the original entry...

After stopping the minikube VM, I added nameserver 8.8.8.8 to /etc/resolv.conf in the minikube VM (deployed on Hyper-V) and then started the VM, but the issue still persists.
Is there anything I am missing here?

@raskolnikov7

I have the same issue. Tried what vikas86 did (on mac+minikube+virtualbox), to no avail.

@vikas86

vikas86 commented Dec 31, 2019

Finally narrowed down the issue, for which the solution is providing public network access to the minikube VM.

For minikube to pull images, it needs public network access, which can be achieved by creating a virtual network switch in Hyper-V or something equivalent in VirtualBox (NAT or bridged).

In Hyper-V, create a virtual network switch (type: external) with all default settings.
In Windows PowerShell, enter the command below:

minikube start --vm-driver hyperv --hyperv-virtual-switch "My Virtual Switch"

This solves the following issues when starting minikube:

  • VM is unable to access k8s.gcr.io
  • Unable to pull images

@spsarolkar

I am using KVM with minikube on Linux and facing the same issue; I am not able to fetch any of the public images from Docker Hub.

@raskolnikov7

raskolnikov7 commented Dec 31, 2019

Worked: https_proxy= minikube --profile profile_name start --docker-env http_proxy= --docker-env https_proxy= --docker-env no_proxy=192.168.99.0/24

Details here : https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster

@vikas86

vikas86 commented Jan 2, 2020

I am using KVM with minikube on linux and facing the same issue, not able to fetch any of the public images from docker hub

@spsarolkar make sure you are able to access the public network from your minikube VM.
You would need a bridged adapter in VirtualBox for that.

@ch0mik

ch0mik commented Jan 6, 2020

Hi!

I still have a problem pulling images via minikube:

C:\Windows\system32>minikube ssh

$ docker pull nginx:alpine
Error response from daemon: Get https://registry-1.docker.io/v2/library/nginx/manifests/alpine: remote error: tls: bad record MAC

My minikube starts on Windows 10 via cmd:
minikube start --vm-driver hyperv --hyperv-virtual-switch "Default Switch"

Info during minikube start:

  • minikube v1.6.2 on Microsoft Windows 10 Pro 10.0.18363 Build 18363
  • Selecting 'hyperv' driver from user configuration (alternates: [])
  • Tip: Use 'minikube start -p ' to create a new cluster, or 'minikube delete' to delete this one.
  • Using the running hyperv "minikube" VM ...
  • Waiting for the host to be provisioned ...
  • Preparing Kubernetes v1.17.0 on Docker '19.03.5' ...
  • Launching Kubernetes ...
  • Done! kubectl is now configured to use "minikube"

Regards
Pawel

@massenz

massenz commented Jan 11, 2020

If anyone faces the same problem on MacOS, this fixes it (you will need to install VirtualBox first):

$ minikube start --vm-driver virtualbox

IMHO this issue should not be closed, as there should be a way to run Minikube on MacOS with the native hypervisor (or, at the very least, a warning message and a suggested workaround).

@afbjorklund
Collaborator

@massenz : please open a new issue, if you continue to have issues with --vm-driver=hyperkit

Using --vm-driver virtualbox is of course a workaround, but like you say the native one should work

@massenz

massenz commented Jan 13, 2020

Good point, done: see #6296

@raghvendra1218

Finally narrowed down the issue, for which the solution is providing public network access to the minikube VM.

For minikube to pull images, it needs public network access, which can be achieved by creating a virtual network switch in Hyper-V or something equivalent in VirtualBox (NAT or bridged).

In Hyper-V, create a virtual network switch (type: external) with all default settings.
In Windows PowerShell, enter the command below:

minikube start --vm-driver hyperv --hyperv-virtual-switch "My Virtual Switch"

This solves the following issues when starting minikube:

  • VM is unable to access k8s.gcr.io
  • Unable to pull images

Would like to add that I was unable to run the command for creating the switch; those facing the issue can try the command at the end of this comment. I am using the hyperkit VM on macOS Mojave, but even after running it the problem still persists. I was just trying to spin up an nginx container inside a pod using this pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
  - name: gumball-pod-container
    image: nginx:1.16.1-alpine
    ports:
    - containerPort: 8080

minikube start --vm-driver=hyperkit --hyperv-virtual-switch="My virtual switch"

@wpdildine

wpdildine commented Feb 6, 2020

I'm running into the same issue on macOS Mojave. Damn near about to drop kick my laptop. Neither of the fixes is working for me.

@massenz

massenz commented Feb 6, 2020

@wpdildine - see #6296

TL;DR: minikube start --vm-driver=virtualbox
you'll need to download / install Virtualbox on your Mac

@paw-eloquent-safe

I somehow ran into exactly the same problem, even after multiple minikube stop && minikube delete ...
The following worked for me on Windows 10 Pro using the Hyper-V driver:

  1. minikube ssh
  2. sudo vi /etc/systemd/network/10-eth1.network add DNS=8.8.8.8 under [Network]
  3. sudo vi /etc/systemd/network/20-dhcp.network add DNS=8.8.8.8 under [Network]
  4. sudo systemctl restart systemd-networkd
  5. To test it execute something that has to resolve using dns, like curl google.com or docker pull
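For reference, a sketch of what the edited /etc/systemd/network/10-eth1.network might end up containing after step 2. The [Match] and DHCP lines are assumptions about the stock minikube ISO and may differ by version; only the added DNS=8.8.8.8 line comes from the steps above:

```ini
# /etc/systemd/network/10-eth1.network (illustrative sketch)
[Match]
Name=eth1

[Network]
DHCP=ipv4
DNS=8.8.8.8
```

Unlike editing /etc/resolv.conf directly, this survives the network stack regenerating the resolver configuration, which is why step 4 (restarting systemd-networkd) makes the DNS change stick.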

@vinaydotblog

I somehow ran into exactly the same problem, even after multiple minikube stop && minikube delete ...
The following worked for me on Windows 10 Pro using the Hyper-V driver:

  1. minikube ssh
  2. sudo vi /etc/systemd/network/10-eth1.network add DNS=8.8.8.8 under [Network]
  3. sudo vi /etc/systemd/network/20-dhcp.network add DNS=8.8.8.8 under [Network]
  4. sudo systemctl restart systemd-networkd
  5. To test it execute something that has to resolve using dns, like curl google.com or docker pull

This worked for me on my Mac!

@MrMikeFloyd

@wpdildine - see #6296

TL;DR: minikube start --vm-driver=virtualbox
you'll need to download / install Virtualbox on your Mac

This did the trick for me on my Linux box; it worked straight away with VirtualBox. I tried running minikube on Docker before that and almost went crazy!

@ehsan-salari

I somehow ran into exactly the same problem, even after multiple minikube stop && minikube delete ...
The following worked for me on Windows 10 Pro using the Hyper-V driver:

  1. minikube ssh
  2. sudo vi /etc/systemd/network/10-eth1.network add DNS=8.8.8.8 under [Network]
  3. sudo vi /etc/systemd/network/20-dhcp.network add DNS=8.8.8.8 under [Network]
  4. sudo systemctl restart systemd-networkd
  5. To test it execute something that has to resolve using dns, like curl google.com or docker pull

It worked fine for me on mac. Thanks!

@massenz

massenz commented May 15, 2020

But why go through all that, when a much simpler option (IMHO) is to just install VirtualBox and do this:

minikube start --vm-driver virtualbox

@afbjorklund
Collaborator

Those of you commenting on other drivers, you might want to add your thoughts to #8135

The VirtualBox driver and the so-called "native virtualization" drivers (HyperKit/HyperV/libvirt) will remain, even if the Docker/Podman drivers are available. The question is which should be the default.

There is a new option called --vm, that will make sure to select some kind of hypervisor driver.

See also https://minikube.sigs.k8s.io/docs/drivers/

The intro kubernetes course still uses VirtualBox...

@massenz

massenz commented May 17, 2020

Fair enough, @afbjorklund, I wasn't "commenting" on other drivers: I was just noting the weird and wonderful contortions folks were going through (also, worth noting, very temporary and manual), and thought I'd suggest a simpler way.

For that matter, there's also #6296, which I dutifully filed following your suggestions; unfortunately that one seems to have stalled for now.

@dmslowmo

@wpdildine - see #6296

TL;DR: minikube start --vm-driver=virtualbox
you'll need to download / install Virtualbox on your Mac

This works for me. By default the vm driver would be 'hyperkit' unless otherwise specified.

@jbbarquero

I somehow ran into exactly the same problem, even after multiple minikube stop && minikube delete ...
The following worked for me on Windows 10 Pro using the Hyper-V driver:

  1. minikube ssh
  2. sudo vi /etc/systemd/network/10-eth1.network add DNS=8.8.8.8 under [Network]
  3. sudo vi /etc/systemd/network/20-dhcp.network add DNS=8.8.8.8 under [Network]
  4. sudo systemctl restart systemd-networkd
  5. To test it execute something that has to resolve using dns, like curl google.com or docker pull

It doesn't work for me. However...

If anyone faces the same problem on MacOS, this fixes it (you will need to install VirtualBox first):

$ minikube start --vm-driver virtualbox

IMHO this issue should not be closed, as there should be a way to run Minikube on MacOS with the native hypervisor (or, at the very least, a warning message and a suggested workaround).

...works fine.

@CentUser

@wpdildine - see #6296

TL;DR: minikube start --vm-driver=virtualbox
you'll need to download / install Virtualbox on your Mac

Emm... VirtualBox is too heavy; minikube starts at least 3 VMs, and on my Dell 7050 with 8GB RAM and an Intel i7-6700 the system froze.

@EslamElHusseiny

I somehow ran into exactly the same problem, even after multiple minikube stop && minikube delete ...
The following worked for me on Windows 10 Pro using the Hyper-V driver:

1. `minikube ssh`

2. `sudo vi /etc/systemd/network/10-eth1.network` add `DNS=8.8.8.8` under `[Network]`

3. `sudo vi /etc/systemd/network/20-dhcp.network` add `DNS=8.8.8.8` under `[Network]`

4. `sudo systemctl restart systemd-networkd`

5. To test it execute something that has to resolve using dns, like `curl google.com` or `docker pull`

Didn't work for me on Mac 😞

@sudochop

Same issue here under hyperkit

@indykish

indykish commented Oct 1, 2020

I did a restart after doing the following:

minikube delete

Upon restart:

minikube start --driver=docker

minikube status

minikube ssh

@alsoft27

On Windows 10: Docker -> Setup -> Resources -> enable Manual DNS config with value 8.8.8.8.
Restart Windows.
minikube delete & start, that's all for me; it works fine.

Remember, this is on Windows.

@arashbi

arashbi commented Nov 5, 2020

I still have this issue. Why was this closed without an explanation?

@sspieker-cc

Agreed: this issue has not been resolved to a point that explains how to correct it. Adding the DNS resolution entries as stated above has not changed the behavior; minikube is still unable to pull images or resolve hosts.

@k8s-ci-robot
Contributor

@sspieker-cc: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@dfang dfang reopened this Nov 6, 2020
@CentUser

CentUser commented Nov 6, 2020

Hi, guys. Though I am not familiar with the code, I figured out a workaround for this problem.
You can use the minikube cache subcommand to cache the images first, and then run minikube start.
The program will use the cached images instead of downloading them from the net.
If this is helpful, reply to me; I'd like to know whether this approach works for others.

@medyagh
Member

medyagh commented Nov 12, 2020

I still have this issue. Why was this closed without an explanation?

Does this help your case? #4589 (comment)

@medyagh
Member

medyagh commented Nov 25, 2020

@arashbi @wpdildine
I am curious whether you still have this issue with the docker driver on the latest version of minikube?

@arashbi

arashbi commented Nov 26, 2020

It happens on hyperkit; I am not sure what you mean by docker driver.

@OkayJosh

Yes, I still have the issue with the docker driver on the latest version.

@priyawadhwa priyawadhwa added the kind/support Categorizes issue or PR as a support question. label Dec 1, 2020
@tstromberg tstromberg changed the title Can not pull any image in minikube docker: Can not pull any image in minikube Dec 16, 2020
@tstromberg tstromberg changed the title docker: Can not pull any image in minikube docker: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection Dec 16, 2020
@tstromberg tstromberg changed the title docker: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection hyperkit: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection Dec 16, 2020
@tstromberg tstromberg changed the title hyperkit: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection hyperkit + docker proxy: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection Dec 16, 2020
@tstromberg tstromberg added the cause/firewall-or-proxy When firewalls or proxies seem to be interfering label Dec 16, 2020
@tstromberg
Contributor

Based on the messages, I'm pretty confident that this is a proxy configuration issue, and a very old one at that. Marking as obsolete, as the handling has changed since 2019.

If you run into a similar problem, please open a new issue.
