Multinode Minikube + Dashboard => Error (related to dashboard-metrics-scraper?) #8733

Closed
fzyzcjy opened this issue Jul 16, 2020 · 22 comments
Labels
co/dashboard, co/multinode, kind/bug, lifecycle/frozen, priority/backlog

Comments

@fzyzcjy

fzyzcjy commented Jul 16, 2020

Thanks very much for minikube & Kubernetes!

Steps to reproduce the issue:

  1. minikube start --nodes=4 --cpus=2 --memory=3000MB --driver=docker (Running on a Mac Mini 2020 version, with i7 12core and 32GB RAM.)
  2. minikube dashboard

P.S. I know multi-node is experimental, and I am willing to help if I can :)

Full output of failed command: minikube dashboard

I0716 18:20:59.052468   78440 mustload.go:64] Loading cluster: minikube
I0716 18:20:59.052981   78440 cli_runner.go:109] Run: docker container inspect minikube --format={{.State.Status}}
I0716 18:20:59.086087   78440 host.go:65] Checking if "minikube" exists ...
I0716 18:20:59.086411   78440 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" minikube
I0716 18:20:59.121296   78440 api_server.go:146] Checking apiserver status ...
I0716 18:20:59.121441   78440 ssh_runner.go:148] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0716 18:20:59.121507   78440 cli_runner.go:109] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0716 18:20:59.160084   78440 sshutil.go:44] new ssh client: &{IP:127.0.0.1 Port:32887 SSHKeyPath:/Users/tom/.minikube/machines/minikube/id_rsa Username:docker}
I0716 18:20:59.270791   78440 ssh_runner.go:148] Run: sudo egrep ^[0-9]+:freezer: /proc/1775/cgroup
I0716 18:20:59.281732   78440 api_server.go:162] apiserver freezer: "7:freezer:/docker/76a5ba36b02d6ad01e4b24432b13c165d0b2072ae5c6048c1938cc2df00a1a01/kubepods/burstable/pod484e7c0718c2559ba40cc73195f5d1a3/ec76ebe6d79061b1c93e000041aa422f3ab9d6322f4b244620d7f87b72d2cc69"
I0716 18:20:59.281838   78440 ssh_runner.go:148] Run: sudo cat /sys/fs/cgroup/freezer/docker/76a5ba36b02d6ad01e4b24432b13c165d0b2072ae5c6048c1938cc2df00a1a01/kubepods/burstable/pod484e7c0718c2559ba40cc73195f5d1a3/ec76ebe6d79061b1c93e000041aa422f3ab9d6322f4b244620d7f87b72d2cc69/freezer.state
I0716 18:20:59.292002   78440 api_server.go:184] freezer state: "THAWED"
I0716 18:20:59.292038   78440 api_server.go:215] Checking apiserver healthz at https://127.0.0.1:32884/healthz ...
I0716 18:20:59.299163   78440 api_server.go:235] https://127.0.0.1:32884/healthz returned 200:
ok
W0716 18:20:59.299189   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299553   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299563   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299568   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299573   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299579   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299583   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299587   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299591   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299596   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299601   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299605   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299609   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299613   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299617   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299622   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299628   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299633   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299637   78440 proxy.go:117] fail to check proxy env: Error ip not in block
W0716 18:20:59.299642   78440 proxy.go:117] fail to check proxy env: Error ip not in block
🤔  Verifying dashboard health ...
I0716 18:20:59.314916   78440 service.go:212] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard /api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard 67f1b30c-86c5-49dc-8a3f-d14d764ebfd9 1029 0 2020-07-16 18:09:41 +0800 CST <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] []  [{kubectl Update v1 2020-07-16 18:09:41 +0800 CST FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 107 117 98 101 99 116 108 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 108 97 115 116 45 97 112 112 108 105 101 100 45 99 111 110 102 105 103 117 114 97 116 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 97 100 100 111 110 109 97 110 97 103 101 114 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 111 100 101 34 58 123 125 44 34 102 58 107 56 115 45 97 112 112 34 58 123 125 44 34 102 58 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 105 110 105 107 117 98 101 45 97 100 100 111 110 115 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 112 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 112 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 44 34 102 58 116 97 114 103 101 116 80 111 114 116 34 58 123 125 125 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 46 34 58 123 125 44 34 102 58 107 56 115 45 97 112 112 34 58 123 125 125 44 34 102 58 115 101 115 115 105 111 110 65 102 102 105 110 105 116 121 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125],}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.105.36.106,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamily:nil,TopologyKeys:[],},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},},}
🚀  Launching proxy ...
I0716 18:20:59.315194   78440 dashboard.go:144] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context minikube proxy --port=0]
I0716 18:20:59.317861   78440 dashboard.go:149] Waiting for kubectl to output host:port ...
I0716 18:20:59.364538   78440 dashboard.go:167] proxy stdout: Starting to serve on 127.0.0.1:54967
🤔  Verifying proxy health ...
I0716 18:20:59.380697   78440 dashboard.go:204] http://127.0.0.1:54967/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[86] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 16 Jul 2020 10:20:59 GMT] X-Content-Type-Options:[nosniff]] Body:0xc00009a8c0 ContentLength:86 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0004fe100 TLS:<nil>}
I0716 18:20:59.380762   78440 retry.go:30] will retry after 110.466µs: Temporary Error: unexpected response code: 503
I0716 18:20:59.387440   78440 dashboard.go:204] http://127.0.0.1:54967/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[86] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 16 Jul 2020 10:20:59 GMT] X-Content-Type-Options:[nosniff]] Body:0xc000718200 ContentLength:86 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0001ec800 TLS:<nil>}
I0716 18:20:59.387481   78440 retry.go:30] will retry after 216.077µs: Temporary Error: unexpected response code: 503
I0716 18:20:59.393636   78440 dashboard.go:204] http://127.0.0.1:54967/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[86] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 16 Jul 2020 10:20:59 GMT] X-Content-Type-Options:[nosniff]] Body:0xc00009ae40 ContentLength:86 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000346600 TLS:<nil>}
I0716 18:20:59.393685   78440 retry.go:30] will retry after 262.026µs: Temporary Error: unexpected response code: 503
[...then this repeats over and over again; for instance:]
I0716 18:22:25.484676   79413 retry.go:30] will retry after 4.744335389s: Temporary Error: unexpected response code: 503
I0716 18:22:30.239387   79413 dashboard.go:204] http://127.0.0.1:55060/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[86] Content-Type:[text/plain; charset=utf-8] Date:[Thu, 16 Jul 2020 10:22:30 GMT] X-Content-Type-Options:[nosniff]] Body:0xc000432e80 ContentLength:86 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000b6c700 TLS:<nil>}
I0716 18:22:30.239422   79413 retry.go:30] will retry after 4.014454686s: Temporary Error: unexpected response code: 503
[...and more...]

Full output of minikube start command used, if not already included:
minikube start --nodes=4 --cpus=2 --memory=3000MB --driver=docker

Optional: Full output of minikube logs command:

[minikube_logs.txt](https://github.com/kubernetes/minikube/files/4930857/minikube_logs.txt)

Extra information that IMHO may be useful

  1. I see that the dashboard-metrics-scraper complains:
k logs --namespace=kubernetes-dashboard dashboard-metrics-scraper-dc6947fbf-69lcp     

{"level":"info","msg":"Kubernetes host: https://10.96.0.1:443","time":"2020-07-16T10:09:43Z"}
172.18.0.1 - - [16/Jul/2020:10:10:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.18"
172.18.0.1 - - [16/Jul/2020:10:10:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.18"
172.18.0.1 - - [16/Jul/2020:10:10:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.18"
{"level":"error","msg":"Error scraping node metrics: the server could not find the requested resource (get nodes.metrics.k8s.io)","time":"2020-07-16T10:10:43Z"}
  2. More about that:
k describe --namespace=kubernetes-dashboard pods dashboard-metrics-scraper-dc6947fbf-69lcp

...
Events:
  Type    Reason     Age   From                   Message
  ----    ------     ----  ----                   -------
  Normal  Scheduled  16m   default-scheduler      Successfully assigned kubernetes-dashboard/dashboard-metrics-scraper-dc6947fbf-69lcp to minikube-m04
  Normal  Pulled     16m   kubelet, minikube-m04  Container image "kubernetesui/metrics-scraper:v1.0.4" already present on machine
  Normal  Created    16m   kubelet, minikube-m04  Created container dashboard-metrics-scraper
  Normal  Started    16m   kubelet, minikube-m04  Started container dashboard-metrics-scraper
  3. The dashboard itself is also complaining, which seems to be related to the scraper:
k logs --namespace=kubernetes-dashboard kubernetes-dashboard-6dbb54fd95-4tcm7 

2020/07/16 10:09:43 Starting overwatch
2020/07/16 10:09:43 Using namespace: kubernetes-dashboard
2020/07/16 10:09:43 Using in-cluster config to connect to apiserver
2020/07/16 10:09:43 Using secret token for csrf signing
2020/07/16 10:09:43 Initializing csrf token from kubernetes-dashboard-csrf secret
2020/07/16 10:09:43 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
2020/07/16 10:09:43 Successful initial request to the apiserver, version: v1.18.3
2020/07/16 10:09:43 Generating JWE encryption key
2020/07/16 10:09:43 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2020/07/16 10:09:43 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2020/07/16 10:09:43 Initializing JWE encryption key from synchronized object
2020/07/16 10:09:43 Creating in-cluster Sidecar client
2020/07/16 10:09:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2020/07/16 10:09:43 Serving insecurely on HTTP port: 9090
2020/07/16 10:09:43 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2020/07/16 10:10:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
...more...
@fzyzcjy fzyzcjy changed the title Multinode Minikube + Dashboard => Error Multinode Minikube + Dashboard => Error (related to dashboard-metrics-scraper?) Jul 16, 2020
@priyawadhwa priyawadhwa added kind/support Categorizes issue or PR as a support question. co/dashboard dashboard related issues co/multinode Issues related to multinode clusters labels Jul 16, 2020
@priyawadhwa

priyawadhwa commented Jul 16, 2020

Hey @fzyzcjy, thanks for opening this issue! Would you be interested in looking into it? The scraper seems like a good place to start.

We have a big multinode tracking issue open here as well: #7538

@fzyzcjy
Author

fzyzcjy commented Jul 16, 2020

@priyawadhwa Yes, I would love to help, and I already gave #7538 a thumbs up yesterday :) However, I am really new to k8s and have no idea what to do next... I have tried searching for those error messages, but cannot find any solution :(

Thus I would appreciate it if you could provide some hints about what I should do, thanks! (Or maybe hint at what more logs I can get?)

What I have found:

  1. kubernetes-sigs/metrics-server#41 ("unable to fetch metrics from API: the server could not find the requested resource (get pods.metrics.k8s.io)"), which is a bit different - his scraper works fine but mine does not. Anyway, here are some outputs:
kubectl get --raw /apis/metrics.k8s.io/v1beta1

Error from server (NotFound): the server could not find the requested resource

--- if using kubectl get --raw /apis/metrics.k8s.io/v1beta1 --alsologtostderr -v=7 ---

I0717 07:59:18.680900   27494 loader.go:375] Config loaded from file:  /Users/tom/.kube/config
I0717 07:59:18.681486   27494 cert_rotation.go:137] Starting client certificate rotation controller
I0717 07:59:18.681513   27494 round_trippers.go:420] GET https://127.0.0.1:32900/apis/metrics.k8s.io/v1beta1
I0717 07:59:18.681520   27494 round_trippers.go:427] Request Headers:
I0717 07:59:18.681525   27494 round_trippers.go:431]     Accept: application/json, */*
I0717 07:59:18.681548   27494 round_trippers.go:431]     User-Agent: kubectl/v1.18.5 (darwin/amd64) kubernetes/e6503f8
I0717 07:59:18.691577   27494 round_trippers.go:446] Response Status: 404 Not Found in 10 milliseconds
I0717 07:59:18.691745   27494 helpers.go:216] server response object: [{
  "metadata": {},
  "status": "Failure",
  "message": "the server could not find the requested resource",
  "reason": "NotFound",
  "details": {
    "causes": [
      {
        "reason": "UnexpectedServerResponse",
        "message": "404 page not found"
      }
    ]
  },
  "code": 404
}]
F0717 07:59:18.691768   27494 helpers.go:115] Error from server (NotFound): the server could not find the requested resource
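For reference, a quick way to check whether anything is serving the metrics API at all (a sketch; the metrics.k8s.io group normally comes from metrics-server, which minikube ships as an addon, not from dashboard-metrics-scraper):

# Is any APIService registered for metrics.k8s.io?
kubectl get apiservices | grep metrics
# If nothing shows up, the NotFound above is expected; the addon would provide it:
minikube addons enable metrics-server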

@fzyzcjy
Author

fzyzcjy commented Jul 17, 2020

Another idea: can I get the dashboard without the scraper? Since minikube is not a production environment, people often do not care about metrics!

I found an argument: --metrics-provider=none

Latest update: I tried the following steps but failed.

  1. downloaded the official recommended.yaml for dashboard
  2. edited the arguments to add the arg above (an equivalent patch is sketched below), then applied it
  3. minikube service --namespace=kubernetes-dashboard kubernetes-dashboard
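For step 2, an equivalent patch instead of hand-editing the YAML might look like this (just a sketch, assuming the stock kubernetes-dashboard Deployment from recommended.yaml):

# Append --metrics-provider=none to the dashboard container's args
kubectl -n kubernetes-dashboard patch deployment kubernetes-dashboard \
  --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/args/-","value":"--metrics-provider=none"}]'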

then the output of the minikube service command:

|----------------------|----------------------|-------------|--------------|
|      NAMESPACE       |         NAME         | TARGET PORT |     URL      |
|----------------------|----------------------|-------------|--------------|
| kubernetes-dashboard | kubernetes-dashboard |             | No node port |
|----------------------|----------------------|-------------|--------------|
😿  service kubernetes-dashboard/kubernetes-dashboard has no node port
🏃  Starting tunnel for service kubernetes-dashboard.
|----------------------|----------------------|-------------|------------------------|
|      NAMESPACE       |         NAME         | TARGET PORT |          URL           |
|----------------------|----------------------|-------------|------------------------|
| kubernetes-dashboard | kubernetes-dashboard |             | http://127.0.0.1:60779 |
|----------------------|----------------------|-------------|------------------------|
🎉  Opening service kubernetes-dashboard/kubernetes-dashboard in the default browser...
❗  Because you are using a Docker driver on darwin, the terminal needs to be open to run it.

and the curl output:

curl -v http://127.0.0.1:60779

* Uses proxy env variable no_proxy == '127.0.0.1,localhost'
*   Trying 127.0.0.1:60779...
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 60779 (#0)
> GET / HTTP/1.1
> Host: 127.0.0.1:60779
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Recv failure: Connection reset by peer
* Closing connection 0
curl: (56) Recv failure: Connection reset by peer

If I use the other way, issuing kubectl proxy and then opening http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ just like the official doc, then:

curl -v http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
* Uses proxy env variable no_proxy == '127.0.0.1,localhost'
*   Trying ::1:8001...
* TCP_NODELAY set
* Connection failed
* connect to ::1 port 8001 failed: Connection refused
*   Trying 127.0.0.1:8001...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 8001 (#0)
> GET /api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ HTTP/1.1
> Host: localhost:8001
> User-Agent: curl/7.68.0
> Accept: */*
> 
* Mark bundle as not supporting multiuse
< HTTP/1.1 503 Service Unavailable
< Cache-Control: no-cache, private
< Content-Length: 86
< Content-Type: text/plain; charset=utf-8
< Date: Fri, 17 Jul 2020 00:31:37 GMT
< X-Content-Type-Options: nosniff
< 
* Connection #0 to host localhost left intact
Error trying to reach service: 'dial tcp 172.18.0.2:9090: connect: connection refused'
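To dig into that connection refused, one check that might help (a sketch; names are the defaults from recommended.yaml) is whether the Service's endpoints actually point at the pod IP the proxy is dialing:

# Compare the Service endpoints with the pod IPs
kubectl -n kubernetes-dashboard get endpoints kubernetes-dashboard -o wide
kubectl -n kubernetes-dashboard get pods -o wide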

P.S. the output of describing the pod:

k describe --namespace=kubernetes-dashboard pods kubernetes-dashboard-6b9b8b674f-wz86l 
Name:         kubernetes-dashboard-6b9b8b674f-wz86l
Namespace:    kubernetes-dashboard
Priority:     0
Node:         minikube-m02/172.17.0.5
Start Time:   Fri, 17 Jul 2020 08:34:11 +0800
Labels:       k8s-app=kubernetes-dashboard
              pod-template-hash=6b9b8b674f
Annotations:  <none>
Status:       Running
IP:           172.18.0.2
IPs:
  IP:           172.18.0.2
Controlled By:  ReplicaSet/kubernetes-dashboard-6b9b8b674f
Containers:
  kubernetes-dashboard:
    Container ID:  docker://7c100d6af40b24ec9178994cc5747842cce611f503d8adc21d33fe84c3e144f8
    Image:         kubernetesui/dashboard:v2.0.0
    Image ID:      docker-pullable://kubernetesui/dashboard@sha256:06868692fb9a7f2ede1a06de1b7b32afabc40ec739c1181d83b5ed3eb147ec6e
    Port:          8443/TCP
    Host Port:     0/TCP
    Args:
      --auto-generate-certificates
      --namespace=kubernetes-dashboard
      --metrics-provider=none
    State:          Running
      Started:      Fri, 17 Jul 2020 08:34:17 +0800
    Ready:          True
    Restart Count:  0
    Liveness:       http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /certs from kubernetes-dashboard-certs (rw)
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-5zw5v (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  kubernetes-dashboard-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-certs
    Optional:    false
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kubernetes-dashboard-token-5zw5v:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  kubernetes-dashboard-token-5zw5v
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/os=linux
Tolerations:     node-role.kubernetes.io/master:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From                   Message
  ----    ------     ----       ----                   -------
  Normal  Scheduled  <unknown>  default-scheduler      Successfully assigned kubernetes-dashboard/kubernetes-dashboard-6b9b8b674f-wz86l to minikube-m02
  Normal  Pulling    56s        kubelet, minikube-m02  Pulling image "kubernetesui/dashboard:v2.0.0"
  Normal  Pulled     51s        kubelet, minikube-m02  Successfully pulled image "kubernetesui/dashboard:v2.0.0"
  Normal  Created    51s        kubelet, minikube-m02  Created container kubernetes-dashboard
  Normal  Started    51s        kubelet, minikube-m02  Started container kubernetes-dashboard

P.S. I find an interesting thing - in a single-node setup, my k8s dashboard actually does not have that "cpu usage" / "memory usage" row (seen in the official photo: https://github.com/kubernetes/dashboard/blob/master/docs/images/dashboard-ui.png), which may be a signal that it starts without metrics.

@fzyzcjy
Author

fzyzcjy commented Jul 17, 2020

Sorry, I really have no idea what to do next :(

@priyawadhwa

Hey @fzyzcjy sorry for the delayed response -- I'm not sure what to do next either as I'm unfamiliar with multinode and with the dashboard.

I'd suggest asking in the minikube Slack channel, where there may be someone who can help.

@priyawadhwa

Hey @fzyzcjy are you still seeing this with the latest version of minikube?

@fzyzcjy
Author

fzyzcjy commented Oct 22, 2020

@priyawadhwa Sorry, I no longer use multinode; I went back to single node after that bug happened.

@chatterjeesunit

My minikube version is v1.15.1.

I started minikube using this command:
minikube start --memory 6000 --cpus=4 --nodes=2 --disk-size='5gb'

minikube dashboard never works for me if the node count is greater than 1.

@sharifelgamal sharifelgamal added kind/bug Categorizes issue or PR as related to a bug. priority/backlog Higher priority than priority/awaiting-more-evidence. and removed kind/support Categorizes issue or PR as a support question. labels Dec 16, 2020
@xhebox

xhebox commented Jan 26, 2021

I wonder if this is related to CNI.

@xhebox

xhebox commented Feb 20, 2021

@priyawadhwa I found that other nodes won't start properly if the CNI pods are not started. They rely on unresolvable URIs like xxxx.svc, so the dashboard cannot start due to the bad network.

Maybe minikube should give some info about the status of the CNI pods, or just wait for them to be running.

And it definitely should cache those CNI images automatically. I have a bad network, and constantly ran into cases like failing to pull images. So CNI fails frequently.
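For what it's worth, a quick way to check whether the CNI pods are up on every node (a sketch, assuming the default kindnet CNI that minikube picks for multi-node docker clusters):

kubectl -n kube-system get pods -o wide | grep -i kindnet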

This is the shell script that I used to get the list of images I needed to cache for minikube:

# KUBECTL and JQ point at the kubectl and jq binaries.
# Collect every container and initContainer image referenced in the cluster, then pre-cache each one.
list=$($KUBECTL get pods --all-namespaces -o json | $JQ -r '.items[].spec.containers[].image, (.items[].spec | select(.initContainers != null) | .initContainers[].image)' | sort | uniq)
for i in $list; do
        minikube cache add "$i"
done

@tstromberg
Contributor

tstromberg commented Mar 24, 2021

I believe this has likely been addressed in recent releases of minikube (v1.18.x), where multi-node is no longer an experimental feature. Do you mind trying to confirm?

@xhebox

xhebox commented Mar 25, 2021

I don't know what the original error is, but for my problem, it may still exist. Check this comment and this comment.

We need to cache and respect the registry setting for initContainers, too. Otherwise I would have to wait a long time pulling CNI images. If you want, I can file another issue for this, @tstromberg.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 23, 2021
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 23, 2021
@spowelljr spowelljr added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Jul 28, 2021
@spowelljr
Member

I tried this with minikube v1.22.0 and the dashboard command worked with multi-node. Could someone who reported this problem confirm that this is fixed now? @fzyzcjy @xhebox @chatterjeesunit

@chatterjeesunit

@spowelljr Yes, I can confirm that it is now working with minikube v1.22.0.

@spowelljr
Member

spowelljr commented Jul 29, 2021

Thanks for checking @chatterjeesunit. I'm going to close this issue; if this isn't resolved for anyone else, comment and I can reopen it.

@powerman

I've just installed minikube 1.22.0. The dashboard works on a single-node cluster but doesn't work on a multi-node cluster:

$ minikube start --nodes=3
...
$ minikube dashboard
🤔  Verifying dashboard health ...
🚀  Launching proxy ...
🤔  Verifying proxy health ...

and it hangs here.

$ minikube dashboard --alsologtostderr
I0829 16:09:39.192420 2381686 out.go:286] Setting OutFile to fd 1 ...
I0829 16:09:39.192486 2381686 out.go:338] isatty.IsTerminal(1) = true
I0829 16:09:39.192491 2381686 out.go:299] Setting ErrFile to fd 2...
I0829 16:09:39.192497 2381686 out.go:338] isatty.IsTerminal(2) = true
I0829 16:09:39.192551 2381686 root.go:312] Updating PATH: /home/powerman/.minikube/bin
I0829 16:09:39.192638 2381686 mustload.go:65] Loading cluster: minikube
I0829 16:09:39.193016 2381686 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0829 16:09:39.211356 2381686 host.go:66] Checking if "minikube" exists ...
I0829 16:09:39.211488 2381686 api_server.go:164] Checking apiserver status ...
I0829 16:09:39.211521 2381686 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0829 16:09:39.211551 2381686 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0829 16:09:39.230523 2381686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/powerman/.minikube/machines/minikube/id_rsa Username:docker}
I0829 16:09:39.312655 2381686 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2107/cgroup
I0829 16:09:39.316761 2381686 api_server.go:180] apiserver freezer: "7:freezer:/docker/7e253d4b8fc99f52693fbcb1d84faa8693df14dbce944778f67ed5262bef8e6f/kubepods/burstable/podcefbe66f503bf010430ec3521cf31be8/740102568cae96cf3c882b1e56d0a5405e7886925cd7fa8721d8ef6eed9ff92c"
I0829 16:09:39.316803 2381686 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/7e253d4b8fc99f52693fbcb1d84faa8693df14dbce944778f67ed5262bef8e6f/kubepods/burstable/podcefbe66f503bf010430ec3521cf31be8/740102568cae96cf3c882b1e56d0a5405e7886925cd7fa8721d8ef6eed9ff92c/freezer.state
I0829 16:09:39.320177 2381686 api_server.go:202] freezer state: "THAWED"
I0829 16:09:39.320190 2381686 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0829 16:09:39.323910 2381686 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
ok
W0829 16:09:39.323943 2381686 proxy.go:118] fail to check proxy env: Error ip not in block
W0829 16:09:39.323955 2381686 proxy.go:118] fail to check proxy env: Error ip not in block
W0829 16:09:39.323960 2381686 proxy.go:118] fail to check proxy env: Error ip not in block
W0829 16:09:39.323970 2381686 out.go:230] 🤔  Verifying dashboard health ...
🤔  Verifying dashboard health ...
I0829 16:09:39.329690 2381686 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  e8ebebd3-8f95-4c0e-9545-4a5c421a240f 701 0 2021-08-29 15:54:02 +0300 EEST <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] []  [{kubectl-client-side-apply Update v1 2021-08-29 15:54:02 +0300 EEST FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{".":{},"f:k8s-app":{}},"f:sessionAffinity":{},"f:type":{}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.99.181.85,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.99.181.85],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0829 16:09:39.329774 2381686 out.go:230] 🚀  Launching proxy ...
🚀  Launching proxy ...
I0829 16:09:39.329805 2381686 dashboard.go:146] Executing: /home/powerman/.local/bin/kubectl [/home/powerman/.local/bin/kubectl --context minikube proxy --port=0]
I0829 16:09:39.329913 2381686 dashboard.go:151] Waiting for kubectl to output host:port ...
I0829 16:09:39.347785 2381686 dashboard.go:169] proxy stdout: Starting to serve on 127.0.0.1:42177
W0829 16:09:39.347821 2381686 out.go:230] 🤔  Verifying proxy health ...
🤔  Verifying proxy health ...

I0829 16:10:09.352491 2381686 dashboard.go:206] http://127.0.0.1:42177/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[191] Content-Type:[application/json] Date:[Sun, 29 Aug 2021 13:10:09 GMT]] Body:0xc00111ee40 ContentLength:191 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000f6c400 TLS:<nil>}
I0829 16:10:09.352538 2381686 retry.go:31] will retry after 110.466µs: Temporary Error: unexpected response code: 503

@powerman

I should probably add that I'm using minikube with the docker driver, and some extra info (note the dashboard containers run on different nodes):

$ minikube version
minikube version: v1.22.0
commit: a03fbcf166e6f74ef224d4a63be4277d017bb62e
$ kubectl get -A nodes
NAME           STATUS   ROLES                  AGE   VERSION
minikube       Ready    control-plane,master   22m   v1.21.2
minikube-m02   Ready    <none>                 22m   v1.21.2
minikube-m03   Ready    <none>                 22m   v1.21.2
$ kubectl get -A pods
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            coredns-558bd4d5db-w5vr2                     1/1     Running   0          20m
kube-system            etcd-minikube                                1/1     Running   0          21m
kube-system            kindnet-dxtjq                                1/1     Running   0          20m
kube-system            kindnet-gt7q4                                1/1     Running   0          20m
kube-system            kindnet-wrfp8                                1/1     Running   0          20m
kube-system            kube-apiserver-minikube                      1/1     Running   0          21m
kube-system            kube-controller-manager-minikube             1/1     Running   0          21m
kube-system            kube-proxy-2v9bj                             1/1     Running   0          20m
kube-system            kube-proxy-424hx                             1/1     Running   0          20m
kube-system            kube-proxy-z4pxj                             1/1     Running   0          20m
kube-system            kube-scheduler-minikube                      1/1     Running   0          21m
kube-system            storage-provisioner                          1/1     Running   0          20m
kubernetes-dashboard   dashboard-metrics-scraper-7976b667d4-wxbj4   1/1     Running   0          19m
kubernetes-dashboard   kubernetes-dashboard-6fcdf4f6d-kks94         1/1     Running   0          19m
$ kubectl get --namespace kubernetes-dashboard pods -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-7976b667d4-wxbj4   1/1     Running   0          24m   10.244.2.2   minikube-m03   <none>           <none>
kubernetes-dashboard-6fcdf4f6d-kks94         1/1     Running   0          24m   10.244.1.2   minikube-m02   <none>           <none>

The nodes' Kubernetes version listed is v1.21.2 - maybe it should be v1.22.0 (matching minikube's version) to fix this?
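One workaround I could try (just a sketch, not verified): pin both dashboard deployments to the control-plane node so the proxy never has to cross nodes:

# "minikube" is the control-plane node name from the output above
kubectl -n kubernetes-dashboard patch deployment kubernetes-dashboard \
  --type=merge -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"minikube"}}}}}'
kubectl -n kubernetes-dashboard patch deployment dashboard-metrics-scraper \
  --type=merge -p '{"spec":{"template":{"spec":{"nodeSelector":{"kubernetes.io/hostname":"minikube"}}}}}'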

@powerman

I've started the cluster using the latest k8s image and this time both dashboard pods happen to run on the same node, but it still doesn't work:

$ minikube start --nodes=3 --kubernetes-version=latest
😄  minikube v1.22.0 on Gentoo 2.7
✨  Automatically selected the docker driver. Other choices: virtualbox, ssh
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
💾  Downloading Kubernetes v1.22.0-beta.0 preload ...
    > preloaded-images-k8s-v11-v1...: 514.03 MiB / 514.03 MiB  100.00% 10.40 Mi
🔥  Creating docker container (CPUs=2, Memory=2666MB) ...
🐳  Preparing Kubernetes v1.22.0-beta.0 on Docker 20.10.7 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔗  Configuring CNI (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass

👍  Starting node minikube-m02 in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2666MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.49.2
🐳  Preparing Kubernetes v1.22.0-beta.0 on Docker 20.10.7 ...
    ▪ env NO_PROXY=192.168.49.2
🔎  Verifying Kubernetes components...

👍  Starting node minikube-m03 in cluster minikube
🚜  Pulling base image ...
🔥  Creating docker container (CPUs=2, Memory=2666MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.49.2,192.168.49.3
🐳  Preparing Kubernetes v1.22.0-beta.0 on Docker 20.10.7 ...
    ▪ env NO_PROXY=192.168.49.2
    ▪ env NO_PROXY=192.168.49.2,192.168.49.3
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
$ kubectl get -A nodes -o wide
NAME           STATUS   ROLES                  AGE     VERSION          INTERNAL-IP    EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
minikube       Ready    control-plane,master   5m21s   v1.22.0-beta.0   192.168.49.2   <none>        Ubuntu 20.04.2 LTS   5.10.52-gentoo   docker://20.10.7
minikube-m02   Ready    <none>                 4m58s   v1.22.0-beta.0   192.168.49.3   <none>        Ubuntu 20.04.2 LTS   5.10.52-gentoo   docker://20.10.7
minikube-m03   Ready    <none>                 4m46s   v1.22.0-beta.0   192.168.49.4   <none>        Ubuntu 20.04.2 LTS   5.10.52-gentoo   docker://20.10.7
$ kubectl get --namespace kubernetes-dashboard pods -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
dashboard-metrics-scraper-7976b667d4-5pbvv   1/1     Running   0          18s   10.244.1.2   minikube-m02   <none>           <none>
kubernetes-dashboard-6fcdf4f6d-7kb8m         1/1     Running   0          18s   10.244.1.3   minikube-m02   <none>           <none>
$ minikube dashboard --alsologtostderr
I0829 16:25:01.044618 2420654 out.go:286] Setting OutFile to fd 1 ...
I0829 16:25:01.044674 2420654 out.go:338] isatty.IsTerminal(1) = true
I0829 16:25:01.044678 2420654 out.go:299] Setting ErrFile to fd 2...
I0829 16:25:01.044684 2420654 out.go:338] isatty.IsTerminal(2) = true
I0829 16:25:01.044755 2420654 root.go:312] Updating PATH: /home/powerman/.minikube/bin
I0829 16:25:01.044851 2420654 mustload.go:65] Loading cluster: minikube
I0829 16:25:01.045204 2420654 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
I0829 16:25:01.064049 2420654 host.go:66] Checking if "minikube" exists ...
I0829 16:25:01.064224 2420654 api_server.go:164] Checking apiserver status ...
I0829 16:25:01.064265 2420654 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0829 16:25:01.064309 2420654 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0829 16:25:01.081812 2420654 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49182 SSHKeyPath:/home/powerman/.minikube/machines/minikube/id_rsa Username:docker}
I0829 16:25:01.163880 2420654 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/2134/cgroup
I0829 16:25:01.167843 2420654 api_server.go:180] apiserver freezer: "7:freezer:/docker/1df9e36a14a231904247e252d47955c5f5c82593662c6caadc7cc6a32aa42479/kubepods/burstable/podca13ce008c5b78a4ce6f1c88344868ee/7b2a3cacc8357b08ea720f865a5690fff5e36695487138ec5443c86bd75d6f2b"
I0829 16:25:01.167871 2420654 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/1df9e36a14a231904247e252d47955c5f5c82593662c6caadc7cc6a32aa42479/kubepods/burstable/podca13ce008c5b78a4ce6f1c88344868ee/7b2a3cacc8357b08ea720f865a5690fff5e36695487138ec5443c86bd75d6f2b/freezer.state
I0829 16:25:01.171251 2420654 api_server.go:202] freezer state: "THAWED"
I0829 16:25:01.171264 2420654 api_server.go:239] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0829 16:25:01.174205 2420654 api_server.go:265] https://192.168.49.2:8443/healthz returned 200:
ok
W0829 16:25:01.174239 2420654 proxy.go:118] fail to check proxy env: Error ip not in block
W0829 16:25:01.174252 2420654 proxy.go:118] fail to check proxy env: Error ip not in block
W0829 16:25:01.174260 2420654 proxy.go:118] fail to check proxy env: Error ip not in block
W0829 16:25:01.174270 2420654 out.go:230] 🤔  Verifying dashboard health ...
🤔  Verifying dashboard health ...
I0829 16:25:01.180538 2420654 service.go:214] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  3e5535d4-5b8d-49c8-90be-20e3ef4c6129 642 0 2021-08-29 16:23:58 +0300 EEST <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] []  [{kubectl-client-side-apply Update v1 2021-08-29 16:23:58 +0300 EEST FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}}}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.97.248.209,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,TopologyKeys:[],IPFamilyPolicy:*SingleStack,ClusterIPs:[10.97.248.209],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0829 16:25:01.180628 2420654 out.go:230] 🚀  Launching proxy ...
🚀  Launching proxy ...
I0829 16:25:01.180658 2420654 dashboard.go:146] Executing: /home/powerman/.local/bin/kubectl [/home/powerman/.local/bin/kubectl --context minikube proxy --port=0]
I0829 16:25:01.180792 2420654 dashboard.go:151] Waiting for kubectl to output host:port ...
I0829 16:25:01.198977 2420654 dashboard.go:169] proxy stdout: Starting to serve on 127.0.0.1:38869
W0829 16:25:01.198999 2420654 out.go:230] 🤔  Verifying proxy health ...
🤔  Verifying proxy health ...


I0829 16:25:31.204350 2420654 dashboard.go:206] http://127.0.0.1:38869/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c9a7672b-9862-4cd9-af4d-1200d3429fd1] Cache-Control:[no-cache, private] Content-Length:[191] Content-Type:[application/json] Date:[Sun, 29 Aug 2021 13:25:31 GMT]] Body:0xc0006e4f80 ContentLength:191 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00018b700 TLS:<nil>}
I0829 16:25:31.204401 2420654 retry.go:31] will retry after 110.466µs: Temporary Error: unexpected response code: 503
I0829 16:26:01.207692 2420654 dashboard.go:206] http://127.0.0.1:38869/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[575c6b9d-ddb8-43e2-b715-5261b9e53528] Cache-Control:[no-cache, private] Content-Length:[191] Content-Type:[application/json] Date:[Sun, 29 Aug 2021 13:26:01 GMT]] Body:0xc00109cb00 ContentLength:191 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00018b800 TLS:<nil>}
I0829 16:26:01.207729 2420654 retry.go:31] will retry after 216.077µs: Temporary Error: unexpected response code: 503
I0829 16:26:31.211051 2420654 dashboard.go:206] http://127.0.0.1:38869/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b8dab5fb-14b6-4389-962f-431fb612a501] Cache-Control:[no-cache, private] Content-Length:[191] Content-Type:[application/json] Date:[Sun, 29 Aug 2021 13:26:31 GMT]] Body:0xc000f44480 ContentLength:191 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0010a8200 TLS:<nil>}
I0829 16:26:31.211097 2420654 retry.go:31] will retry after 262.026µs: Temporary Error: unexpected response code: 503
I0829 16:27:01.214667 2420654 dashboard.go:206] http://127.0.0.1:38869/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[322c1c75-17dd-43c0-9879-b63872cc4ade] Cache-Control:[no-cache, private] Content-Length:[191] Content-Type:[application/json] Date:[Sun, 29 Aug 2021 13:27:01 GMT]] Body:0xc000c42540 ContentLength:191 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0010a8300 TLS:<nil>}
I0829 16:27:01.214709 2420654 retry.go:31] will retry after 316.478µs: Temporary Error: unexpected response code: 503
I0829 16:27:31.217915 2420654 dashboard.go:206] http://127.0.0.1:38869/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e3d0dbaa-6127-4f7a-8beb-502e1264de8a] Cache-Control:[no-cache, private] Content-Length:[191] Content-Type:[application/json] Date:[Sun, 29 Aug 2021 13:27:31 GMT]] Body:0xc0006a6240 ContentLength:191 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000160200 TLS:<nil>}
I0829 16:27:31.217963 2420654 retry.go:31] will retry after 468.098µs: Temporary Error: unexpected response code: 503
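A check that might narrow this down (a sketch, using the dashboard pod IP from the kubectl get pods -o wide output above): see whether the control-plane node can reach the dashboard pod directly over the pod network:

# assumes curl is available inside the node image
minikube ssh "curl -sv http://10.244.1.3:9090/"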

@jarobar435

jarobar435 commented Sep 14, 2021

I have exactly the same problem as @powerman above:

🚀  Launching proxy ...
I0914 15:38:00.216150   88335 dashboard.go:146] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context minikube proxy --port=0]
I0914 15:38:00.217668   88335 dashboard.go:151] Waiting for kubectl to output host:port ...
I0914 15:38:00.395473   88335 dashboard.go:169] proxy stdout: Starting to serve on 127.0.0.1:64971
W0914 15:38:00.395575   88335 out.go:181] 🤔  Verifying proxy health ...
🤔  Verifying proxy health ...
I0914 15:38:00.408046   88335 dashboard.go:206] http://127.0.0.1:64971/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 14 Sep 2021 13:38:00 GMT]] Body:0x14000b19a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x14000830b00 TLS:<nil>}
I0914 15:38:00.408107   88335 retry.go:31] will retry after 110.466µs: Temporary Error: unexpected response code: 503
I0914 15:38:00.411744   88335 dashboard.go:206] http://127.0.0.1:64971/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Tue, 14 Sep 2021 13:38:00 GMT]] Body:0x14000b19b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x14000e2e900 TLS:<nil>}
I0914 15:38:00.411768   88335 retry.go:31] will retry after 216.077µs: Temporary Error: unexpected response code: 503
{...}

I already tried:

minikube stop
minikube delete
removed ~/.minikube and ~/.kube
minikube start
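(For reference, a roughly equivalent one-shot cleanup - assuming a reasonably recent minikube - is the purge flag, which also deletes the ~/.minikube directory:)

minikube delete --all --purge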

Running on a Mac M1
Kubernetes v1.21.4
Docker version 20.10.8, build 3967b7d

--edit--

I've reinstalled everything minikube/docker-related.
Now everything is working :)

@spowelljr
Member

Hi @powerman, thanks for following up on the issue, I tried this using the newest version of minikube (v1.24.0) and it still seems to be working fine. If you're still experiencing this please create a new issue and we can take a look at it, thanks!
