Heapster metrics not showing up on the kubernetes dashboard #3147

Closed
DreadPirateRoberts90 opened this issue Jul 16, 2018 · 10 comments

@DreadPirateRoberts90

Dashboard version: 1.8
Kubernetes version: 1.11
Operating system: RedHat Linux
Node.js version:
Go version:


I have installed the Kubernetes dashboard and Heapster, but I am not able to view the metrics on my dashboard. I am pasting the dashboard logs and the Heapster logs below.

I have looked through all the GitHub threads on Kubernetes + Heapster issues.
I have added the --heapster-host argument to the dashboard YAML file as mentioned in
https://github.com/kubernetes/dashboard/issues/2224
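
For reference, this is roughly how that argument is wired into the dashboard container spec. It is a minimal sketch, not my exact Helm-rendered manifest; the image tag and the surrounding fields are illustrative:

# Excerpt from the kubernetes-dashboard Deployment (illustrative)
containers:
  - name: kubernetes-dashboard
    image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
    args:
      - --auto-generate-certificates
      # point the dashboard at the Heapster service in kube-system
      - --heapster-host=http://heapster.kube-system.svc.cluster.local
    ports:
      - containerPort: 8443
        protocol: TCP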

I installed the dashboard through Helm. Any help on this is greatly appreciated. I have spent the last two days trying to figure this out and have exhausted all the resources I could find.

Dashboard Logs:
kubectl logs kubernetes-dashboard-helm-6964d48448-xjvth -n kube-system
2018/07/16 21:50:51 Starting overwatch
2018/07/16 21:50:51 Using in-cluster config to connect to apiserver
2018/07/16 21:50:51 Using service account token for csrf signing
2018/07/16 21:50:51 No request provided. Skipping authorization
2018/07/16 21:50:51 Successful initial request to the apiserver, version: v1.11.0
2018/07/16 21:50:51 Generating JWE encryption key
2018/07/16 21:50:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2018/07/16 21:50:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/07/16 21:50:52 Initializing JWE encryption key from synchronized object
2018/07/16 21:50:52 Creating remote Heapster client for http://heapster.kube-system.svc.cluster.local
2018/07/16 21:50:53 Auto-generating certificates
2018/07/16 21:50:53 Successfully created certificates
2018/07/16 21:50:53 Serving securely on HTTPS port: 8443
2018/07/16 21:51:12 http2: server: error reading preface from client 10.32.0.1:53174: read tcp 10.38.0.2:8443->10.32.0.1:53174: read: connection reset by peer
2018/07/16 21:51:22 Metric client health check failed: Get http://heapster.kube-system.svc.cluster.local/healthz: dial tcp 10.105.76.172:80: i/o timeout. Retrying in 30 seconds.
2018/07/16 21:51:39 Getting application global configuration
2018/07/16 21:51:39 Application configuration {"serverTime":1531777899709}
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Incoming HTTP/2.0 GET /api/v1/settings/global request from 10.32.0.1:53212: {}
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Incoming HTTP/2.0 GET /api/v1/systembanner request from 10.32.0.1:53212: {}
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.32.0.1:53212: {}
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Incoming HTTP/2.0 GET /api/v1/rbac/status request from 10.32.0.1:53212: {}
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.32.0.1:53212: {}
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/token request from 10.32.0.1:53212: {}
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Incoming HTTP/2.0 POST /api/v1/token/refresh request from 10.32.0.1:53212: {
  "jweToken": "{\"protected\":\"eyJhbGciOiJSU0EtT0FFUC0yNTYiLCJlbmMiOiJBMjU2R0NNIn0\",\"aad\":\"eyJleHAiOiIyMDE4LTA3LTE2VDIyOjAxOjU0WiIsImlhdCI6IjIwMTgtMDctMTZUMjE6NDY6NTRaIn0\",\"encrypted_key\":\"zuMt4znG3KmFOqRnPT90zeE6TxR2lTuOUMFgTOaDOwoKmiMALhk0FxzuruOaJidrXMD1IV2q8m3LGdU2-Iwpb9EJ4LWLK_f5NGZoIj7LXzNieyGq2gpvoldjGH-8cfTJqcLlqM7Ywq28wAKuNh41iRaSl7kqeGUx3YrDVvYCXfDnrexYMiy-a1JiTan0k0YS5NGa1y8i3IDsn0jji3XY9zcgOBy_4qx6IjIOkuzi1GOGhvwC3GX18XYDHS9POAzpk0l0UGupwmsmJKURtS5qaT7qFS7jSQAaoeHOIFQ_w6h-qAMFbW_PdegDIUpgvXVsgvzuQ8c5KWKVs2CHEx6GPg\",\"iv\":\"YGiqAS6NQeFXoiOf\",\"ciphertext\":\"Hx7zjuOqIPwDoq1mE4hPWC9-258RLl_nhhlOO2Q-63_3oWQ_4LyUcsGNfPGxepLX6mDK12NU_9HL3cN02Y3JFkqHz8xXCh4widgT52Cvbku1nw4fbJO1Q-634xdLzfs3Qh13MIX2welVG8lb8cKK0ce6176AArTTtQMb5ycZ7kUioZwqDQKx4f4UAz9vYQp-C_3BPtt6ZIrVOy-AmYLCoC3VOGk2VpsXYSdyR3SuYQXynmNXMmNE-N2o7HOcNvPVh4itmidnEf0-CkTr6HadvjUrnxVeF5d_zzMviSAE7B7kcs4WnkEevmLui03TCcX6jex9bbIFpXHgUDEuovzJBi9XisCtB7aYVExE4rA3FCB4vpyZOlC8K3Z_KJiuee5PlGDjy75GTl2sMY66wNo-lGuJpA0tdngbArBnRbOnDWXLdwIN7HoJ1fP8pDaZqD4GueuTUU-WvD3GaOYaagJiJQrlpThXyosx8nJNX_uXsZjzy-hDl1th-7FivwnslrCcmff-zzSv1s-9S9LkHq2pvfnMDNbDtgFj9QsCI3qmHfyzWD8XWBG4o6yPRcFeEWLgM_309ReSwI0jH9GqbS3Ds1xFLA08UoO_cJdO8yFK9kb8SD2OZ0C20vCEUPDWRZeTVNiRyPLNOIsXc7Y4LHCwK8WmFnpuHcH4dl4NtPid1RMTNoEABkzt9qWeZyrzTHeN5Tsn6vCuIX6uxa2XtV8ghBztYD3vZkQ6Ib_VB1BVRR87zD7als9_lso4QuaHkty1H_BIAVDrYzS7V2gRXAAg04L18aTBej8CjcrhCOfZYV8eUrP5VZ3SwZtR9hS3BbZQk9016FKaoj9Ls4Bkzdym99gCG4g_-_QcM0CcAAPfgnuDxDMFzy4GCcDt6QNf5z7hoUA4cgYSTeK8UNtGnwzGuyLbh1ipjNySXcu2OKDI7TU--gV5y46zODTLU7fWEjUVZ88C-eWMEnuBC4EJLe7mZ-j9eN22njPqi8TE_6f2aFnoijtC_R2WaCxk5lEP30pbyMPWJq1054B59bsPbg4Px3OC6_nOCVo9m6Wd8rHv9mRwJjNn0EnTcfaH2vbP7ndjNIigktwSjNGQJJhhqDJkFxhd_XOMLtJZUg__GFux87-1QJZMTD9iuJQ8_LQp31Xl0TYlGOX_NQQSaxan-vEqBf-JxhAAcwxg65qgDiu2Y399kpPk8-CkL3JBTg0\",\"tag\":\"ijgFL1Se7wF0fgsvN5imNg\"}"
}
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Incoming HTTP/2.0 GET /api/v1/overview?filterBy=&itemsPerPage=10&name=&page=1&sortBy=d,creationTimestamp request from 10.32.0.1:53212: {}
2018/07/16 21:51:40 Getting config category
2018/07/16 21:51:40 Getting discovery and load balancing category
2018/07/16 21:51:40 Getting lists of all workloads
2018/07/16 21:51:40 No metric client provided. Skipping metrics.
2018/07/16 21:51:40 No metric client provided. Skipping metrics.
2018/07/16 21:51:40 No metric client provided. Skipping metrics.
2018/07/16 21:51:40 Getting pod metrics
2018/07/16 21:51:40 No metric client provided. Skipping metrics.
2018/07/16 21:51:40 No metric client provided. Skipping metrics.
2018/07/16 21:51:40 No metric client provided. Skipping metrics.
2018/07/16 21:51:40 No metric client provided. Skipping metrics.
2018/07/16 21:51:40 No metric client provided. Skipping metrics.
2018/07/16 21:51:40 No metric client provided. Skipping metrics.
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.32.0.1:53212: {}
2018/07/16 21:51:40 [2018-07-16T21:51:40Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:41 [2018-07-16T21:51:41Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/token request from 10.32.0.1:53212: {}
2018/07/16 21:51:41 [2018-07-16T21:51:41Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:41 [2018-07-16T21:51:41Z] Incoming HTTP/2.0 POST /api/v1/token/refresh request from 10.32.0.1:53212: {
  "jweToken": "{\"protected\":\"eyJhbGciOiJSU0EtT0FFUC0yNTYiLCJlbmMiOiJBMjU2R0NNIn0\",\"aad\":\"eyJleHAiOiIyMDE4LTA3LTE2VDIyOjA2OjQwWiIsImlhdCI6IjIwMTgtMDctMTZUMjE6NTE6NDBaIn0\",\"encrypted_key\":\"UZFpmS5gxqnTuJymQgXK_aXqTG7I4z5WGNY45UmmV5nLfszUDZuQnjejtsqmEVVK3SmJ3J5hKNwrGJ4kTqPJmDUmRu3FBHfsF37TltNdF7fkCiQt66UV9B6KaUtJxumhzYh3Cc0db6eC14dJ6mNQ2T-zanvVOLtEcT_hngZvMHdOQhGMaReWdu8b52Tny1GgTxL1vIOL1n-qAcLMIYAOG11r8NXcY9u7DyzRQiyXBqbMjYRYdv2-Fv1mLyj5i-McP-LHhLKAlxuUzfLZVO5PFdzboHdZ8zIGDZuv58trx14kLcOkoMIOBnDkoRUzEtL_FyzbQ4lMRo5Qa5FDSgw7iQ\",\"iv\":\"CYNHx2l4iYqSZH4O\",\"ciphertext\":\"I85VopkK8P145eiFC5_T9XpJzbGAAFKmosExsCcRXgcAlVTpI4nlzBbY6L02v7L_ZgAwiy0VLAdZYKwsZWvambSdM3j9gNO3TsVUVzBQECioal9NYUgjy8rmuM4Oa_OHPCjXGFVyhMjxsgSCeu3Q4hSq-jORgYYyQJp01xZVRxLZ8-XzeeoRsTW6UBu34H2ozyOmLul5AXRyH0DiHDOznBCKMdgDg8adw_PjB4bM0xljOcGquG3EnXijAk_pznOkRjbPv5JTcZGMH4ve6r2J4N3jbXtUSEIT_iXyiCNayrjWWbTC05FdrfuDglQSgLPWe6anWa6OsYzmPzhIICHQ5N3hzf-1gbSud_vgQrGKZqGKIqdK1bElmnMaSSd9HCOLLqudEUIl1lo9zF7C8z_y6N0x7aPWHt8H4IaxvkQxnrA_ROlkkFRk7Y2k4v6M80ALaCYnoP1U3DrYDFYJ8STcExD7SIjqLTw5gbPc5fcBH-q6imGFBhNj0gESHC8e0KCOE9HLJmUWjvqlA2EQJSanECEKpm0lgBJbCMK1-3HpBfVK2j5UZ_PhJKjuIPfJZYolnHuYhFpQzB2welnhBUY6AzJgL0-052vKmJcQw_6ZILbEkhC3yiAQcfMa8yJvIQ9pSjL9Dw3SsnNlaP2v4dNoJzlF3Lc_uqtDaLquYN5vDF2xPk2PayM1kT-5m0000RfN8OKgJlmhlg1UAe0XjvgabIWvHCR8rQBsMRp6N0TcbmbVL-XmXVJTCtUHf97WXL5-FXbFBHwoW-ze9akIcgzYyUSt2Aht6m3OB_J7R7U3lkTlCvbNyjm4N4J08KegbgsQTH9P8p0deK0fD5_ucTIV1mzv6VCf_5q7_KCwtme_KTd_1YWihJdXD2cB7xpFAqkBWY82-WCLgv3cP5j70C6y2Z4QzEc5PNHc7_zXY15xcN43-SMfGisU95HJr-4lwi_REo8oQU_LQPFX7b6UJT5jKQli_XTFWNUqHjAP9Q43jmzUGBC52mAQW7MpKsfLNqbruJ4OP9pvWUpDd11a7VF9sAzu5d8nWAereE4rpvv3QejSLEc-FdcSMrsf4NzLIHCsoy33N3dJ4Qir8Bq1qETO-cVcS4XmrWp4V94bnnml0qVRwq0UoVjgxdvnJIAldFbE2dh-1TkIueokk03iji5WtOFh91o9hdn9UYVudttzfXUTPep-tmDLjcVzzE0\",\"tag\":\"AKYElJKknhd0R6NofQvJtg\"}"
}
2018/07/16 21:51:41 [2018-07-16T21:51:41Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:41 [2018-07-16T21:51:41Z] Incoming HTTP/2.0 GET /api/v1/overview/default?filterBy=&itemsPerPage=10&name=&page=1&sortBy=d,creationTimestamp request from 10.32.0.1:53212: {}
2018/07/16 21:51:41 Getting config category
2018/07/16 21:51:41 Getting discovery and load balancing category
2018/07/16 21:51:41 Getting lists of all workloads
2018/07/16 21:51:41 No metric client provided. Skipping metrics.
2018/07/16 21:51:41 No metric client provided. Skipping metrics.
2018/07/16 21:51:41 No metric client provided. Skipping metrics.
2018/07/16 21:51:41 Getting pod metrics
2018/07/16 21:51:41 No metric client provided. Skipping metrics.
2018/07/16 21:51:41 No metric client provided. Skipping metrics.
2018/07/16 21:51:41 No metric client provided. Skipping metrics.
2018/07/16 21:51:41 No metric client provided. Skipping metrics.
2018/07/16 21:51:41 No metric client provided. Skipping metrics.
2018/07/16 21:51:41 No metric client provided. Skipping metrics.
2018/07/16 21:51:41 [2018-07-16T21:51:41Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:47 [2018-07-16T21:51:47Z] Incoming HTTP/2.0 GET /api/v1/namespace request from 10.32.0.1:53212: {}
2018/07/16 21:51:47 Getting list of namespaces
2018/07/16 21:51:47 [2018-07-16T21:51:47Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:49 [2018-07-16T21:51:49Z] Incoming HTTP/2.0 GET /api/v1/login/status request from 10.32.0.1:53212: {}
2018/07/16 21:51:49 [2018-07-16T21:51:49Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:49 [2018-07-16T21:51:49Z] Incoming HTTP/2.0 GET /api/v1/csrftoken/token request from 10.32.0.1:53212: {}
2018/07/16 21:51:49 [2018-07-16T21:51:49Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:49 [2018-07-16T21:51:49Z] Incoming HTTP/2.0 POST /api/v1/token/refresh request from 10.32.0.1:53212: {
  "jweToken": "{\"protected\":\"eyJhbGciOiJSU0EtT0FFUC0yNTYiLCJlbmMiOiJBMjU2R0NNIn0\",\"aad\":\"eyJleHAiOiIyMDE4LTA3LTE2VDIyOjA2OjQxWiIsImlhdCI6IjIwMTgtMDctMTZUMjE6NTE6NDFaIn0\",\"encrypted_key\":\"Xtjm48BbG5-H9sk_G6KwcwsdGvW-r-QA6F6-4x3phedGE6L6QQhmm9QA3oIIcW6Yli3_G_B8aLsDuu_kcVK57pRXJ2LCLdzvAqKc8yvU4OLH8YIUtGHuTLFr96GEOLsbuLHarDAojf4flLYXi10TH6Vkiy8iLJUvBUOoZR_CnEyWY7txw9MBTxt3xIFWaAJeZ4Ln_PCMTn8I-wG82bvpBc_8kURBrHV6fQjvb9SuAg4gzt4D3_RhoKvQyWozf7aJvPJEbdPxccAXWQqValJuaNF4-oY9_k4sEqjT0HvVSUFZERLECNVZCuMRnWu4l9fQVw2hpnDOzfZ0Ml1iMMFh9A\",\"iv\":\"6bNSpOUCR-84rAQb\",\"ciphertext\":\"cydK-ctVdNqcVn8DGjOYz4ToN74uGn9D5PXrB2XtTiYKqmAlPwV2Ue0cRKjljQLuiDK12ZWN3M6lelKXYRUujTb1Ny2JHt2gFXzNwTbSBVM8qu2uYbEXJLnpQ9MLAK6apH1ms162agl4MJB87e-RiYPRt6GDxhVCuPXqC_9b0q50MsRHYanHzB1YzKc5xcoffDpJaTPgWh7vyXTHJWgDa4yrfMQ8C9nOh9z__Us80ddE7pJhZM4BtGcCnSdnm5hjebBO-UzUGNUA8ODAcGIFRnZEUypIbTRDJje8sRKyLsIBjBxJFNMW4vqaPoGJyli98CRlugb2IP7wmvTc3OazKllzjlFH0YzXUP5xRZnwpaiXXN4kFWMks1hCcSchwVx2-fIjIA4KgJzt3Mtb5OxMfMEVCxumpR4Kn3zFAb84M6YPGNfb9Ckg83QIF-HsWWcTEFBEtMJXfvog8h3tbjnnUtJFKJsPB6PEQQr5tOtW2xtpk9XRnNx1sXrLbJBnm_AAjLeZ8l1PoUYn6L0bY0BhIf-M8Fc3qhqOvdDbWETq1KYoHqQ7wMemSzlzVIJS19CzyeCaXIgj1xBJEwuve6s49SEIzb0_ZhMjUCvKRXLOS_q0gpVV9JIOnVGrDuOOLntYxIdCOBZ9tIreyJNkFP-wUXEY6-CgaSRXZGEqyzA0Ket9qG_ahhmmqc2CkpTiOb1QFqHNqHuNLwTihPnKqSAa-IDIzgRneHX7ZoxJv2KCq7EFcBmeSJLMtdJ0N576yJznFxRDnOHDwCcn_pm_Nd5ThrzA5JheEwxDJmo_HGTn_l35STa8bNF2S56qsfC26jnYAtLIKUT59He4A8moqqDcgcg-4BvdjIjOfKYhp-nyzRv9kYbfSTEuk7oaFbeIOpBhcb67ylvEP4bsA7kqHeqKU9hLDb8sGIZeZxVsHjfN7uf1HHkiQSy3Ef4i7d7YfWQ6FvzZJTtbLSdXqCiMGm6quWd55M3COiUaqOPYcvHE8SdJitjbGBa_K7BRZ0djNMmDamhYsGy14Ah69GSgbtq2Is8l2XIb9SNaHmhJKGpNCnl7dy9vJ_gDtk1D8MFTAgfxPq_5k2324TLK3wF-2tCreQTJPwJ1teA0jsx0hGEozX4BsXbZE-l5X5HumaT_6b__oZWuqTLlLfH3stKbPAKPNBS3fu-CX5yraBxkdy0HNsPnbgE0op5giNhZBjk\",\"tag\":\"7Mhjrh1IH7O_85TbDLBZTw\"}"
}
2018/07/16 21:51:49 [2018-07-16T21:51:49Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:51:49 [2018-07-16T21:51:49Z] Incoming HTTP/2.0 GET /api/v1/overview/kube-system?filterBy=&itemsPerPage=10&name=&page=1&sortBy=d,creationTimestamp request from 10.32.0.1:53212: {}
2018/07/16 21:51:49 Getting config category
2018/07/16 21:51:49 Getting discovery and load balancing category
2018/07/16 21:51:49 Getting lists of all workloads
2018/07/16 21:51:49 No metric client provided. Skipping metrics.
2018/07/16 21:51:49 No metric client provided. Skipping metrics.
2018/07/16 21:51:49 No metric client provided. Skipping metrics.
2018/07/16 21:51:49 Getting pod metrics
2018/07/16 21:51:49 No metric client provided. Skipping metrics.
2018/07/16 21:51:49 No metric client provided. Skipping metrics.
2018/07/16 21:51:49 No metric client provided. Skipping metrics.
2018/07/16 21:51:49 No metric client provided. Skipping metrics.
2018/07/16 21:51:49 No metric client provided. Skipping metrics.
2018/07/16 21:51:49 No metric client provided. Skipping metrics.
2018/07/16 21:51:50 [2018-07-16T21:51:50Z] Outcoming response to 10.32.0.1:53212 with 200 status code
2018/07/16 21:52:22 Metric client health check failed: Get http://heapster.kube-system.svc.cluster.local/healthz: dial tcp 10.105.76.172:80: i/o timeout. Retrying in 30 seconds.
2018/07/16 21:53:22 Metric client health check failed: Get http://heapster.kube-system.svc.cluster.local/healthz: dial tcp 10.105.76.172:80: i/o timeout. Retrying in 30 seconds.
2018/07/16 21:54:22 Metric client health check failed: Get http://heapster.kube-system.svc.cluster.local/healthz: dial tcp 10.105.76.172:80: i/o timeout. Retrying in 30 seconds.
2018/07/16 21:55:22 Metric client health check failed: Get http://heapster.kube-system.svc.cluster.local/healthz: dial tcp 10.105.76.172:80: i/o timeout. Retrying in 30 seconds.
2018/07/16 21:56:22 Metric client health check failed: Get http://heapster.kube-system.svc.cluster.local/healthz: dial tcp 10.105.76.172:80: i/o timeout. Retrying in 30 seconds.
2018/07/16 21:57:22 Metric client health check failed: Get http://heapster.kube-system.svc.cluster.local/healthz: dial tcp 10.105.76.172:80: i/o timeout. Retrying in 30 seconds.
2018/07/16 21:58:22 Metric client health check failed: Get http://heapster.kube-system.svc.cluster.local/healthz: dial tcp 10.105.76.172:80: i/o timeout. Retrying in 30 seconds.
2018/07/16 21:59:22 Metric client health check failed: Get http://heapster.kube-system.svc.cluster.local/healthz: dial tcp 10.105.76.172:80: i/o timeout. Retrying in 30 seconds.
2018/07/16 22:00:22 Metric client health check failed: Get http://heapster.kube-system.svc.cluster.local/healthz: dial tcp 10.105.76.172:80: i/o timeout. Retrying in 30 seconds.
2018/07/16 22:01:22 Metric client health check failed: Get http://heapster.kube-system.svc.cluster.local/healthz: dial tcp 10.105.76.172:80: i/o timeout. Retrying in 30 seconds.
2018/07/16 22:02:22 Metric client health check failed: Get http://heapster.kube-system.svc.cluster.local/healthz: dial tcp 10.105.76.172:80: i/o timeout. Retrying in 30 seconds.
[root@ip-172-16-255-70 ec2-user]# kubectl logs kubernetes-dashboard-helm-6964d48448-xjvth -n kube-system | grep error
2018/07/16 21:51:12 http2: server: error reading preface from client 10.32.0.1:53174: read tcp 10.38.0.2:8443->10.32.0.1:53174: read: connection reset by peer
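
The health check above keeps timing out against the Heapster service ClusterIP (10.105.76.172:80). For reference, these are the kinds of checks that narrow that down; the service and namespace names match the URL in the log, and the throwaway busybox pod is just an example:

# Does the Heapster Service exist, and does it have backing endpoints?
kubectl get svc heapster -n kube-system
kubectl get endpoints heapster -n kube-system

# Probe the service from inside the cluster with a throwaway pod
kubectl run heapster-probe --rm -it --restart=Never --image=busybox -- \
  wget -qO- http://heapster.kube-system.svc.cluster.local/healthz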

Heapster Logs:
E0716 21:45:05.028016       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-255-11.ec2.internal(172.16.255.11:10255): Get http://172.16.255.11:10255/stats/summary/: dial tcp 172.16.255.11:10255: getsockopt: connection refused
E0716 21:46:05.003936       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-255-11.ec2.internal(172.16.255.11:10255): Get http://172.16.255.11:10255/stats/summary/: dial tcp 172.16.255.11:10255: getsockopt: connection refused
E0716 21:46:05.006024       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-0-248.ec2.internal(172.16.0.248:10255): Get http://172.16.0.248:10255/stats/summary/: dial tcp 172.16.0.248:10255: getsockopt: connection refused
E0716 21:46:05.017121       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-255-70.ec2.internal(172.16.255.70:10255): Get http://172.16.255.70:10255/stats/summary/: dial tcp 172.16.255.70:10255: getsockopt: connection refused
E0716 21:46:05.021679       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-0-167.ec2.internal(172.16.0.167:10255): Get http://172.16.0.167:10255/stats/summary/: dial tcp 172.16.0.167:10255: getsockopt: connection refused
E0716 21:47:05.009057       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-0-167.ec2.internal(172.16.0.167:10255): Get http://172.16.0.167:10255/stats/summary/: dial tcp 172.16.0.167:10255: getsockopt: connection refused
E0716 21:47:05.019218       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-0-248.ec2.internal(172.16.0.248:10255): Get http://172.16.0.248:10255/stats/summary/: dial tcp 172.16.0.248:10255: getsockopt: connection refused
E0716 21:47:05.028291       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-255-70.ec2.internal(172.16.255.70:10255): Get http://172.16.255.70:10255/stats/summary/: dial tcp 172.16.255.70:10255: getsockopt: connection refused
E0716 21:47:05.032984       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-255-11.ec2.internal(172.16.255.11:10255): Get http://172.16.255.11:10255/stats/summary/: dial tcp 172.16.255.11:10255: getsockopt: connection refused
E0716 21:48:05.011959       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-0-167.ec2.internal(172.16.0.167:10255): Get http://172.16.0.167:10255/stats/summary/: dial tcp 172.16.0.167:10255: getsockopt: connection refused
E0716 21:48:05.016272       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-0-248.ec2.internal(172.16.0.248:10255): Get http://172.16.0.248:10255/stats/summary/: dial tcp 172.16.0.248:10255: getsockopt: connection refused
E0716 21:48:05.023789       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-255-70.ec2.internal(172.16.255.70:10255): Get http://172.16.255.70:10255/stats/summary/: dial tcp 172.16.255.70:10255: getsockopt: connection refused
E0716 21:48:05.026025       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-255-11.ec2.internal(172.16.255.11:10255): Get http://172.16.255.11:10255/stats/summary/: dial tcp 172.16.255.11:10255: getsockopt: connection refused
E0716 21:49:05.022397       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-0-248.ec2.internal(172.16.0.248:10255): Get http://172.16.0.248:10255/stats/summary/: dial tcp 172.16.0.248:10255: getsockopt: connection refused
E0716 21:49:05.023698       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-0-167.ec2.internal(172.16.0.167:10255): Get http://172.16.0.167:10255/stats/summary/: dial tcp 172.16.0.167:10255: getsockopt: connection refused
E0716 21:49:05.025968       1 summary.go:97] error while getting metrics summary from Kubelet ip-172-16-255-11.ec2.internal(172.16.255.11:10255): Get http://172.16.255.11:10255/stats/summary/: dial tcp 172.16.255.11:10255: getsockopt: connection refused
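
All of the Heapster errors are against the kubelet read-only port 10255, which is commonly disabled on newer kubeadm-based clusters. A quick way to confirm the port really isn't serving, using one of the node IPs from the log above:

# Run from a machine that can reach the node network
curl -m 5 http://172.16.255.11:10255/stats/summary/
# "connection refused" means the read-only port is closed, so Heapster
# has to be pointed at the secure kubelet port 10250 instead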

@bigfoot31

I'm facing the same issue. I think the dashboard is not yet compatible with Kubernetes 1.11.

@DreadPirateRoberts90
Author

The dashboard itself is running perfectly fine, but I cannot see the Heapster metrics, like CPU usage and memory details.

@bigfoot31

In 1.11, Kubernetes deprecated support for the heapster-influxdb-grafana stack; they have their own implementation in metrics-server. Dashboard 1.8 can't communicate with metrics-server; it is built to support only Heapster metrics.

@DreadPirateRoberts90
Author

DreadPirateRoberts90 commented Jul 19, 2018

So what's the workaround for this? Do you mean only Heapster is supported with Dashboard 1.8?

@jeefy
Member

jeefy commented Jul 19, 2018

Related to #2986

@floreks
Member

floreks commented Jul 19, 2018

Closing in favor of #2986.

@floreks floreks closed this as completed Jul 19, 2018
@mhristache

If I understand it correctly, Heapster support was deprecated in Kubernetes 1.11 but should still work. I guess it's just not enabled by default. What is the procedure to enable it?

@mhristache

I got it working following these instructions.

@mrwulf

mrwulf commented Dec 13, 2018

So basically, if you had a working dashboard + Heapster setup and upgraded:

  1. In the heapster deployment, tell heapster to connect to the secure kubelet port by changing the source parameter from kubernetes:https://kubernetes.default to kubernetes:https://kubernetes.default?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
  2. Give heapster enough API access to share its knowledge and create the stats subresource on nodes by adding this rule to the system:heapster ClusterRole (a combined sketch of both changes follows after this list):
  - verbs:
      - get
      - list
      - watch
      - create
    apiGroups:
      - ''
    resources:
      - nodes/stats
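
A combined sketch of both changes, assuming the stock Heapster + InfluxDB manifests (the image tag, sink address, and the surrounding Deployment fields are illustrative; adjust them to your install):

# 1. Heapster Deployment: point the source at the secure kubelet port
containers:
  - name: heapster
    image: k8s.gcr.io/heapster-amd64:v1.5.4
    command:
      - /heapster
      - --source=kubernetes:https://kubernetes.default?useServiceAccount=true&kubeletHttps=true&kubeletPort=10250&insecure=true
      - --sink=influxdb:http://monitoring-influxdb.kube-system.svc:8086

# 2. Append the nodes/stats rule to the existing ClusterRole
#    (kubectl edit clusterrole system:heapster), keeping the rules
#    that are already there:
rules:
  # ...existing rules...
  - apiGroups: [""]
    resources: ["nodes/stats"]
    verbs: ["get", "list", "watch", "create"]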

@ShahNewazKhan

(Quoting @mrwulf's workaround above.)

Thanks for the workaround instructions!

What are the implications of setting insecure=true?
