"Metrics not available at the moment" on minikube , prometheus installed via Lens > Settings > Lens metrics #5052
Comments
I think that Lens.app will run the PromQL query defined in lens/src/main/prometheus/lens.ts, lines 82 to 83 at commit 589472c.
So I tried that against the Prometheus server and performed the query. But if I try with a longer …
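For reference, an instant query like the one above can be issued against the Prometheus HTTP API directly. A minimal sketch in Python, assuming Prometheus is port-forwarded to localhost:9090; the PromQL string below is only illustrative, not the exact query from lens.ts:

```python
import urllib.parse
import urllib.request

def instant_query_url(base, promql, time=None):
    """Build the URL for a Prometheus instant query (/api/v1/query)."""
    params = {"query": promql}
    if time is not None:
        params["time"] = time
    return f"{base}/api/v1/query?{urllib.parse.urlencode(params)}"

# Illustrative query; the exact PromQL that Lens runs lives in
# lens/src/main/prometheus/lens.ts.
url = instant_query_url(
    "http://localhost:9090",
    'sum(rate(container_cpu_usage_seconds_total{pod="test-1720"}[1m]))',
)
print(url)

# Uncomment once Prometheus is actually reachable:
# with urllib.request.urlopen(url) as resp:
#     print(resp.read().decode())
```

Comparing the JSON result of this against what the Lens chart shows can narrow down whether the problem is the query or the UI.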
It takes some time before we expect metrics to start appearing. However, we can certainly improve the UI here to make that more clear.
That pod "test-32677" has been running for 1 hour and it's not showing up in Lens. Also, if I try the same thing but install kube-prometheus (the Prometheus Operator) instead, metrics appear in Lens.app after 2 minutes.
I then change the provider in Lens to Prometheus Operator with the service address monitoring/prometheus-k8s:9090, and the CPU chart in Lens.app works almost right away. But on minikube I haven't managed to get the Lens Metrics or Helm option to work.
I am seeing the same behaviour for one of my clusters, whereas another cluster in the same workspace works perfectly fine, showing all metrics. Here are more details:

Lens Version: 5.4.6

Here are my observations with two of my clusters in Lens:

1. When viewing the first cluster in Lens, I don't even need to set Settings -> Metrics to "helm". With the default setting of "auto detect", all the metrics (cluster, node, pod, etc.) are visible.
2. When viewing the second cluster in Lens, the default setting "auto detect" caused the metrics chart to try loading for 5 minutes and then give the message "metrics are not available at this moment". Upon changing this setting to "helm" and providing the Prometheus service address "metrics/prometheus:80", I got the same "5 minute wait -> metrics not available" behaviour.

I even tried removing the Prometheus Helm release and installing the lens-metrics stack (enabled Prometheus, kube-state-metrics, and node-exporter in Settings -> Lens Metrics), but still the same behaviour.
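The service address entered in that settings field follows a `namespace/service:port` format (e.g. `metrics/prometheus:80` or `monitoring/prometheus-k8s:9090`). A small sketch with a hypothetical helper (not Lens code) that splits such an address, which can help sanity-check the value against `kubectl get svc -n <namespace>` before pasting it in:

```python
def parse_service_address(addr):
    """Split a 'namespace/service:port' string into its parts.

    Hypothetical helper for illustration; Lens performs its own parsing.
    """
    namespace, rest = addr.split("/", 1)
    service, port = rest.rsplit(":", 1)
    return namespace, service, int(port)

print(parse_service_address("metrics/prometheus:80"))
# → ('metrics', 'prometheus', 80)
print(parse_service_address("monitoring/prometheus-k8s:9090"))
# → ('monitoring', 'prometheus-k8s', 9090)
```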
I'm experiencing the same issues after deploying the same chart in Azure AKS, even though you mentioned you are able to pull all the metrics using Lens.
@wizpresso-steve-cy-fan Which provider do you have set in your cluster preferences?
Thanks for bringing this up. |
Though I think you probably should be using the "Helm 14.x" provider.
@Nokel81 I used Prometheus Operator since this is the way I installed it; I used a Helm chart to install the operator.
I guess my question was mostly towards @nanirover |
@ecerulm Did you try to change the …? I guess your problem is that the scrapes seem to be failing quite often. NOTE: the above PR is for fixing @wizpresso-steve-cy-fan's issue.
Describe the bug
I get "metrics not available at the moment" for all pods, even though prometheus is installed using Lens itself.
To Reproduce
1. `minikube delete`
2. `minikube start`
3. Start Lens.app > Catalog > minikube
4. minikube > Settings > Lens Metrics
5. minikube > Settings > Metrics > Prometheus > Lens
6. Run a pod
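Since metrics take a while to appear after the Lens metrics stack is enabled, it can help to confirm that Prometheus itself is up before blaming the charts. A minimal sketch that polls Prometheus's `/-/ready` endpoint, assuming you have port-forwarded the stack's Prometheus service to localhost:9090 (the exact namespace and service name of the Lens-managed Prometheus are an assumption; check with `kubectl get svc -A`):

```python
import time
import urllib.error
import urllib.request

def ready_url(base):
    """Prometheus exposes a readiness probe at /-/ready."""
    return base.rstrip("/") + "/-/ready"

def wait_until_ready(base, attempts=10, delay=5.0):
    """Poll Prometheus readiness; return True once it answers HTTP 200."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(ready_url(base)) as resp:
                if resp.status == 200:
                    return True
        except urllib.error.URLError:
            pass  # not reachable yet; retry after the delay
        time.sleep(delay)
    return False

# Uncomment once the port-forward is running:
# print(wait_until_ready("http://localhost:9090"))
```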
Expected behavior
I expect it to show CPU metrics for the pods, or a debug log somewhere that tells me why there are no metrics. As far as I know, Lens is doing PromQL queries against prometheus-server, but I don't know exactly which queries or why those queries come back empty.
Screenshots
Environment (please complete the following information):
Logs:
When you run the application executable from command line you will see some logging output. Please paste them here:
Kubeconfig:
Quite often the problems are caused by malformed kubeconfig which the application tries to load. Please share your kubeconfig, remember to remove any secret and sensitive information.
Additional context
From the browser I can see that
container_cpu_usage_seconds_total{pod="test-1720"}
has results, so I guess that Lens is doing some other query, but it's not clear which one.
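Since the charts plot values over time, Lens presumably uses range queries rather than instant queries. To check whether the same series has data over the window a chart would request, the `query_range` endpoint can be hit directly. A minimal sketch, assuming Prometheus is port-forwarded to localhost:9090:

```python
import time
import urllib.parse

def range_query_url(base, promql, start, end, step="60s"):
    """Build the URL for a Prometheus range query (/api/v1/query_range)."""
    params = {"query": promql, "start": start, "end": end, "step": step}
    return f"{base}/api/v1/query_range?{urllib.parse.urlencode(params)}"

now = int(time.time())
url = range_query_url(
    "http://localhost:9090",
    'container_cpu_usage_seconds_total{pod="test-1720"}',
    start=now - 3600,  # last hour
    end=now,
)
print(url)
```

If this returns data but the chart stays empty, the gap is likely in the exact PromQL or labels Lens uses rather than in Prometheus itself.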