query: ExemplarQueryResult error #5189
Thank you for the detailed report! It should probably be enough to create an empty slice here if it is nil. Help wanted!
Or maybe this https://github.com/thanos-io/thanos/blob/main/pkg/exemplars/exemplars.go#L92 would be an even better place.
I found grafana/grafana#42749 which seems to trigger this error.
I'm not a Go developer, unfortunately, so I can't help fix this.
Thanos, Prometheus and Golang version used:
Thanos: quay.io/thanos/thanos:v0.24.0 - go1.16.12
Prometheus: 2.29.2 - go1.16.6
Grafana: grafana/grafana:8.4.2
What happened:
I see the following error in the Grafana log
What you expected to happen:
No errors.
How to reproduce it (as minimally and precisely as possible):
It just happens when I open a Grafana dashboard.
Full logs to relevant components:
level=info ts=2022-02-28T09:04:26.161364912Z caller=client.go:55 msg="enabling client to server TLS"
level=info ts=2022-02-28T09:04:26.161683494Z caller=options.go:115 msg="TLS client using provided certificate pool"
level=info ts=2022-02-28T09:04:26.161707098Z caller=options.go:148 msg="TLS client authentication enabled"
level=info ts=2022-02-28T09:04:26.166055848Z caller=options.go:27 protocol=gRPC msg="disabled TLS, key and cert must be set to enable"
level=info ts=2022-02-28T09:04:26.166787409Z caller=query.go:695 msg="starting query node"
level=info ts=2022-02-28T09:04:26.166967193Z caller=intrumentation.go:48 msg="changing probe status" status=ready
level=info ts=2022-02-28T09:04:26.167250961Z caller=intrumentation.go:60 msg="changing probe status" status=healthy
level=info ts=2022-02-28T09:04:26.167328044Z caller=grpc.go:131 service=gRPC/server component=query msg="listening for serving gRPC" address=127.0.0.1:10901
level=info ts=2022-02-28T09:04:26.167341139Z caller=http.go:63 service=http/server component=query msg="listening for requests and metrics" address=127.0.0.1:9090
level=info ts=2022-02-28T09:04:26.16746526Z caller=tls_config.go:195 service=http/server component=query msg="TLS is disabled." http2=false
level=info ts=2022-02-28T09:04:31.189964822Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.131.8.12:10901 extLset="{prometheus=\"openshift-user-workload-monitoring/user-workload\", prometheus_replica=\"prometheus-user-workload-0\"}"
level=info ts=2022-02-28T09:04:31.190044591Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.128.10.59:10901 extLset="{prometheus=\"openshift-user-workload-monitoring/user-workload\", prometheus_replica=\"prometheus-user-workload-1\"}"
level=info ts=2022-02-28T09:04:31.190073174Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.128.10.62:10901 extLset="{prometheus=\"openshift-monitoring/k8s\", prometheus_replica=\"prometheus-k8s-1\"}"
level=info ts=2022-02-28T09:04:31.190095015Z caller=endpointset.go:349 component=endpointset msg="adding new rule with [storeAPI rulesAPI]" address=10.128.10.61:10901 extLset="{thanos_ruler_replica=\"thanos-ruler-user-workload-1\"}"
level=info ts=2022-02-28T09:04:31.190121093Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.131.8.9:10901 extLset="{prometheus=\"openshift-monitoring/k8s\", prometheus_replica=\"prometheus-k8s-0\"}"
level=info ts=2022-02-28T09:04:31.190148554Z caller=endpointset.go:349 component=endpointset msg="adding new rule with [storeAPI rulesAPI]" address=10.131.8.11:10901 extLset="{thanos_ruler_replica=\"thanos-ruler-user-workload-0\"}"
Anything else we need to know:
I found VictoriaMetrics/VictoriaMetrics#2000, maybe it can help to fix the problem.
If you think this is a Grafana Issue, let me know.