Prometheus example panics (panic when exporting a histogram) #735

Closed
youngbupark opened this issue May 16, 2020 · 2 comments

youngbupark commented May 16, 2020

SDK version

>= 0.5.0

No issue with 0.4.3.

Repro steps

  1. Run the Prometheus example: https://github.com/open-telemetry/opentelemetry-go/blob/master/example/prometheus/main.go

  2. Browse localhost:2222 to see the metrics after about 1 minute (a sketch of doing this from Go follows this list).
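
To exercise step 2 without a browser, the endpoint can be polled programmatically. A minimal sketch using only the Go standard library (the localhost:2222 endpoint and the one-minute wait are taken from the repro steps; everything else is illustrative):

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	// Give the running example roughly a minute to record and checkpoint
	// metrics, matching the "after 1 minute" step above.
	time.Sleep(time.Minute)

	resp, err := http.Get("http://localhost:2222/")
	if err != nil {
		// On the affected SDK versions the example process panics, so this
		// request may fail outright instead of returning metrics.
		panic(err)
	}
	defer resp.Body.Close()

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	fmt.Printf("status: %s\nbody:\n%s\n", resp.Status, body)
}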

Expected Result

It should show the list of metrics in the browser without any panic.

Actual Result

panic: runtime error: index out of range [-1]

goroutine 33 [running]:
go.opentelemetry.io/otel/exporters/metric/prometheus.(*collector).exportHistogram(0xc000116018, 0xc00013a720, 0x1f88440, 0xc0003d4dc0, 0xc0003d4d01, 0xc000134230, 0xc00012a3a0, 0x1, 0x1, 0x0, ...)
	/Users/youngp/go/pkg/mod/go.opentelemetry.io/otel/exporters/metric/prometheus@v0.5.0/prometheus.go:326 +0x870
go.opentelemetry.io/otel/exporters/metric/prometheus.(*collector).Collect.func1(0xc0001320f8, 0xc000144228, 0x16ca860, 0xc0003d4dc0, 0x0, 0x0)
	/Users/youngp/go/pkg/mod/go.opentelemetry.io/otel/exporters/metric/prometheus@v0.5.0/prometheus.go:218 +0x304
go.opentelemetry.io/otel/sdk/metric/integrator/simple.batchMap.ForEach(0xc000126180, 0xc000115fe0, 0x0, 0x0)
	/Users/youngp/go/pkg/mod/go.opentelemetry.io/otel@v0.5.0/sdk/metric/integrator/simple/simple.go:110 +0x19c
go.opentelemetry.io/otel/sdk/metric/controller/push.syncCheckpointSet.ForEach(0xc00013e008, 0x16c47e0, 0xc000126180, 0xc000115fe0, 0x0, 0x0)
	/Users/youngp/go/pkg/mod/go.opentelemetry.io/otel@v0.5.0/sdk/metric/controller/push/push.go:208 +0xa9
go.opentelemetry.io/otel/exporters/metric/prometheus.(*collector).Collect(0xc000116018, 0xc00013a720)
	/Users/youngp/go/pkg/mod/go.opentelemetry.io/otel/exporters/metric/prometheus@v0.5.0/prometheus.go:211 +0xcc
github.com/prometheus/client_golang/prometheus.(*Registry).Gather.func1()
	/Users/youngp/go/pkg/mod/github.com/prometheus/client_golang@v1.5.0/prometheus/registry.go:445 +0x21e
created by github.com/prometheus/client_golang/prometheus.(*Registry).Gather
	/Users/youngp/go/pkg/mod/github.com/prometheus/client_golang@v1.5.0/prometheus/registry.go:454 +0x803

This panic is caused by buckets.Counts being nil:

totalCount += buckets.Counts[len(buckets.Counts)-1].AsUint64()

Adding the DefaultHistogramBoundaries setting to the Prometheus config did not work.
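
For illustration only (this is not the exporter's code, and not necessarily the fix that later landed; see #736 below): taking the last element of a nil or empty slice is exactly what produces the "index out of range [-1]" panic above, and a length guard is the general shape of a defensive fix. A minimal, runnable sketch:

package main

import "fmt"

// lastCount stands in for the failing expression above: evaluating
// counts[len(counts)-1] on a nil or empty slice panics with
// "index out of range [-1]".
func lastCount(counts []uint64) (uint64, bool) {
	if len(counts) == 0 {
		return 0, false // guard: nothing recorded for this checkpoint
	}
	return counts[len(counts)-1], true
}

func main() {
	var nilCounts []uint64 // buckets.Counts was nil in the failing run
	if v, ok := lastCount(nilCounts); ok {
		fmt.Println("last bucket count:", v)
	} else {
		fmt.Println("no bucket counts recorded; skipping total")
	}
	// The unguarded equivalent would panic at runtime:
	// _ = nilCounts[len(nilCounts)-1]
}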

@surajkjai

I changed the last sleep statement to a 5-second sleep to avoid the reported crash. A curl of the localhost:2222 endpoint then does not produce any metrics output:

~/opentelemetry-go/example/prometheus$ curl -vvv localhost:2222/
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 2222 (#0)
> GET / HTTP/1.1
> Host: localhost:2222
> User-Agent: curl/7.58.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/plain; version=0.0.4; charset=utf-8
< Date: Mon, 18 May 2020 07:03:45 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact


jmacd commented May 18, 2020

I was able to reproduce this, though. See #736.
