opentelemetry/exporter-metrics-otlp-http Does not send metrics to collector (with last collector version 0.40.0) #2675
Same issue here. Also tested with otelcol 0.41.0, otelcontribcol 0.40.0, and otelcontribcol 0.41.0.
I believe this is due to an inconsistency between the payload the metrics exporter sends and the actual proto type, caused by this change. The schema changed in proto 0.9.0, which was delivered to the collector in version 0.33.0. Perhaps the fix is quite easy: just rename fields to the new scheme and remove the unused (int) fields.
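To make the rename concrete, here is a minimal sketch of the kind of field change involved, assuming the proto 0.9.0 move from integer-typed data points (a plain `value` field) to `NumberDataPoint` with an `asInt`/`asDouble` oneof. The field names below illustrate the OTLP JSON shape; they are not taken from this repo's code, and `migratePoint` is a hypothetical helper.

```javascript
// Hedged sketch: proto v0.9.0 dropped the Int* data point types (which
// carried a plain `value` field) in favor of NumberDataPoint, whose value
// is a oneof serialized in JSON as `asInt` or `asDouble`.

// Old shape (pre-0.9.0, what the outdated exporter was still sending):
const oldPoint = { value: 42, timeUnixNano: '1700000000000000000' };

// New shape (0.9.0+, what collector >= 0.33.0 expects):
const newPoint = { asInt: '42', timeUnixNano: '1700000000000000000' };

// A rename-style migration, along the lines the comment above suggests
// (hypothetical helper, for illustration only):
function migratePoint(point) {
  const { value, ...rest } = point;
  return Number.isInteger(value)
    ? { asInt: String(value), ...rest } // int64 travels as a string in JSON
    : { asDouble: value, ...rest };
}

console.log(migratePoint(oldPoint));
```

The int64-as-string detail matters: proto3 JSON mapping encodes 64-bit integers as strings, which is one more place an out-of-date transformation can produce a payload the collector rejects.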
Yeah, our proto is a little out of date. I'll update it.
Looking forward to it! If any help is needed, just let me know.
I won't have time to tackle this until Monday, so if you want to do it before then, I will try to keep an eye out for notifications and merge/release the fix over the weekend. If you can wait until Monday, I'll just handle it then.
@dyladan sorry for pinging you, is there any progress?
Same issue here!
@dyladan sorry for disturbing you again. Could you please add a help-needed badge or something, just to draw the attention of possible contributors to the issue.
In order to get this working, the proto transformations need to be updated. Alternatively, I am working on a separate package which will handle transformations into the intermediate OTLP JSON format and serialization as protobuf in #2691. Once that package is complete, the metrics exporters will be refactored to use it (and it already has the updated proto). Depending on how time-sensitive this is for you, it may be prudent to just wait for that (I would estimate a couple of weeks).
Hello, is there any progress?
I ran a test with the OTLP trace exporter and noticed it fails to connect to the collector for versions > 0.42.0. Does this issue track the trace exporter as well, or should that be a separate issue?
This is the error I'm receiving when using the metrics exporter:
It's still an issue with
Using the same versions here (0.28.0 and 0.51.0) and also not seeing metrics appear in the collector.
Same issue here. I am on
Same issue here. I thought the latest 0.28.0 was going to have a newer OTLP metrics proto.
Gotta use
I had to use
@dyladan any progress on this?
I'm sorry, I forgot to create a release, but 1.3.0/0.29.0 was released at the end of last week (#2995). The most recent version should work.
@dyladan trying both
Tried with
That looks like a collector issue, not an issue with OTel JS.
@pichlermarc if you have time, can you look into this?
I'm running this locally in Docker Compose, not in a deployed environment. I'll double-check again to make sure nothing is trying to communicate with otel over the gRPC port.
Yeah, I did double-check, and I also get this from the otel-js side:
@mplachter I looked into your issue and found the cause. I'm currently working on a fix. Edit: the fix should be ready over at #3019 🙂
@mplachter @pichlermarc can this be closed?
@dyladan for me it is working again, but we have not released that fix yet.
Running into this again on 0.35.1. I'm using RemixInstrumentation. Maybe that is sending the wrong shape?
Hi @lamroger, thanks for reaching out. Could you please open a new bug for the problem you're seeing? It has gotten a bit hard to follow the thread here, and a lot has changed since this issue was first opened.
I keep getting this error.
package.json:

```json
{
  "dependencies": {
    "@opentelemetry/api": "^1.4.1",
    "@opentelemetry/sdk-node": "^0.41.2",
    "opentelemetry-instrumentation-express": "^0.39.1"
  }
}
```
otel-collector-config.yaml:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:8000
      http:
        endpoint: 0.0.0.0:4318

exporters:
  prometheus:
    endpoint: collector:7777
    namespace: mysearcher

extensions:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
  telemetry:
    logs:
      level: info
```
docker-compose file:

```yaml
version: '3.5'
services:
  prometheus:
    image: prom/prometheus
    container_name: prometheus
    ports:
      - 9090:9090
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./prometheus_data:/prometheus
  collector:
    image: otel/opentelemetry-collector-contrib
    container_name: collector
    command: [--config=/etc/otel-collector-config.yaml]
    volumes:
      - ./otel-collector-config.yaml:/etc/otel-collector-config.yaml
    ports:
      - 8000:8000
      - "4318:4318"
  grafana:
    image: grafana/grafana
    container_name: grafana
    ports:
      - 3000:3000
```

Can you help me understand why I am getting the error?
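One thing worth checking against the config above: the Node SDK's HTTP metrics exporter must point at the collector's `http` receiver (port 4318 here), not the gRPC one. Below is a minimal sketch, assuming the `@opentelemetry/exporter-metrics-otlp-http` and `@opentelemetry/sdk-metrics` packages; option names can differ between 0.x releases, so verify against your installed version.

```javascript
// Sketch only: assumes @opentelemetry/exporter-metrics-otlp-http and
// @opentelemetry/sdk-metrics are installed. Not a drop-in config; check
// the API of your installed release.
const { OTLPMetricExporter } = require('@opentelemetry/exporter-metrics-otlp-http');
const { MeterProvider, PeriodicExportingMetricReader } = require('@opentelemetry/sdk-metrics');

const exporter = new OTLPMetricExporter({
  // Must match the collector's otlp http receiver (0.0.0.0:4318 above).
  url: 'http://localhost:4318/v1/metrics',
});

const meterProvider = new MeterProvider();
// Newer sdk-metrics versions take a `readers` constructor option instead.
meterProvider.addMetricReader(
  new PeriodicExportingMetricReader({ exporter, exportIntervalMillis: 10000 })
);

const meter = meterProvider.getMeter('example');
meter.createCounter('requests_total').add(1, { route: '/search' });
```

If the exporter is accidentally left on its gRPC default (4317) or on the legacy 55681 endpoint while the collector only listens on 4318, the symptom is exactly a connection failure or 4xx response with no metrics arriving.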
Hi @Pratap22, I got the same error as you. Did you manage to solve the issue?
Hello! We wanted to use https://github.com/GoogleChrome/web-vitals#browser-support to get some metrics from our browser instrumentation and send them through opentelemetry-js. To accomplish that, we started running some tests with the examples provided here.
What version of OpenTelemetry are you using?
I'm using opentelemetry-js version 0.27.0 (this repo, main branch) and otelcontribcol 0.40.0.
What version of Node are you using?
v12.16.3
Please provide the code you used to setup the OpenTelemetry SDK
At the moment the exporter does not work with the examples provided.
Just changed this in the example (tracer-web/metrics/index.js):

```js
const labels = { pid: 1, environment: 'staging' };
```

because 'process' is undefined.
What did you do?
We simply start up the 'tracer-web' examples.
All the trace examples work as expected and arrive at our collector, but metrics don't.
If possible, provide a recipe for reproducing the error.
Changed

```js
const labels = { pid: process.pid, environment: 'staging' };
```

to

```js
const labels = { pid: 1, environment: 'staging' };
```
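As an aside, hard-coding `pid: 1` can be avoided by guarding the `process` access so the same snippet runs in both Node and the browser. This is just a sketch, not code from the examples:

```javascript
// `process` is a Node global and is undefined in plain browser builds,
// so guard the access instead of hard-coding a fake pid.
const pid =
  typeof process !== 'undefined' && typeof process.pid === 'number'
    ? process.pid
    : 1; // browser fallback

const labels = { pid, environment: 'staging' };
console.log(labels);
```

In Node this yields the real process id; in the browser it falls back to `1`, matching the workaround above.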
What did you expect to see?
I expected to see the metrics in our collector/backend, like all the other trace examples that work correctly.
What did you see instead?
In the Network tab of the browser I get a 400 Bad Request response from the collector.
Additional context
traces are exported to localhost:55681/v1/traces (default legacy endpoint)
metrics are exported to localhost:55681/v1/metrics (default legacy endpoint)
Docker compose collector file:
Hope the information is enough to reproduce the issue; if not, please reach out and I'll provide more details.