[prometheusreceiver] - Critical regression since 0.30.0 #4907
Additional information: I see a lot of errors in the console since 0.30.0. Exactly the same as in https://github.com/open-telemetry/opentelemetry-collector/issues/3118#issuecomment-833399201. It doesn't seem directly related, because that issue is older than 0.30.0, but the symptom is similar in a different context.
@gillg thank you for the report! Given that you have the setup and know what to look for, could you binary bisect to find the offending commit? Essentially: start with the commit at the very bottom of the stack of suspect commits; if it doesn't have the regression, move to the midpoint between top and bottom; if the midpoint doesn't have the regression, go halfway up again; if it does, go halfway down. That should help pinpoint the regression.
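The binary search described above can be sketched as a small script. This is only an illustration of the procedure, not an actual bisect of the collector's history: `is_bad` is a hypothetical stand-in for "build this commit and check whether the regression appears", and the indices stand in for positions in the suspect commit list.

```shell
#!/bin/sh
# is_bad is a placeholder for "check out commit $1, build, and test for the
# regression". Here we simulate a history where index 5 introduced the bug.
is_bad() { [ "$1" -ge 5 ]; }

lo=0   # index of the oldest suspect commit
hi=9   # index of the newest suspect commit (known bad)
while [ "$lo" -lt "$hi" ]; do
  mid=$(( (lo + hi) / 2 ))
  if is_bad "$mid"; then
    hi=$mid            # regression present: first bad commit is at or below mid
  else
    lo=$(( mid + 1 ))  # regression absent: first bad commit is above mid
  fi
done
echo "first bad commit index: $lo"   # → first bad commit index: 5
```

In practice `git bisect start` / `git bisect good` / `git bisect bad` automates exactly this loop over the real commit range.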
@odeke-em I won't have time to bisect and recompile every version. I'm trying my best to help and contribute, but I already have a lot of metrics and logs collection issues to work around for now 😅 I'll keep you posted when I find time to run some tests.
Thanks Gill! If you try out the commit before my change and it returns a desirable result, that helps rule out the cause. As for delaying stale markers, we are trying to emulate what Prometheus does, so I'll run end-to-end tests with Prometheus and compare results. I have a PR to fix staleness markers for replicated/multiple instances, but that isn't what's happening in your issue.
@dashpole thank you so much! I will try to test it today!
@dashpole Tested and approved!!!
@alolita @Aneurysm9 @gillg please help close this issue. |
Describe the bug
Hello, I just discovered a regression introduced in the 0.30.0 release.
Many of my metrics have "NaN" values since version 0.30.0! (I tested many versions one by one...)
The offending change is probably in this commit range: open-telemetry/opentelemetry-collector@e8aeafa...79816e7
Workflow :
node exporter => prometheus receiver => batch processor => prometheus exporter
Steps to reproduce
Use a standard node exporter, scrape it with the prometheus receiver, export via the prometheus exporter, and inspect the output.
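To spot the symptom in the exporter output, you can grep the scraped text for `NaN`. The snippet below uses a simulated two-line metrics payload (the metric names and values are made up for illustration); against a running collector you would pipe `curl -s http://localhost:<exporter-port>/metrics` into the same `grep` instead.

```shell
#!/bin/sh
# Simulated prometheus exporter output showing the symptom: one healthy
# series and one series whose value has been replaced by NaN.
metrics='node_cpu_seconds_total 123.4
node_memory_Active_bytes NaN'

# Count how many exposed series carry a NaN value.
echo "$metrics" | grep -c 'NaN'   # → 1
```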
What did you expect to see?
What did you see instead?
What version did you use?
Version: >= 0.30.0
What config did you use?
I won't detail the full pipeline, but there is nothing special about it.
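For context, a minimal collector config matching the pipeline described above might look like the sketch below. The scrape target, port, and interval are assumptions, not the reporter's actual settings:

```yaml
# Sketch only: endpoints and intervals are illustrative, not from the report.
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: node
          scrape_interval: 15s
          static_configs:
            - targets: ["localhost:9100"]  # standard node exporter port

processors:
  batch:

exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      processors: [batch]
      exporters: [prometheus]
```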
Environment
OS: Linux, docker image