prometheusremotewriteexporter logs spurious errors with WAL configured and no metrics to be sent #24399
Comments
I was able to reproduce and debug this locally with the shared config. The error is actually being caught as expected; it's just that after 12 retries the read fails and returns the `not found` error. After every read attempt there is a sleep call whose duration increases with each failure. This is where the ~4 second delay comes in: once all of the delays are exhausted, the loop starts over from the beginning, disregarding the default (or configured) `wal.TruncateFrequency`.

I agree this is annoying log spam, but I don't have enough expertise here to confidently say what to do about it. I think it's a good idea in most cases to log an error, since you'd theoretically expect to be getting data if you're running the collector. Four seconds without data in a regular use case seems like a problem, so I think we do the right thing by logging. I'll defer to code owners, but I think at most we may decrease the log level. Apologies for the delayed response!
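For illustration, a minimal sketch of the retry-with-backoff pattern described above; the function names, retry constant, and exact backoff formula are assumptions, not the exporter's actual implementation:

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// errNotFound stands in for the WAL library's "not found" sentinel
// error (an illustrative assumption, not the exporter's actual code).
var errNotFound = errors.New("not found")

const maxRetries = 12 // the retry count described in the comment above

// readWithBackoff retries a read, sleeping a little longer after each
// failure. Across 12 attempts the sleeps below add up to ~3.9 seconds,
// which matches the ~4 second delay described above; after that the
// final error escapes to the caller and gets logged.
func readWithBackoff(read func() ([]byte, error)) ([]byte, error) {
	var lastErr error
	for attempt := 1; attempt <= maxRetries; attempt++ {
		data, err := read()
		if err == nil {
			return data, nil
		}
		lastErr = err
		time.Sleep(time.Duration(attempt) * 50 * time.Millisecond)
	}
	return nil, lastErr
}

func main() {
	_, err := readWithBackoff(func() ([]byte, error) { return nil, errNotFound })
	fmt.Println(err) // prints "not found" after the retries are exhausted
}
```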
@crobert-1 However, if I configure the WAL, no metrics are sent at all, and this error log is all I can find. I think this is not merely a logging problem.
I can confirm the same problem. No metrics are sent and I see this "not found" log message.
@toughnoah @blockloop do you mean that, if you enable the WAL, no metrics are exported at all?
Might be related to #20875. @ArthurSens will try to pick that back up.
Component(s)
exporter/prometheusremotewrite
What happened?
Description
When the prometheusremotewriteexporter is configured to use a WAL directory and is receiving no metrics to export, it emits `error processing WAL entries` error logs even though there are, in fact, no entries to be processed. The exporter should fall back to using `fsnotify.NewWatcher()`, but it does not.
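For illustration, a minimal sketch of what such an fsnotify-based fallback could look like: block on filesystem events until the WAL directory changes, instead of retrying reads and logging errors. The function name and directory path are assumptions, not the exporter's actual code:

```go
package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

// waitForWALWrite blocks until something is written to or created in
// walDir, instead of polling with retries and logging an error when
// the WAL is empty.
func waitForWALWrite(walDir string) error {
	watcher, err := fsnotify.NewWatcher()
	if err != nil {
		return err
	}
	defer watcher.Close()

	if err := watcher.Add(walDir); err != nil {
		return err
	}
	for {
		select {
		case ev := <-watcher.Events:
			if ev.Op&(fsnotify.Write|fsnotify.Create) != 0 {
				return nil // new WAL data arrived; resume reading
			}
		case err := <-watcher.Errors:
			return err
		}
	}
}

func main() {
	if err := waitForWALWrite("./wal"); err != nil {
		log.Fatal(err)
	}
	log.Println("WAL changed; resume reading entries")
}
```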
Steps to Reproduce
Run otelcol-contrib with the included config.
Expected Result
no errors and no error logs
Actual Result
error logs emitted every 5s
Collector version
v0.81.0
Environment information
Environment
OS: osx, colima, docker
docker run --rm --mount type=bind,source=$(pwd),target=/etc/otelcol/ otel/opentelemetry-collector-contrib:latest --config /etc/otelcol/config.yaml
OpenTelemetry Collector configuration
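The attached configuration did not survive this page capture. A minimal illustrative config that enables the exporter's WAL (the receiver, endpoint, and directory are placeholder assumptions, not the reporter's actual values) could look like:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  prometheusremotewrite:
    # Placeholder remote-write endpoint.
    endpoint: http://prometheus.example:9090/api/v1/write
    wal:
      # Enabling a WAL directory is what triggers the reported errors.
      directory: /etc/otelcol/wal

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [prometheusremotewrite]
```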
Log output
Additional context
It appears that the error check in `*prweWAL.readPrompbFromWAL` for `wal.ErrNotFound` is not correctly catching it, as the `"error": "not found"` text is rising up to the `error processing WAL entries` error logging.

https://github.com/open-telemetry/opentelemetry-collector-contrib/blob/main/exporter/prometheusremotewriteexporter/wal.go#L358
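For context, a minimal sketch of one common way such a check can miss in Go: a wrapped error never compares equal with `==`, while `errors.Is` unwraps before comparing. This is illustrative only, with a stand-in sentinel, and not the exporter's actual code:

```go
package main

import (
	"errors"
	"fmt"
)

// errNotFound stands in for the WAL library's sentinel error
// (an illustrative assumption, not the exporter's actual code).
var errNotFound = errors.New("not found")

// readEntry wraps the sentinel with extra context, as callers often do.
func readEntry() error {
	return fmt.Errorf("read entry: %w", errNotFound)
}

func main() {
	err := readEntry()
	fmt.Println(err == errNotFound)          // false: a wrapped error never compares equal
	fmt.Println(errors.Is(err, errNotFound)) // true: errors.Is unwraps before comparing
}
```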