Describe the bug
When Promtail is configured to scrape logs from journald, it is supposed to remember its position in the logs in the positions file. When Promtail is restarted, it cannot read back the saved position from the journal and logs the following error:
level=error ts=2020-05-21T05:53:29.024009194Z caller=journaltarget.go:219 msg="received error reading saved journal position" err="failed to get realtime timestamp: cannot assign requested address"
Note: on Ubuntu, the error is not `cannot assign requested address` but errno `99` (EADDRNOTAVAIL), which is the same failure reported as a numeric code rather than a message.
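For context, this happens with an ordinary journald scrape configuration; a minimal sketch follows (port, paths, and labels are illustrative, only the positions file and `journal.max_age` matter for this report):

```yaml
# Minimal Promtail config sketch for scraping journald.
# Values are illustrative; only the positions file and journal.max_age
# are relevant to this report.
server:
  http_listen_port: 9080

positions:
  filename: /var/lib/promtail/positions.yaml   # where the journal cursor is persisted

clients:
  - url: http://172.30.0.101:6902/loki/api/v1/push

scrape_configs:
  - job_name: journal
    journal:
      max_age: 12h                  # how far back Promtail reads when it cannot restore a position
      labels:
        job: systemd-journal
    relabel_configs:
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
```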
The side effect is that Promtail then re-reads logs from the journal as far back as `journal.max_age`, which can cause trouble such as:
a burst of out-of-order rejections:
level=error ts=2020-05-21T05:58:14.901208619Z caller=client.go:247 component=client host=172.30.0.101:6902 msg="final error sending batch" status=400 error="server returned HTTP status 400 Bad Request (400): entry with timestamp 2020-05-21 04:40:42.937458 +0000 UTC ignored, reason: 'entry out of order' for stream: {host=\"node1.novalocal\", job=\"systemd-journal\", log_type=\"access\"},"
and Loki can then complain about too many requests:
level=warn ts=2020-05-21T06:15:39.861839553Z caller=client.go:242 component=client host=172.30.0.101:6902 msg="error sending batch, will retry" status=429 error="server returned HTTP status 429 Too Many Requests (429): Ingestion rate limit exceeded (limit: 4194304 bytes/sec) while attempting to ingest '333' lines totaling '102198' bytes, reduce log volume or contact your Loki administrator to see if the limit can be increased"
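The 429s are only a consequence of the replay: the re-sent backlog exceeds the default per-tenant ingestion limit (4194304 bytes/sec, i.e. 4 MB/s). If temporarily raising it is acceptable, it lives in Loki's `limits_config`; a sketch, assuming the MB-based field names of recent Loki releases (verify against the docs for your version):

```yaml
# Loki limits_config sketch. Field names assume a recent Loki release;
# check the documentation for the exact version in use.
limits_config:
  ingestion_rate_mb: 8          # default is 4 MB/s, matching the 4194304 bytes/sec in the log above
  ingestion_burst_size_mb: 16   # allow short bursts above the sustained rate
```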
To Reproduce
Steps to reproduce the behavior:
Start Loki 1.5.0
Start Promtail 1.5.0 with a journald scrape config
Restart Promtail
Check Promtail's logs for the error above
Expected behavior
Promtail should be able to detect where it stopped fetching logs and resume from there.
Instead, it re-sends logs that have already been pushed.
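For context, the saved position in question is the journal cursor Promtail writes to its positions file; an illustrative (shortened) entry might look like:

```yaml
# Illustrative positions.yaml entry for a journal target.
# The key name and cursor value are examples, not taken from an affected host.
positions:
  journal-journal: "s=739ad463348b4ceca5a9e69c95a3c93f;i=4ece7;b=6c7c6013a26343b29e964691ff25d04c;t=4c508a72423d9"
```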
Environment:
Infrastructure: bare metal, CentOS 7 or Ubuntu 18.04
Deployment tool: release binary from GitHub or local compilation