Describe the bug
Reading and sending logs are currently two discrete processes, which means log lines are held in memory during retries on send to Loki while new lines continue to be read. This can consume a large amount of memory if many log lines are being read while promtail attempts to resend logs. The growth is currently unbounded because there is no feedback between reading and sending.
To Reproduce
Start promtail consuming a high volume of logs without a reachable Loki instance and watch the memory grow.
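For reference, a config along these lines should reproduce the setup; the port, push path, and log paths are assumptions for illustration, not the reporter's actual configuration:

```yaml
server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  # Point at a port where no Loki is listening so every push fails and retries.
  - url: http://127.0.0.1:3100/api/prom/push

scrape_configs:
  - job_name: noisy-app
    static_configs:
      - targets: [localhost]
        labels:
          job: noisy-app
          __path__: /var/log/noisy/*.log
```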
Expected behavior
Ideally we don't want unbounded memory growth when Loki goes down or there are errors sending logs.
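As a minimal sketch of the missing feedback, assuming a simple bounded channel between the reader and the sender (this is not promtail's actual pipeline, just an illustration): once the buffer is full the reader blocks instead of accumulating lines while the sender retries.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Bounded buffer between reader and sender: its capacity is the maximum
	// number of in-flight lines, so memory stays flat while sends fail.
	lines := make(chan string, 1024)

	// Reader: blocks on send once the channel is full instead of buffering
	// lines without limit.
	go func() {
		for i := 0; ; i++ {
			lines <- fmt.Sprintf("log line %d", i)
		}
	}()

	// Sender: with "Loki" down, it retries the same line indefinitely while
	// the reader stays blocked behind the full channel.
	for line := range lines {
		for {
			if err := push(line); err == nil {
				break
			}
			time.Sleep(500 * time.Millisecond) // backoff before retrying
		}
	}
}

// push stands in for the HTTP push to Loki; here it always fails to mimic an
// unreachable instance.
func push(line string) error {
	return fmt.Errorf("loki unreachable, cannot send %q", line)
}
```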
Looking at this a little closer, I think my assumptions about how this is happening are wrong. It seems that the reading->sending path might be synchronous and the memory growth is a function of many log files being read simultaneously.
The bigger concern is that with multiple clients configured, if one Loki is unavailable it will hinder logs being sent to the others.
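A rough sketch of one way to decouple clients, assuming each configured client gets its own buffered channel and sender goroutine (hypothetical names, not promtail's current client code): a stalled client then only backs up its own buffer instead of the shared read path.

```go
package main

import (
	"fmt"
	"time"
)

type client struct {
	name string
	ch   chan string
}

func main() {
	clients := []*client{
		{name: "loki-a", ch: make(chan string, 1024)},
		{name: "loki-b", ch: make(chan string, 1024)},
	}

	// One independent sender per client: a retry loop in one goroutine only
	// backs up that client's channel, not the shared read path.
	for _, c := range clients {
		go func(c *client) {
			for line := range c.ch {
				fmt.Printf("%s <- %s\n", c.name, line)
			}
		}(c)
	}

	// The reader offers each line to every client without blocking; if one
	// client's buffer is full, the line is dropped for that client only.
	for i := 0; i < 10; i++ {
		line := fmt.Sprintf("log line %d", i)
		for _, c := range clients {
			select {
			case c.ch <- line:
			default:
				fmt.Printf("%s buffer full, dropping line\n", c.name)
			}
		}
	}

	// Crude wait so the senders can drain before the example exits.
	time.Sleep(time.Second)
}
```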
I have not been able to reproduce this. I wrote an app to generate a lot of log files (1000) and populate them locally; however, memory consumption seems to peak rather than climb unbounded.
The only difference between what I was testing and what was reported is that I was running promtail pointed at a port where no Loki exists, while the original report was sending logs to Loki and getting back 400 errors for entries with timestamps that were way too old.
One thing I have noticed: even with debug logging off, the go-kit logger still builds the log statements, it just doesn't output them. There is a massive amount of allocations associated with this in the logentry pipeline. I'm going to work on an improvement for this.
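For illustration, a minimal sketch of that allocation pattern with go-kit's level filter (the mitigation shown is just one possible approach, not the planned fix): the filtered logger still receives and discards the keyvals, so the arguments are built on every call.

```go
package main

import (
	"os"

	"github.com/go-kit/kit/log"
	"github.com/go-kit/kit/log/level"
)

func main() {
	// Base logger writing to stderr, filtered so debug lines are dropped.
	logger := log.NewLogfmtLogger(os.Stderr)
	logger = level.NewFilter(logger, level.AllowInfo())

	// Even though the filter discards this line, the keyvals slice is still
	// built and passed to Log, so the allocations happen regardless.
	level.Debug(logger).Log("msg", "parsed entry", "labels", expensiveLabels())

	// One possible mitigation (hypothetical flag): gate the call so the
	// arguments are never constructed when debug output is disabled.
	debugEnabled := false
	if debugEnabled {
		level.Debug(logger).Log("msg", "parsed entry", "labels", expensiveLabels())
	}
}

// expensiveLabels stands in for work such as formatting labels or stringifying
// an entry in the logentry pipeline.
func expensiveLabels() string {
	return `{job="app"}`
}
```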
This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
stalebot added the stale label Sep 18, 2019