Promtail unbounded memory consumption when Loki is not available (or when there are errors sending logs) #904

Closed
slim-bean opened this issue Aug 16, 2019 · 4 comments
Labels
stale A stale issue or PR that will automatically be closed.

Comments

@slim-bean
Collaborator

Describe the bug

Log reading and log sending are currently two discrete processes, which results in log lines being held in memory during retries to Loki while new lines continue to be read. This can lead to a large volume of memory being consumed if promtail is reading a lot of log lines while it attempts to resend failed batches. The growth is currently unbounded because there is no feedback between the reading and the sending.
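
For illustration, here is a minimal Go sketch of the pattern described above (the names and structure are hypothetical, not promtail's actual code): a reader keeps producing lines while the sender retries against an unreachable Loki, so the in-memory batch grows without limit.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

type entry struct {
	ts   time.Time
	line string
}

// sendBatch stands in for the HTTP push to Loki; here it always fails,
// as if Loki were unreachable.
func sendBatch(batch []entry) error {
	return errors.New("connection refused")
}

func main() {
	lines := make(chan entry, 1024) // hand-off from the reading side

	// Reading side: tails files and produces lines as fast as they appear.
	go func() {
		for i := 0; ; i++ {
			lines <- entry{ts: time.Now(), line: fmt.Sprintf("log line %d", i)}
		}
	}()

	// Sending side: accumulates entries and retries on failure. Nothing limits
	// len(batch), so every failed send lets it grow a little more.
	var batch []entry
	for {
		select {
		case e := <-lines:
			batch = append(batch, e) // unbounded growth while Loki is down
		default:
			if len(batch) == 0 {
				time.Sleep(10 * time.Millisecond)
				continue
			}
			if err := sendBatch(batch); err != nil {
				time.Sleep(time.Second) // back off, but keep everything in memory
				continue
			}
			batch = batch[:0]
		}
	}
}
```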

To Reproduce

Start promtail consuming a high volume of logs, with no Loki instance available, and watch the memory grow.

Expected behavior
Ideally, we don't want unbounded memory growth when Loki goes down or when there are errors sending logs.
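
One possible direction, sketched under assumptions (this is an illustration, not a specific fix proposed in this issue): bound the hand-off between the reading and sending stages so the reader blocks, providing back-pressure, instead of letting buffered entries pile up.

```go
package main

import (
	"errors"
	"time"
)

type entry struct {
	ts   time.Time
	line string
}

func sendBatch(batch []entry) error {
	return errors.New("connection refused") // pretend Loki is still down
}

func main() {
	const (
		maxBuffered = 10000 // hard cap on entries waiting to be sent
		maxBatch    = 1000  // entries per push
	)
	lines := make(chan entry, maxBuffered)

	// Reading side: the blocking channel send is the missing feedback signal;
	// when the buffer is full, the reader simply stops advancing.
	go func() {
		for {
			lines <- entry{ts: time.Now(), line: "some log line"}
		}
	}()

	// Sending side: take at most maxBatch entries, retry with backoff.
	// Memory stays bounded at roughly maxBuffered+maxBatch entries.
	batch := make([]entry, 0, maxBatch)
	for {
		batch = append(batch[:0], <-lines) // block until at least one entry
	fill:
		for len(batch) < maxBatch {
			select {
			case e := <-lines:
				batch = append(batch, e)
			default:
				break fill
			}
		}
		for sendBatch(batch) != nil {
			time.Sleep(time.Second) // retry; the reader pauses once the buffer fills
		}
	}
}
```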

@slim-bean
Collaborator Author

slim-bean commented Aug 16, 2019

Looking at this a little closer, I think my assumptions about how this is happening are wrong. It seems that the reading->sending path might be synchronous, and the memory growth is a function of many log files being read simultaneously.

The bigger concern is that with multiple clients configured, if one Loki is not available, it will hinder logs being sent to the others.
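
To make the multi-client concern concrete, here is a hedged sketch (illustrative names, not promtail's actual client code): fanning entries out to each configured client with a blocking send means one unreachable Loki eventually stalls delivery to the healthy one as well.

```go
package main

import "time"

type client struct {
	name string
	ch   chan string
}

// fanOut delivers every entry to every configured client in turn. The send is
// blocking, so a single backed-up client stops the whole loop.
func fanOut(entries <-chan string, clients []client) {
	for e := range entries {
		for _, c := range clients {
			c.ch <- e // blocks once the slow client's buffer is full
		}
	}
}

func main() {
	entries := make(chan string)
	healthy := client{name: "loki-a", ch: make(chan string, 100)}
	unreachable := client{name: "loki-b", ch: make(chan string, 100)}

	// loki-a drains promptly; loki-b never drains (stuck retrying its sends).
	go func() {
		for range healthy.ch {
		}
	}()

	go fanOut(entries, []client{healthy, unreachable})

	// After ~100 entries, loki-b's buffer is full, fanOut blocks on it, and
	// loki-a stops receiving new entries even though it is perfectly healthy.
	for {
		entries <- "a log line"
		time.Sleep(time.Millisecond)
	}
}
```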

@pracucci
Contributor

Have you been able to reproduce it? If yes, could you please share more details? I was trying to reproduce it without much success.

@slim-bean
Collaborator Author

I have not been able to reproduce it, no. I wrote an app to generate a lot of log files (1000) and populate them locally; however, memory consumption seems to peak rather than climb unbounded.
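
For reference, a rough sketch of the kind of generator described (hypothetical, not the actual tool used; the directory path is made up):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	const numFiles = 1000
	dir := "/tmp/promtail-load" // hypothetical scrape directory

	if err := os.MkdirAll(dir, 0755); err != nil {
		panic(err)
	}

	// Create numFiles log files so promtail has a large number of active tails.
	files := make([]*os.File, 0, numFiles)
	for i := 0; i < numFiles; i++ {
		f, err := os.OpenFile(filepath.Join(dir, fmt.Sprintf("app-%04d.log", i)),
			os.O_CREATE|os.O_APPEND|os.O_WRONLY, 0644)
		if err != nil {
			panic(err)
		}
		files = append(files, f)
	}

	// Append a line to every file on each tick to keep all tails busy.
	for tick := 0; ; tick++ {
		for i, f := range files {
			fmt.Fprintf(f, "%s file=%d tick=%d some log payload\n",
				time.Now().Format(time.RFC3339Nano), i, tick)
		}
		time.Sleep(100 * time.Millisecond)
	}
}
```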

The only difference between what I was testing and what was reported is that I was running promtail pointed at a port where no Loki exists, while the original report was sending logs to Loki and getting back 400 errors for entries whose timestamps were far too old.

One thing I have noticed: even with debug logging off, the go-kit logger still builds the log statements but doesn't output them. There is a massive amount of allocation associated with this in the logentry pipeline. I'm going to work on an improvement for this.
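
As an illustration of that allocation pattern (assuming go-kit's log/level packages; this is not a snippet from promtail itself): with the filter set to info, the debug call below prints nothing, but the variadic key/value slice and boxed arguments are still built on every call in the hot path.

```go
package main

import (
	"os"

	"github.com/go-kit/kit/log"
	"github.com/go-kit/kit/log/level"
)

func main() {
	logger := log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr))
	logger = level.NewFilter(logger, level.AllowInfo()) // debug is filtered out

	entry := "a long log line being processed by a pipeline stage"

	// Filtered, so nothing is printed, but the []interface{} of keyvals and the
	// boxed arguments are still allocated for every entry that passes through.
	level.Debug(logger).Log("msg", "processing entry", "entry", entry, "len", len(entry))
}
```

A common mitigation is to avoid building the arguments at all unless the debug level is actually enabled.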

@stale

stale bot commented Sep 18, 2019

This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Sep 18, 2019
stale bot closed this as completed Sep 25, 2019