Loki consumes lots of memory even on empty queries with a long time span #2900
Comments
This is because you're using this matcher.
Thanks for the quick reply. So it means that before checking other filters (…)?

The number of index entries is correlated with the number of streams, ergo the number of unique label sets. One way to alleviate the problem is to reduce the number of labels.
Indeed, add a single matcher like `namespace="dev"` or `cluster="us-central1"` and it should be better. I'm looking into improving the index allocations, but that's long term; I wouldn't wait on it.
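For illustration only (this example is not from the original thread), here is a minimal sketch of what adding such a matcher looks like with `logcli`, assuming a local Loki on the default port 3100 and a made-up `app` label and filter string:

```sh
# Broad selector: every stream carrying app="myapp" must be resolved in the index,
# which is what drives the memory use over long time ranges.
logcli --addr=http://localhost:3100 query \
  '{app="myapp"} |= "needle"' \
  --from="2020-11-01T00:00:00Z" --to="2020-12-01T00:00:00Z" --limit=100

# Narrowed selector: one extra label matcher shrinks the set of candidate streams,
# so far fewer index entries are loaded for the same time range.
logcli --addr=http://localhost:3100 query \
  '{app="myapp", namespace="dev"} |= "needle"' \
  --from="2020-11-01T00:00:00Z" --to="2020-12-01T00:00:00Z" --limit=100
```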
Thanks.
Describe the bug
The longer the query time span, the more memory Loki consumes. This happens even if the query returns no results.
To Reproduce
Run a query over a long time span. `service loki status` reported 11 GB of used memory, and the process was killed by the OOM killer. With a more reasonable interval of 1 month, the RAM usage reported by `service loki status` is ~7 GB.
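The exact query and time span from the original report are not preserved above; as a purely hypothetical sketch of the kind of request involved, a year-long `query_range` call against a local Loki (default port 3100, made-up selector) could look like this (the API also accepts nanosecond Unix epochs for `start`/`end`):

```sh
# Hypothetical long-span query: even when the line filter matches nothing,
# Loki still loads index entries for every stream selected by {app="myapp"}
# across the whole range, which is where the memory goes.
curl -G -s "http://localhost:3100/loki/api/v1/query_range" \
  --data-urlencode 'query={app="myapp"} |= "string-that-never-occurs"' \
  --data-urlencode 'start=2019-11-01T00:00:00Z' \
  --data-urlencode 'end=2020-11-01T00:00:00Z' \
  --data-urlencode 'limit=100'
```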
Expected behavior
Query succeeds, and the RAM usage is low.
Screenshots, Promtail config, or terminal output
Right before the `loki` process got killed, I collected metrics (metrics.txt) and ran `go tool pprof`.
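As a general sketch of how that kind of data can be collected, assuming Loki listens on the default HTTP port 3100 and its built-in pprof endpoints are reachable:

```sh
# Scrape Loki's Prometheus metrics (including Go memory stats) into a file.
curl -s http://localhost:3100/metrics > metrics.txt

# Fetch a heap profile from the built-in pprof handler and open it interactively;
# `top` inside pprof shows the largest in-use allocations.
go tool pprof http://localhost:3100/debug/pprof/heap
```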
Config: