Compactor retention removing all streams when retention_stream is set #4409
Comments
/cc @cyriltovena
I hit a similar issue, but when I check the chunk folder the data is still there; queries just return no output, even for the last 5 minutes of data. However, it works again after restarting Loki.
Hi! This issue has been automatically marked as stale because it has not had any recent activity. We use a stalebot among other tools to help manage the state of issues in this project. Stalebots are also emotionless and cruel and can close issues which are still very relevant. If this issue is important to you, please add a comment to keep it open. More importantly, please add a thumbs-up to the original issue entry.
We are doing our best to respond, organize, and prioritize all issues, but it can be a challenging task.
I hit a similar issue, any updates? Note: I originally started with grafana/loki:2.4.0 and could not get retention through the compactor to work at all.
Update for my fix: config | new default | old default
Hi, thanks for your feedback. My question: if I change the retention time in the Loki YAML config to a non-default value in a Kubernetes cluster (Helm chart, Loki 2.4), does Loki apply it immediately or not? How do I reload the retention config in the Loki config file?
PR #4573 has fixed the issue of default stream retention not being applied properly. |
Describe the bug
When retention_stream is set in the limits config, all streams are truncated, even when they do not match the selector.
Similar issue was reported in #3881
If I leave retention_stream in the config, logs are only stored for 24h. Without retention_stream, logs last for several days: left over the weekend, I had logs for all four days, Thursday through Tuesday.
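For reference, a retention_stream override in the limits config looks roughly like this (the selector and periods below are illustrative, not the reporter's actual values):

```yaml
limits_config:
  retention_period: 744h            # global fallback retention (31 days)
  retention_stream:
  - selector: '{namespace="dev"}'   # only streams matching this selector
    priority: 1
    period: 24h                     # should be truncated at 24h
```

The bug described here is that the 24h period is applied to every stream, not just those matching the selector.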
For testing I run the loki query
{app=~".+"}
across a time period until I find the earliest log (a query within the last 24 hours always returns results for several apps). So far I have tested:
app="app-that-does-not-exist"
compactor logs after restart seemed to indicate that it was only using the smallest period
To Reproduce
Steps to reproduce the behavior:
Note: I originally started with loki
grafana/loki:2.3.0
and could not get retention through the compactor to work at all (running the same config), so I switched to grafana/loki:main-fbfc8ab
install loki with tanka as per https://grafana.com/docs/loki/latest/installation/tanka/
customizations:
Expected behavior
I expected retention to work on a per-stream basis and fall back to retention_period when the selector does not match the stream, as described in the docs: https://grafana.com/docs/loki/latest/operations/storage/retention/
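Per those docs, retention is driven by the compactor, and deletion must be explicitly enabled there. A minimal sketch of the compactor section (paths and delays here are illustrative):

```yaml
compactor:
  working_directory: /loki/compactor
  shared_store: s3
  retention_enabled: true        # without this the compactor only compacts, it never deletes
  retention_delete_delay: 2h     # grace period before deleted chunks are removed
```

If retention_enabled is false (the default), neither retention_period nor retention_stream has any effect, which is one thing worth ruling out when retention appears not to work at all.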
Environment:
Screenshots, Promtail config, or terminal output
loki config
full compactor logs