Loki crashes when the storage is full #2314
Comments
We don’t support that yet, but it’s in our plan.
Similar to #162 - time-based and volume-based retention
This issue has been automatically marked as stale because it has not had any activity in the past 30 days. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
I have the same issue. Is there a workaround for this? I am using the Helm chart loki-stack with the default settings.
No. You can deploy a sidecar container which monitors Loki's disk and cleans some of the chunks when usage goes above 90%.
Hm okay. Thanks for the quick answer. It seems it's not the memory; the problem is the inodes.
You can get the inode usage with a command like the one sketched below.
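A hedged sketch (the exact command is not preserved in the thread; the mount path /data/loki is an assumption, and the column layout of `df` can differ slightly between images):

```sh
# Show inode usage for the filesystem backing Loki's data directory.
# /data/loki is an assumed mount path; replace it with your PVC mount point.
df -i /data/loki

# Print only the inode usage percentage (IUse%, the fifth column of GNU `df -i`).
df -i /data/loki | awk 'NR==2 {print $5}'
```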
@Kristian-ZH could you please share the cleanup script? Thanks in advance.
I use this. I created a sidecar container which calls this script with cron.
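The script itself is not reproduced in the thread. Purely as an illustration, a cron-driven cleanup script along these lines could do the job; the chunk directory, threshold, and batch size below are assumptions, and `find -printf` requires GNU find. Note that deleting chunk files without touching the index can leave dangling references, which is exactly the corruption a later comment in this thread describes.

```sh
#!/bin/sh
# Hypothetical cleanup sketch, not the original script from this comment.
# Removes the oldest chunk files once disk or inode usage crosses a threshold.
set -eu

DATA_DIR="/data/loki/chunks"   # assumed chunk directory
THRESHOLD=90                   # start cleaning once usage reaches 90%
BATCH=500                      # number of oldest files to remove per run

# Use% column of `df -P` for the filesystem holding the chunks.
disk_usage=$(df -P "$DATA_DIR" | awk 'NR==2 {gsub("%",""); print $5}')
# Inode usage matters too: many small chunk files can exhaust inodes first.
inode_usage=$(df -P -i "$DATA_DIR" | awk 'NR==2 {gsub("%",""); print $5}')

if [ "$disk_usage" -ge "$THRESHOLD" ] || [ "$inode_usage" -ge "$THRESHOLD" ]; then
  echo "usage: disk=${disk_usage}% inodes=${inode_usage}% - removing ${BATCH} oldest chunks"
  # List regular files oldest-first by modification time and delete a batch.
  find "$DATA_DIR" -type f -printf '%T@ %p\n' \
    | sort -n \
    | head -n "$BATCH" \
    | awk '{print $2}' \
    | xargs -r rm -f
fi
```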
We have a Golang application which is containerised and deployed in the cluster.
Thanks a lot for your script. Can you please tell me how to make a sidecar container? I tried with https://gist.github.com/AntonFriberg/692eb1a95d61aa001dbb4ab5ce00d291, but for some reason the task is not completed.
delete_files_if_low_memory is the script from above. This is the Dockerfile for the image.
crontab
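Neither the Dockerfile nor the crontab from this comment survives in the thread. Purely as a hypothetical illustration (the schedule, the script path, and the use of BusyBox `crond` are assumptions, not the original setup), the script could be registered with cron inside the sidecar image roughly like this:

```sh
# Install a cron entry that runs the cleanup script every 10 minutes
# (schedule and path are assumptions, not the original setup).
echo "*/10 * * * * /usr/local/bin/delete_files_if_low_memory" | crontab -

# Run cron in the foreground as the container's main process
# (BusyBox/Alpine `crond`; on Debian-based images the binary is `cron`).
exec crond -f
```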
In the values.yaml for loki I have this part:
Thanks for sharing this script! Just what we needed. Just one small modification we had to make was to use
The script itself works great; however, this container configuration didn't work for me. The problem was that I have Loki's PSP enabled (enforcing non-root execution), which caused crond to fail (it must elevate to run the job). Thank you for sharing this solution!
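If a PSP forces the sidecar to run as non-root and crond therefore refuses to start, one hedged alternative is to drop cron entirely and run the cleanup script from a plain shell loop used as the container's entrypoint (the interval and script path below are assumptions):

```sh
#!/bin/sh
# Cron-free sidecar entrypoint: run the cleanup script periodically in a loop.
# No privilege escalation is needed, so this also works under a non-root PSP.
INTERVAL=600   # seconds between cleanup runs (assumed value)

while true; do
  /usr/local/bin/delete_files_if_low_memory || echo "cleanup run failed" >&2
  sleep "$INTERVAL"
done
```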
Hi mates! Working on the Loki Charmed Operator, we noticed this ugly bug. 😱 We thought that maybe we would tackle this issue with some external solution; however, wouldn't it be better to implement functionality in Loki itself that allows us to set a storage limit, like in Prometheus? The purpose of the following document is to draft a solution to this. If we come up with a workable solution, we could contribute it to the Loki project. Jen Villa (Grafana Product Manager) told us:
Comments are welcome!! https://docs.google.com/document/d/15V42tcDlZR46hLq8o-2MsN1BRWhiRGwF0r8rgkV2Mwk/edit @Kristian-ZH @noamApps @aseychell @kaflake @NawiLan @cyriltovena
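For comparison, the Prometheus behaviour the proposal points to is its size-based retention, which can be combined with time-based retention via command-line flags, roughly as follows (the values are illustrative only):

```sh
# Prometheus caps its local TSDB by age and by size; the proposal asks for an
# equivalent knob in Loki so the store never outgrows its volume.
prometheus \
  --storage.tsdb.retention.time=14d \
  --storage.tsdb.retention.size=10GB
```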
I had this same issue and I applied the sidecar container workaround proposed here: #2314 (comment)
It looks like somehow the index files got corrupted. Not sure how to work around this without losing all of my logs. The only alternative I can think of is deleting the PV and attaching a new one to the pod, but this would mean losing all of my logs. This is my Loki configuration:
After deleting chunks, how do I recreate the index? #4755
Describe the bug
We have a PVC with 1GB storage mounted to Loki's data folder.
The retention period of Loki is 14 days.
We filled the storage with logs within a week, and after that Loki started constantly emitting this error and could not accept more logs:
Also, we are not able to run queries from Grafana because it says
To Reproduce
Steps to reproduce the behaviour:
Expected behaviour
I expect Loki to trigger deletion of the oldest chunks and index entries (as the Elasticsearch curator does) when its storage is full and Loki is unable to accept more logs. Otherwise, once maximum storage capacity is hit, Loki dies...