[kube-prometheus-stack] Retention problems #4869
Comments
I am using a storage class that stores data on NFS, referenced via `storageSpec` (see the `kubectl get storageclasses.storage.k8s.io` output for the class).
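For reference, a minimal sketch of how such an NFS-backed `storageSpec` is usually wired up in values.yaml; the StorageClass name and size below are assumptions, not the exact values from this cluster:

```yaml
# Hypothetical sketch of prometheus.prometheusSpec.storageSpec in values.yaml;
# the StorageClass name "nfs-client" and the requested size are assumptions.
prometheus:
  prometheusSpec:
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: nfs-client    # NFS-backed class (assumed name)
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 50Gi
```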
@brancomrt I am also facing the same issue after setting the retention. Were you able to resolve this? TIA. My args are set on the StatefulSet and passed to Prometheus v2.54.1.
It was mentioned here in a comment that this was resolved in v2.21, but I am using v2.54 and the issue still persists.
I can't find the exact reference for this, but because the default block is compacted every 2 hours, you cannot set retention below that value without changing several other parameters as well. Regardless, this ticket is relevant for upstream Prometheus/prometheus-operator and not the chart repo.
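For illustration, a hedged sketch of the TSDB flags involved; the exact set of args the operator renders can differ, and the values below are assumptions based on upstream Prometheus defaults:

```yaml
# Illustrative args on the prometheus container (not the operator's exact output):
- --storage.tsdb.retention.time=10m      # requested retention
- --storage.tsdb.min-block-duration=2h   # head block is only cut into a persistent block roughly every 2h
- --storage.tsdb.max-block-duration=2h   # compaction target; blocks are deleted only once they
                                         # fall entirely outside the retention window
```

Because retention removes whole blocks, data younger than the block duration cannot be cleaned up yet, regardless of how short the retention is set.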
Thank you @DrFaust92.
This should be closed because it is not a bug but rather a limitation of the default Prometheus configuration.
With the following args configuration, I am seeing the same retention behaviour. But when I pass the values myself, I am not sure whether the chart is defaulting them or whether it's an upstream Prometheus issue.
@chanakya-svt a minimum block duration that is longer than the maximum block duration doesn't make sense.
@rouke-broersma I tried to look into the chart to see if it is passing any args that cause this, but I couldn't pinpoint anything. Can you confirm whether this is an upstream Prometheus issue? If so, I can create an issue in the Prometheus repo. Thank you.
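One way to narrow it down is to compare what the Prometheus custom resource asks for with what actually lands on the generated StatefulSet; a hedged sketch, assuming the release name and namespace used elsewhere in this thread (object names may differ in other setups):

```shell
# Flags the operator actually passed to the prometheus container
kubectl -n monitoring get statefulset prometheus-kube-prometheus-stack-prometheus \
  -o jsonpath='{.spec.template.spec.containers[?(@.name=="prometheus")].args}'

# Retention-related fields on the Prometheus custom resource rendered by the chart
kubectl -n monitoring get prometheus kube-prometheus-stack-prometheus -o yaml | grep -i retention
```

If the flag only appears on the StatefulSet and not on the Prometheus CR, the operator (not the chart) is injecting it.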
We have the same issue with 2.51.
Describe the bug (a clear and concise description of what the bug is)
I am experiencing issues with the configuration of retention policies in the kube-prometheus-stack when installed via Helm chart version 61.7.1.
I set the parameter prometheus.prometheusSpec.retention to a value of 10m or 1h for testing data rotation purposes, but the storage PVC keeps growing and does not clean up the data.
What's your helm version?
version.BuildInfo{Version:"v3.14.4", GitCommit:"81c902a123462fd4052bc5e9aa9c513c4c8fc142", GitTreeState:"clean", GoVersion:"go1.21.9"}
What's your kubectl version?
Client Version: v1.27.10 Kustomize Version: v5.0.1 Server Version: v1.28.12+rke2r1
Which chart?
kube-prometheus-stack
What's the chart version?
61.7.1
What happened?
I am experiencing issues with the configuration of retention policies in the kube-prometheus-stack when installed via Helm chart version 61.7.1.
I set the parameter prometheus.prometheusSpec.retention to a value of 10m or 1h for testing data rotation purposes, but the storage PVC keeps growing and does not clean up the data.
What you expected to happen?
Automatic cleanup of Prometheus storage data on the PVC
How to reproduce it?
Wait for the retention period defined in values.yaml and check the storage size of the PVC prometheus-kube-prometheus-stack-prometheus-db-prometheus-kube-prometheus-stack-prometheus-0 to see whether it decreases (commands sketched below).
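For the check itself, a hedged sketch of the commands used; the pod and PVC names come from this setup, while the /prometheus mount path is the chart default and is an assumption here:

```shell
# Capacity/usage as reported by Kubernetes for the PVC
kubectl -n monitoring get pvc prometheus-kube-prometheus-stack-prometheus-db-prometheus-kube-prometheus-stack-prometheus-0

# Actual TSDB size on disk inside the Prometheus pod
kubectl -n monitoring exec prometheus-kube-prometheus-stack-prometheus-0 -c prometheus -- du -sh /prometheus
```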
Enter the changed values of values.yaml?
prometheus.prometheusSpec.retention
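A minimal sketch of the changed section; only the retention value itself comes from this report, and the commented retentionSize line is an optional assumption:

```yaml
prometheus:
  prometheusSpec:
    retention: 1h        # also tested with 10m
    # retentionSize: 5GB # optional size-based cap (assumption; not set in this report)
```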
Enter the command that you execute that is failing/misfunctioning.
helm upgrade kube-prometheus-stack -n monitoring ./
(Run from a local chart directory with a local values.yaml.)
Anything else we need to know?
No response