Allow setting retention per metric (e.g. rule aggregation) #903
There have been discussions about per-time-series retention in upstream Prometheus before. I think at least having a discussion with the team is worth it, just to see if there are any insights from back then.
What do you mean by this? As far as I can tell there is no design written up anywhere, but I may very well have missed it. Off the top of my head, this could be a configuration which combines a Prometheus-style label selector with a respective rule for which resolution to keep for how long. As a whole this is definitely not trivial, but I agree it is much needed.
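Purely to make that shape concrete, here is a hypothetical sketch of such a configuration. None of the field names below exist in Thanos; the selector syntax is standard Prometheus, everything else is an assumption about what a per-metric retention policy could look like.

```yaml
# Hypothetical per-metric retention config -- not an existing Thanos option.
# Each policy pairs a Prometheus-style series selector with how long each
# resolution is kept; the compactor would apply the first matching policy
# while rewriting blocks.
retention_policies:
  - selector: '{__name__=~"job:.*"}'   # recording-rule-style aggregations
    retention:
      raw: 30d
      5m: 180d
      1h: 2y
  - selector: '{__name__=~".+"}'       # catch-all for everything else
    retention:
      raw: 14d
```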
Sorry, no: a proposal has to be in place, that's what I meant. (: A relabel-like config makes sense, but essentially we are talking about a rewrite in the compactor for this, right?
Configuration is a technicality. I'm not entirely sure relabelling would work exactly, but something close to that, probably yes. I agree the compactor is the component that needs to take care of this by re-writing blocks.
With the federation system we have in place now, we have trained users who want metrics to be preserved (federated) to use a specific recording-rule-style name so that the metrics get federated. We would love to continue this practice with Thanos and only retain metrics that match a specific format for an extended period of time.
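For context, that practice is usually just the standard Prometheus federation setup: the long-term server scrapes /federate on the short-term one and only pulls series whose names follow the recording-rule convention. A sketch, where the `job:` prefix and the target address are illustrative assumptions rather than anything taken from this thread:

```yaml
# Sketch of the federation pattern described above; the "job:" naming
# convention and the target address are illustrative assumptions.
scrape_configs:
  - job_name: federate
    honor_labels: true
    metrics_path: /federate
    params:
      'match[]':
        - '{__name__=~"job:.*"}'   # only recording-rule-style series are pulled
    static_configs:
      - targets:
          - short-term-prometheus:9090
```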
Some offline discussions revealed that users still do this with Thanos, but using federation + Thanos on top.
I think we should aim to allow users to avoid this; I cannot see an immediate blocker for that, other than a more complex system and queries fetching data with some lag (rule evaluation lag + federated scrape).
Just saw these discussion threads on this potential feature requirement. We actually have the same requirement.
I would like to know your thoughts on this idea.
Just an FYI: not retaining raw data will lead to problems (you won't be able to zoom into your metrics anymore): https://thanos.io/components/compact.md/#downsampling-resolution-and-retention
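To make that warning concrete: the compactor's existing retention knobs are per resolution and global, not per metric, so whatever raw retention you pick applies to every series. A minimal sketch of those flags, shown here as container args with arbitrary example durations:

```yaml
# Global (not per-metric) retention flags of `thanos compact`, shown as
# container args; the durations are arbitrary examples.
args:
  - compact
  - --data-dir=/var/thanos/compact
  - --objstore.config-file=/etc/thanos/objstore.yaml
  - --retention.resolution-raw=30d   # raw data is what lets you zoom in
  - --retention.resolution-5m=180d
  - --retention.resolution-1h=0d     # 0d means keep forever
```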
@wogri I think you are using Grafana for visualization. Check out PR grafana/grafana#19121. At the moment this PR is not in a Grafana release, so I use master.
Thanks @Reamer!
@bwplotka any thoughts on this idea to support
Extra context can be found here: prometheus/prometheus#1381
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This is being worked on in Prometheus; once that is done, I would say we can implement it here with the same semantics and configuration.
The current plan is to first tackle #1598 and then try to implement this. Putting this issue up as a GSoC project as well.
The current decision is that Prometheus will not implement this, and the work has to be done externally first. It would be nice, though, if our work could be reused for vanilla Prometheus as well (as usual).
Hey @bwplotka, I would like to work on this and it would be very helpful if you could suggest some resources to get started.
This issue/PR has been automatically marked as stale because it has not had recent activity. Please comment on status otherwise the issue will be closed in a week. Thank you for your contributions.
We need that still (:
Hello 👋 Looks like there was no activity on this issue for the last two months.
I'm still interested in this feature.
I am still interested in this as well. Now with the new bucket rewrite tool and the
Hello 👋 Looks like there was no activity on this issue for the last two months.
Still needed.
Hello 👋 Looks like there was no activity on this issue for the last two months.
Still needed.
Hello 👋 Looks like there was no activity on this issue for the last two months.
Still needed.
Hello 👋 Looks like there was no activity on this issue for the last two months.
Still needed :(
Still needed.
Any news about this feature? @bwplotka @csmarchbanks
Chiming in here, as I'm also highly in need of this feature for a project.
Does the narrower "multi-tenant compactor" case, or more generally matching only on external labels, fall under this too? As I understand it, while similar, it would require less invasive changes, as each tenant has its own separate TSDB and blocks. So with that it would be possible to only take the external labels in the ThanosMeta and look up a deletion policy based on them, right?
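If retention were keyed only on external labels, as in the per-tenant case above, the policy lookup could indeed stay at the block level, and whole blocks could be dropped instead of rewritten. Purely as a hypothetical sketch (no such configuration exists in Thanos; all field and tenant names are made up):

```yaml
# Hypothetical per-tenant retention keyed on the external labels stored in
# each block's meta.json -- not an existing Thanos feature.
tenant_retention:
  - external_labels:
      tenant: team-a
    retention:
      raw: 14d
      5m: 90d
  - external_labels:
      tenant: team-b
    retention:
      raw: 60d
```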
Still needed. I guess the work that was to be started as part of GSoC has not been finished? Is there maybe some branch or fork in which work on this has begun?
In an ideal world, retention is not necessary at the downsampling/raw level, but at the aggregation level.
We need a way to bring that through in an LTS system like Thanos.
AC:
It comes down to the fact that you want per-metric retention, ideally in the compactor. This is a bit related to delete_series, as it might involve a block rewrite in edge cases... We need to design this.
Thoughts @improbable-ludwik @brancz @devnev @domgreen