Cache metadata more #800
From the original comment, you proposed 3 options:
As a user I would prefer to be in control of whether to use the cache or not. I guess options 1 and 2 wouldn't allow such control for a high-level request like …
The more I look at this, the more I think this needs to be done via (2) or (3), not (1). Saving for the next-next release.
Issue #800 was created as a follow-up idea to strengthen caching of metadata requests in the client. This pushes the mapped metadata caching logic deeper into the guts of issuing metadata requests, so that no caching is ever missed. The next commit will introduce a new API to request potentially cached metadata. For #800.
This can be used to reduce the number of metadata requests issued. As a follow-up, kadm should use this function almost everywhere. Closes #800.
As a follow-up to #800, we convert kadm to using cached metadata everywhere except for the actual Metadata function. With a quick local test using `rpk group describe`, this brings the prior 4 metadata requests down to 1.
grafana/mimir#8886 (comment)
@pracucci to reply to your latest message in the thread -- if we strengthen the caching within franz-go itself, then it addresses caching the metadata request you mention:
"""
Part 1 of the PR description refers to the Metadata request issued to discover the partitions:
franz-go/pkg/kadm/metadata.go
Lines 432 to 436 in 6b61d17
"""
My proposal covers caching that^^, at which point the only extra benefit your PR's caching would provide is 5 extra seconds (your PR caches for 15s, while elsewhere in Mimir you already use a 10s MetadataMinAge).
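For context, franz-go already exposes a client-level minimum metadata age via the `kgo.MetadataMinAge` option, which is the 10s knob referenced above. A hedged configuration sketch; the broker address is a placeholder:

```go
package main

import (
	"time"

	"github.com/twmb/franz-go/pkg/kgo"
)

func main() {
	// MetadataMinAge: the client will not refresh its internal
	// metadata more often than this. 10s matches the value the
	// comment says Mimir uses elsewhere.
	cl, err := kgo.NewClient(
		kgo.SeedBrokers("localhost:9092"), // placeholder broker
		kgo.MetadataMinAge(10*time.Second),
	)
	if err != nil {
		panic(err)
	}
	defer cl.Close()
}
```

With that option in place, a 15s application-side cache on top of the client only buys the difference between the two ages.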