
Support per-request-level shouldBypassCache configuration in the client Reader implementation #2879

Open
Yesphet opened this issue Jul 12, 2024 · 8 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@Yesphet
Contributor

Yesphet commented Jul 12, 2024

When getting an Object through the default client, whether the read is served from the cache is determined by the client.Options passed when the Client is created. In our scenario, for the same Object, we usually want to read from the local cache, but in a few cases with high consistency requirements we need to read directly from the apiserver. At present we create two clients, one with a cache and one without. Could the same client support this, for example by putting a value in the context or adding a GetOption/ListOption that decides whether to bypass the cache for a given request?

I'm not sure if this is a general requirement. If it is, I can implement it.
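To make the idea concrete, here is a rough sketch of what such a per-request option could look like. This is purely hypothetical: `BypassCache` is not part of controller-runtime, and a real implementation would also need the cache-backed reader to actually honour the option.

```go
// Hypothetical sketch only: controller-runtime does not ship a BypassCache
// option today. This is roughly what a per-request GetOption could look like.
package cacheopt

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// BypassCache is a hypothetical GetOption that would tell a cache-backed
// client to read this one object straight from the apiserver.
type BypassCache struct{}

func (BypassCache) ApplyToGet(opts *client.GetOptions) {
	// A real implementation would need a new field on client.GetOptions
	// (or an equivalent signal) that the delegating reader inspects.
}

// GetPodFresh shows how the proposed option would be used at a call site.
func GetPodFresh(ctx context.Context, c client.Client, key client.ObjectKey) (*corev1.Pod, error) {
	pod := &corev1.Pod{}
	err := c.Get(ctx, key, pod, BypassCache{})
	return pod, err
}
```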

@sbueringer
Member

There's also the APIReader which never uses the cache.

Not sure about adding an option. The problem is that it wouldn't make sense for a lot of implementations of the Client interface.

@Yesphet
Contributor Author

Yesphet commented Jul 15, 2024

There's also the APIReader which never uses the cache.

Not sure about adding an option. The problem is that it wouldn't make sense for a lot of implementations of the Client interface.

cluster.GetClient() and cluster.GetAPIReader() are essentially two clients, one with a cache and one without. That is also our current implementation, but I don't find it very elegant because we have to build two clients (we only adopted the controller-runtime client package as a replacement for client-go), so I'd like to discuss whether there is a solution that needs only one client.
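For reference, a minimal sketch of this two-reader workaround, built only from getters that already exist on the controller-runtime manager/cluster (GetClient and GetAPIReader). `FlexibleReader` and the `fresh` flag are illustrative names, not library API.

```go
// Sketch of the two-reader workaround described above. The cached reader
// serves from the informer cache; the uncached one always hits the apiserver.
package flexreader

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/manager"
)

type FlexibleReader struct {
	cached   client.Reader // mgr.GetClient(): served from the informer cache
	uncached client.Reader // mgr.GetAPIReader(): always talks to the apiserver
}

func NewFlexibleReader(mgr manager.Manager) *FlexibleReader {
	return &FlexibleReader{
		cached:   mgr.GetClient(),
		uncached: mgr.GetAPIReader(),
	}
}

// Get reads from the cache by default and goes straight to the apiserver
// when fresh is true.
func (r *FlexibleReader) Get(ctx context.Context, key client.ObjectKey, obj client.Object, fresh bool) error {
	if fresh {
		return r.uncached.Get(ctx, key, obj)
	}
	return r.cached.Get(ctx, key, obj)
}
```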

@alvaroaleman
Member

What is your use-case for wanting this?

@Yesphet
Contributor Author

Yesphet commented Jul 18, 2024

What is your use-case for wanting this?

@alvaroaleman Sorry for the late reply. We deploy our business workloads as a set of pods, with one pod acting as the leader and the others as followers. We also have a component responsible for the high availability of the business within this set of pods. That component uses controller-runtime/pkg/client to access the Kubernetes API server and labels/annotates the pods to record information such as their role.
Most of the time this component can get pod information through the client cache. However, when the leader is not working, the high-availability component needs to access the apiserver directly to ensure the consistency of the labels and annotations on the pods. Currently our solution is to create two clients, one configured with a cache and one without. This approach feels inelegant and results in a lot of redundant client configuration code. Therefore, I would like to discuss whether a per-request cache toggle could be supported with a single client, like:

cli.Get(WithDisableCache(ctx), ...)
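For illustration, one way the proposed pattern could be approximated today is a thin delegating client that checks a context flag and routes the read to the uncached APIReader. `WithDisableCache`, `BypassingClient`, and the context key below are hypothetical names, not part of controller-runtime.

```go
// Hypothetical sketch: the proposed WithDisableCache(ctx) pattern implemented
// as a thin delegating client over the existing cached client and APIReader.
package ctxbypass

import (
	"context"

	"sigs.k8s.io/controller-runtime/pkg/client"
)

type bypassKey struct{}

// WithDisableCache marks a context so that reads made with it skip the cache.
func WithDisableCache(ctx context.Context) context.Context {
	return context.WithValue(ctx, bypassKey{}, true)
}

// BypassingClient routes Get calls to the uncached reader when the context
// carries the bypass flag; everything else falls through to the cached client.
type BypassingClient struct {
	client.Client               // cached client, e.g. mgr.GetClient()
	APIReader     client.Reader // uncached reader, e.g. mgr.GetAPIReader()
}

func (c *BypassingClient) Get(ctx context.Context, key client.ObjectKey, obj client.Object, opts ...client.GetOption) error {
	if bypass, _ := ctx.Value(bypassKey{}).(bool); bypass {
		return c.APIReader.Get(ctx, key, obj, opts...)
	}
	return c.Client.Get(ctx, key, obj, opts...)
}
```

Wired up once as `&BypassingClient{Client: mgr.GetClient(), APIReader: mgr.GetAPIReader()}`, the call shown above would then bypass the cache for that single request.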

@alvaroaleman
Member

However, when the leader is not working, the high availability component needs to access the api-server directly to ensure the consistency of the labels and annotations on the pods

Can you elaborate on why you cannot use the cache there?

@Yesphet
Contributor Author

Yesphet commented Jul 30, 2024

Can you elaborate on why you cannot use the cache there?

Mainly because in some scenarios the data in the cache can be stale or inconsistent, for example when the apiserver is overloaded or the network is jittery. In those cases we would still read outdated data from the cache, which lets our processes keep executing with incorrect data. We would prefer to get an error directly if we cannot read the latest pod annotations.
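As a small illustration of this fail-fast behaviour, a read through the uncached APIReader with a short timeout surfaces apiserver overload or network jitter as an error instead of a silently stale object. The 2-second timeout and the annotation key are arbitrary illustrative values.

```go
// Sketch: fail fast on a live read instead of acting on stale cached data.
package freshread

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func LatestPodRole(ctx context.Context, apiReader client.Reader, key client.ObjectKey) (string, error) {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()

	pod := &corev1.Pod{}
	if err := apiReader.Get(ctx, key, pod); err != nil {
		// Fail fast rather than continuing with possibly outdated annotations.
		return "", err
	}
	return pod.Annotations["example.com/role"], nil
}
```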

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label (Denotes an issue or PR has remained open with no activity and has become stale.) on Oct 28, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed the lifecycle/stale label on Nov 27, 2024