
New Caching mechanism #4919

Closed
yuvalweber opened this issue Aug 29, 2023 · 4 comments
Labels: feature-request (new features that have not been committed to), needs-discussion, stale (marked as stale due to inactivity)

Comments

@yuvalweber (Contributor)

Proposal

I have an idea, but I want to hear your opinion on it.
Right now we have the ability to cache metric values until the next polling interval by caching them in the KEDA operator.
We can enable this with the useCachedMetrics field:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cache-scale-example
  namespace: default
spec:
  scaleTargetRef:
    kind: StatefulSet
    name: cache-workload
  minReplicaCount: 1
  maxReplicaCount: 3
  pollingInterval: 60
  triggers:
    - type: prometheus
      metadata:
        serverAddress: https://prometheus:9090
        query: sum(http_request_total)
        threshold: "1000"
        ignoreNullValues: "false"
      useCachedMetrics: true

This is fine in normal and small clusters, but in large clusters you may run multiple metrics-server pods in load-balancing mode (--enable-aggregator-routing=true), and then the load on the operator can be massive, because it has to respond to many requests coming from the different metrics servers.

I was wondering if we could give users the ability to use some kind of external cache. They could point KEDA at a Redis instance they already have in the cloud, or a Memcached instance deployed in the cluster, thereby reducing the number of requests the operator has to serve just to return cached values.
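
As a rough illustration only, a cache lookup on the operator side could look like the Go sketch below. Everything in it is hypothetical (the MetricsCache interface, the RedisMetricsCache type, and the key format are not part of KEDA); it just shows the shape of the idea, with Redis behind a small interface so that Memcached or the existing in-memory cache could be swapped in.

// Hypothetical sketch only: an external metric cache backed by Redis.
// None of these names exist in KEDA's codebase today.
package externalcache

import (
	"context"
	"errors"
	"fmt"
	"strconv"
	"time"

	"github.com/redis/go-redis/v9"
)

// MetricsCache abstracts the metric value store, so Redis, Memcached,
// or the current in-memory cache could sit behind the same interface.
type MetricsCache interface {
	Get(ctx context.Context, scaledObject, trigger string) (value float64, found bool, err error)
	Set(ctx context.Context, scaledObject, trigger string, value float64, ttl time.Duration) error
}

// RedisMetricsCache is an illustrative Redis-backed implementation.
type RedisMetricsCache struct {
	client *redis.Client
}

func NewRedisMetricsCache(addr string) *RedisMetricsCache {
	return &RedisMetricsCache{client: redis.NewClient(&redis.Options{Addr: addr})}
}

// cacheKey is a made-up key scheme: one entry per ScaledObject trigger.
func cacheKey(scaledObject, trigger string) string {
	return fmt.Sprintf("keda:metric:%s:%s", scaledObject, trigger)
}

func (c *RedisMetricsCache) Get(ctx context.Context, scaledObject, trigger string) (float64, bool, error) {
	raw, err := c.client.Get(ctx, cacheKey(scaledObject, trigger)).Result()
	if errors.Is(err, redis.Nil) {
		return 0, false, nil // cache miss: the caller falls back to querying the scaler
	}
	if err != nil {
		return 0, false, err
	}
	value, err := strconv.ParseFloat(raw, 64)
	if err != nil {
		return 0, false, err
	}
	return value, true, nil
}

func (c *RedisMetricsCache) Set(ctx context.Context, scaledObject, trigger string, value float64, ttl time.Duration) error {
	// A TTL tied to the ScaledObject's pollingInterval keeps entries from going stale.
	return c.client.Set(ctx, cacheKey(scaledObject, trigger), strconv.FormatFloat(value, 'f', -1, 64), ttl).Err()
}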

Use-Case

I think this idea could help motivate more users to run KEDA in production, because it gives them a more stable mechanism built on external components they can rely on.

Is this a feature you are interested in implementing yourself?

Maybe

Anything else?

No response

@yuvalweber added the feature-request and needs-discussion labels on Aug 29, 2023
@JorTurFer (Member)

I agree with you about the load problem: the same pod has to serve all the metrics requests and also process the ScaledObjects.

I'm not totally sure that adding an external cache is the best option, because it's another third-party component to maintain, and not all users want to run an extra component just to make KEDA work. Instead of an external cache (which can still be an option, I'm not saying no to it), we could work on supporting some kind of sharding in the operator.
I think something like what ArgoCD does could be a good option: they shard work deterministically based on the cluster name. We could do something similar but with namespaces, i.e. based on the namespace we decide which operator instance has to process the resource. From the metrics server's point of view, we could add a label to the HPA with the information required to hit the operator instance that is processing the current ScaledObject. A sketch of what such namespace-based sharding could look like follows below.
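
For reference, the namespace-based shard assignment could be as simple as hashing the namespace into a shard index. The Go sketch below is purely illustrative, assuming a fixed number of operator replicas; the function names and the keda.sh/operator-shard label are made up for this example and do not exist in KEDA today.

// Purely illustrative: deterministic, namespace-based shard assignment
// for the KEDA operator. None of these helpers exist in KEDA today.
package sharding

import "hash/fnv"

// ShardForNamespace maps a namespace to a shard index in [0, totalShards).
// FNV-1a keeps the assignment stable across operator restarts.
func ShardForNamespace(namespace string, totalShards uint32) uint32 {
	if totalShards == 0 {
		return 0
	}
	h := fnv.New32a()
	_, _ = h.Write([]byte(namespace))
	return h.Sum32() % totalShards
}

// OwnsResource tells an operator replica whether it should reconcile a
// ScaledObject in the given namespace. The metrics server could compute the
// same value, or read a label stamped on the HPA (e.g. a hypothetical
// keda.sh/operator-shard), to route each request to the owning instance.
func OwnsResource(namespace string, myShard, totalShards uint32) bool {
	return ShardForNamespace(namespace, totalShards) == myShard
}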

WDYT @zroubalik?

stale bot commented Nov 3, 2023

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.

The stale bot added the stale label on Nov 3, 2023
stale bot commented Nov 10, 2023

This issue has been automatically closed due to inactivity.

The stale bot closed this as completed on Nov 10, 2023
@zroubalik (Member)

I understand the reasons, but I concur with @JorTurFer's sentiment. I think this issue could also be addressed by introducing KEDA multitenancy.
