New Caching mechanism #4919
Comments
I agree with you about the problem related to the load, because the same pod has to dispatch all the requests and also process the ScaledObjects. I'm not totally sure that adding an external cache is the best option, because it's another 3rd-party component to maintain, and not all users want an extra component just to make things work. Instead of an external cache (which can be an option, I'm not saying no to it), we could work on supporting some kind of sharding in the operator. WDYT @zroubalik ?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed due to inactivity.
I understand the reasons, but I concur with @JorTurFer's sentiment. I think this issue could also be fixed by introducing KEDA multitenancy.
Proposal
I have an idea, but I want to hear your opinion on it.
Right now we have the ability to cache metric values until the polling interval expires by caching them in the KEDA operator.
We can enable this per trigger with the `useCachedMetrics` field.
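For context, the existing per-trigger caching looks roughly like this in a ScaledObject manifest. This is a hedged sketch: the resource name, target, server address, and query below are all hypothetical, and only `useCachedMetrics` and `pollingInterval` are the fields being discussed:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: example-scaledobject        # hypothetical name
spec:
  scaleTargetRef:
    name: example-deployment        # hypothetical workload
  pollingInterval: 30               # metric values may be cached up to this long
  triggers:
    - type: prometheus              # any scaler; prometheus is just an example
      metadata:
        serverAddress: http://prometheus.example:9090   # assumed address
        query: sum(rate(http_requests_total[2m]))       # assumed query
        threshold: "100"
      useCachedMetrics: true        # serve metrics API reads from the operator's cache
```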
This works well in normal and small clusters, but in large clusters you can have multiple metrics-server pods running in load-balancing mode (`--enable-aggregator-routing=true`), which puts a massive load on the operator, since it has to respond to many requests coming from the different metrics servers. I was wondering if maybe we could give users the ability to use some kind of external cache: they could point to a Redis instance they already have in the cloud, or a memcached they have deployed in a pod, and thereby reduce the number of requests the operator receives just to retrieve a value from the cache.
Use-Case
I think this idea could help motivate more users to adopt this in production, because it gives them a more stable mechanism built on infrastructure they can rely on.
Is this a feature you are interested in implementing yourself?
Maybe
Anything else?
No response