Use predictive analytics to activate a pod #197

A very “blue sky” feature, but it would be amazing to have KEDA look at historic data and patterns for deployments to try to predict when events may be coming in and scale proactively.

Comments
Ok, let's run this in Kubeflow 😁
Especially for the RabbitMQ scaler, that would be great. If we could also use additional variables such as "consumer utilization", "consumer ack", and "delivery" alongside the "queueLength" variable, maybe we could scale pods in a smarter, reactive way? These days I'm playing with KEDA and testing it in our staging environment, and KEDA is currently scaling our pods based on the "queueLength" variable. It's great. But some of our RabbitMQ consumers perform I/O-bound operations, and if KEDA keeps scaling linearly on the "queueLength" variable, the other I/O services behind those consumers start to become a bottleneck.
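As a rough illustration of what blending those variables could look like, here is a minimal sketch in Go. The function, its name, and the 0.5 threshold are all hypothetical (this is not how KEDA's RabbitMQ scaler works today); it only shows how consumer utilisation could dampen the queue-length value reported to the HPA when consumers are stalled on downstream I/O.

```go
package main

import "fmt"

// desiredMetric is a hypothetical blend of two RabbitMQ readings:
// queueLength (messages waiting) and consumer utilisation, which the
// RabbitMQ management API reports as the fraction of time the queue can
// deliver messages to consumers immediately. Utilisation drops when the
// consumers themselves are blocked — here, on downstream I/O services.
func desiredMetric(queueLength, consumerUtilisation float64) float64 {
	const floor = 0.5 // assumed threshold; would need tuning per workload
	if consumerUtilisation < floor {
		// Consumers are stalled on I/O: adding pods would only pile more
		// load onto the downstream services, so dampen the backlog we
		// report to the HPA instead of scaling linearly on it.
		return queueLength * consumerUtilisation
	}
	return queueLength
}

func main() {
	fmt.Println(desiredMetric(1000, 0.9)) // healthy consumers: scale on the backlog
	fmt.Println(desiredMetric(1000, 0.2)) // I/O-bound consumers: report only 200
}
```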
That's a really interesting idea. A few months ago I was doing research on best practices and patterns for autoscaling applications, and I stumbled on a research paper covering autoscaling with a predictive model.
@jeffhollan what historic data did you have in mind? Also, should it be something maintained by KEDA itself, or should this be a pluggable solution so anyone can use a custom model/algorithm?
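To make the "custom model" idea concrete, here is a minimal, self-contained sketch of one of the simplest predictive models that could be plugged in: Holt's double exponential smoothing over historical samples of a metric. Everything here (the type, the parameters, the use of queue length as the signal) is an assumption for illustration, not anything KEDA ships.

```go
package main

import "fmt"

// HoltPredictor implements Holt's linear (double exponential) smoothing:
// "level" tracks the current magnitude of the metric, "trend" tracks its
// direction of change. A purely hypothetical sketch, not KEDA code.
type HoltPredictor struct {
	alpha, beta  float64 // smoothing factors in (0, 1)
	level, trend float64
	primed       bool
}

// Observe feeds one historical sample, e.g. the queue length this minute.
func (p *HoltPredictor) Observe(x float64) {
	if !p.primed {
		p.level, p.primed = x, true
		return
	}
	prevLevel := p.level
	p.level = p.alpha*x + (1-p.alpha)*(p.level+p.trend)
	p.trend = p.beta*(p.level-prevLevel) + (1-p.beta)*p.trend
}

// Forecast projects the metric k intervals ahead; a scaler could report
// this value instead of the raw one in order to scale proactively.
func (p *HoltPredictor) Forecast(k int) float64 {
	return p.level + float64(k)*p.trend
}

func main() {
	p := &HoltPredictor{alpha: 0.5, beta: 0.3}
	for _, q := range []float64{10, 14, 19, 25, 32} { // backlog ramping up
		p.Observe(q)
	}
	fmt.Printf("predicted backlog next interval: %.1f\n", p.Forecast(1))
}
```

A pluggable design would reduce KEDA's role to feeding Observe and reading Forecast, leaving the model itself swappable.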
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 7 days if no further activity occurs. Thank you for your contributions.
We were dreaming about the same thing and developed this kind of solution. It works based on our simple but working AI model and predicts pretty well. PR: #2418
A hill-climbing algorithm like the one used for the CLR thread pool could be a candidate for this. It basically adds threads (or, in this case, instances) and checks whether that has a positive impact on the backlog; if not, it removes the instance again. It's a reactive rather than a predictive approach, but it may be more generally applicable, since it doesn't require specifying the cyclical period to look back on (hourly batch process? daily user load? weekly jobs? one-off events?). It also doesn't require additional storage of historical data. Given the way KEDA interfaces with HPAs it would be a bit roundabout (needing to manipulate reported metric values to get the desired instance count directly), but that's the interface we have to work with without rewriting a new pod autoscaler.
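Here is a minimal sketch of that control loop, with hypothetical names, step size, and thresholds (the real CLR thread-pool algorithm is considerably more elaborate):

```go
package main

import "fmt"

// hillClimber nudges the instance count one step at a time, keeps going
// in the same direction while the backlog improves, and reverses when it
// doesn't — the reactive idea described above, much simplified. All names
// here are hypothetical; this is neither KEDA nor CLR code.
type hillClimber struct {
	instances   int
	lastBacklog float64
	step        int // +1 or -1: current direction of exploration
}

// next takes the backlog observed after one interval at the current
// instance count and returns the count to try for the next interval.
func (h *hillClimber) next(backlog float64) int {
	if backlog >= h.lastBacklog {
		h.step = -h.step // the last move didn't help: explore the other way
	}
	h.lastBacklog = backlog
	h.instances += h.step
	if h.instances < 1 {
		h.instances = 1
	}
	return h.instances
}

func main() {
	h := &hillClimber{instances: 3, lastBacklog: 100, step: +1}
	for _, backlog := range []float64{90, 80, 85, 70} {
		fmt.Printf("backlog %.0f -> run %d instances next\n", backlog, h.next(backlog))
	}
}
```

As for the roundabout part: for an external metric with an AverageValue target T, the HPA computes replicas as ceil(reportedTotal / T), so a scaler could force a desired count N by reporting a total of T × N.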
I'd love to see a generic interface sitting between the metrics reported from scalers and the HPA. There we could "manipulate" the metrics the way we'd like to, for example adding more logic to the evaluation of metrics from multiple triggers, or plugging in some AI/ML model. The only option to "manipulate" metrics that we have today is the … Writing a new pod autoscaler is something I'd like to avoid 😅
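Sketching what such a generic hook might look like, purely as an assumption (none of these types exist in KEDA today):

```go
package main

import "fmt"

// MetricModifier is a sketch of the generic hook discussed above: it sits
// between the values scalers report and what is handed to the HPA. The
// interface and all names are hypothetical, shown only to illustrate
// where custom multi-trigger logic or an AI/ML model could plug in.
type MetricModifier interface {
	// Modify receives the raw values from every trigger on a ScaledObject
	// and returns the single value to expose to the HPA.
	Modify(triggers map[string]float64) float64
}

// maxOf is the trivial modifier: take the highest trigger value, roughly
// what the HPA's own max-over-metrics behaviour already gives you.
type maxOf struct{}

func (maxOf) Modify(triggers map[string]float64) float64 {
	var m float64
	for _, v := range triggers {
		if v > m {
			m = v
		}
	}
	return m
}

// predictive wraps another modifier and adjusts its output with a factor
// supplied by some external model (the Forecast of the earlier sketch,
// a trained ML model, ...).
type predictive struct {
	inner   MetricModifier
	predict func(current float64) float64
}

func (p predictive) Modify(triggers map[string]float64) float64 {
	return p.predict(p.inner.Modify(triggers))
}

func main() {
	var m MetricModifier = predictive{
		inner:   maxOf{},
		predict: func(cur float64) float64 { return cur * 1.2 }, // stub model
	}
	fmt.Println(m.Modify(map[string]float64{"rabbitmq": 40, "cron": 10})) // prints 48
}
```

A composition-based design like this would leave the default behaviour untouched while letting predictive or multi-trigger logic wrap it.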