Provide CPU/Memory scaler #1183
Comments
Does this mean that the metrics server is no longer needed?
Yes, it's just an abstraction on top of the HPA; but as a user I don't have to care about that, as KEDA handles it for me.
I am interested in completing this feature. I think there are two ways to achieve this:
@tomconte @zroubalik Which way is more appropriate? I tend towards the second method: provide a new scaler for cpu/memory instead of horizontalPodAutoscalerConfig.resourceMetrics.
Our idea is to use a scaler definition which defines the CPU/Memory needs and just use that for horizontalPodAutoscalerConfig. Reasoning for that is:
But I can see the confusion if the setting is still there. However, if we remove it then we should have it for 2.0. Thoughts, @zroubalik?
My idea is to add the following configuration:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cron-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-deployment
  advanced:                                   # Optional. Section to specify advanced options
    restoreToOriginalReplicaCount: true/false # Optional. Default: false
    horizontalPodAutoscalerConfig:            # Optional. Section to specify HPA related options
      behavior:                               # Optional. Use to modify HPA's scaling behavior
        scaleDown:
          stabilizationWindowSeconds: 300
          policies:
          - type: Percent
            value: 100
            periodSeconds: 15
  triggers:
  - type: resource                # cpu/memory resource scaler
    metadata:
      name: cpu/memory
      type: value/utilization/averagevalue
      value: 60               # Optional
      averageValue: 40        # Optional
      averageUtilization: 50  # Optional
```

Of course, we won't reinvent the wheel: the resource scaler is only responsible for generating an HPA resource metric entry (in the implementation, this scaler may need to be handled specially, producing that entry instead of an External metric).
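To make that concrete, the metric entry such a resource scaler could emit directly into the generated HPA, compared to the External entry that today's scalers produce through the KEDA metrics adapter, might look roughly like this (a sketch only, with shapes assumed to mirror autoscaling/v2beta2; the metric name in the second entry is hypothetical):

```yaml
# entry the proposed resource scaler could write directly into the HPA spec
- type: Resource
  resource:
    name: cpu
    target:
      type: Utilization
      averageUtilization: 60

# entry a regular external scaler results in today, served via the metrics adapter
- type: External
  external:
    metric:
      name: s0-my-scaler-metric   # hypothetical metric name
    target:
      type: AverageValue
      averageValue: "5"
```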
I personally would intro dedicated cpu and memory trigger types.
Like this:

```yaml
...
triggers:
- type: cpu/memory       # cpu/memory scaler
  metadata:
    type: value/utilization/averagevalue
    value: 60               # Optional
    averageValue: 40        # Optional
    averageUtilization: 50  # Optional
```

If this design is OK, I plan to implement it. PTAL @tomconte @zroubalik
What if it's just this:
Yes, this will be more user-friendly.

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cron-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-deployment
  advanced:                                   # Optional. Section to specify advanced options
    restoreToOriginalReplicaCount: true/false # Optional. Default: false
    horizontalPodAutoscalerConfig:            # Optional. Section to specify HPA related options
      behavior:                               # Optional. Use to modify HPA's scaling behavior
        scaleDown:
          stabilizationWindowSeconds: 300
          policies:
          - type: Percent
            value: 100
            periodSeconds: 15
  triggers:
  - type: cpu/memory       # cpu/memory scaler
    metadata:
      type: value/utilization/averagevalue
      value: 60
```

Any other suggestions?
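One way to look at the benefit: a user could then mix such a cpu trigger with any existing scaler in a single ScaledObject and never define an HPA by hand. A rough sketch, assuming the field names from the proposal above and using the cron scaler purely as an example of a second trigger:

```yaml
triggers:
- type: cpu
  metadata:
    type: Utilization
    value: "60"
- type: cron                     # example of an additional, event-driven trigger
  metadata:
    timezone: Europe/Brussels
    start: 0 8 * * *
    end: 0 18 * * *
    desiredReplicas: "4"
```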
LGTM, thanks!
In addition, this scaler is not applicable to ScaledJob.
/assign me
Is the percentage based on the container CPU usage or on the Kubernetes CPU requests?
It does not support per-container usage yet (see #3146); it works only at the pod level.
@tomkerkhove Do I need the K8s metrics server in order to use this, or does KEDA collect the CPU/memory usage of each pod itself? EDIT: #1644
@joeynaor yes, you need that.
Hey guys! @silenceper @tomkerkhove I got this error when trying to use this scaler:
Did I miss anything? This is my ScaledObject file:
Thanks!!
It's either cpu or memory.
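For example, following the discussion above, separate triggers per resource could look like this (only a sketch; double-check the released scaler docs for the exact metadata keys):

```yaml
triggers:
- type: cpu
  metadata:
    type: Utilization   # Utilization, Value, or AverageValue
    value: "60"
- type: memory
  metadata:
    type: Utilization
    value: "70"
```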
Provide CPU/Memory scaler which acts as an abstraction on top of HPA functionality.
Today, you can already scale on CPU/Memory (docs) through horizontalPodAutoscalerConfig.resourceMetrics, but it requires you to have at least one trigger defined. This is not ideal given that KEDA users should be able to fully rely on KEDA for autoscaling and not need to add HPAs as well. If we have dedicated scalers for these (which don't go through the metrics server), users get a consistent experience without knowing the Kubernetes internals and only use ScaledObjects.
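For reference, the existing workaround referred to above looks roughly like this (a sketch only; the resourceMetrics entries are assumed to mirror the Kubernetes ResourceMetricSource type):

```yaml
spec:
  advanced:
    horizontalPodAutoscalerConfig:
      resourceMetrics:
      - name: cpu
        target:
          type: Utilization
          averageUtilization: 60
  triggers:
  - ...   # at least one regular trigger still has to be defined here
```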
Do you need this as well? Don't hesitate to give a 👍