[v2] Adding Standard Resource metrics to KEDA #874
Conversation
KEDA Docs PR:

With this change, our new spec looks like this:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: {scaled-object-name}
spec:
  scaleTargetRef:
    deploymentName: {deployment-name} # must be in the same namespace as the ScaledObject
    containerName: {container-name}   # Optional. Default: deployment.spec.template.spec.containers[0]
  pollingInterval: 30                 # Optional. Default: 30 seconds
  cooldownPeriod: 300                 # Optional. Default: 300 seconds
  minReplicaCount: 0                  # Optional. Default: 0
  maxReplicaCount: 100                # Optional. Default: 100
  hpaConfig:                          # Optional. If not set, KEDA won't scale based on resource utilization
    resourceMetrics:
    - name: cpu/memory                # Name of the resource to be targeted
      target:
        type: value/utilization/averagevalue
        value: 60              # Optional
        averageValue: 40       # Optional
        averageUtilization: 50 # Optional
    behavior:
  triggers:
  # {list of triggers to activate the deployment}
```

However, scaling on a resource of the deployment feels like just another trigger to me, not something we slap onto the new ScaledObject spec, or is this just me? The reason this feels odd to me is that a ScaledObject relies on triggers, for example a Service Bus queue, where we will almost never want to scale on CPU in the same ScaledObject, but rather as a new trigger in the list or in another ScaledObject, no?
@tomkerkhove I had a discussion with @zroubalik before making this change, and this is the solution we came up with: we support it as a standard resource metric rather than as a scaler. This is because the HPA has different types of metrics, resource metrics and external metrics. Scalers support external metrics, so we thought of using the HPA's resource metrics directly, hence this change. But I'm looking forward to a discussion on this.
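For readers less familiar with the HPA API, here is a minimal sketch (not taken from the PR; the metric names are illustrative) of the two metric kinds being contrasted, using the upstream `autoscaling/v2beta2` Go types. KEDA scalers populate the External kind; this PR passes the Resource kind through:

```go
package main

import (
	"fmt"

	autoscalingv2beta2 "k8s.io/api/autoscaling/v2beta2"
	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	avgUtilization := int32(50)
	queueLength := resource.MustParse("5")

	// Resource metric: built-in CPU/memory scaling, handled natively by the HPA.
	resourceMetric := autoscalingv2beta2.MetricSpec{
		Type: autoscalingv2beta2.ResourceMetricSourceType,
		Resource: &autoscalingv2beta2.ResourceMetricSource{
			Name: v1.ResourceCPU,
			Target: autoscalingv2beta2.MetricTarget{
				Type:               autoscalingv2beta2.UtilizationMetricType,
				AverageUtilization: &avgUtilization,
			},
		},
	}

	// External metric: what KEDA scalers expose through the metrics adapter.
	externalMetric := autoscalingv2beta2.MetricSpec{
		Type: autoscalingv2beta2.ExternalMetricSourceType,
		External: &autoscalingv2beta2.ExternalMetricSource{
			Metric: autoscalingv2beta2.MetricIdentifier{Name: "azure-servicebus-orders"},
			Target: autoscalingv2beta2.MetricTarget{
				Type:         autoscalingv2beta2.AverageValueMetricType,
				AverageValue: &queueLength,
			},
		},
	}

	fmt.Println(resourceMetric.Type, externalMetric.Type)
}
```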
One of KEDA's strengths is that people who are not Kubernetes experts don't need to know how Kubernetes handles scaling. I can see why we want to open up the HPA configuration to those who have the expertise, but scaling on resource metrics is different and should align with the other scalers to be more consistent. An example of a current one is this:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
  labels:
    app: order-processor
    deploymentName: order-processor
spec:
  scaleTargetRef:
    deploymentName: order-processor
  # minReplicaCount: 0 # Change to define how many minimum replicas you want
  maxReplicaCount: 10
  triggers:
  - type: azure-servicebus
    metadata:
      queueName: orders
      queueLength: '5'
    authenticationRef:
      name: trigger-auth-service-bus-orders
```

But if I were to scale only on CPU, then it would look like this:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
  labels:
    app: order-processor
    deploymentName: order-processor
spec:
  scaleTargetRef:
    deploymentName: order-processor
  # minReplicaCount: 0 # Change to define how many minimum replicas you want
  maxReplicaCount: 10
  hpaConfig:           # Optional. If not set, KEDA won't scale based on resource utilization
    resourceMetrics:
    - name: cpu        # Name of the resource to be targeted
      target:
        type: averagevalue
        averageValue: 40 # Optional
  triggers:
  - type: azure-servicebus
    metadata:
      queueName: orders
      queueLength: '5'
    authenticationRef:
      name: trigger-auth-service-bus-orders
```

What if it looked like this instead:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
  labels:
    app: order-processor
    deploymentName: order-processor
spec:
  scaleTargetRef:
    deploymentName: order-processor
  # minReplicaCount: 0 # Change to define how many minimum replicas you want
  maxReplicaCount: 10
  advanced:
    horizontalPodAutoscalerConfig: # Optional. If not set, KEDA won't scale based on resource
      behavior:
  triggers:
  - type: resource-metrics
    metadata:
      metric: cpu
      value: 40
```

This allows people to:

- define all their scaling criteria as triggers, consistent with every other scaler
- still tweak the generated HPA through an advanced section if they have the expertise

PS: I'm not saying you didn't do a good job, I just want to have this aligned, because it deviates from the rest and we dare to assume everybody is a Kubernetes expert, which is far from true. Thoughts from you folks as well on this, @zroubalik @ahmelsayed @jeffhollan?
I have seen some requests for this functionality among users and I like this idea, even though it is a corner case and likely won't be used that much. @tomkerkhove, I am not saying that your proposed solution is bad; I can see your point behind putting the setting under the triggers section. The current solution has small technical side-benefits: we don't have to care about validating that spec ourselves, since the fields are passed straight to the HPA and Kubernetes validates them. If we introduce a separate scaler for this (as you have proposed), we will have to validate the fields, keep track of that spec (I admit that it will likely not change in the future), and implement more things around scalers, as this is a special kind of scaler that requires special handling. These changes shouldn't be too complex or hard to implement, but I just wanted to bring this under consideration as well. BTW, I'd prefer horizontalPodAutoscalerConfig over hpaConfig as the field name.
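To illustrate that side-benefit, here is a hedged sketch (the type and field names are my guess at the shape, not the final KEDA API) of a spec that embeds the upstream HPA types directly, so their values can be copied verbatim into the generated HPA and validated by the API server:

```go
package v1alpha1

import (
	autoscalingv2beta2 "k8s.io/api/autoscaling/v2beta2"
)

// AdvancedConfig mirrors the "advanced" section discussed in this thread.
type AdvancedConfig struct {
	HorizontalPodAutoscalerConfig *HorizontalPodAutoscalerConfig `json:"horizontalPodAutoscalerConfig,omitempty"`
}

// HorizontalPodAutoscalerConfig reuses the upstream HPA types verbatim,
// so KEDA needs no validation logic of its own for these fields.
type HorizontalPodAutoscalerConfig struct {
	ResourceMetrics []*autoscalingv2beta2.ResourceMetricSource          `json:"resourceMetrics,omitempty"`
	Behavior        *autoscalingv2beta2.HorizontalPodAutoscalerBehavior `json:"behavior,omitempty"`
}
```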
@zroubalik Have changed the field name to horizontalPodAutoscalerConfig
@zroubalik I can see what you're saying, but that means you are forcing people to understand how Kubernetes uses the HPA, and they would need to know they need an HPA to scale based on CPU/memory. Not everybody has this knowledge, far from it actually. For me this is not a good approach: KEDA should be the only autoscaling layer folks have to know; all the rest is technical detail. It would also be more in line with how other workloads are scaled. If I have an API and a queue worker, then I'd need to have this:

Queue worker:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
  labels:
    app: order-processor
    deploymentName: order-processor
spec:
  scaleTargetRef:
    deploymentName: order-processor
  # minReplicaCount: 0 # Change to define how many minimum replicas you want
  maxReplicaCount: 10
  triggers:
  - type: azure-servicebus
    metadata:
      queueName: orders
      queueLength: '5'
    authenticationRef:
      name: trigger-auth-service-bus-orders
```

API:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: order-api-scaler
  labels:
    app: order-api
    deploymentName: order-api
spec:
  scaleTargetRef:
    deploymentName: order-api
  # minReplicaCount: 0 # Change to define how many minimum replicas you want
  maxReplicaCount: 10
  hpaConfig:           # Optional. If not set, KEDA won't scale based on resource utilization
    resourceMetrics:
    - name: cpu        # Name of the resource to be targeted
      target:
        type: averagevalue
        averageValue: 40 # Optional
  triggers:
```

This feels odd, no? One might say you can just use the HPA directly, but then you don't have a unified layer anymore.
For me, KEDA is really about making autoscaling dead simple, using the same approach regardless of the source.
@tomkerkhove yeah, you have got some points. Let's discuss this on the call next week.
@ahmelsayed what do you think about this?
From my perspective, one of the main reasons for using the HPA for 1->N scaling has always been being able to make use of all the default resources the HPA supports for scaling, alongside the custom metrics provided by KEDA. Providing the right interface for that, however, was always a bit of an unknown. I didn't know the best way to expose it that would be flexible enough to be useful, but wouldn't leak a lot of underlying concepts or complicate the ScaledObject interface.

I think there is value in having a generic way for users to patch the HPA generated by KEDA if needed (even if just as an advanced scenario), while still retaining the option to add knobs on the ScaledObject later on for that as well. When I thought about it before, I was thinking of a way to allow users to provide a merge patch for the HPA per ScaledObject that KEDA could apply on the generated HPA as a final step, as this would allow you to see what the object looks like, then define any transform/patch for it that you like. In that case the example Tom shared above would become:

```
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor-scaler
spec:
  scaleTargetRef:
    deploymentName: order-processor
  triggers:
  - type: azure-servicebus
    metadata:
      queueName: orders
      queueLength: '5'
    authenticationRef:
      name: trigger-auth-service-bus-orders
+ patches:
+   hpa: {}
```

or have it defined as a patch in another object with the same name, something like:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledPatch
metadata:
  name: order-processor-scaler
spec:
  hpaPatch: {}
```

However, it seemed a bit of an overkill at the time and I wasn't sure if we might find a better way in the future, so I punted on it. The interface proposed by this PR seems to more or less match what I was thinking about above (albeit it only takes resourceMetrics and behavior rather than an arbitrary patch).

I understand @tomkerkhove's concern about additional complexity, but I think if the option is listed under an advanced section, as a patch to the generated HPA and not something that's always in your face when learning about KEDA, that would meet that requirement half-way.
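For the merge-patch idea, here is a rough sketch of how that final patching step could work. `strategicpatch.StrategicMergePatch` is a real apimachinery helper, but `applyHPAPatch` and its wiring are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"

	autoscalingv2beta2 "k8s.io/api/autoscaling/v2beta2"
	"k8s.io/apimachinery/pkg/util/strategicpatch"
)

// applyHPAPatch merges a user-supplied patch document into the HPA that
// KEDA generated, as a final step before submitting it to the API server.
func applyHPAPatch(generated *autoscalingv2beta2.HorizontalPodAutoscaler, patch []byte) (*autoscalingv2beta2.HorizontalPodAutoscaler, error) {
	original, err := json.Marshal(generated)
	if err != nil {
		return nil, err
	}
	// The HPA type carries the struct tags strategic merge patching needs.
	merged, err := strategicpatch.StrategicMergePatch(original, patch, autoscalingv2beta2.HorizontalPodAutoscaler{})
	if err != nil {
		return nil, err
	}
	out := &autoscalingv2beta2.HorizontalPodAutoscaler{}
	if err := json.Unmarshal(merged, out); err != nil {
		return nil, err
	}
	return out, nil
}

func main() {
	hpa := &autoscalingv2beta2.HorizontalPodAutoscaler{}
	patched, err := applyHPAPatch(hpa, []byte(`{"spec":{"maxReplicas":10}}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(patched.Spec.MaxReplicas)
}
```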
Hey, I'm glad I found this PR because this is exactly what I was trying to do. Hopefully there is a consensus soon. Is there a roadmap for when 2.0.0 will be stable enough for release?
We're actually trying to fix two things here, as far as I can see:

1. Opening up the generated HPA for configuration (for example its scaling behavior).
2. Scaling deployments on CPU/memory resource metrics.

I'm fully OK with opening up the HPA-related configuration as @ahmelsayed suggested, or via the current proposal, but I would highly recommend moving the CPU/memory scaler to triggers, since that's aligned with how every other scaler works. Let's discuss during the standup tomorrow, maybe?
+1
@ckuduvalli we discussed this on the call, and for now we will go with the direction proposed in this PR. We can add separate resource-based scalers later if there's a need from the community. Could you please update your PR to nest the configuration under an advanced section (advanced.horizontalPodAutoscalerConfig), as discussed?

@tomkerkhove @ahmelsayed am I correct? 😄
Certainly! This is what we've discussed during the standup yesterday:

- Go with the direction proposed in this PR.
- Nest the HPA configuration under advanced.horizontalPodAutoscalerConfig.
- Revisit a dedicated CPU/memory trigger later if the community asks for it.
Sorry I couldn't attend the standup yesterday. @zroubalik @tomkerkhove I will update the PR as per the comments.

No worries!
@tomkerkhove @ahmelsayed @zroubalik I have updated this PR as well as the docs PR. Please check.
LGTM
@ahmelsayed ok to merge?
```diff
@@ -53,7 +53,7 @@ func (r *ReconcileScaledObject) newHPAForScaledObject(logger logr.Logger, scaled
 var behavior *autoscalingv2beta2.HorizontalPodAutoscalerBehavior
 if r.kubeVersion.MinorVersion >= 18 {
-	behavior = scaledObject.Spec.Behavior
+	behavior = scaledObject.Spec.Advanced.HorizontalPodAutoscalerConfig.Behavior
```
`Advanced` and `HorizontalPodAutoscalerConfig` could be nil, right? Should the condition be updated to `&& scaledObject.Spec.Advanced != nil && scaledObject.Spec.Advanced.HorizontalPodAutoscalerConfig != nil`? Also in other places.
Valid point. I have made the change.
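A sketch of what the nil-guarded condition might look like after that change (assumed shape; the PR's actual code may differ):

```go
// Both Advanced and HorizontalPodAutoscalerConfig are optional pointers,
// so each must be checked before dereferencing Behavior.
var behavior *autoscalingv2beta2.HorizontalPodAutoscalerBehavior
if r.kubeVersion.MinorVersion >= 18 &&
	scaledObject.Spec.Advanced != nil &&
	scaledObject.Spec.Advanced.HorizontalPodAutoscalerConfig != nil {
	behavior = scaledObject.Spec.Advanced.HorizontalPodAutoscalerConfig.Behavior
}
```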
pkg/controller/scaledobject/hpa.go (outdated):

```go
@@ -153,10 +157,22 @@ func (r *ReconcileScaledObject) getScaledObjectMetricSpecs(logger logr.Logger, s
	return scaledObjectMetricSpecs, nil
}

func getResourceMetrics(resourceMetrics []*autoscalingv2beta2.ResourceMetricSource) []autoscalingv2beta2.MetricSpec {
	var metrics []autoscalingv2beta2.MetricSpec
```
Minor nit: I think it's slightly better to preallocate the slice to the correct size, since that's known in this case.

```diff
-	var metrics []autoscalingv2beta2.MetricSpec
+	metrics := make([]autoscalingv2beta2.MetricSpec, 0, len(resourceMetrics))
```
Done.
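Putting that nit together with the diff above, getResourceMetrics plausibly ends up looking like this (the loop body is inferred from the HPA types, not copied from the PR):

```go
// getResourceMetrics wraps each user-supplied ResourceMetricSource in an HPA
// MetricSpec of type Resource, preallocating the slice to the known size.
func getResourceMetrics(resourceMetrics []*autoscalingv2beta2.ResourceMetricSource) []autoscalingv2beta2.MetricSpec {
	metrics := make([]autoscalingv2beta2.MetricSpec, 0, len(resourceMetrics))
	for _, resourceMetric := range resourceMetrics {
		metrics = append(metrics, autoscalingv2beta2.MetricSpec{
			Type:     autoscalingv2beta2.ResourceMetricSourceType,
			Resource: resourceMetric,
		})
	}
	return metrics
}
```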
LGTM
KEDA currently supports only external metrics. We wish to support standard HPA metrics like CPU and memory usage through KEDA as well; this PR enables that.
Fixes #852