Multiple Prometheus Scalers in same scaled object return same metric values #702
Comments
@ycabrer thanks for reporting. I have found the potential problem in the Metrics Server implementation: the code there returns all metrics for each ScaledObject rather than only the metric that was requested. There are a couple of options for how to solve it.
@zroubalik I see. This seems to be working, and I think it should be fine since all of the metrics appear to be external metrics. I'm not sure if iterating through the full list is the best approach, though. Let me know what you think; I can open a pull request.
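A minimal sketch of the filtering idea discussed here, assuming a simplified provider where all of a ScaledObject's metric values are already collected in a slice (this is not KEDA's actual provider code; the surrounding function and names are illustrative):

```go
package provider

import (
	"k8s.io/metrics/pkg/apis/external_metrics"
)

// filterMetrics keeps only the values whose MetricName matches the name
// from the incoming external metrics API request, so a query for
// "mq_messages_per_consumer_queue_1" no longer also returns queue 2's value.
func filterMetrics(all []external_metrics.ExternalMetricValue, requested string) []external_metrics.ExternalMetricValue {
	matching := make([]external_metrics.ExternalMetricValue, 0, len(all))
	for _, m := range all {
		if m.MetricName == requested {
			matching = append(matching, m)
		}
	}
	return matching
}
```

As the rest of the thread shows, an exact string comparison like this runs into the casing problem described below.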
@ycabrer unfortunately I was forced to revert your commit; there are two problems with it.
//EDIT: added a note about the Kafka scaler
@zroubalik I was able to recreate the issue. It looks like it is a problem with the casing when comparing the metric name against the scaler: the name we get from the request is all lowercase, while the scaler reports it in camelCase. It looks like the Kubernetes client changes it to lowercase, as explained in kubernetes issue #72996.
camelCase (from the scaler): lagThreshold
lowercase (from the request): lagthreshold
I think another issue is that the metric name is hard-coded to lagThreshold. This would cause the same problem of multiple metrics being retrieved. Ideally each trigger would have a unique metric name, maybe based on the Kafka topic or the broker.
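Given that the metrics API lowercases the requested name (per kubernetes issue #72996), a case-insensitive comparison avoids the mismatch. A minimal sketch, not KEDA's actual code:

```go
package scalers

import "strings"

// metricNameMatches reports whether the name registered by a scaler
// (e.g. "lagThreshold") matches the name from the API request, which
// arrives lowercased (e.g. "lagthreshold").
func metricNameMatches(fromScaler, fromRequest string) bool {
	return strings.EqualFold(fromScaler, fromRequest)
}
```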
@ycabrer excellent investigation. Thanks!
Yeah, that is true: each scaler has a hardcoded metric name, so multiple scalers of the same type with the same metric in one ScaledObject will not work; we are aware of that. We can try to find a solution for this, as it might be a good fit for the upcoming v2.0 release.
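One way to avoid the hardcoded-name clash, along the lines suggested above, would be to derive each trigger's metric name from its own configuration, e.g. the Kafka topic. The helper below is an illustrative sketch, not KEDA's implementation:

```go
package scalers

import (
	"fmt"
	"strings"
)

// kafkaLagMetricName builds a per-trigger metric name by appending the
// topic to the base "lagThreshold" name, so two Kafka triggers in one
// ScaledObject no longer collide. Lowercased to match how the external
// metrics API normalizes names.
func kafkaLagMetricName(topic string) string {
	return strings.ToLower(fmt.Sprintf("lagThreshold-%s", topic))
}
```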
Hi,
I am using two Prometheus scalers on the same ScaledObject to scale a single deployment based on different queries.
Each metric has a distinct name and query, yet both query results are returned for each metric. This causes strange metric values in the resulting HPA.
Expected Behavior
Each distinct metric / query in a ScaledObject should return only the data it requests.
Actual Behavior
Each distinct metric / query is returning the metrics for all of the queries.
Steps to Reproduce the Problem
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/qa/mq_messages_per_consumer_queue_1?labelSelector=deploymentName%3Dmultiple-queue-consumer" | jq
result for queue 1:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/qa/mq_messages_per_consumer_queue_2?labelSelector=deploymentName%3Dmultiple-queue-consumer" | jq
result for queue 2:
In each case, the first value (21) is the metric for queue-1 and the second is the metric for queue-2.
HPA
Scaled Object
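For illustration, a minimal ScaledObject with two Prometheus triggers along the lines described in this issue might look roughly as follows, assuming the KEDA v1 (keda.k8s.io/v1alpha1) API; the Prometheus address, queries, and thresholds are placeholders, not the reporter's actual values:

```yaml
apiVersion: keda.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: multiple-queue-consumer
  namespace: qa
spec:
  scaleTargetRef:
    deploymentName: multiple-queue-consumer
  triggers:
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090  # placeholder
        metricName: mq_messages_per_consumer_queue_1
        query: sum(mq_messages_queue_1)                       # placeholder
        threshold: "10"                                       # placeholder
    - type: prometheus
      metadata:
        serverAddress: http://prometheus.monitoring.svc:9090  # placeholder
        metricName: mq_messages_per_consumer_queue_2
        query: sum(mq_messages_queue_2)                       # placeholder
        threshold: "10"                                       # placeholder
```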
Specifications