Editing ScaledObject on Workload in Production #6334
Unanswered · corradomatt asked this question in Q&A / Need Help

Replies: 1 comment · 6 replies
---

I think that I see the issue in the […]. It looks like the connection to GCP metrics breaks and then the scaler uses the fallback:

```yaml
failureThreshold: 3
replicas: 10
```

Should I increase the […]?
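For reference, here is roughly where that fallback configuration sits in a ScaledObject spec, with my understanding of the two fields as comments; the surrounding structure is a sketch, not copied from the actual manifest:

```yaml
spec:
  # ... other ScaledObject fields ...
  fallback:
    failureThreshold: 3   # consecutive failed metric fetches before the fallback is applied
    replicas: 10          # replica count KEDA reports to the HPA while the scaler keeps failing
```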
---
Hey all! First, thanks for a great project. While I'm not using it in production yet, testing is going very well and I'm excited to see it in action.

I've been running load tests using KEDA ScaledObjects to control the HPA for my Deployment. I'm using the `gcp-stackdriver` trigger on a custom GCP log metric from the workload itself, and it's been working great. However, I wanted to test the process for making adjustments to the `metadata.targetValue` of my trigger and noticed something concerning. I have `minReplicaCount` currently set to `10`, and during a load test, when I adjusted the `targetValue`, my workload was scaled down from 170 pods to 10 before the new calculation kicked in and scaled it back up to the expected 200+ pods. The process happened quickly enough that my load test hardly noticed the issue, but I know this wouldn't be the case in production.

How can I ensure that the HPA doesn't scale down my workload to the `minReplicaCount` value when making minor adjustments to the ScaledObject?
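To make the setup concrete, here is a minimal sketch of the kind of ScaledObject described above, assuming a `gcp-stackdriver` trigger on a custom log-based metric; the resource names, project ID, filter, and threshold values are placeholders, not taken from the actual manifest:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-workload-scaler            # hypothetical name
spec:
  scaleTargetRef:
    name: my-workload                 # hypothetical Deployment
  minReplicaCount: 10
  maxReplicaCount: 250                # hypothetical upper bound
  triggers:
    - type: gcp-stackdriver
      metadata:
        projectId: my-gcp-project     # placeholder project
        # placeholder filter for a custom log-based metric
        filter: 'metric.type="logging.googleapis.com/user/my-custom-metric" AND resource.type="k8s_container"'
        targetValue: "100"            # the field being edited in the question above
```

The edit in question is only the `targetValue` line in the trigger metadata; everything else stays the same across the change.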