Proposal

Currently, the annotation `autoscaling.keda.sh/paused-replicas: "0"` (doc) can be added to pause scaling.
A similar annotation could be added to stop trigger evaluation as well.
Another option would be an annotation that suppresses logging on trigger evaluation errors.
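To make the proposal concrete, here is a sketch of a ScaledObject carrying the existing pause annotation alongside the proposed one. The second annotation name (`autoscaling.keda.sh/paused-triggers`) is hypothetical, chosen here only for illustration, as are the resource names and the metrics URL:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject               # hypothetical name
  annotations:
    # Existing: pins the workload at 0 replicas while present
    autoscaling.keda.sh/paused-replicas: "0"
    # Proposed (hypothetical name): would also stop trigger evaluation,
    # so nothing is logged while the trigger's backing service is down
    autoscaling.keda.sh/paused-triggers: "true"
spec:
  scaleTargetRef:
    name: my-deployment               # hypothetical target
  triggers:
    - type: metrics-api
      metadata:
        url: "http://my-metrics-service/metrics"  # hypothetical URL
        valueLocation: "value"
        targetValue: "10"
```

With only the first annotation, KEDA keeps querying the Metrics API trigger on every polling interval even though scaling is paused, which is exactly the source of the error logs described below.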
Use-Case
Maintenance
When doing maintenance on services used as triggers (e.g. Postgres or a metrics API), KEDA logs errors even with the `autoscaling.keda.sh/paused-replicas: "0"` annotation.
Night downscaling
We downscale part of our testing infrastructure at night by setting all replicas to 0 and adding the `autoscaling.keda.sh/paused-replicas: "0"` annotation. However, because our ScaledObject uses a Metrics API trigger that targets a service now backed by 0 replicas, KEDA logs errors on every evaluation.
Is this a feature you are interested in implementing yourself?
No
Anything else?
No response