diff --git a/content/docs/2.7/concepts/scaling-deployments.md b/content/docs/2.7/concepts/scaling-deployments.md
index 1b235d6c6..79d45b310 100644
--- a/content/docs/2.7/concepts/scaling-deployments.md
+++ b/content/docs/2.7/concepts/scaling-deployments.md
@@ -212,3 +212,15 @@ Using this method can preserve a replica and enable long-running executions. Ho
 ### Run as jobs
 
 The other alternative to handling long running executions is by running the event driven code in Kubernetes Jobs instead of Deployments or Custom Resources. This approach is discussed [in the next section](../scaling-jobs).
+
+### Pause autoscaling
+
+It can be useful to instruct KEDA to pause the autoscaling of objects, for example if you want to perform cluster maintenance or if you want to avoid resource starvation by removing non-mission-critical workloads. You can enable this by adding the below annotation to your `ScaledObject` definition:
+
+```yaml
+metadata:
+  annotations:
+    autoscaling.keda.sh/paused-replicas: "0"
+```
+
+The above annotation will scale your current workload to 0 replicas and pause autoscaling. You can pause the object at any arbitrary replica count by changing the annotation's value. To enable autoscaling again, simply remove the annotation from the `ScaledObject` definition.
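
For context, here is a minimal, illustrative `ScaledObject` showing where the annotation added in the patch above would sit. The resource names and the CPU trigger are placeholder assumptions and are not part of the documentation change itself:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject                         # hypothetical ScaledObject name
  annotations:
    autoscaling.keda.sh/paused-replicas: "0"    # pause the target workload at 0 replicas
spec:
  scaleTargetRef:
    name: my-deployment                         # hypothetical Deployment to scale
  triggers:
    - type: cpu                                 # illustrative trigger; any supported trigger works
      metadata:
        type: Utilization
        value: "50"
```

Removing the `autoscaling.keda.sh/paused-replicas` annotation from this definition and re-applying it would resume autoscaling, as the patch notes.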