add cpu/memory scaler #277

Merged — 3 commits merged on Oct 13, 2020
25 changes: 0 additions & 25 deletions content/docs/2.0/concepts/scaling-deployments.md
@@ -51,13 +51,6 @@ spec:
  advanced:                                        # Optional. Section to specify advanced options
    restoreToOriginalReplicaCount: true/false      # Optional. Default: false
    horizontalPodAutoscalerConfig:                 # Optional. Section to specify HPA related options
      resourceMetrics:                             # Optional. If not set, KEDA won't scale based on resource utilization
      - name: cpu/memory                           # Name of the metric to scale on
        target:
          type: Value/Utilization/AverageValue
          value: 60                                # Optional
          averageValue: 40                         # Optional
          averageUtilization: 50                   # Optional
      behavior:                                    # Optional. Use to modify HPA's scaling behavior
        scaleDown:
          stabilizationWindowSeconds: 300
@@ -145,13 +138,6 @@ For example a `Deployment` with `3 replicas` is created, then `ScaledObject` is
```yaml
advanced:
  horizontalPodAutoscalerConfig:                 # Optional. Section to specify HPA related options
    resourceMetrics:                             # Optional. If not set, KEDA won't scale based on resource utilization
    - name: cpu/memory                           # Name of the metric to scale on
      target:
        type: Value/Utilization/AverageValue
        value: 60                                # Optional
        averageValue: 40                         # Optional
        averageUtilization: 50                   # Optional
    behavior:                                    # Optional. Use to modify HPA's scaling behavior
      scaleDown:
        stabilizationWindowSeconds: 300
@@ -163,17 +149,6 @@ advanced:

**`horizontalPodAutoscalerConfig:`**

This section contains configuration that mirrors parts of the standard Horizontal Pod Autoscaler (HPA) configuration. KEDA feeds the properties from this section into the appropriate places in the HPA configuration, so one can modify the HPA that is created and managed by KEDA.

**`horizontalPodAutoscalerConfig.resourceMetrics:`**

This configuration can be used to scale resources based on standard resource metrics like CPU / Memory. KEDA would feed this value as resource metric(s) into the HPA itself. Please follow [Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/) for details.
* `name`: This is the name of the resource to be targeted as a metric (cpu, memory etc)
* `type`: represents whether the metric type is Utilization, Value, or AverageValue.
* `value`: is the target value of the metric (as a quantity).
* `averageValue`: is the target value of the average of the metric across all relevant pods (quantity)
* `averageUtilization`: is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods. Currently only valid for Resource metric source type.

**`horizontalPodAutoscalerConfig.behavior`:**

Starting from Kubernetes v1.18 the autoscaling API allows scaling behavior to be configured through the HPA `behavior` field. This way one can directly affect the scaling of 1<->N replicas, which is handled internally by the HPA. KEDA feeds the values from this section directly into the HPA's `behavior` field. Please follow the [Kubernetes documentation](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#support-for-configurable-scaling-behavior) for details.
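
For instance, the `behavior` options shown in the snippet above can be set on a `ScaledObject` like this (a minimal sketch; the window and policy values are illustrative, not recommendations):

```yaml
advanced:
  horizontalPodAutoscalerConfig:   # Optional. Section to specify HPA related options
    behavior:                      # Passed through to the HPA's behavior field
      scaleDown:
        stabilizationWindowSeconds: 300   # wait 5 minutes before scaling down
        policies:
        - type: Percent
          value: 50                       # remove at most 50% of replicas
          periodSeconds: 60               # per 60-second period
```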
51 changes: 51 additions & 0 deletions content/docs/2.0/scalers/cpu.md
@@ -0,0 +1,51 @@
+++
title = "CPU"
layout = "scaler"
availability = "v2.0+"
maintainer = "Community"
description = "Scale applications based on CPU metrics."
go_file = "cpu_memory_scaler"
+++

> **Notice:**
> - This scaler will never scale to 0, and even when a user defines multiple scaler types (e.g. Kafka + cpu/memory, or Prometheus + cpu/memory), the deployment will never scale to 0.
> - This scaler only applies to ScaledObjects, not to ScaledJobs.

### Trigger Specification

This specification describes the `cpu` trigger that scales based on CPU metrics.

```yaml
triggers:
- type: cpu
  metadata:
    # Required
    type: Value/Utilization/AverageValue
    value: "60"
```

**Parameter list:**

- `type` - Represents whether the metric type is `Utilization`, `Value`, or `AverageValue`. (Required)
- `value` - The target value; its meaning depends on `type`:
  - if `type` is set to `Value`, this is the target value of the metric (as a quantity)
  - if `type` is set to `Utilization`, this is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods
  - if `type` is set to `AverageValue`, this is the target value of the average of the metric across all relevant pods (as a quantity)
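
As an illustration, with `AverageValue` the target is a CPU quantity rather than a percentage (a sketch; the `500m` quantity, i.e. half a core per pod on average, is just an example value):

```yaml
triggers:
- type: cpu
  metadata:
    type: AverageValue
    value: "500m"   # target average of 0.5 CPU cores per pod
```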

### Example

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cpu-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-deployment
  triggers:
  - type: cpu
    metadata:
      type: Utilization
      value: "50"
```
49 changes: 49 additions & 0 deletions content/docs/2.0/scalers/memory.md
@@ -0,0 +1,49 @@
+++
title = "Memory"
layout = "scaler"
availability = "v2.0+"
maintainer = "Community"
description = "Scale applications based on memory metrics."
go_file = "cpu_memory_scaler"
+++

> **Notice:**
> - This scaler will never scale to 0, and even when a user defines multiple scaler types (e.g. Kafka + cpu/memory, or Prometheus + cpu/memory), the deployment will never scale to 0.
> - This scaler only applies to ScaledObjects, not to ScaledJobs.

### Trigger Specification

This specification describes the `memory` trigger that scales based on memory metrics.

```yaml
triggers:
- type: memory
  metadata:
    # Required
    type: Value/Utilization/AverageValue
    value: "60"
```

**Parameter list:**

- `type` - Represents whether the metric type is `Utilization`, `Value`, or `AverageValue`. (Required)
- `value` - The target value; its meaning depends on `type`:
  - if `type` is set to `Value`, this is the target value of the metric (as a quantity)
  - if `type` is set to `Utilization`, this is the target value of the average of the resource metric across all relevant pods, represented as a percentage of the requested value of the resource for the pods
  - if `type` is set to `AverageValue`, this is the target value of the average of the metric across all relevant pods (as a quantity)
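
Similarly, with `AverageValue` the target is a memory quantity rather than a percentage (a sketch; the `512Mi` quantity is just an example value):

```yaml
triggers:
- type: memory
  metadata:
    type: AverageValue
    value: "512Mi"   # target average of 512Mi of memory per pod
```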

### Example

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: memory-scaledobject
  namespace: default
spec:
  scaleTargetRef:
    name: my-deployment
  triggers:
  - type: memory
    metadata:
      type: Utilization
      value: "50"
```