Release: Prepare v2.14 #1382

Merged · 5 commits · Apr 25, 2024
4 changes: 0 additions & 4 deletions .htmltest.yml
@@ -4,7 +4,3 @@ CheckExternal: false
IgnoreAltMissing: true
IgnoreEmptyHref: true
IgnoreInternalURLs:
- /docs/2.12/authentication-providers/aws/
- /docs/2.12/authentication-providers/aws-secret-manager/
- /docs/2.12/authentication-providers/configmap/
- /docs/2.12/authentication-providers/gcp-secret-manager/
2 changes: 1 addition & 1 deletion README.md
@@ -171,7 +171,7 @@ Remember to create the folder for next version with already existing docs in
current version.

Make sure that the version on `content/docs/{next-version}/deploy.md` is updated
-and uses the next version, instead of the current one.
+and uses the next version, instead of the current one. Ensure that the Kubernetes cluster version is updated as well.

Ensure that compatibility matrix on `content/docs/{next-version}/operate/cluster.md` is updated with the compatibilities for the incoming version.

2 changes: 1 addition & 1 deletion config.toml
@@ -29,7 +29,7 @@ alpine_js_version = "2.2.1"
favicon = "favicon.png"

[params.versions]
-docs = ["2.13", "2.12", "2.11", "2.10", "2.9", "2.8", "2.7", "2.6", "2.5", "2.4", "2.3", "2.2", "2.1", "2.0", "1.5", "1.4"]
+docs = ["2.14", "2.13", "2.12", "2.11", "2.10", "2.9", "2.8", "2.7", "2.6", "2.5", "2.4", "2.3", "2.2", "2.1", "2.0", "1.5", "1.4"]

# Site fonts. For more options see https://fonts.google.com.
[[params.fonts]]
2 changes: 1 addition & 1 deletion content/docs/2.10/concepts/scaling-deployments.md
@@ -228,7 +228,7 @@ Trigger fields:

### Caching Metrics (Experimental)

-This feature enables caching of metric values during polling interval (as specified in `.spec.pollingInterval`). Kubernetes (HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s), then is this request routed to KEDA Metrics Server, that by default queries the scaler and reads the metric values. Enabling this feature changes this behavior, KEDA Metrics Server tries to read metric from the cache first. This cache is being updated periodically during the polling interval.
+This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s); this request is then routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes this behavior: the KEDA Metrics Server tries to read the metric from the cache first. The cache is updated periodically during the polling interval.

Enabling this feature can significantly reduce the load on the scaler service.

2 changes: 1 addition & 1 deletion content/docs/2.11/concepts/scaling-deployments.md
@@ -231,7 +231,7 @@ Trigger fields:

### Caching Metrics

-This feature enables caching of metric values during polling interval (as specified in `.spec.pollingInterval`). Kubernetes (HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s), then is this request routed to KEDA Metrics Server, that by default queries the scaler and reads the metric values. Enabling this feature changes this behavior, KEDA Metrics Server tries to read metric from the cache first. This cache is being updated periodically during the polling interval.
+This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s); this request is then routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes this behavior: the KEDA Metrics Server tries to read the metric from the cache first. The cache is updated periodically during the polling interval.

Enabling this feature can significantly reduce the load on the scaler service.

2 changes: 1 addition & 1 deletion content/docs/2.12/concepts/scaling-deployments.md
@@ -265,7 +265,7 @@ Trigger fields:

### Caching Metrics

-This feature enables caching of metric values during polling interval (as specified in `.spec.pollingInterval`). Kubernetes (HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s), then is this request routed to KEDA Metrics Server, that by default queries the scaler and reads the metric values. Enabling this feature changes this behavior, KEDA Metrics Server tries to read metric from the cache first. This cache is being updated periodically during the polling interval.
+This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s); this request is then routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes this behavior: the KEDA Metrics Server tries to read the metric from the cache first. The cache is updated periodically during the polling interval.

Enabling this feature can significantly reduce the load on the scaler service.

2 changes: 1 addition & 1 deletion content/docs/2.13/concepts/scaling-deployments.md
@@ -265,7 +265,7 @@ Trigger fields:

### Caching Metrics

-This feature enables caching of metric values during polling interval (as specified in `.spec.pollingInterval`). Kubernetes (HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s), then is this request routed to KEDA Metrics Server, that by default queries the scaler and reads the metric values. Enabling this feature changes this behavior, KEDA Metrics Server tries to read metric from the cache first. This cache is being updated periodically during the polling interval.
+This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s); this request is then routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes this behavior: the KEDA Metrics Server tries to read the metric from the cache first. The cache is updated periodically during the polling interval.

Enabling this feature can significantly reduce the load on the scaler service.

2 changes: 0 additions & 2 deletions content/docs/2.13/scalers/prometheus.md
@@ -389,7 +389,6 @@ spec:
- type: prometheus
metadata:
serverAddress: http://<prometheus-host>:9090
-metricName: http_requests_total
threshold: '100'
query: sum(rate(http_requests_total{deployment="my-deployment"}[2m]))
authModes: "custom"
@@ -424,7 +423,6 @@ spec:
- type: prometheus
metadata:
serverAddress: https://test-azure-monitor-workspace-name-9ksc.eastus.prometheus.monitor.azure.com
-metricName: http_requests_total
query: sum(rate(http_requests_total{deployment="my-deployment"}[2m])) # Note: query must return a vector/scalar single element response
threshold: '100.50'
activationThreshold: '5.5'
@@ -0,0 +1,81 @@
+++
title = "GCP Secret Manager"
+++

You can pull secrets from GCP Secret Manager into the trigger by using the `gcpSecretManager` key.

The `secrets` list defines the mapping between the secret and the authentication parameter.

GCP IAM Service Account credentials can be used to authenticate with the Secret Manager service; they can be provided using a Kubernetes secret. Alternatively, the `gcp` pod identity provider is also supported for GCP Secret Manager, using `podIdentity` inside `gcpSecretManager`.

```yaml
gcpSecretManager: # Optional.
secrets: # Required.
- parameter: {param-name-used-for-auth} # Required.
id: {secret-manager-secret-name} # Required.
version: {secret-manager-secret-version} # Optional.
podIdentity: # Optional.
provider: gcp # Required.
credentials: # Optional.
clientSecret: # Required.
valueFrom: # Required.
secretKeyRef: # Required.
name: {k8s-secret-with-gcp-iam-sa-secret} # Required.
key: {key-within-the-secret} # Required.
```

### Steps to create the IAM Service Account Kubernetes secret
- Create a new GCP IAM service account. If you want to use an existing service account, you can skip this step.

```shell
gcloud iam service-accounts create GSA_NAME \
--project=GSA_PROJECT
```

Replace the following:

GSA_NAME: the name of the new IAM service account.\
GSA_PROJECT: the project ID of the Google Cloud project for your IAM service account.

- Ensure that your IAM service account has [roles](https://cloud.google.com/iam/docs/understanding-roles) which provide sufficient [permissions](https://cloud.google.com/iam/docs/permissions-reference) needed to retrieve the secrets, such as the [Secret Manager Secret Accessor](https://cloud.google.com/secret-manager/docs/access-control#secretmanager.secretAccessor). You can grant additional roles using the following command:

```shell
gcloud projects add-iam-policy-binding PROJECT_ID \
--member "serviceAccount:GSA_NAME@GSA_PROJECT.iam.gserviceaccount.com" \
--role "ROLE_NAME"
```

Replace the following:

PROJECT_ID: your Google Cloud project ID. \
GSA_NAME: the name of your IAM service account. \
GSA_PROJECT: the project ID of the Google Cloud project of your IAM service account. \
ROLE_NAME: the IAM role to assign to your service account, such as `roles/secretmanager.secretAccessor`.

- Either set up [GCP workload identity](./gcp-workload-identity) or create a JSON key credential for authenticating with the service account:

```shell
gcloud iam service-accounts keys create KEY_FILE \
--iam-account=GSA_NAME@PROJECT_ID.iam.gserviceaccount.com
```

Replace the following:

KEY_FILE: the file path to a new output file for the private key in your local machine. \
GSA_NAME: the name of your IAM service account. \
PROJECT_ID: your Google Cloud project ID.

- Create a Kubernetes secret for storing the SA key file in the same namespace where you will create the `TriggerAuthentication` resource:

```shell
kubectl create secret generic NAME --from-file=KEY=KEY_FILE -n NAMESPACE
```

Replace the following:

NAME: name of the Kubernetes secret resource. \
KEY: Kubernetes secret key for the SA. \
KEY_FILE: the file path to the SA in your local machine. \
NAMESPACE: the namespace in which the `TriggerAuthentication` resource will be created.

Now you can create the `TriggerAuthentication` resource, which references the secret name and key for the SA.
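
As a sketch of that final step, a `TriggerAuthentication` tying the pieces together (all resource names, namespaces, and keys below are hypothetical placeholders):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: gcp-secret-manager-auth     # hypothetical name
  namespace: my-namespace           # same namespace as the Kubernetes secret created above
spec:
  gcpSecretManager:
    secrets:
      - parameter: connectionString # auth parameter expected by the scaler
        id: my-gcp-secret           # Secret Manager secret name
    credentials:
      clientSecret:
        valueFrom:
          secretKeyRef:
            name: gcp-iam-sa-secret # Kubernetes secret holding the SA key file
            key: KEY                # key used in `kubectl create secret --from-file=KEY=...`
```

A `ScaledObject` trigger can then reference this resource via `authenticationRef.name`.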
2 changes: 1 addition & 1 deletion content/docs/2.14/concepts/scaling-deployments.md
@@ -276,7 +276,7 @@ Trigger fields:

### Caching Metrics

-This feature enables caching of metric values during polling interval (as specified in `.spec.pollingInterval`). Kubernetes (HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s), then is this request routed to KEDA Metrics Server, that by default queries the scaler and reads the metric values. Enabling this feature changes this behavior, KEDA Metrics Server tries to read metric from the cache first. This cache is being updated periodically during the polling interval.
+This feature enables caching of metric values during the polling interval (as specified in `.spec.pollingInterval`). Kubernetes (the HPA controller) asks for a metric every few seconds (as defined by `--horizontal-pod-autoscaler-sync-period`, usually 15s); this request is then routed to the KEDA Metrics Server, which by default queries the scaler and reads the metric values. Enabling this feature changes this behavior: the KEDA Metrics Server tries to read the metric from the cache first. The cache is updated periodically during the polling interval.

Enabling this feature can significantly reduce the load on the scaler service.
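
For illustration, a minimal `ScaledObject` sketch with caching enabled through the per-trigger `useCachedMetrics` field (the resource names, server address, and trigger values here are hypothetical):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: my-scaledobject            # hypothetical name
spec:
  scaleTargetRef:
    name: my-deployment            # hypothetical target
  pollingInterval: 30              # cache is refreshed on this cadence
  triggers:
    - type: prometheus
      useCachedMetrics: true       # serve HPA metric requests from the cache
      metadata:
        serverAddress: http://prometheus.monitoring:9090
        query: sum(rate(http_requests_total{deployment="my-deployment"}[2m]))
        threshold: "100"
```

With `useCachedMetrics: true`, HPA metric requests between polls are answered from the cached value rather than by querying Prometheus each time.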

2 changes: 1 addition & 1 deletion content/docs/2.14/deploy.md
@@ -8,7 +8,7 @@ We provide a few approaches to deploy KEDA runtime in your Kubernetes clusters:
- [Operator Hub](#operatorhub)
- [YAML declarations](#yaml)

-> 💡 **NOTE:** KEDA requires Kubernetes cluster version 1.24 and higher
+> 💡 **NOTE:** KEDA requires Kubernetes cluster version 1.27 and higher

Don't see what you need? Feel free to [create an issue](https://github.com/kedacore/keda/issues/new) on our GitHub repo.

2 changes: 1 addition & 1 deletion content/docs/2.14/operate/cluster.md
@@ -16,7 +16,7 @@ As a reference, this compatibility matrix shows supported k8s versions per KEDA

| KEDA | Kubernetes |
|-----------|---------------|
-| v2.14 | v1.28 - v1.30 |
+| v2.14 | v1.27 - v1.29 |
| v2.13 | v1.27 - v1.29 |
| v2.12 | v1.26 - v1.28 |
| v2.11 | v1.25 - v1.27 |
2 changes: 0 additions & 2 deletions content/docs/2.14/scalers/prometheus.md
@@ -389,7 +389,6 @@ spec:
- type: prometheus
metadata:
serverAddress: http://<prometheus-host>:9090
-metricName: http_requests_total
threshold: '100'
query: sum(rate(http_requests_total{deployment="my-deployment"}[2m]))
authModes: "custom"
@@ -424,7 +423,6 @@ spec:
- type: prometheus
metadata:
serverAddress: https://test-azure-monitor-workspace-name-9ksc.eastus.prometheus.monitor.azure.com
-metricName: http_requests_total
query: sum(rate(http_requests_total{deployment="my-deployment"}[2m])) # Note: query must return a vector/scalar single element response
threshold: '100.50'
activationThreshold: '5.5'
8 changes: 8 additions & 0 deletions content/docs/2.15/_index.md
@@ -0,0 +1,8 @@
+++
title = "The KEDA Documentation"
weight = 1
+++

Welcome to the documentation for **KEDA**, the Kubernetes Event-driven Autoscaler. Use the navigation to the left to learn more about how to use KEDA and its components.

Additions and contributions to these docs are managed on [the keda-docs GitHub repo](https://github.com/kedacore/keda-docs).
8 changes: 8 additions & 0 deletions content/docs/2.15/authentication-providers/_index.md
@@ -0,0 +1,8 @@
+++
title = "Authentication Providers"
weight = 5
+++

Available authentication providers for KEDA:

{{< authentication-providers >}}
14 changes: 14 additions & 0 deletions content/docs/2.15/authentication-providers/aws-eks.md
@@ -0,0 +1,14 @@
+++
title = "AWS EKS Pod Identity Webhook"
+++

[**EKS Pod Identity Webhook**](https://github.com/aws/amazon-eks-pod-identity-webhook), which is described more in depth [here](https://aws.amazon.com/blogs/opensource/introducing-fine-grained-iam-roles-service-accounts/), allows you to provide the role name using an annotation on a service account associated with your pod.

> ⚠️ **WARNING:** [`aws-eks` auth has been deprecated](https://github.com/kedacore/keda/discussions/5343) and support for it will be removed from KEDA in v3. We strongly encourage the migration to [`aws` auth](./aws.md).

You can tell KEDA to use EKS Pod Identity Webhook via `podIdentity.provider`.

```yaml
podIdentity:
provider: aws-eks # Optional. Default: none
```
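
For illustration, a minimal `TriggerAuthentication` sketch using this provider (the resource name is a hypothetical placeholder):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-aws-eks-auth  # hypothetical name
spec:
  podIdentity:
    provider: aws-eks      # role is resolved from the annotated service account
```

The IAM role itself comes from the `eks.amazonaws.com/role-arn` annotation on the service account associated with the scaled workload's pods.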
14 changes: 14 additions & 0 deletions content/docs/2.15/authentication-providers/aws-kiam.md
@@ -0,0 +1,14 @@
+++
title = "AWS Kiam Pod Identity"
+++

[**Kiam**](https://github.com/uswitch/kiam/) lets you bind an AWS IAM Role to a pod using an annotation on the pod.

> ⚠️ **WARNING:** `aws-kiam` auth has been deprecated given [AWS KIAM is no longer maintained](https://github.com/uswitch/kiam/#-%EF%B8%8Fthis-project-is-now-being-abandoned-%EF%B8%8F-). As a result, [support for it will be removed from KEDA in v2.15](https://github.com/kedacore/keda/discussions/5342). We strongly encourage the migration to [`aws` auth](./aws.md).

You can tell KEDA to use Kiam via `podIdentity.provider`.

```yaml
podIdentity:
provider: aws-kiam # Optional. Default: none
```
@@ -0,0 +1,40 @@
+++
title = "AWS Secret Manager"
+++

You can integrate AWS Secret Manager secrets into your trigger by configuring the `awsSecretManager` key in your KEDA scaling specification.

The `podIdentity` section configures the usage of AWS pod identity, with the provider set to `aws`.

The `credentials` section specifies AWS credentials, including the `accessKey` and `accessSecretKey`.

- **accessKey:** Configuration for the AWS access key.
- **accessSecretKey:** Configuration for the AWS secret access key.

The optional `region` parameter specifies the AWS region where the secret resides; if it is not set, the default region is used.

The `secrets` list within `awsSecretManager` defines the mapping between the AWS Secret Manager secret and the authentication parameter used in your application. Each entry includes the parameter name, the AWS Secret Manager secret name, and an optional version, which defaults to the latest version if unspecified.

### Configuration

```yaml
awsSecretManager:
podIdentity: # Optional.
provider: aws # Required.
credentials: # Optional.
accessKey: # Required.
valueFrom: # Required.
secretKeyRef: # Required.
name: {k8s-secret-with-aws-credentials} # Required.
key: {key-in-k8s-secret} # Required.
accessSecretKey: # Required.
valueFrom: # Required.
secretKeyRef: # Required.
name: {k8s-secret-with-aws-credentials} # Required.
key: {key-in-k8s-secret} # Required.
region: {aws-region} # Optional.
secrets: # Required.
- parameter: {param-name-used-for-auth} # Required.
name: {aws-secret-name} # Required.
version: {aws-secret-version} # Optional.
```
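
As a filled-in sketch of the specification above, using pod identity instead of static credentials (the resource name, region, and secret names are hypothetical):

```yaml
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: aws-secret-manager-auth       # hypothetical name
spec:
  awsSecretManager:
    podIdentity:
      provider: aws                   # use AWS pod identity; no static keys needed
    region: us-east-1                 # hypothetical region
    secrets:
      - parameter: connectionString   # auth parameter expected by the scaler
        name: my-app/db-connection    # hypothetical Secret Manager secret name
```

When pod identity is used, the `credentials` block can be omitted entirely; otherwise, supply `accessKey` and `accessSecretKey` from a Kubernetes secret as shown in the configuration above.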