remove documentation for container-freezer
The container-freezer is being archived as discussed in
knative-extensions/container-freezer#217,
so this commit is to remove its documentation from the website.

Signed-off-by: Paul S. Schweigert <paul@paulschweigert.com>
psschwei authored and knative-prow-robot committed Mar 31, 2023
1 parent 832ac85 commit 7095730
Showing 4 changed files with 0 additions and 78 deletions.
config/nav.yml: 1 change (0 additions, 1 deletion)
@@ -111,7 +111,6 @@ nav:
- Configuring scale bounds: serving/autoscaling/scale-bounds.md
- Additional autoscaling configuration for Knative Pod Autoscaler: serving/autoscaling/kpa-specific.md
- Autoscale Sample App - Go: serving/autoscaling/autoscale-go/README.md
- Configuring container-freezer: serving/autoscaling/container-freezer.md
# Serving - developer docs
- Developer Topics:
- Services:
docs/serving/autoscaling/README.md: 1 change (0 additions, 1 deletion)
@@ -15,4 +15,3 @@ To use autoscaling for your application if it is enabled on your cluster, you mu
* Try out the [Go Autoscale Sample App](autoscale-go/README.md).
* Configure your Knative deployment to use the Kubernetes Horizontal Pod Autoscaler (HPA) instead of the default KPA. For how to install HPA, see [Install optional Serving extensions](../../install/yaml-install/serving/install-serving-with-yaml.md#install-optional-serving-extensions).
* Configure the [types of metrics](autoscaling-metrics.md) that the Autoscaler consumes.
* Configure your Knative Service to use [container-freezer](container-freezer.md), which freezes the running process when the pod's traffic drops to zero. The main benefit of this configuration is a shorter cold-start time.
docs/serving/autoscaling/container-freezer.md: 54 changes (0 additions, 54 deletions)

This file was deleted.

docs/serving/configuration/deployment.md: 22 changes (0 additions, 22 deletions)
@@ -109,25 +109,3 @@ data:
# List of repositories for which tag to digest resolving should be skipped
registries-skipping-tag-resolving: registry.example.com
```

## Enable container-freezer service

You can configure the queue-proxy to pause pods when they are not in use by enabling the `container-freezer` service. When a pod's traffic drops to zero or scales up from zero, the queue-proxy calls a stand-alone service at a user-specified endpoint. To enable this behavior, set `concurrency-state-endpoint` to a non-empty value. This configuration supports features such as freezing running processes in pods or billing based on the time spent processing requests.

Before you configure this, you need to implement the endpoint API. The official implementation is container-freezer. You can install it by following the installation instructions in the [container-freezer README](https://github.com/knative-sandbox/container-freezer).

The following example shows how to enable the container-freezer service. When using `$HOST_IP`, the container-freezer service inserts the appropriate value for each node at runtime:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-deployment
  namespace: knative-serving
  labels:
    serving.knative.dev/release: devel
  annotations:
    knative.dev/example-checksum: "fa67b403"
data:
  concurrency-state-endpoint: "http://$HOST_IP:9696"
```
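
If you implement the endpoint API yourself rather than installing container-freezer, a minimal receiver might look like the following Go sketch. It assumes the queue-proxy sends an HTTP POST with a JSON body containing an `action` field set to `pause` or `resume`; that payload shape and field name are assumptions for illustration only, and the container-freezer repository defines the actual protocol.

```go
// Minimal sketch of a concurrency-state endpoint receiver.
// Assumption: the queue-proxy POSTs JSON like {"action": "pause"} or
// {"action": "resume"}; consult the container-freezer repository for the
// real request format and authentication details.
package main

import (
	"encoding/json"
	"log"
	"net/http"
)

type stateEvent struct {
	Action string `json:"action"` // assumed field name, e.g. "pause" or "resume"
}

func handleStateChange(w http.ResponseWriter, r *http.Request) {
	var ev stateEvent
	if err := json.NewDecoder(r.Body).Decode(&ev); err != nil {
		http.Error(w, "could not decode request body", http.StatusBadRequest)
		return
	}
	// A real implementation would freeze or thaw the pod's containers here,
	// for example by talking to the container runtime; this sketch only logs.
	log.Printf("concurrency state change requested: %s", ev.Action)
	w.WriteHeader(http.StatusOK)
}

func main() {
	http.HandleFunc("/", handleStateChange)
	// Listen on the port referenced by concurrency-state-endpoint above.
	log.Fatal(http.ListenAndServe(":9696", nil))
}
```

A production implementation would also need to authenticate requests and act on per-pod state; the sketch above only illustrates the request handling.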
