various improvements in wording and phrase #170

Merged · 1 commit · Jan 19, 2025
20 changes: 10 additions & 10 deletions docs/FAQ.mdx
@@ -6,22 +6,22 @@ sidebar_position: 11

### I have a couple of nodes with low utilization, but they are not scaled down. Why?

-Have you set the [cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)?
+Have you set up the [cluster autoscaler](https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler)?

-If cluster autoscaler is set up, it should be correctly configured to scale down the nodes.
+If the cluster autoscaler is set up, verify that it is correctly configured to scale down the nodes.
To see the possible issues, check the [cluster autoscaler documentation](https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#i-have-a-couple-of-nodes-with-low-utilization-but-they-are-not-scaled-down-why).

-### I want avoid to deploy cert-manager. What are the alternatives?
+### I want to avoid deploying cert-manager. What are the alternatives?

You can find alternatives to cert-manager installation in the [cert-manager alternatives](/docs/advanced/webhook-cert-management#without-cert-manager) section.

-### How many CO2 is produced by pod?
+### How much CO2 is produced by a pod?

-This calculations are based on the following assumptions:
+These calculations are based on the following assumptions:

-- Emissions for cloud server using 100% green electricity: **160 Kg CO2eq / year and server** (from [goclimate.com](https://www.goclimate.com/blog/the-carbon-footprint-of-servers/#:~:text=Cloud%20server%20using%20100%%20green%20electricity:%20160%20kg%20CO2e%20/%20year%20and%20server)
+- Emissions for a cloud server using 100% green electricity: **160 Kg CO2eq / year and server** (from [goclimate.com](https://www.goclimate.com/blog/the-carbon-footprint-of-servers/#:~:text=Cloud%20server%20using%20100%%20green%20electricity:%20160%20kg%20CO2e%20/%20year%20and%20server)
)
-- Cluster node of 2 cpu. We approximate **1 node is 1 server**
+- Cluster node of 2 CPUs. We approximate **1 node is 1 server**
- **15 pods per node**

With these assumptions, the mean CO2 emission per pod in a year is 160 / 15 ≈ **11 Kg CO2eq / year per pod**.
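As a side note, the arithmetic above can be sketched as a quick shell check (the numbers are the stated assumptions, not measurements):

```bash
# Estimate from the assumptions above: 160 kg CO2eq per server-year,
# 1 node ~ 1 server, 15 pods per node.
KG_PER_SERVER_YEAR=160
PODS_PER_NODE=15
awk -v kg="$KG_PER_SERVER_YEAR" -v pods="$PODS_PER_NODE" \
  'BEGIN { printf "%.1f kg CO2eq / year per pod\n", kg / pods }'
```

This prints `10.7 kg CO2eq / year per pod`, which rounds to the ~11 Kg figure used above.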
@@ -32,15 +32,15 @@ import ConsumptionCalculator from '../src/components/ConsumptionCalculator'

### What resources are supported?

-*kube-green* add default support to `Deployments`, `StatefulSets`, and `CronJobs`,
-but it is possible to add support for other resources using the patches feature. More information are available
+*kube-green* adds default support to `Deployments`, `StatefulSets`, and `CronJobs`,
+but it is possible to add support for other resources using the patches feature. More information is available
in the [configuration](configuration.md) section.

### How can I contribute to the project?

You can contribute to the project in many ways.

-If you are using *kube-green*, you can list as [adopter](./adopters.md).
+If you are using *kube-green*, you can list yourself as an [adopter](./adopters.md).

If you have some feedback, open an issue or a discussion in the [GitHub repository](https://github.com/kube-green/kube-green).

25 changes: 12 additions & 13 deletions docs/advanced/webhook-cert-management.md
@@ -18,16 +18,16 @@ If you want to avoid deploying `cert-manager`, you can use the following alterna
### Manual management of certificates

To manually manage the certificates, you need to create a K8s secret of type `kubernetes.io/tls` with `tls.crt` and `tls.key` keys.
-The certificate in this secret must be signed by a CA and valid for the DNS name:
+The certificate in this secret must be signed by a CA and valid for the DNS names:

- SERVICE_NAME
- SERVICE_NAME.NAMESPACE
- SERVICE_NAME.NAMESPACE.svc
- SERVICE_NAME.NAMESPACE.svc.cluster.local

-where `SERVICE_NAME` is the name of the service which exposes the webhook (`kube-green-webhook-service` by default) and `NAMESPACE` is the namespace where the service is deployed.
+where `SERVICE_NAME` is the name of the service that exposes the webhook (`kube-green-webhook-service` by default) and `NAMESPACE` is the namespace where the service is deployed.
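As a sketch of what those names look like in practice (the service name and namespace below are the defaults mentioned above; paths and validity are illustrative only), you can generate a throwaway certificate carrying the four SANs and confirm they are present:

```bash
# Throwaway self-signed certificate with the four required DNS SANs.
# Requires OpenSSL >= 1.1.1 for -addext; names and paths are examples.
SERVICE_NAME=kube-green-webhook-service
NAMESPACE=kube-green
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/tls.key -out /tmp/tls.crt \
  -subj "/CN=${SERVICE_NAME}.${NAMESPACE}.svc" \
  -addext "subjectAltName=DNS:${SERVICE_NAME},DNS:${SERVICE_NAME}.${NAMESPACE},DNS:${SERVICE_NAME}.${NAMESPACE}.svc,DNS:${SERVICE_NAME}.${NAMESPACE}.svc.cluster.local"
# List the SANs the certificate actually carries:
openssl x509 -in /tmp/tls.crt -noout -text | grep 'DNS:'
```

This mirrors the step-by-step generation described later; here it only illustrates the SAN requirement.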

-Once created the secret, it must be mounted in the `kube-green` deployment as volume. If the secret is called `webhook-server-cert`, the volumes configuration should be the following:
+Once the secret is created, it must be mounted in the `kube-green` deployment as a volume. If the secret is called `webhook-server-cert`, the volumes configuration should be the following:

```yaml
volumes:
```
@@ -45,9 +45,9 @@
```yaml
volumeMounts:
mountPath: /tmp/k8s-webhook-server/serving-certs
```

-The CA which sign the certificate must be set as caBundle of clientConfig in the webhook configuration.
+The CA that signs the certificate must be set as the `caBundle` of the `clientConfig` in the webhook configuration.

-If you are using the `kustomize` configuration in the [kube-green repository](https://github.com/kube-green/kube-green/tree/main/config), you can comment all the parts below the `[CERT-MANAGER]` comment and write a kustomization to insert the caBundle correctly.
+If you are using the `kustomize` configuration in the [kube-green repository](https://github.com/kube-green/kube-green/tree/main/config), you can comment out all the parts below the `[CERT-MANAGER]` comment and write a kustomization to insert the caBundle correctly.

Example of the webhook configuration to patch, with `<CA_BUNDLE>` as the base64 of the `ca.crt` file:

@@ -58,13 +58,12 @@ webhooks:
```yaml
      caBundle: <CA_BUNDLE>
```

-Each time the certificate will expire, you will need to update the secret with a new certificate.
+Each time the certificate expires, you will need to update the secret with a new certificate.

<details>
<summary><i>Generate Self-Signed Certificates step by step</i></summary>

-To generate self-signed certificates, it is possible to use the following commands (take this as an example):
+To generate self-signed certificates, you can use the following commands (take this as an example):

Write a file named `openssl.conf` with the following OpenSSL configuration:

@@ -108,20 +107,20 @@
```bash
openssl req -new -key tls.key -out tls.csr -config openssl.conf
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out tls.crt -days 365 -extfile openssl.conf -extensions req_ext
```

-After the creation of the certificates, you can create the secret with the following command:
+After creating the certificates, you can create the secret with the following command:

```bash
kubectl create secret tls webhook-server-cert --cert=./tls.crt --key=./tls.key
```

-Once generated, you can create the `kube-green` manifests (commenting the `[CERT-MANAGER]` part), create the base64 of the `ca.crt` file and patch the webhook configuration with the new caBundle.
+Once generated, you can create the `kube-green` manifests (commenting out the `[CERT-MANAGER]` part), create the base64 of the `ca.crt` file, and patch the webhook configuration with the new caBundle.
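A sketch of how the `caBundle` value can be produced (a throwaway CA is generated here so the commands are self-contained; with the real steps above you would reuse your `ca.crt`, and the webhook configuration name is an assumption to check against your manifests):

```bash
# Self-contained example: create a throwaway CA, then base64-encode it
# into the caBundle value expected by the webhook configuration.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/ca.key -out /tmp/ca.crt -subj "/CN=example-ca"
CA_BUNDLE=$(base64 < /tmp/ca.crt | tr -d '\n')
echo "caBundle length: ${#CA_BUNDLE}"
# The value could then be patched in with something like (name assumed):
# kubectl patch validatingwebhookconfiguration <webhook-name> --type=json \
#   -p "[{\"op\": \"replace\", \"path\": \"/webhooks/0/clientConfig/caBundle\", \"value\": \"${CA_BUNDLE}\"}]"
```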

</details>

### Automated Management of Webhook Certificates

-It is possible to manage the certificates using some tools which automate the process described above.
+It is possible to manage the certificates using some tools that automate the process described above.

-One tool that can be used is [kube-webhook-certgen](https://github.com/kubernetes/ingress-nginx/tree/main/images/kube-webhook-certgen). It is possible to view a configuration of this tool in the [kube-green helm chart](https://github.com/kube-green/kube-green/tree/main/charts). In this case, there are some jobs which create the certificate if it does not exist and patch the webhook manifest at runtime.
+One tool that can be used is [kube-webhook-certgen](https://github.com/kubernetes/ingress-nginx/tree/main/images/kube-webhook-certgen). You can view a configuration of this tool in the [kube-green helm chart](https://github.com/kube-green/kube-green/tree/main/charts). In this case, there are some jobs that create the certificate if it does not exist and patch the webhook manifest at runtime.

-It is possible to enable it with setting the `jobsCert.enabled` to `true` in the `values.yaml` file of the chart and setting `certManager.enabled` to false.
+You can enable it by setting `jobsCert.enabled` to `true` in the `values.yaml` file of the chart and `certManager.enabled` to `false`.
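For reference, in `values.yaml` that combination would look something like this (key names as described above; verify them against the chart's own `values.yaml`):

```yaml
# Sketch of chart values; check the kube-green helm chart for exact keys.
jobsCert:
  enabled: true
certManager:
  enabled: false
```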
30 changes: 15 additions & 15 deletions docs/configuration.md
@@ -7,28 +7,28 @@ sidebar_position: 4
In the namespace where you want to enable *kube-green*, apply the `SleepInfo` resource.
An example of `SleepInfo` is accessible [at this link](https://github.com/kube-green/kube-green/blob/main/testdata/working-hours.yml).

-By default, the default managed resources are
+By default, the managed resources are:

-* `Deployments`: enabled by default, it could be disabled with the `suspendDeployments` field;
-* `StatefulSets`: enabled by default, it could be disabled with the `suspendStatefulSets` field;
-* `CronJobs`: disabled by default, it could be enabled with the `suspendCronJobs` field.
+* `Deployments`: enabled by default, it can be disabled with the `suspendDeployments` field;
+* `StatefulSets`: enabled by default, it can be disabled with the `suspendStatefulSets` field;
+* `CronJobs`: disabled by default, it can be enabled with the `suspendCronJobs` field.

-You can manage also other resources adding [custom patches](#patches).
+You can manage other resources by adding [custom patches](#patches).

Check the [API reference](apireference_v1alpha1.md) for the SleepInfo CRD to understand each field.

## Patches

Patches are used to define how to change the resources so that the runtime will "sleep". The patches are applied to the resources at sleep time and reverted at wake-up time.

-In this way, it is possible to support all the Kubernetes resources, also the ones defined through the custom resource definitions.
-To let *kube-green* support a custom resource, you need to define the specific `patch` for the resource inside the `SleepInfo` (the API reference is available [here](apireference_v1alpha1.md)) and add the permission to the ClusterRole associated to the *kube-green* manager ([here how to configure the RBAC](./installation/rbac.md)), if not already set.
+In this way, it is possible to support all the Kubernetes resources, including those defined through custom resource definitions.
+To let *kube-green* support a custom resource, you need to define the specific `patch` for the resource inside the `SleepInfo` (the API reference is available [here](apireference_v1alpha1.md)) and add the permission to the ClusterRole associated with the *kube-green* manager ([here is how to configure the RBAC](./installation/rbac.md)), if not already set.

## Examples

### Simple SleepInfo resource

-The follow configuration sets a sleep to 20:00 and wake up to 08:00 from monday to friday (in Rome timezone) for the default managed resources.
+The following configuration sets a sleep time to 20:00 and wake up time to 08:00 from Monday to Friday (in Rome timezone) for the default managed resources.

```yaml
apiVersion: kube-green.com/v1alpha1
```
@@ -44,7 +44,7 @@ spec:

### Exclude resources

-The follow configuration sets a sleep to 20:00 and wake up to 08:00 from monday to friday (in Rome timezone), for the default managed resources and the `CronJobs`, excluding the `Deployment` named `api-gateway`.
+The following configuration sets a sleep time to 20:00 and wake up time to 08:00 from Monday to Friday (in Rome timezone), for the default managed resources and the `CronJobs`, excluding the `Deployment` named `api-gateway`.

```yaml
apiVersion: kube-green.com/v1alpha1
```
@@ -65,7 +65,7 @@ spec:

### Sleep without wake up

-The follow configuration sets a sleep to 20:00 from monday to friday (in Rome timezone) for the default managed resources and the `CronJobs`. The wake up is not set, so the resources will be suspended until them will be manually changed (for example, through a redeployment).
+The following configuration sets a sleep time to 20:00 from Monday to Friday (in Rome timezone) for the default managed resources and the `CronJobs`. The wake up time is not set, so the resources will be suspended until they are manually changed (for example, through a redeployment).

```yaml
apiVersion: kube-green.com/v1alpha1
```
@@ -81,7 +81,7 @@ spec:

### Suspend only CronJobs

-The follow configuration sets a sleep to 20:00 and wake up to 08:00 on each day of the week (in Rome timezone), only for `CronJobs`, excluding the specific `CronJob` named `do-not-suspend`.
+The following configuration sets a sleep time to 20:00 and wake up time to 08:00 on each day of the week (in Rome timezone), only for `CronJobs`, excluding the specific `CronJob` named `do-not-suspend`.

```yaml
apiVersion: kube-green.com/v1alpha1
```
@@ -104,7 +104,7 @@ spec:

### Exclude with labels

-The follow configuration sets a sleep to 20:00 and wake up to 08:00 on each day of the week (in Rome timezone), for the default managed resources, excluding the resources with the label `kube-green.dev/exclude: true`.
+The following configuration sets a sleep time to 20:00 and wake up time to 08:00 on each day of the week (in Rome timezone), for the default managed resources, excluding the resources with the label `kube-green.dev/exclude: true`.

```yaml
apiVersion: kube-green.com/v1alpha1
```
@@ -123,7 +123,7 @@ spec:

### Include with labels

-The follow configuration sets a sleep to 20:00 and wake up to 08:00 on each day of the week (in Rome timezone), for the default and `CronJobs` resources with the label `kube-green.dev/include: true`.
+The following configuration sets a sleep time to 20:00 and wake up time to 08:00 on each day of the week (in Rome timezone), for the default and `CronJobs` resources with the label `kube-green.dev/include: true`.

```yaml
apiVersion: kube-green.com/v1alpha1
```
@@ -143,9 +143,9 @@ spec:

### Custom patches

-The follow configuration sets a sleep to 20:00 and wake up to 08:00 from monday to friday (in Rome timezone) for the default managed resources, the `CronJobs` and add the support to the not managed resource `ReplicaSets`.
+The following configuration sets a sleep time to 20:00 and wake up time to 08:00 from Monday to Friday (in Rome timezone) for the default managed resources and the `CronJobs`, and adds support for `ReplicaSets`, which is not managed by default.

-This is only an example on how to add custom patches to the resources. The patch in this example sets the `replicas` field to `0`. In this way, it is possible to support also some custom resources.
+This is only an example of how to add custom patches to resources. The patch in this example sets the `replicas` field to `0`. In the same way, it is possible to support custom resources.

```yaml
apiVersion: kube-green.com/v1alpha1
```
14 changes: 7 additions & 7 deletions docs/getting-started.mdx
@@ -10,8 +10,8 @@ How many of your dev/preview pods stay on during weekends? Or at night? It's a w

*kube-green* is a simple k8s addon that automatically shuts down (some of) your resources when you don't need them.

-How many CO2 produces yearly a pod?
-By our assumption, it's about 11 Kg CO2eq per year per pod ([here](./FAQ.mdx#how-many-co2-is-produced-by-pod) the calculation).
+How much CO2 does a pod produce yearly?
+By our assumption, it's about 11 Kg CO2eq per year per pod ([here](./FAQ.mdx#how-many-co2-is-produced-by-pod) is the calculation).

Use this tool to calculate it:

@@ -24,19 +24,19 @@ Keep reading to find out how to use it, and if you have ideas on how to improve

## Tutorials

-Try our tutorials to get started. Are available [here](tutorials/kind.md).
+Try our tutorials to get started. They are available [here](tutorials/kind.md).

## Install

-To start using kube-green, you need to install it in a kubernetes cluster.
+To start using kube-green, you need to install it in a Kubernetes cluster.
[Click here](./installation/index.md) to see how to install.

## Create and deploy SleepInfo

-You can take a look at example configuration [available here](https://github.com/kube-green/kube-green/tree/main/testdata), or create it with the docs [here](configuration.md).
+You can take a look at example configurations [available here](https://github.com/kube-green/kube-green/tree/main/testdata), or create your own with the docs [here](configuration.md).

-And that's it! Now, let *kube-green* to sleep your pods and to save CO2!
+And that's it! Now, let *kube-green* sleep your pods and save CO2!

## Real use cases

-To see the real use case example, check [here](real-usecase/first-usage.md).
+To see real use case examples, check [here](real-usecase/first-usage.md).
10 changes: 5 additions & 5 deletions docs/installation/cloud-provider.md
@@ -6,19 +6,19 @@ sidebar_position: 2

## GKE

-When Google configure the control plane for private cluster, they automatically configure VPC peering between your Kubernetes cluster network and a separate Google managed project.
+When Google configures the control plane for a private cluster, they automatically configure VPC peering between your Kubernetes cluster network and a separate Google-managed project.

-To restrict what Google is able to access in your cluster, the firewall rules configured restrict access to your Kubernetes pods. This means that the webhook won't work, and you see an error like `Internal error occurred: failed calling webhook ...:`
+To restrict what Google is able to access in your cluster, the firewall rules configured restrict access to your Kubernetes pods. This means that the webhook won't work, and you will see an error like `Internal error occurred: failed calling webhook ...:`

-So, to use the webhook component with a GKE private cluster, you need to configure an additional firewall rule to allow the GKE control plane to access to your webhook pod.
+So, to use the webhook component with a GKE private cluster, you need to configure an additional firewall rule to allow the GKE control plane to access your webhook pod.

-*kube-green* uses webhook (exposed on port 9443) to check that SleepInfo CRD is valid. In order to make it works, you must open the 9443 port (or change the exposed port by configuration) otherwise it would not possible to add SleepInfo CRD.
+*kube-green* uses a webhook (exposed on port 9443) to check that the SleepInfo CRD is valid. In order to make it work, you must open port 9443 (or change the exposed port by configuration); otherwise, it will not be possible to add the SleepInfo CRD.

[Here](https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules) you can read more information about how to add firewall rules to GKE.
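For illustration only, such a firewall rule might be created with a command along these lines (every value below is a placeholder for your environment; check the linked GKE docs for your control plane's source range):

```bash
# Placeholder values; adjust network, control-plane CIDR and node tags
# to your cluster before running.
gcloud compute firewall-rules create allow-kube-green-webhook \
  --network <CLUSTER_NETWORK> \
  --direction INGRESS \
  --action ALLOW \
  --rules tcp:9443 \
  --source-ranges <CONTROL_PLANE_CIDR> \
  --target-tags <NODE_TAG>
```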

## AWS EKS

-When using a custom CNI on EKS (such as cilium), the webhook cannot be reached by kube-green. This happens because the control plane cannot be configured to run on a custom CNI, so the CNIs differ between control plane and worker nodes.
+When using a custom CNI on EKS (such as Cilium), the webhook cannot be reached by kube-green. This happens because the control plane cannot be configured to run on a custom CNI, so the CNIs differ between the control plane and worker nodes.

To address this, set `hostNetwork: true` in the deployment of the webhook to run it in the host network.
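A sketch of that change as a patch on the controller deployment (the deployment name here is an assumption; check the manifests of your installation):

```yaml
# Illustrative patch only; metadata.name may differ in your install.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kube-green-controller-manager
spec:
  template:
    spec:
      hostNetwork: true
```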
