fix: versioned docs #419

Merged (8 commits) on Oct 7, 2022
2 changes: 1 addition & 1 deletion Makefile
@@ -252,4 +252,4 @@ version-docs:
-w /docs \
-u $(shell id -u):$(shell id -g) \
node:${NODE_VERSION} \
sh -c "yarn install --frozen-lockfile && yarn run docusaurus docs:version ${NEWVERSION}"
sh -c "yarn install --frozen-lockfile && yarn run docusaurus docs:version ${NEWVERSION}"
2 changes: 1 addition & 1 deletion docs/package.json
@@ -15,7 +15,7 @@
},
"dependencies": {
"@docusaurus/core": "2.1.0",
"@docusaurus/preset-classic": "2.0.1",
"@docusaurus/preset-classic": "2.1.0",
"@mdx-js/react": "^1.6.22",
"clsx": "^1.2.1",
"prism-react-renderer": "^1.3.5",
11 changes: 11 additions & 0 deletions docs/versioned_docs/version-v0.4.x/code-of-conduct.md
@@ -0,0 +1,11 @@
---
title: Code of Conduct
---

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).

Resources:

- [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/)
- [Microsoft Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/)
- Contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with questions or concerns
16 changes: 16 additions & 0 deletions docs/versioned_docs/version-v0.4.x/contributing.md
@@ -0,0 +1,16 @@
---
title: Contributing
---

There are several ways to get involved with Eraser:

- Join the [mailing list](https://groups.google.com/u/1/g/eraser-dev) to get notifications for releases, security announcements, etc.
- Participate in the [biweekly community meetings](https://docs.google.com/document/d/1Sj5u47K3WUGYNPmQHGFpb52auqZb1FxSlWAQnPADhWI/edit) to discuss development, issues, use cases, etc.
- Join the `#eraser` channel on the [Kubernetes Slack](https://slack.k8s.io/)
- View the [development setup instructions](https://azure.github.io/eraser/docs/development)

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
8 changes: 8 additions & 0 deletions docs/versioned_docs/version-v0.4.x/custom-scanner.md
@@ -0,0 +1,8 @@
---
title: Custom Scanner
---

## Creating a Custom Scanner
To create a custom scanner for non-compliant images, provide your scanner image to Eraser at deployment time.

For the custom scanner to communicate with the collector and eraser containers, use `ReadCollectScanPipe()` to get the list of all non-running images to scan from the collector. Then, use `WriteScanErasePipe()` to pass the images your scanner finds non-compliant to the eraser container for removal. Both functions can be found in [util](../../pkg/utils/utils.go).
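
If you deploy with Helm, one way to point Eraser at your scanner is to override the scanner image value referenced elsewhere in these docs. The sketch below is illustrative only: the release name, chart path, and the example image `ghcr.io/example/my-scanner` are assumptions, and your chart version may expose additional values (such as a tag).

```bash
# Hypothetical override of the scanner image (chart path and image name are examples).
helm upgrade --install eraser ./charts/eraser \
  --namespace eraser-system \
  --set scanner.image.repository=ghcr.io/example/my-scanner
```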
11 changes: 11 additions & 0 deletions docs/versioned_docs/version-v0.4.x/customization.md
@@ -0,0 +1,11 @@
---
title: Customization
---

By default, successful jobs are deleted after a period of time. You can change this behavior by setting the following flags on the eraser-controller-manager (a patch sketch follows below):

- `--job-cleanup-on-success-delay`: Duration to delay job deletion after successful runs. 0 means no delay. Defaults to `0`.
- `--job-cleanup-on-error-delay`: Duration to delay job deletion after errored runs. 0 means no delay. Defaults to `24h`.
- `--job-success-ratio`: Ratio of successful/total runs to consider a job successful. 1.0 means all runs must succeed. Defaults to `1.0`.

For duration, valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
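
As a minimal sketch, one of these flags can be added to a running installation with `kubectl patch`. The Deployment name, namespace, and container index below are assumptions; inspect your installation and adjust the paths accordingly.

```bash
# Append --job-cleanup-on-success-delay to the manager container's args.
# Assumes the Deployment is eraser-controller-manager in eraser-system and
# that the manager is the first container in the pod spec.
kubectl -n eraser-system patch deployment eraser-controller-manager \
  --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--job-cleanup-on-success-delay=30m"}]'
```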
30 changes: 30 additions & 0 deletions docs/versioned_docs/version-v0.4.x/exclusion.md
@@ -0,0 +1,30 @@
---
title: Exclusion
---

## Excluding registries, repositories, and images
Eraser can exclude registries (e.g., `docker.io/library/*`) and also specific images with a tag (e.g., `docker.io/library/ubuntu:18.04`) or digest (e.g., `sha256:80f31da1ac7b312ba29d65080fd...`) from its removal process.

To exclude any images or registries from removal, create one or more configmaps labeled `eraser.sh/exclude.list=true` in the eraser-system namespace, each containing a JSON file that lists the excluded images.

```bash
$ cat > sample.json <<EOF
{"excluded": ["docker.io/library/*", "ghcr.io/azure/test:latest"]}
EOF

$ kubectl create configmap excluded --from-file=excluded=sample.json --namespace=eraser-system
$ kubectl label configmap excluded eraser.sh/exclude.list=true -n eraser-system
```

## Exempting Nodes from the Eraser Pipeline
Exempting nodes with `--filter-nodes` was added in v0.3.0. When deploying Eraser, you can specify a list of nodes you would like to `include` in or `exclude` from the cleanup process using the `--filter-nodes` argument.

_See [Eraser Helm Chart](https://github.com/Azure/eraser/blob/main/charts/eraser/README.md) for more information on deployment._

Nodes with the selector `eraser.sh/cleanup.filter` will be filtered accordingly (see the labeling example below).
- If `include` is provided, eraser and collector pods will only be scheduled on nodes with the selector `eraser.sh/cleanup.filter`.
- If `exclude` is provided, eraser and collector pods will be scheduled on all nodes besides those with the selector `eraser.sh/cleanup.filter`.

Unless specified, the default value of `--filter-nodes` is `exclude`. Because Windows nodes are not supported, they will always be excluded regardless of the `eraser.sh/cleanup.filter` label or the value of `--filter-nodes`.

Additional node selectors can be provided through the `--filter-nodes-selector` flag.
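
To place a node under this filter, label it with the selector key. A minimal sketch follows; the label value `true` is an assumption, since these docs only specify the label key.

```bash
# Label a node so it is included in or excluded from the Eraser pipeline,
# depending on the value of --filter-nodes.
kubectl label node kind-worker2 eraser.sh/cleanup.filter=true
```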
5 changes: 5 additions & 0 deletions docs/versioned_docs/version-v0.4.x/faq.md
@@ -0,0 +1,5 @@
---
title: FAQ
---
## Why am I still seeing vulnerable images?
Eraser currently targets **non-running** images, so any vulnerable images that are currently running will not be removed. In addition, the default vulnerability scanning with Trivy only removes images with `CRITICAL` vulnerabilities; images with lower-severity vulnerabilities will not be removed. This can be configured with the `--severity` flag.
15 changes: 15 additions & 0 deletions docs/versioned_docs/version-v0.4.x/installation.md
@@ -0,0 +1,15 @@
---
title: Installation
---

## Manifest

To install Eraser with the manifest file, run the following command:

```bash
kubectl apply -f https://raw.githubusercontent.com/Azure/eraser/v0.4.0/deploy/eraser.yaml
```

## Helm

If you'd like to install and manage Eraser with Helm, follow the install instructions [here](https://github.com/Azure/eraser/blob/main/charts/eraser/README.md).
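
As a sketch, installing from a local clone of the repository might look like the following; the chart path, release name, and namespace are assumptions, and the chart README is the authoritative reference for supported values.

```bash
# Install the chart from a local checkout of the repository (illustrative only).
git clone https://github.com/Azure/eraser.git
cd eraser
helm install eraser ./charts/eraser \
  --namespace eraser-system \
  --create-namespace
```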
10 changes: 10 additions & 0 deletions docs/versioned_docs/version-v0.4.x/introduction.md
@@ -0,0 +1,10 @@
---
title: Introduction
slug: /
---

# Introduction

When deploying to Kubernetes, it's common for pipelines to build and push images to a cluster, but it's much less common for these images to be cleaned up. This can lead to accumulating bloat on the disk, and a host of non-compliant images lingering on the nodes.

The current garbage collection process deletes images based on disk usage thresholds, but it does not consider the vulnerability state of the images. **Eraser** aims to provide a simple way to determine the state of an image, and delete it if it meets the specified criteria.
59 changes: 59 additions & 0 deletions docs/versioned_docs/version-v0.4.x/manual-removal.md
@@ -0,0 +1,59 @@
---
title: Manual Removal
---

Create an `ImageList` and specify the images you would like to remove. In this case, the image `docker.io/library/alpine:3.7.3` will be removed.

```shell
cat <<EOF | kubectl apply -f -
apiVersion: eraser.sh/v1alpha1
kind: ImageList
metadata:
  name: imagelist
spec:
  images:
    - docker.io/library/alpine:3.7.3 # use "*" for all non-running images
EOF
```

> `ImageList` is a cluster-scoped resource and must be named `imagelist`. `"*"` can be specified to remove all non-running images instead of individual images.
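
For example, a wildcard `ImageList` targeting every non-running image would look like this:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: eraser.sh/v1alpha1
kind: ImageList
metadata:
  name: imagelist
spec:
  images:
    - "*"
EOF
```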

Creating an `ImageList` should trigger an `ImageJob` that will deploy Eraser pods on every node to perform the removal given the list of images.

```shell
$ kubectl get pods -n eraser-system
eraser-system eraser-controller-manager-55d54c4fb6-dcglq 1/1 Running 0 9m8s
eraser-system eraser-kind-control-plane 1/1 Running 0 11s
eraser-system eraser-kind-worker 1/1 Running 0 11s
eraser-system eraser-kind-worker2 1/1 Running 0 11s
```

Pods will run to completion and the images will be removed.

```shell
$ kubectl get pods -n eraser-system
eraser-system eraser-controller-manager-6d6d5594d4-phl2q 1/1 Running 0 4m16s
eraser-system eraser-kind-control-plane 0/1 Completed 0 22s
eraser-system eraser-kind-worker 0/1 Completed 0 22s
eraser-system eraser-kind-worker2 0/1 Completed 0 22s
```

The `ImageList` custom resource status field will contain the status of the last job. The success and failure counts indicate the number of nodes the Eraser agent was run on.

```shell
$ kubectl describe ImageList imagelist
...
Status:
  Failed:     0
  Success:    3
  Timestamp:  2022-02-25T23:41:55Z
...
```

Verify the unused images are removed.

```shell
$ docker exec kind-worker ctr -n k8s.io images list | grep alpine
```

If the image has been successfully removed, there will be no output.
103 changes: 103 additions & 0 deletions docs/versioned_docs/version-v0.4.x/quick-start.md
@@ -0,0 +1,103 @@
---
title: Quick Start
---

This tutorial demonstrates the functionality of Eraser and validates that non-running images are removed successfully.

## Deploy a DaemonSet

After following the [install instructions](installation.md), we'll apply a demo `DaemonSet`. For illustrative purposes, a DaemonSet is applied and deleted so the non-running images remain on all nodes. The alpine image with the `3.7.3` tag will be used in this example. This is an image with a known critical vulnerability.

First, apply the `DaemonSet`:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: alpine
spec:
  selector:
    matchLabels:
      app: alpine
  template:
    metadata:
      labels:
        app: alpine
    spec:
      containers:
        - name: alpine
          image: docker.io/library/alpine:3.7.3
EOF
```

Next, verify that the Pods are running or completed. After the `alpine` Pods complete, you may see a `CrashLoopBackOff` status. This is expected behavior from the `alpine` image and can be ignored for this tutorial.

```shell
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
alpine-2gh9c 1/1 Running 1 (3s ago) 6s
alpine-hljp9 0/1 Completed 1 (3s ago) 6s
```

Delete the DaemonSet:

```shell
$ kubectl delete daemonset alpine
```

Verify that the Pods have been deleted:

```shell
$ kubectl get pods
No resources found in default namespace.
```

To verify that the `alpine` images are still on the nodes, exec into one of the worker nodes and list the images. If you are not using a kind cluster or Docker for your container nodes, you will need to adjust the exec command accordingly.

List the nodes:

```shell
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
kind-control-plane Ready control-plane 45m v1.24.0
kind-worker Ready <none> 45m v1.24.0
kind-worker2 Ready <none> 44m v1.24.0
```

List the images then filter for `alpine`:

```shell
$ docker exec kind-worker ctr -n k8s.io images list | grep alpine
docker.io/library/alpine:3.7.3 application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x io.cri-containerd.image=managed
docker.io/library/alpine@sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 application/vnd.docker.distribution.manifest.list.v2+json sha256:8421d9a84432575381bfabd248f1eb56f3aa21d9d7cd2511583c68c9b7511d10 2.0 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm64/v8,linux/ppc64le,linux/s390x io.cri-containerd.image=managed

```

## Automatically Cleaning Images

After deploying Eraser, it will automatically clean up images at a regular interval. This interval can be set with the `--repeat-period` argument to `eraser-controller-manager`. The default interval is 24 hours (`24h`). Valid time units are "ns", "us" (or "µs"), "ms", "s", "m", "h".
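
As a sketch, the interval can be adjusted on an existing installation by adding the flag to the controller-manager's arguments; the Deployment name, namespace, and container index below are assumptions.

```shell
# Append --repeat-period to the controller-manager's args (paths are assumed).
kubectl -n eraser-system patch deployment eraser-controller-manager \
  --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--repeat-period=48h"}]'
```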

Eraser will schedule collector pods on each node in the cluster, and each pod will contain three containers (collector, scanner, and eraser) that run to completion.

```shell
$ kubectl get pods -n eraser-system
NAMESPACE NAME READY STATUS RESTARTS AGE
eraser-system collector-kind-control-plane-sb789 0/3 Completed 0 26m
eraser-system collector-kind-worker-j84hm 0/3 Completed 0 26m
eraser-system collector-kind-worker2-4lbdr 0/3 Completed 0 26m
eraser-system eraser-controller-manager-86cdb4cbf9-x8d7q 1/1 Running 0 26m
```

The collector container sends the list of all images to the scanner container, which scans them and reports non-compliant images to the eraser container; the eraser container then removes the non-compliant images that are not running. Once all pods are completed, they will be automatically cleaned up.

> If you want to remove all the images periodically, you can skip the scanner container by removing the `--scanner-image` argument. If you are deploying with Helm, use `--set scanner.image.repository=""` to remove the scanner image. In this case, each collector pod will hold two containers: collector and eraser.

```shell
$ kubectl get pods -n eraser-system
NAMESPACE NAME READY STATUS RESTARTS AGE
eraser-system collector-kind-control-plane-ksk2b 0/2 Completed 0 50s
eraser-system collector-kind-worker-cpgqc 0/2 Completed 0 50s
eraser-system collector-kind-worker2-k25df 0/2 Completed 0 50s
eraser-system eraser-controller-manager-86cdb4cbf9-x8d7q 1/1 Running 0 55s
```
80 changes: 80 additions & 0 deletions docs/versioned_docs/version-v0.4.x/releasing.md
@@ -0,0 +1,80 @@
---
title: Releasing
---

## Overview

The release process consists of three phases: versioning, building, and publishing.

Versioning involves maintaining the following files:

- **Makefile** - the Makefile contains a VERSION variable that defines the version of the project.
- **manager.yaml** - the controller-manager deployment yaml contains the latest release tag image of the project.
- **eraser.yaml** - the eraser.yaml contains all eraser resources to be deployed to a cluster including the latest release tag image of the project.

The steps below explain how to update these files. In addition, the repository should be tagged with the semantic version identifying the release.

Building involves obtaining a copy of the repository and triggering a build as part of the GitHub Actions CI pipeline.

Publishing involves creating a release tag and creating a new _Release_ on GitHub.

## Versioning

1. Obtain a copy of the repository.

```
git clone git@github.com:Azure/eraser.git
```

1. If this is a patch release for a release branch, check out the applicable branch, such as `release-0.1`. Otherwise, check out `main`.
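
For example (the branch name is illustrative):

```
git checkout release-0.1
```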

1. Execute the `release-manifest` target to generate the release manifest, giving the semantic version of the release:

```
make release-manifest NEWVERSION=vX.Y.Z
```

1. Promote staging manifest to release.

```
make promote-staging-manifest
```

1. If it's a new minor release (e.g. v0.**4**.x -> v0.**5**.0), tag the docs to be versioned. Make sure to keep the patch version as `.x` for a minor release.

```
make version-docs NEWVERSION=v0.5.x
```

1. Preview the changes:

```
git status
git diff
```

## Building and releasing

1. Commit the changes and push them to your remote repository to create a pull request.

```
git checkout -b release-<NEW VERSION>
git commit -a -s -m "Prepare <NEW VERSION> release"
git push <YOUR FORK>
```

2. Once the PR is merged to the `main` or release branch (`<BRANCH NAME>` below), tag that commit with the release version and push the tag to the remote repository.

```
git checkout <BRANCH NAME>
git pull origin <BRANCH NAME>
git tag -a <NEW VERSION> -m '<NEW VERSION>'
git push origin <NEW VERSION>
```

3. Pushing the release tag will trigger the GitHub Actions `release` job, which builds the `ghcr.io/azure/eraser` and `ghcr.io/azure/eraser-manager` images automatically and then publishes the new release.
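
For example, once the workflow completes, the published manager image can be verified with a pull (the tag shown is illustrative):

```
docker pull ghcr.io/azure/eraser-manager:v0.4.0
```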

## Publishing

1. GitHub Actions will create a new release. Review and edit it at https://github.com/Azure/eraser/releases