README #34

Merged (6 commits, Feb 17, 2023)
2 changes: 1 addition & 1 deletion .goreleaser.yml
@@ -32,7 +32,7 @@ builds:
archives:
- builds:
- kubectl-kubbernecker
- name_template: "kubectl-{{ .ProjectName }}_{{ .Tag }}_{{ .Os }}_{{ .Arch }}{{ if .Arm }}v{{ .Arm }}{{ end }}"
+ name_template: "kubectl-{{ .ProjectName }}_{{ .Os }}-{{ .Arch }}"
wrap_in_directory: false
format: tar.gz
files:
171 changes: 124 additions & 47 deletions README.md
@@ -1,78 +1,155 @@
# kubbernecker
// TODO(user): Add simple overview of use/purpose
[![GitHub release](https://img.shields.io/github/release/zoetrope/kubbernecker.svg?maxAge=60)](https://github.com/zoetrope/kubbernecker/releases)
[![CI](https://github.com/zoetrope/kubbernecker/actions/workflows/ci.yaml/badge.svg)](https://github.com/zoetrope/kubbernecker/actions/workflows/ci.yaml)
[![PkgGoDev](https://pkg.go.dev/badge/github.com/zoetrope/kubbernecker?tab=overview)](https://pkg.go.dev/github.com/zoetrope/kubbernecker?tab=overview)

## Description
// TODO(user): An in-depth paragraph about your project and overview of use
# Kubbernecker

## Getting Started
You’ll need a Kubernetes cluster to run against. You can use [KIND](https://sigs.k8s.io/kind) to get a local cluster for testing, or run against a remote cluster.
**Note:** Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
**Project Status**: Alpha

### Running on the cluster
1. Install Instances of Custom Resources:
Kubbernecker is a set of tools that helps you check the number of changes made to Kubernetes resources.
It provides two tools: `kubbernecker-metrics` and `kubectl-kubbernecker`.

```sh
kubectl apply -f config/samples/
```
`kubbernecker-metrics` is an exporter that exposes the number of changes made to Kubernetes resources as Prometheus metrics.
It helps you monitor changes made to resources within a Kubernetes cluster.

2. Build and push your image to the location specified by `IMG`:
`kubectl-kubbernecker` is a kubectl plugin that shows the number of changes made to Kubernetes resources and the managers that made them.
It helps you quickly check changes made to resources within a Kubernetes cluster.

```sh
make docker-build docker-push IMG=<some-registry>/kubbernecker:tag
```
The name Kubbernecker comes from "rubbernecker".
Using it is like staring at a fight between Kubernetes controllers.

## Motivation

In a Kubernetes cluster, different controllers may continuously edit the same resource, leading to a race condition.
This can increase the load on kube-apiserver and cause performance issues.
Kubbernecker helps to solve these problems by checking the number of changes made to Kubernetes resources.

## Installation

3. Deploy the controller to the cluster with the image specified by `IMG`:
### kubbernecker-metrics

```sh
make deploy IMG=<some-registry>/kubbernecker:tag
You need to add this repository to your Helm repositories:

```console
$ helm repo add kubbernecker https://zoetrope.github.io/kubbernecker/
$ helm repo update
```

### Uninstall CRDs
To delete the CRDs from the cluster:
To install the chart with the release name `kubbernecker` in a dedicated namespace (recommended):

```sh
make uninstall
```
$ helm install --create-namespace --namespace kubbernecker kubbernecker kubbernecker/kubbernecker
```

### Undeploy controller
UnDeploy the controller from the cluster:
Specify parameters using the `--set key=value[,key=value]` argument to `helm install`.
Alternatively, a YAML file that specifies the values for the parameters can be provided:

```sh
make undeploy
```console
$ helm install --create-namespace --namespace kubbernecker kubbernecker -f values.yaml kubbernecker/kubbernecker
```

## Contributing
// TODO(user): Add detailed information on how you would like others to contribute to this project
Values:

### How it works
This project aims to follow the Kubernetes [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/).
| Key | Type | Default | Description |
|-------------------------------|--------|-----------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------|
| image.repository | string | `"ghcr.io/zoetrope/kubbernecker"` | Kubbernecker image repository to use. |
| image.tag | string | `{{ .Chart.AppVersion }}` | Kubbernecker image tag to use. |
| image.imagePullPolicy | string | `IfNotPresent` | imagePullPolicy applied to Kubbernecker image. |
| resources | object | `{"requests":{"cpu":"100m","memory":"20Mi"}}` | Specify resources. |
| config.targetResources | list | `[]` (See [values.yaml]) | Target Resources. If this is empty, all resources will be the target. |
| config.namespaceSelector | object | `{}` (See [values.yaml]) | Selector of the namespace to which the target resource belongs. If this is empty, all namespaces will be the target. |
| config.enableClusterResources | bool | `false` | If `targetResources` is empty, whether to include cluster-scope resources in the target. If `targetResources` is not empty, this field will be ignored. |

It uses [Controllers](https://kubernetes.io/docs/concepts/architecture/controller/),
which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
### kubectl-kubbernecker

### Test It Out
1. Install the CRDs into the cluster:
Download the binary and put it in a directory on your `PATH`.
The following example installs the plugin into `/usr/local/bin`.

```sh
make install
```console
$ OS=$(go env GOOS)
$ ARCH=$(go env GOARCH)
$ curl -L -sS https://github.com/zoetrope/kubbernecker/releases/latest/download/kubectl-kubbernecker_${OS}-${ARCH}.tar.gz \
| tar xz -C /usr/local/bin kubectl-kubbernecker
```

2. Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):
NOTE: In the future, it will be possible to install this tool via [krew](https://krew.sigs.k8s.io).

## Usage

### kubbernecker-metrics

`kubbernecker-metrics` exposes the following metrics:

```sh
make run
| Name | Type | Description | Labels |
|--------------------------------------|---------|--------------------------------------------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| `kubbernecker_resource_events_total` | counter | Total number of events for Kubernetes resources. | `group`: group <br/> `version`: version <br/> `kind`: kind <br/> `namespace`: namespace <br/> `event_type`: event type ("add", "update" or "delete") <br/> `resource_name`: resource name |
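As an illustrative sketch (not part of this PR; only the metric name and labels come from the table above), a PromQL query surfacing the ten most frequently updated resources over the last five minutes could look like:

```promql
topk(10,
  sum by (kind, namespace, resource_name) (
    rate(kubbernecker_resource_events_total{event_type="update"}[5m])
  )
)
```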

### kubectl-kubbernecker

`kubectl-kubbernecker` has two subcommands:

The `watch` subcommand prints the number of times a resource is updated.

```console
$ kubectl kubbernecker watch -n default configmap
{
"gvk": {
"group": "",
"version": "v1",
"kind": "ConfigMap"
},
"namespaces": {
"default": {
"resources": {
"test-cm": {
"add": 0,
"delete": 0,
"update": 9
}
}
}
}
}
```
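Because the output is JSON, it composes well with standard tools. A hypothetical post-processing sketch using `jq` (which is not part of kubbernecker; the field names are taken from the sample output above):

```shell
# Sum the "update" counts across all watched resources.
# The JSON is inlined here for illustration; in practice you would pipe
# the watch output into jq.
watch_json='{"namespaces":{"default":{"resources":{"test-cm":{"add":0,"delete":0,"update":9}}}}}'
printf '%s' "$watch_json" | jq '[.namespaces[].resources[].update] | add'
# → 9
```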

**NOTE:** You can also run this in one step by running: `make install run`
The `blame` subcommand prints the names of the managers that updated the given resource.

```console
$ kubectl kubbernecker blame -n default configmap test-cm
{
"managers": {
"manager1": {
"update": 4
},
"manager2": {
"update": 4
}
},
"lastUpdate": "2023-02-17T22:25:20+09:00"
}
```
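The blame output is also JSON. A hypothetical `jq` sketch (field names taken from the sample above) that reports each manager's update count:

```shell
# The JSON is inlined here for illustration; in practice you would pipe
# the blame output into jq.
blame_json='{"managers":{"manager1":{"update":4},"manager2":{"update":4}},"lastUpdate":"2023-02-17T22:25:20+09:00"}'
printf '%s' "$blame_json" | jq -r '.managers | to_entries[] | "\(.key) updated \(.value.update) times"'
# → manager1 updated 4 times
# → manager2 updated 4 times
```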

### Modifying the API definitions
If you are editing the API definitions, generate the manifests such as CRs or CRDs using:
## Development

```sh
make manifests
Tools for developing kubbernecker are managed by aqua.
Please install aqua as described on the following page:

https://aquaproj.github.io/docs/reference/install

Then install the tools.

```console
$ cd /path/to/kubbernecker
$ aqua i -l
```

**NOTE:** Run `make --help` for more information on all potential `make` targets
You can start development with tilt.

More information can be found via the [Kubebuilder Documentation](https://book.kubebuilder.io/introduction.html)
```console
$ make start-dev
$ tilt up
```

[values.yaml]: ./charts/kubbernecker/values.yaml
6 changes: 2 additions & 4 deletions charts/kubbernecker/templates/deployment.yaml
@@ -19,7 +19,7 @@ metadata:
control-plane: manager
{{- include "kubbernecker.labels" . | nindent 4 }}
spec:
- replicas: {{ .Values.replicas }}
+ replicas: 1
selector:
matchLabels:
control-plane: manager
@@ -42,10 +42,8 @@ spec:
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: KUBERNETES_CLUSTER_DOMAIN
value: {{ .Values.kubernetesClusterDomain }}
image: {{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}
- imagePullPolicy: IfNotPresent
+ imagePullPolicy: {{ .Values.image.imagePullPolicy }}
livenessProbe:
httpGet:
path: /healthz
44 changes: 37 additions & 7 deletions charts/kubbernecker/values.yaml
@@ -1,16 +1,46 @@
# Default image
image:
# Kubbernecker image repository to use.
repository: ghcr.io/zoetrope/kubbernecker
# Kubbernecker image tag to use.
tag: app-version-placeholder
# imagePullPolicy applied to Kubbernecker image.
imagePullPolicy: IfNotPresent
# Resource limits and requests for Kubbernecker
resources:
limits:
cpu: 500m
memory: 128Mi
requests:
cpu: 10m
memory: 64Mi
replicas: 1
kubernetesClusterDomain: cluster.local
cpu: 100m
memory: 256Mi
# Kubbernecker configuration
config:
# Target Resources. If this is empty, all resources will be the target.
# Specify the resource type to be monitored by `group`, `version` and `kind`.
# `namespaceSelector` can select the namespaces to which the resource belongs.
# `resourceSelector` can select the target resources by its labels.
targetResources: []
# Example:
# - group: ""
# version: "v1"
# kind: "Pod"
# - group: "apps"
# version: "v1"
# kind: "Deployment"
# namespaceSelector:
# matchLabels:
# app: "frontend"
# - group: "storage.k8s.io"
# version: "v1"
# kind: "StorageClass"
# resourceSelector:
# matchLabels:
# team: "myteam"

# Selector of the namespace to which the target resource belongs. If this is empty, all namespaces will be the target.
namespaceSelector: {}
# Example:
# namespaceSelector:
# matchLabels:
# role: admin

# If `targetResources` is empty, whether to include cluster-scope resources in the target. If `targetResources` is not empty, this field will be ignored.
enableClusterResources: false
11 changes: 8 additions & 3 deletions cmd/kubectl-kubbernecker/sub/blame.go
@@ -23,9 +23,14 @@ func newBlameCmd() *cobwrap.Command[*blameOptions] {
cmd := &cobwrap.Command[*blameOptions]{
Command: &cobra.Command{
Use: "blame TYPE[.VERSION][.GROUP] NAME",
Short: "",
Long: ``,
Args: cobra.ExactArgs(2),
Short: "Print the name of managers that updated the given resource",
Long: `Print the name of managers that updated the given resource.

Examples:
# Print managers that updated "test" ConfigMap resource
kubectl kubbernecker blame configmap test
`,
Args: cobra.ExactArgs(2),
},
Options: &blameOptions{},
}
21 changes: 16 additions & 5 deletions cmd/kubectl-kubbernecker/sub/watch.go
@@ -30,9 +30,20 @@ func newWatchCmd() *cobwrap.Command[*watchOptions] {

cmd := &cobwrap.Command[*watchOptions]{
Command: &cobra.Command{
Use: "watch",
Short: "",
Long: ``,
Use: "watch (TYPE[.VERSION][.GROUP]...)",
Short: "Print the number of times a resource is updated",
Long: `Print the number of times a resource is updated.

Examples:
# Watch Pod resources in "default" namespace
kubectl kubbernecker watch pods -n default

# Watch Pod resources in all namespaces
kubectl kubbernecker watch pods --all-namespaces

# Watch all resources in all namespaces
kubectl kubbernecker watch --all-resources --all-namespaces
`,
},
Options: &watchOptions{},
}
@@ -55,10 +66,10 @@ func (o *watchOptions) Fill(cmd *cobra.Command, args []string) error {
o.resources = args

if len(o.resources) > 0 && o.allResources {
- return errors.New("resources and --all-resources cannot be used together")
+ return errors.New("the type of resource and the `--all-resources` flag cannot be used together")
}
if len(o.resources) == 0 && !o.allResources {
- return errors.New("resources or --all-resources is required but not provided")
+ return errors.New("you must specify the type of resource to get or the `--all-resources` flag")
}

return nil