This project contains GoCardless' Kubernetes extensions, in the form of operators, admission controller webhooks and associated CLIs. The aim of this project is to provide a space to write Kubernetes extensions where:
- Doing the right thing is easy; it is difficult to make mistakes!
- Each category of Kubernetes extension has a well-defined implementation pattern
- Writing meaningful tests is easy, with minimal boilerplate
Theatre provides various extensions to vanilla Kubernetes. These extensions are grouped under separate API groups, all of which exist under the `*.crd.gocardless.com` namespace.
Utilities to extend the default Kubernetes role-based access control (RBAC) resources. These CRDs are motivated by real-world use cases at organisations that use GSuite and frequently onboard new developers.
`DirectoryRoleBinding` is a resource that provisions standard `RoleBinding`s, which contain the subjects defined in a Google group.
Note: In a GKE Kubernetes cluster this may soon be superseded by the Google Groups for GKE functionality.
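As a sketch of what this looks like in practice, a `DirectoryRoleBinding` is applied like any other resource. The API version and field names below are assumptions based on the description above; consult the CRD definitions in this repository for the authoritative schema:

kubectl apply -f - <<EOF
apiVersion: rbac.crd.gocardless.com/v1alpha1
kind: DirectoryRoleBinding
metadata:
  name: platform-team-admin
  namespace: default
spec:
  # Members of this Google group are expanded into the subjects of the
  # generated RoleBinding (the group address is illustrative)
  subjects:
    - kind: GoogleGroup
      name: platform-team@example.com
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: ClusterRole
    name: admin
EOF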
Extends core workload resources with new CRDs. Extensions within this group can be expected to create or mutate pods, deployments, etc.
- Consoles: Provide engineers with a temporary, dedicated pod to perform operational tasks, avoiding the need to grant `pods/exec` permissions on production workloads (see the sketch below).
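As a sketch only, creating a console might look like the following; the spec fields are assumptions for illustration, so see the workloads CRD definitions for the real schema:

kubectl apply -f - <<EOF
apiVersion: workloads.crd.gocardless.com/v1alpha1
kind: Console
metadata:
  name: debug-console
  namespace: default
spec:
  # References a ConsoleTemplate (assumed to exist) that defines the pod
  # spec the temporary console pod is created from
  consoleTemplateRef:
    name: app
  reason: investigating a production incident
EOF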
Utilities for interacting with Vault, primarily used to inject secret material into pods via annotations.

- `secrets-injector.vault.crd.gocardless.com`: webhook that injects the `theatre-secrets` tool, which populates a container's environment with secrets from Vault before executing the container's command.
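For illustration, a pod opts into injection via the annotation named above. The annotation suffix and value format here are assumptions; check the webhook's documentation in this repository for the exact contract:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: my-app
  annotations:
    # Asks the webhook to inject theatre-secrets into the "app" container
    secrets-injector.vault.crd.gocardless.com/configs: app
spec:
  containers:
    - name: app
      image: alpine:3.18
      command: ["env"]
EOF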
As well as Kubernetes controllers, this project also contains supporting CLI utilities.
`theatre-consoles` is a suite of commands that provides the ability to create, list, attach to and authorise consoles.
Run: `go run cmd/theatre-consoles/main.go`
See the command README for further details.
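The subcommands follow the description above; flags differ between versions, so treat this as a sketch and rely on `--help` for authoritative usage:

go run cmd/theatre-consoles/main.go list    # list running consoles
go run cmd/theatre-consoles/main.go --help  # full usage and flags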
For `theatre-secrets`, run: `go run cmd/theatre-secrets/main.go`
Theatre assumes developers have several tools installed to provide development and testing capabilities. The following will configure a macOS environment with all the necessary dependencies:
make install-tools-homebrew
make install-tools-kubebuilder
make install-tools
sudo mkdir /usr/local/kubebuilder
curl -fsL "https://github.com/kubernetes-sigs/kubebuilder/releases/download/v2.3.1/kubebuilder_2.3.1_$(go env GOOS)_$(go env GOARCH).tar.gz" | \
sudo tar -xvz --strip=1 -C /usr/local/kubebuilder
export KUBEBUILDER_ASSETS=/usr/local/kubebuilder/bin
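As a quick sanity check, the extracted directory should now contain the control-plane binaries used by the test environment:

ls /usr/local/kubebuilder/bin  # expect etcd, kube-apiserver and kubectl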
For developing changes, you can make use of the acceptance testing
infrastructure to install the code into a local Kubernetes-in-Docker
(Kind) cluster.
Ensure `kind` is installed (as per the [getting started steps](#getting-started)) and then run the following:
make build
make test
make acceptance-e2e
You can also run the individual commands; check the Makefile for more details. For example, to get the kind cluster ready for the acceptance tests:
go run cmd/acceptance/main.go prepare # prepare the cluster, install theatre
At this point a development cluster has been provisioned. Your current local Kubernetes context will have been changed to point to the test cluster. You should see the following if you inspect the cluster:
$ kubectl get pods --all-namespaces | grep -v kube-system
NAMESPACE NAME READY STATUS RESTARTS AGE
theatre-system theatre-rbac-manager-0 1/1 Running 0 5m
theatre-system theatre-vault-manager-0 1/1 Running 0 5m
theatre-system theatre-workloads-manager-0 1/1 Running 0 5m
vault vault-0 1/1 Running 0 5m
All of the controllers and webhooks, built from the local working copy of the code, have been installed into the cluster.
As this is a fully-fledged Kubernetes cluster, at this point you are able to interact with it as you would with any other cluster, but you also have the ability to use the custom resources defined in theatre, e.g. creating a `Console`.
If changes are made to the code, then you must re-run the `prepare` step in order to update the cluster with images built from the new binaries.
Theatre has test suites at several different levels, each of which plays a specific role. All of these suites are written using the Ginkgo framework.
To set up your local testing environment for unit and integration tests, do the following:
$ make install-tools
$ # install setup-envtest which configures etcd and kube-apiserver binaries for envtest
$ # https://book.kubebuilder.io/reference/envtest.html#configuring-envtest-for-integration-tests
$ # https://github.com/kubernetes-sigs/controller-runtime/tree/master/tools/setup-envtest#envtest-binaries-manager
$ # Configures envtest to use k8s 1.24.x binaries, in your shell (if required)
$ eval $(setup-envtest use -i -p env 1.24.x)
- Unit: Standard unit tests, used to exhaustively specify the functionality of functions or objects. Invoked with the `ginkgo` CLI (requires the setup-envtest variables; see the example invocation after this list): `make test`
- Integration: Integration tests run the custom controller code and integrate it with a temporary Kubernetes API server, providing an environment where the Kubernetes API can be used to manipulate custom objects. This environment has no Kubernetes nodes and does not run any other controllers such as `kube-controller-manager`, so it will not run pods. These suites provide a good balance between runtime and realism, and are therefore useful for rapid iteration when developing changes. Invoked with the `ginkgo` CLI.
- Acceptance: Acceptance is used for full end-to-end (E2E) tests, provisioning a fully functional Kubernetes cluster with all custom controllers and webhooks installed. The acceptance tests are much slower to set up, and so are typically used sparingly compared to the other suites, but they provide essential validation in CI and at the end of development cycles that the code correctly interacts with the other components of a Kubernetes cluster. Invoked with: `make acceptance-run`
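As referenced in the list above, the unit and integration suites can also be invoked directly with the `ginkgo` CLI (installed by `make install-tools`). A minimal sketch, assuming the setup-envtest variables are exported in your shell:

ginkgo -r        # run every suite beneath the current directory
ginkgo -r pkg/   # narrow the run to one package tree (path is illustrative)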