# Development Guide
If you would like to contribute to the kubefed project, this guide will help you get started.
The federation deployment depends on `kubebuilder`, `etcd`, `kubectl`, and `kube-apiserver` >= v1.13 being installed in the path. The `kubebuilder` (v1.0.8 as of this writing) release packages all of these dependencies together.
These binaries can be installed via the `download-binaries.sh` script, which downloads them to `./bin`:

```bash
./scripts/download-binaries.sh
export PATH=$(pwd)/bin:${PATH}
```
The kubefed deployment requires Kubernetes version >= 1.11. To see a detailed list of binaries required, see the prerequisites section in the user guide.
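After downloading, a quick sanity check (not part of the project's scripts) is to confirm the binaries are resolvable from the updated `PATH`:

```bash
# Confirm the kubebuilder-bundled binaries are picked up from ./bin
which etcd kube-apiserver kubebuilder kubectl
kubectl version --client
```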
This repo depends on `docker` >= 1.12 to do the docker build work. Set up your Docker environment accordingly before building.
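To confirm your environment meets the Docker version requirement, you can print the installed versions:

```bash
# Both client and server should report >= 1.12
docker version --format '{{.Client.Version}} / {{.Server.Version}}'
```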
As per the docs for kubebuilder, bootstrapping a new kubefed API type can be accomplished as follows:
```bash
# Bootstrap and commit a new type
$ kubebuilder create api --group <your-group> --version v1alpha1 --kind <your-kind>
$ git add .
$ git commit -m 'Bootstrapped a new api resource <your-group>.kubefed.k8s.io/v1alpha1/<your-kind>'

# Modify and commit the bootstrapped type
$ vi pkg/apis/<your-group>/v1alpha1/<your-kind>_types.go
$ git commit -a -m 'Added fields to <your-kind>'

# Update the generated code and commit
$ make generate
$ git add .
$ git commit -m 'Updated generated code'
```
The generated code will need to be updated whenever the code for a type is modified. Care should be taken to separate generated from non-generated code in the commit history.
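A simple local check (not an official CI step) to confirm the generated code in your branch is current is to re-run generation and verify the working tree stays clean:

```bash
# Exits non-zero if `make generate` produced any uncommitted changes
make generate
git diff --exit-code
```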
The kubefed E2E tests must be executed against a deployed federation of one or more clusters. Optionally, the federation controllers can be run in-memory to enable debugging.
Many of the tests validate CRUD operations for each of the federated types enabled by default:
- the objects are created in the target clusters.
- a label update is reflected in the objects stored in the target clusters.
- a placement update for the object is reflected in the target clusters.
- deleted resources are removed from the target clusters.
The read operation is implicit.
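For example, propagation of a federated secret can be spot-checked manually in a member cluster; the namespace and secret name here are hypothetical placeholders:

```bash
# Verify the object created by the sync controller exists in a target cluster
kubectl --context=cluster2 -n <test-namespace> get secret <secret-name>
```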
In order to run E2E tests, you first need to:
- Create clusters
  - See the user guide for a way to deploy clusters for testing kubefed (a minimal `kind` sketch follows this list).
- Deploy the kubefed control plane
  - To deploy the latest version of the kubefed control plane, follow the Helm chart deployment in the user guide.
  - To deploy your own changes, follow the Test Your Changes section of this guide.
Once completed, return here for instructions on running the e2e tests.
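As a minimal sketch, two local test clusters can be created with `kind` (depending on your `kind` version, the resulting kubeconfig contexts may be prefixed, e.g. `kind-cluster1`):

```bash
# Create two local clusters to act as kubefed member clusters
kind create cluster --name cluster1
kind create cluster --name cluster2
```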
Follow the instructions below to run E2E tests against a test federation.

To run E2E tests for all types:

```bash
cd test/e2e
go test -args -kubeconfig=/path/to/kubeconfig -v=4 -test.v
```

To run E2E tests for a single type:

```bash
cd test/e2e
go test -args -kubeconfig=/path/to/kubeconfig -v=4 -test.v \
    --ginkgo.focus='Federated "secrets"'
```
It may be helpful to use the delve debugger to gain insight into the components involved in the test:

```bash
cd test/e2e
dlv test -- -kubeconfig=/path/to/kubeconfig -v=4 -test.v \
    --ginkgo.focus='Federated "secrets"'
```
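Within the delve session you can then set breakpoints before starting the run; the symbol below is only a placeholder for whatever function you want to inspect:

```
(dlv) break <package>.<FunctionName>   # placeholder symbol, e.g. a controller reconcile function
(dlv) continue
```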
Running the kubefed controllers in-memory for a test run allows the controllers to be targeted by a debugger (e.g. delve) or the golang race detector. The prerequisite for this mode is scaling down the kubefed controller manager:
- Reduce the `kubefed-controller-manager` deployment replicas to 0. This way we can launch the necessary kubefed controllers ourselves via the test binary:

  ```bash
  kubectl scale deployments kubefed-controller-manager -n kube-federation-system --replicas=0
  ```

  Once you've reduced the replicas to 0, you should see the `kubefed-controller-manager` deployment update to show 0 pods running:

  ```bash
  kubectl -n kube-federation-system get deployment.apps kubefed-controller-manager
  NAME                         DESIRED   CURRENT   AGE
  kubefed-controller-manager   0         0         14s
  ```
- Run tests:

  ```bash
  cd test/e2e
  go test -race -args -kubeconfig=/path/to/kubeconfig -in-memory-controllers=true \
      --v=4 -test.v --ginkgo.focus='Federated "secrets"'
  ```

  Additionally, you can run delve to debug the test:

  ```bash
  cd test/e2e
  dlv test -- -kubeconfig=/path/to/kubeconfig -in-memory-controllers=true \
      -v=4 -test.v --ginkgo.focus='Federated "secrets"'
  ```
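When you have finished debugging with in-memory controllers, the packaged controllers can be restored by scaling the deployment back up (assuming the default of a single replica):

```bash
kubectl scale deployments kubefed-controller-manager -n kube-federation-system --replicas=1
```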
Follow the cleanup instructions in the user guide.
In order to test your changes on your Kubernetes cluster, you'll need to build an image and a deployment config.

NOTE: When federation CRDs are changed, you need to run:

```bash
make generate
```

This step ensures that the CRD resources in the Helm chart are synced.
Ensure binaries from kubebuilder for `etcd` and `kube-apiserver` are in the path (see prerequisites).

If you just want to have this automated, then run the following command specifying your own image. This assumes you've used the steps documented above to set up two `kind` or `minikube` clusters (`cluster1` and `cluster2`):

```bash
./scripts/deploy-federation.sh <containerregistry>/<username>/kubefed:test cluster2
```
NOTE: You can list multiple joining cluster names in the above command. Also, please make sure the joining cluster name(s) provided match the joining cluster context(s) from your kubeconfig. This will already be the case if you used the steps documented above to create your clusters.
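To list the context names available in your kubeconfig before joining clusters:

```bash
kubectl config get-contexts -o name
```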
In order to test the latest master changes (tagged as `canary`) on your Kubernetes cluster, you'll need to generate a config that specifies the correct image and generated CRDs. To do that, run the following commands:

```bash
make generate
./scripts/deploy-federation.sh <containerregistry>/<username>/kubefed:canary cluster2
```
In order to test the latest stable released version (tagged as `latest`) on your Kubernetes cluster, follow the Helm Chart Deployment instructions from the user guide.
If you are going to add some new sections for the document, make sure to update the table of contents. This can be done manually or with doctoc.
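For example, doctoc can be installed from npm and pointed at this document; the path below assumes the guide lives at `docs/development.md`:

```bash
npm install -g doctoc
doctoc docs/development.md
```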