feat: Canary Release with Flagger
Uses fluxcd/flagger#372 to let Flagger manage traffic between two Kubernetes services, each managed by its own Helm release.
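For orientation, a Flagger `Canary` that drives weights through an SMI `TrafficSplit` looks roughly like the sketch below. This is illustrative only — the resource names, the `provider` value, and the analysis settings are assumptions, not taken from this commit or from fluxcd/flagger#372:

```
# Illustrative sketch only; all names and values here are assumptions.
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
spec:
  # Ask Flagger to publish canary weights as an SMI TrafficSplit.
  provider: "smi"
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
  canaryAnalysis:
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
```

Flagger then advances the canary weight in `stepWeight` increments, and a watcher such as crossover can translate the resulting TrafficSplit into Envoy configuration.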
mumoshu committed Nov 23, 2019
1 parent 693fd33 commit 4ab0d8d
Showing 13 changed files with 426 additions and 82 deletions.
69 changes: 69 additions & 0 deletions .circleci/config.yml
@@ -272,6 +272,75 @@ jobs:
make e2e/h1-smi
cat e2e.aggregate.log
e2e-h1-smi-flagger:
machine:
image: circleci/classic:201808-01
steps:
- checkout
- run:
name: Install testing tools
command: |
GOBIN=~/bin go get -u github.com/tsenart/vegeta
sudo mv ~/bin/vegeta /usr/local/bin/
vegeta --version
GOBIN=~/bin go get -u github.com/rs/jaggr
sudo mv ~/bin/jaggr /usr/local/bin/
jaggr -version
# We need a recent jq for @base64d
curl -Lo ~/bin/jq https://github.com/stedolan/jq/releases/download/jq-1.6/jq-linux64
chmod +x ~/bin/jq
sudo mv ~/bin/jq /usr/local/bin/
jq --version
- run:
name: Install Helm
environment:
HELM_VERSION: v3.0.0-rc.2
command: |
HELM_FILENAME="helm-${HELM_VERSION}-linux-amd64.tar.gz"
curl -Lo ${HELM_FILENAME} "https://get.helm.sh/${HELM_FILENAME}"
tar zxf ${HELM_FILENAME} linux-amd64/helm
chmod +x linux-amd64/helm
sudo mv linux-amd64/helm /usr/local/bin/
- run:
name: Setup Kubernetes
environment:
# See https://hub.docker.com/r/kindest/node/tags for available tags (k8s versions)
K8S_VERSION: v1.12.10
KIND_VERSION: v0.5.1
command: |
curl -Lo kubectl https://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/linux/amd64/kubectl
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
curl -Lo ./kind https://github.com/kubernetes-sigs/kind/releases/download/${KIND_VERSION}/kind-$(uname)-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/
kind create cluster --name minikube --image kindest/node:${K8S_VERSION}
KUBECONFIG="$(kind get kubeconfig-path --name="minikube")"
echo Copying ${KUBECONFIG} to ~/.kube/config so that it is available to the succeeding steps.
cp ${KUBECONFIG} ~/.kube/config
- run:
name: Wait for nodes to become ready
command: JSONPATH='{range .items[*]}{@.metadata.name}:{range @.status.conditions[*]}{@.type}={@.status};{end}{end}'; until kubectl get nodes -o jsonpath="$JSONPATH" 2>&1 | grep -q "Ready=True"; do sleep 1; done
- run:
name: Configuring RBAC
command: |
# Allow xds-configmap-loader to read configmaps in the same namespace.
# Otherwise it complains like this and makes envoy pods into crash loops:
# get configmap default/envoy-xds: non 200 response code: 403: &{GET https://kubernetes/api/v1/namespaces/default/configmaps/envoy-xds *snip*
kubectl create clusterrolebinding default --clusterrole cluster-admin --serviceaccount default:default
- run:
name: Run tests
command: |
make e2e/h1-smi-flagger
cat e2e.aggregate.log
workflows:
version: 2
test:
6 changes: 5 additions & 1 deletion Makefile
@@ -77,6 +77,10 @@ e2e/h2c-smi:
e2e/h1-smi:
USE_SMI=1 ./e2e/run-testsuite.sh

.PHONY: e2e/h1-smi-flagger
e2e/h1-smi-flagger:
USE_FLAGGER=1 USE_SMI=1 ./e2e/run-testsuite.sh

.PHONY: e2e/jplot
e2e/jplot:
-	./e2e/run.sh "tail -f e2e.aggregate.log | ./e2e/tools.sh jplot"
+	./e2e/run.sh "tail -n 100 -f e2e.aggregate.log | ./e2e/tools.sh jplot"
55 changes: 52 additions & 3 deletions README.md
@@ -152,13 +152,15 @@ Usage of ./crossover:

## Getting Started

### ConfigMap-Only Mode

Try weighted load-balancing using `crossover`!

Deploy Envoy along with the loader using the `stable/envoy` chart:

```
helm repo add stable https://kubernetes-charts.storage.googleapis.com
-helm upgrade --install envoy stable/envoy -f example/values.yaml
+helm upgrade --install envoy stable/envoy -f example/values.yaml -f example/values.services.yaml
```

Then install backends - we use @stefanprodan's awesome [podinfo](https://github.com/stefanprodan/podinfo):
@@ -182,18 +184,65 @@ Finally, try changing load-balancing weights instantly and without restarting Envoy:

```
# 100% bold-olm
-helm upgrade --install envoy stable/envoy -f example/values.yaml \
+helm upgrade --install envoy stable/envoy -f example/values.yaml -f example/values.services.yaml \
--set services.podinfo.backends.eerie-octopus-podinfo.weight=0 \
--set services.podinfo.backends.bold-olm-podinfo.weight=100
# 100% eerie-octopus
-helm upgrade --install envoy stable/envoy -f example/values.yaml \
+helm upgrade --install envoy stable/envoy -f example/values.yaml -f example/values.services.yaml \
--set services.podinfo.backends.eerie-octopus-podinfo.weight=100 \
--set services.podinfo.backends.bold-olm-podinfo.weight=0
```

See [example/values.yaml](example/values.yaml) for more details on the configuration.
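To watch the weights take effect from the client side, you can port-forward to Envoy and sample responses — `podinfo` reports its pod name in the `hostname` field of its JSON response. The service name and listener port below are assumptions; check `example/values.yaml` for the actual values in your setup:

```
# Assumed service name and port; adjust to match example/values.yaml.
kubectl port-forward svc/envoy 10000:10000 &
for i in $(seq 1 20); do
  curl -s localhost:10000/ | jq -r .hostname
done | sort | uniq -c   # the rough distribution should follow the weights
```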

### ConfigMap + SMI TrafficSplit Mode

The setup is mostly the same as for ConfigMap-only mode.

Just add `services.podinfo.smi.enabled=true` while installing Envoy:

```
helm upgrade --install envoy stable/envoy \
-f example/values.yaml -f example/values.services.yaml \
--set services.podinfo.smi.enabled=true
```

Now you are ready to change weights by creating and modifying a TrafficSplit like this:

```
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
name: podinfo
spec:
# The root service that clients use to connect to the destination application.
service: podinfo
# Services inside the namespace with their own selectors, endpoints and configuration.
backends:
- service: eerie-octopus-podinfo
weight: 0
- service: bold-olm-podinfo
weight: 100
```

Under the hood, `crossover` reads the `podinfo` TrafficSplit and the `envoy-xds` ConfigMap, merges the TrafficSplit into the ConfigMap to produce the final ConfigMap `envoy-xds-gen`, and it is `envoy-xds-gen` that is loaded into Envoy.
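The shape of that merge can be pictured with a self-contained `jq` snippet — hypothetical data and a deliberately simplified projection, not crossover's actual code path:

```
# Hypothetical TrafficSplit data; crossover's real input is the live object.
cat > /tmp/split.json <<'EOF'
{"backends":[{"service":"eerie-octopus-podinfo","weight":25},{"service":"bold-olm-podinfo","weight":75}]}
EOF
# Project the backends into service=weight pairs, i.e. the information
# crossover folds into the generated envoy-xds-gen ConfigMap:
jq -r '.backends[] | "\(.service)=\(.weight)"' /tmp/split.json
# → eerie-octopus-podinfo=25
# → bold-olm-podinfo=75
```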

For convenience, there are several manifest files, each with a different set of weights:

```
# 100% bold-olm
kubectl apply -f podinfo-v0.trafficsplit.yaml
# 75% bold-olm 25% eerie-octopus
kubectl apply -f podinfo-v1.trafficsplit.yaml
#...
# 0% bold-olm 100% eerie-octopus
kubectl apply -f podinfo-v4.trafficsplit.yaml
```

## Developing

Bring your own K8s cluster, move to the project root, and run the following commands to take it for a spin:
