Merge pull request #325 from weaveworks/appmesh-grcp
Allow gRPC protocol for App Mesh
stefanprodan authored Oct 5, 2019
2 parents 5cdacf8 + 76800d0 commit 20af98e
Showing 6 changed files with 89 additions and 67 deletions.
7 changes: 5 additions & 2 deletions README.md
@@ -70,7 +70,6 @@ metadata:
spec:
# service mesh provider (optional)
# can be: kubernetes, istio, linkerd, appmesh, nginx, gloo, supergloo
# use the kubernetes provider for Blue/Green style deployments
provider: istio
# deployment reference
targetRef:
@@ -94,6 +93,10 @@ spec:
# Istio virtual service host names (optional)
hosts:
- podinfo.example.com
# Istio traffic policy (optional)
trafficPolicy:
tls:
mode: ISTIO_MUTUAL
# HTTP match conditions (optional)
match:
- uri:
@@ -144,7 +147,7 @@ spec:
topic="podinfo"
}[1m]
)
# external checks (optional)
# testing (optional)
webhooks:
- name: load-test
url: http://flagger-loadtester.test/
3 changes: 3 additions & 0 deletions artifacts/appmesh/canary.yaml
@@ -20,6 +20,9 @@ spec:
service:
# container port
port: 9898
# container port name (optional)
# can be http or grpc
portName: http
# App Mesh reference
meshName: global
# define the canary analysis timing and KPIs
51 changes: 23 additions & 28 deletions docs/gitbook/how-it-works.md
@@ -19,7 +19,6 @@ metadata:
spec:
# service mesh provider (optional)
# can be: kubernetes, istio, linkerd, appmesh, nginx, gloo, supergloo
# use the kubernetes provider for Blue/Green style deployments
provider: istio
# deployment reference
targetRef:
@@ -38,13 +37,7 @@ spec:
# container port
port: 9898
# service port name (optional, will default to "http")
portName: http-podinfo
# Istio gateways (optional)
gateways:
- public-gateway.istio-system.svc.cluster.local
# Istio virtual service host names (optional)
hosts:
- podinfo.example.com
portName: http
# promote the canary without analysing it (default false)
skipAnalysis: false
# define the canary analysis timing and KPIs
@@ -71,15 +64,13 @@ spec:
# milliseconds
threshold: 500
interval: 30s
# external checks (optional)
# testing (optional)
webhooks:
- name: integration-tests
url: http://podinfo.test:9898/echo
timeout: 1m
# key-value pairs (optional)
- name: load-test
url: http://flagger-loadtester.test/
timeout: 5s
metadata:
test: "all"
token: "16688eb5e9f289f1991c"
cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"
```
**Note** that the target deployment must have a single label selector in the format `app: <DEPLOYMENT-NAME>`:
@@ -102,8 +93,8 @@ spec:
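The deployment manifest itself is collapsed in this diff; a minimal sketch of a conforming deployment (the resource names and image tag are illustrative) could look like:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: test
spec:
  # single label selector in the app: <DEPLOYMENT-NAME> format
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          # illustrative image
          image: stefanprodan/podinfo:3.1.0
          ports:
            - name: http
              # must match the canary service.port value
              containerPort: 9898
```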
Besides `app`, Flagger supports the `name` and `app.kubernetes.io/name` selectors. If you use a different
convention, you can specify your label with the `-selector-labels` flag.
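As a hedged sketch (the flag name comes from the text above, but the value format and deployment layout are assumptions), the flag would be passed via the Flagger container arguments:

```yaml
# illustrative excerpt from the Flagger deployment spec
containers:
  - name: flagger
    image: weaveworks/flagger
    args:
      # assumed: comma-separated list of accepted selector labels
      - -selector-labels=app,name,app.kubernetes.io/name,team
```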

The target deployment should expose a TCP port that will be used by Flagger to create the ClusterIP Service and
the Istio Virtual Service. The container port from the target deployment should match the `service.port` value.
The target deployment should expose a TCP port that will be used by Flagger to create the ClusterIP Services.
The container port from the target deployment should match the `service.port` value.

### Canary status

@@ -201,10 +192,11 @@ spec:
# Istio virtual service host names (optional)
hosts:
- frontend.example.com
# Istio traffic policy (optional)
# Istio traffic policy
trafficPolicy:
loadBalancer:
simple: LEAST_CONN
tls:
# use ISTIO_MUTUAL when mTLS is enabled
mode: DISABLE
# HTTP match conditions (optional)
match:
- uri:
@@ -291,8 +283,8 @@ metadata:
spec:
host: frontend-primary
trafficPolicy:
loadBalancer:
simple: LEAST_CONN
tls:
mode: DISABLE
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
@@ -302,15 +294,15 @@ metadata:
spec:
host: frontend-canary
trafficPolicy:
loadBalancer:
simple: LEAST_CONN
tls:
mode: DISABLE
```

Flagger keeps the virtual service and destination rules in sync with the canary service spec.
Any direct modification to the virtual service spec will be overwritten.

To expose a workload inside the mesh on `http://backend.test.svc.cluster.local:9898`,
the service spec can contain only the container port:
the service spec can contain only the container port and the traffic policy:

```yaml
apiVersion: flagger.app/v1alpha3
@@ -321,6 +313,9 @@ metadata:
spec:
service:
port: 9898
trafficPolicy:
tls:
mode: DISABLE
```

Based on the above spec, Flagger will create several ClusterIP services like:
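The generated manifests are collapsed in this diff; for the `backend` workload above, Flagger creates `backend`, `backend-primary` and `backend-canary` services along these lines (a sketch, not the verbatim output):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend-primary
  namespace: test
spec:
  type: ClusterIP
  selector:
    app: backend-primary
  ports:
    - name: http
      port: 9898
      protocol: TCP
      targetPort: 9898
```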
@@ -531,15 +526,15 @@ sum(
)
```

App Mesh query:
Envoy query (App Mesh or Gloo):

```javascript
sum(
rate(
envoy_cluster_upstream_rq{
kubernetes_namespace="$namespace",
kubernetes_pod_name=~"$workload",
response_code!~"5.*"
envoy_response_code!~"5.*"
}[$interval]
)
)
@@ -584,7 +579,7 @@ histogram_quantile(0.99,
)
```

App Mesh query:
Envoy query (App Mesh or Gloo):

```javascript
histogram_quantile(0.99,
26 changes: 16 additions & 10 deletions docs/gitbook/install/flagger-install-on-eks-appmesh.md
@@ -17,8 +17,7 @@ The App Mesh integration with EKS is made out of the following components:
### Create a Kubernetes cluster

In order to create an EKS cluster you can use [eksctl](https://eksctl.io).
Eksctl is an open source command-line utility made by Weaveworks in collaboration with Amazon,
it’s a Kubernetes-native tool written in Go.
Eksctl is an open source command-line utility made by Weaveworks in collaboration with Amazon.

On macOS you can install eksctl with Homebrew:
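The commands themselves are collapsed in this diff; per the eksctl documentation they are:

```sh
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl
```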

@@ -137,7 +136,17 @@ Status:
Type: MeshActive
```

### Install Flagger, Prometheus and Grafana
In order to collect the App Mesh metrics that Flagger needs to run the canary analysis,
you'll need to set up a Prometheus instance to scrape the Envoy sidecars.

Install the App Mesh Prometheus:

```sh
helm upgrade -i appmesh-prometheus eks/appmesh-prometheus \
--wait --namespace appmesh-system
```
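To check that Prometheus is up before installing Flagger, you can port-forward its service; the `appmesh-prometheus:9090` address below is an assumption consistent with the `metricsServer` value used later in this page:

```sh
kubectl -n appmesh-system port-forward svc/appmesh-prometheus 9090:9090
```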

### Install Flagger and Grafana

Add Flagger Helm repository:
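The command is collapsed in this diff; per the Flagger docs it is:

```sh
helm repo add flagger https://flagger.app
```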

@@ -151,20 +160,17 @@ Install Flagger's Canary CRD:
kubectl apply -f https://raw.githubusercontent.com/weaveworks/flagger/master/artifacts/flagger/crd.yaml
```

Deploy Flagger and Prometheus in the _**appmesh-system**_ namespace:
Deploy Flagger in the _**appmesh-system**_ namespace:

```bash
helm upgrade -i flagger flagger/flagger \
--namespace=appmesh-system \
--set crd.create=false \
--set meshProvider=appmesh \
--set prometheus.install=true
--set metricsServer=appmesh-prometheus:9090
```

In order to collect the App Mesh metrics that Flagger needs to run the canary analysis,
you'll need to setup a Prometheus instance to scrape the Envoy sidecars.

You can enable **Slack** notifications with:
You can enable Slack or MS Teams notifications with:

```bash
helm upgrade -i flagger flagger/flagger \
@@ -181,7 +187,7 @@ Deploy Grafana in the _**appmesh-system**_ namespace:
```bash
helm upgrade -i flagger-grafana flagger/grafana \
--namespace=appmesh-system \
--set url=http://flagger-prometheus.appmesh-system:9090
--set url=http://appmesh-prometheus:9090
```

You can access Grafana using port forwarding:
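The command is collapsed in this diff; a sketch assuming the chart's default service name and port:

```sh
kubectl -n appmesh-system port-forward svc/flagger-grafana 3000:80
```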
3 changes: 3 additions & 0 deletions docs/gitbook/usage/appmesh-progressive-delivery.md
@@ -67,6 +67,9 @@ spec:
service:
# container port
port: 9898
# container port name (optional)
# can be http or grpc
portName: http
# App Mesh reference
meshName: global
# App Mesh egress (optional)
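Since this change enables gRPC for App Mesh, a hedged sketch of a canary using the new port name (the resource names are illustrative and the analysis section is omitted for brevity):

```yaml
apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  provider: appmesh
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    # container port
    port: 9898
    # grpc tells Flagger to configure the App Mesh routes for gRPC traffic
    portName: grpc
    # App Mesh reference
    meshName: global
```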
