
Change builtin metrics to work with Istio >= 1.5 #623

Merged · 1 commit · Jun 17, 2020
6 changes: 3 additions & 3 deletions docs/gitbook/dev/upgrade-guide.md
@@ -49,7 +49,7 @@ Istio 1.5 comes with a breaking change for Flagger users. In Istio telemetry v2 the
`istio_request_duration_seconds_bucket` has been removed and replaced with `istio_request_duration_milliseconds_bucket`
and this breaks the `request-duration` metric check.

-You can create a metric template using the new duration metric like this:
+If you are using **Istio 1.4**, you can create a metric template using the old duration metric like this:

```yaml
apiVersion: flagger.app/v1beta1
@@ -66,7 +66,7 @@ spec:
0.99,
sum(
rate(
-istio_request_duration_milliseconds_bucket{
+istio_request_duration_seconds_bucket{
reporter="destination",
destination_workload_namespace="{{ namespace }}",
destination_workload=~"{{ target }}"
@@ -85,6 +85,6 @@ metrics:
name: latency
namespace: istio-system
thresholdRange:
-max: 500
+max: 0.500
interval: 1m
```
6 changes: 3 additions & 3 deletions docs/gitbook/tutorials/istio-progressive-delivery.md
@@ -6,15 +6,15 @@ This guide shows you how to use Istio and Flagger to automate canary deployments

## Prerequisites

-Flagger requires a Kubernetes cluster **v1.11** or newer and Istio **v1.0** or newer.
+Flagger requires a Kubernetes cluster **v1.11** or newer and Istio **v1.5** or newer.

Install Istio with telemetry support and Prometheus:

```bash
istioctl manifest apply --set profile=default
```

-Install Flagger using Kustomize (kubectl 1.14) in the `istio-system` namespace:
+Install Flagger using Kustomize (kubectl >= 1.14) in the `istio-system` namespace:

```bash
kubectl apply -k github.com/weaveworks/flagger//kustomize/istio
@@ -149,7 +149,7 @@ spec:
cmd: "hey -z 1m -q 10 -c 2 http://podinfo-canary.test:9898/"
```

-**Note** that when using Istio 1.5 you have to replace the `request-duration`
+**Note** that when using Istio 1.4 you have to replace the `request-duration`
with a [metric template](https://docs.flagger.app/dev/upgrade-guide#istio-telemetry-v2).

Save the above resource as podinfo-canary.yaml and then apply it:
4 changes: 2 additions & 2 deletions pkg/metrics/observers/istio.go
@@ -36,7 +36,7 @@ var istioQueries = map[string]string{
0.99,
sum(
rate(
-istio_request_duration_seconds_bucket{
+istio_request_duration_milliseconds_bucket{
reporter="destination",
destination_workload_namespace="{{ namespace }}",
destination_workload=~"{{ target }}"
@@ -75,6 +75,6 @@ func (ob *IstioObserver) GetRequestDuration(model flaggerv1.MetricTemplateModel)
return 0, fmt.Errorf("running query failed: %w", err)
}

-ms := time.Duration(int64(value*1000)) * time.Millisecond
+ms := time.Duration(int64(value)) * time.Millisecond
return ms, nil
}
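
For context, a minimal sketch of why the `*1000` factor can be dropped (the variable names and values below are illustrative and not part of the Flagger codebase): the old `istio_request_duration_seconds_bucket` histogram yields a quantile in seconds, so the observer had to scale it to milliseconds, while the quantile from `istio_request_duration_milliseconds_bucket` is already in milliseconds and can be passed to `time.Duration` directly.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Hypothetical query results for the same P99 latency of 100ms,
	// as each histogram would report it.
	secondsQuantile := 0.1        // istio_request_duration_seconds_bucket (Istio <= 1.4)
	millisecondsQuantile := 100.0 // istio_request_duration_milliseconds_bucket (Istio >= 1.5)

	// Old conversion: the value is in seconds, so scale to milliseconds first.
	oldDuration := time.Duration(int64(secondsQuantile*1000)) * time.Millisecond

	// New conversion: the value is already in milliseconds, use it as-is.
	newDuration := time.Duration(int64(millisecondsQuantile)) * time.Millisecond

	fmt.Println(oldDuration, newDuration) // 100ms 100ms
}
```

Both paths arrive at the same `time.Duration`; only the unit of the raw Prometheus value changes, which is also why the test fixture below switches from `"0.100"` to `"100"`.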
4 changes: 2 additions & 2 deletions pkg/metrics/observers/istio_test.go
@@ -49,13 +49,13 @@ func TestIstioObserver_GetRequestSuccessRate(t *testing.T) {
}

func TestIstioObserver_GetRequestDuration(t *testing.T) {
-expected := ` histogram_quantile( 0.99, sum( rate( istio_request_duration_seconds_bucket{ reporter="destination", destination_workload_namespace="default", destination_workload=~"podinfo" }[1m] ) ) by (le) )`
+expected := ` histogram_quantile( 0.99, sum( rate( istio_request_duration_milliseconds_bucket{ reporter="destination", destination_workload_namespace="default", destination_workload=~"podinfo" }[1m] ) ) by (le) )`

ts := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
promql := r.URL.Query()["query"][0]
assert.Equal(t, expected, promql)

-json := `{"status":"success","data":{"resultType":"vector","result":[{"metric":{},"value":[1,"0.100"]}]}}`
+json := `{"status":"success","data":{"resultType":"vector","result":[{"metric":{},"value":[1,"100"]}]}}`
w.Write([]byte(json))
}))
defer ts.Close()