"no values found for nginx metric request-success-rate" with Prometheus Operator and nginx provider #421
You need to run the load test against the public address so that traffic goes through nginx; see https://docs.flagger.app/usage/nginx-progressive-delivery
Or use the ClusterIP address of your nginx ingress controller and set the Host header in hey, e.g.:
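A minimal load-test webhook of this shape should work (the hostname and the controller's in-cluster service address below are placeholders, not values from this cluster):

```yaml
webhooks:
  - name: load-test
    url: http://flagger-loadtester.test/
    timeout: 5s
    metadata:
      # -host sets the HTTP Host header so nginx routes the request to the
      # right ingress; hostname and service address are placeholders
      cmd: "hey -z 1m -q 10 -c 2 -host app.example.com http://nginx-ingress-controller.ingress-nginx/"
```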
Hi @stefanprodan, thanks for the quick reply, and great work on Flagger! Unfortunately, I'm still having issues even when switching the load tests to hit the public IP/hostname.
I updated the Flagger HelmRelease to install another Prometheus (…)
Any news on this?
I have the same issue when using prometheus-operator and a recent nginx-ingress. After a quick look, it seems the scraped nginx metrics use a different namespace label schema than the one Flagger queries: in the podinfo test the series carry exported_namespace where Flagger's built-in query expects namespace (compare the two selectors below). In short: namespace -> exported_namespace.
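The difference looks roughly like this (a sketch of the series; label values follow the podinfo test, and the renaming happens because prometheus-operator keeps the scrape target's own namespace label and prefixes the conflicting metric label with exported_):

```
# series as stored by a prometheus-operator-managed Prometheus:
nginx_ingress_controller_requests{exported_namespace="test", ingress="podinfo"}

# series shape that Flagger's built-in request-success-rate query expects:
nginx_ingress_controller_requests{namespace="test", ingress="podinfo"}
```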
I guess prometheus-operator changes that label, since the Flagger e2e tests for NGINX are passing (#489). The solution is to use metric templates, e.g.:

```yaml
apiVersion: flagger.app/v1beta1
kind: MetricTemplate
metadata:
  name: error-rate
  namespace: ingress-nginx
spec:
  provider:
    type: prometheus
    address: http://prometheus.monitoring:9090
  query: |
    100 - sum(
      rate(
        nginx_ingress_controller_requests{
          exported_namespace="{{ namespace }}",
          ingress="{{ ingress }}",
          status!~"5.*"
        }[{{ interval }}]
      )
    )
    /
    sum(
      rate(
        nginx_ingress_controller_requests{
          exported_namespace="{{ namespace }}",
          ingress="{{ ingress }}"
        }[{{ interval }}]
      )
    )
    * 100
```

Then replace the canary metrics with:

```yaml
metrics:
  - name: error-rate
    templateRef:
      name: error-rate
      namespace: ingress-nginx
    thresholdRange:
      max: 1
    interval: 1m
```
Yeah, that did the trick. Using Flagger alongside prometheus-operator is probably a common use case, so this should be documented somewhere.
This has been documented here https://docs.flagger.app/v/master/tutorials/prometheus-operator |
Link is broken :-)
I installed Flagger with Flux as follows:
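A minimal sketch of such a HelmRelease, assuming the flagger chart's standard values (the chart version and the Prometheus address are illustrative, not the exact values from this cluster):

```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: flagger
  namespace: ingress-nginx
spec:
  releaseName: flagger
  chart:
    repository: https://flagger.app
    name: flagger
    version: 1.1.0  # illustrative
  values:
    meshProvider: nginx
    # Prometheus instance managed by prometheus-operator; address is an assumption
    metricsServer: http://prometheus-operated.monitoring:9090
```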
nginx-ingress is installed as follows:
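And for the ingress controller, something along these lines with Prometheus metrics and a ServiceMonitor enabled (chart version and labels are illustrative):

```yaml
apiVersion: helm.fluxcd.io/v1
kind: HelmRelease
metadata:
  name: nginx-ingress
  namespace: ingress-nginx
spec:
  releaseName: nginx-ingress
  chart:
    repository: https://kubernetes-charts.storage.googleapis.com
    name: nginx-ingress
    version: 1.41.3  # illustrative
  values:
    controller:
      metrics:
        enabled: true
        serviceMonitor:
          # lets prometheus-operator discover and scrape the controller
          enabled: true
          additionalLabels:
            release: prometheus-operator  # must match your Prometheus serviceMonitorSelector
```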
We are also using prometheus-operator. I can confirm from the Prometheus dashboard that nginx metrics are being collected, and I also confirmed that Flagger can connect to the metricsServer endpoint specified in the HelmRelease. I made some changes to the podinfo Helm chart to support creating an Ingress and passing it to the canary spec. My canary spec for podinfo-frontend:
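In shape it looks like this (a sketch; the host, port, and analysis numbers are illustrative, while the flagger namespace follows the error message below):

```yaml
apiVersion: flagger.app/v1beta1
kind: Canary
metadata:
  name: podinfo-frontend
  namespace: flagger
spec:
  provider: nginx
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo-frontend
  ingressRef:
    apiVersion: networking.k8s.io/v1beta1
    kind: Ingress
    name: podinfo-frontend
  service:
    port: 9898
  analysis:
    interval: 1m
    threshold: 5
    maxWeight: 50
    stepWeight: 10
    metrics:
      - name: request-success-rate
        thresholdRange:
          min: 99
        interval: 1m
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.flagger/
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 https://app.example.com/"
```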
My issue: every canary progression fails with:

```
Halt advancement no values found for nginx metric request-success-rate probably podinfo-frontend.flagger is not receiving traffic
```
I confirmed the hey load testing is working from the flagger-loadtester pod. Any thoughts as to what's going on? Thanks very much.