Add user-specified labels/annotations to Canary for resulting Service object #510

Closed
stealthybox opened this issue Mar 19, 2020 · 4 comments · Fixed by #538

Comments

@stealthybox
Member

A user asked this question on Slack:

Can I customize the annotations on the generated clusterIP services?
(link)

Setting labels/annotations on the resulting Service is not currently supported.
The Service labels are currently set to match the labelSelector, and the annotations map is initialized as empty.

This is where those values are created in the code:
https://github.com/weaveworks/flagger/blob/ea42f70/pkg/controller/scheduler.go#L115
This is where they are set on the Service:
https://github.com/weaveworks/flagger/blob/ea42f70/pkg/router/kubernetes_default.go#L118-L119
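
Concretely, the metadata on a generated Service today looks roughly like this (a sketch with a hypothetical podinfo canary; selector and ports omitted):

```yaml
# Sketch of the metadata Flagger currently sets on a generated Service,
# assuming a hypothetical canary named "podinfo":
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  labels:
    app: podinfo   # set to match the labelSelector
  annotations: {}  # initialized empty; no way to supply user values today
```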

I think it makes sense to add fields to Canary for custom labels and annotations on the resulting Service.
This is a valid use-case for folks who use operators/controllers/mutating-admission-webhooks to mutate Services or related network infra (DNS/Ingress/TLS/etc) as well as OPA policy rules based on Service labels and annotations.

This would require an API change and a trivial code change in the two linked places.

I would propose two map[string]string fields be added to Canary:

  • spec.service.annotations
  • spec.service.labels

Flagger may add or override annotations in these maps when creating or updating the Service to support other mesh providers that are configured by Service annotations, such as Traefik or Ambassador (#142).
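
For illustration, a Canary using the proposed fields might look like this (a sketch only; these fields don't exist yet, and the apiVersion, names, and values are placeholders):

```yaml
apiVersion: flagger.app/v1beta1   # illustrative
kind: Canary
metadata:
  name: podinfo
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  service:
    port: 9898
    labels:                  # proposed: copied onto the generated Service
      team: podinfo-owners
    annotations:             # proposed: merged into the generated Service
      prometheus.io/scrape: "true"
```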

Related #85
Related #142

@stefanprodan
Member

Flagger creates 3 ClusterIP services: apex, canary, primary. Adding the metadata to all 3 would result in duplicated Prometheus metrics (if you add prom annotations); if we only add it to the apex, then canary pods will be excluded, since the apex selector targets the primary pods. Adding the metadata to all 3 would also break the ingress controllers mentioned, while adding it only to the apex service could work for those but would not cover OPA and such.
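
For illustration, a rough sketch of the three generated services for a hypothetical podinfo canary, which is why blanket metadata misbehaves:

```yaml
# Sketch only: stamping prometheus.io/scrape on all three would discover the
# primary pods twice (via apex and primary), while annotating only the apex
# would never discover the canary pods.
apiVersion: v1
kind: Service
metadata: { name: podinfo }           # apex -> selects the primary pods
spec: { selector: { app: podinfo-primary } }
---
apiVersion: v1
kind: Service
metadata: { name: podinfo-primary }   # primary -> selects the primary pods
spec: { selector: { app: podinfo-primary } }
---
apiVersion: v1
kind: Service
metadata: { name: podinfo-canary }    # canary -> selects the canary pods
spec: { selector: { app: podinfo } }
```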

@stealthybox
Member Author

stealthybox commented Mar 19, 2020

Flagger creates 3 ClusterIP services: apex, canary, primary

Ah thanks, I found the explanation and related reconcile functions.

Adding the metadata to all 3 would result in duplicated Prometheus metrics (if you add prom annotations), if we only add it to the apex then canary pods will be excluded since the apex selector is for the primary pods.

For the Prometheus scrape implementation, the apex/primary scrape results would be statistically similar since the pod selectors overlap, like you mentioned.
The canary metrics are a unique set.

This is a bit of an abstraction leak, because Prometheus is chasing the different Service objects to the same targets.
Other controllers that actually operate on Services themselves don't have this problem.

Adding the metadata to all 3 would also break the ingress controllers mentioned

This would be correct for any annotations that are unique keys.
Traefik has annotations that are Service-specific (ex: buffering, size-limits, connection-limits, load-balancing algorithms, request/response headers).
None of Traefik's Service annotations look globally unique. docs

Another use-case would be CNIs that operate on Services.
kube-router uses annotations for advertisement, direct-server-return, hairpinning, and load-balancing algorithms: docs


Would breaking out each service in the Canary API make sense?

  • spec.service.apex.annotations
  • spec.service.apex.labels
  • spec.service.primary.annotations
  • spec.service.primary.labels
  • spec.service.canary.annotations
  • spec.service.canary.labels

This would force the user to think about how they decorate each Service, which should help prevent the kinds of bugs you mentioned.
It would also enable more granular uses.
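
A sketch of how the per-service fields could look (field paths per the proposal above; the label and annotation keys/values are just placeholders):

```yaml
spec:
  service:
    port: 9898
    apex:
      labels:
        monitor: "true"
      annotations:
        example.com/dns: podinfo.example.com   # placeholder annotation key
    primary:
      labels:
        monitor: "true"
    canary:
      labels:
        monitor: "true"
```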

@stefanprodan
Member

stefanprodan commented Mar 19, 2020

Would breaking out each service in the Canary API make sense?

Yes, I think this is the only viable way of adding metadata to services.

@davidbIQ

I'd like to add that not having the ability to add a label to the service is limiting: with the Prometheus stack operator, which doesn't support annotation-based discovery of services, we can't use a ServiceMonitor and will likely be forced to use a PodMonitor, which isn't as clean imo. What I'd like to see is the ability to set up a ServiceMonitor that monitors services with, say, the label monitor=true, and to be able to request that label be added to the services Flagger creates. The other labels it creates would be unchanged; this would just be an additional one to allow for monitoring by Prometheus.

The Prometheus stack is here: https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack
In the prometheus.io/scrape section it states:
The prometheus operator does not support annotation-based discovery of services, using the PodMonitor or ServiceMonitor CRD in its place as they provide far more configuration options.

To accommodate that limitation, we created a service monitor similar to:
https://stackoverflow.com/questions/64445937/prometheus-monitor-all-services-without-creating-servicemonitor-for-each-servic

which is cleaner than our initial pass of three ServiceMonitors per service: one for the canary, one for the primary, and one for the base (though that did work).
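
For reference, a label-based ServiceMonitor along those lines might look roughly like this (the monitor label, namespace, and port name are assumptions, not the exact manifest):

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: all-labelled-services
  namespace: monitoring          # assumption: wherever kube-prometheus-stack runs
spec:
  namespaceSelector:
    any: true
  selector:
    matchLabels:
      monitor: "true"            # the label we'd want Flagger to add to its services
  endpoints:
    - port: http                 # assumption: named port exposed by the services
      path: /metrics
```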
