[kube-prometheus-stack] Add downward compat for Prom CRD #4818
Conversation
… CRD fields

Running the latest prom-stack versions on legacy OpenShift clusters with no influence on the preinstalled CRDs results in errors such as this:

```
failed to create typed patch object (..): .spec.scrapeConfigNamespaceSelector: field not declared in schema
```

This patch provides a workaround using this values.yaml:

```yaml
prometheus:
  prometheusSpec:
    scrapeConfigNamespaceSelector: null
    scrapeConfigSelectorNilUsesHelmValues: null
    scrapeConfigSelector: null
```

Signed-off-by: Johannes Schnatterer <johannes.schnatterer@cloudogu.com>
force-pushed from 25a3828 to 7157047
Hi @schnatterer, in general I understand the use case and I'm happy to assist here. However, I also like consistency; in conclusion, all selectors should gain this capability.

@GMartinez-Sisti, what do you think here? The selector logic has some hacks. If you ask me, I would favor a cleanup instead:

I would replace the whole logic with:

and I would like to move the default value to the Helm values file, since we support templating there:

```yaml
scrapeConfigSelector:
  matchLabels:
    release: '{{ $.Release.Name | quote }}'
```

In conclusion, similar config can be applied to the following selectors:
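To make the idea concrete, here is a rough sketch of what such simplified template logic could look like, based on the snippets discussed later in this thread; the exact template used in the chart may differ.

```yaml
# Sketch only: render the selector from values via tpl/toYaml and omit the
# field entirely when the value is explicitly set to null in values.
{{- if not (eq .Values.prometheus.prometheusSpec.scrapeConfigSelector nil) }}
scrapeConfigSelector:
{{- tpl (toYaml .Values.prometheus.prometheusSpec.scrapeConfigSelector | indent 4) . | nindent 2 }}
{{- end }}
```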
@jkroepke I like the idea, it does make it cleaner! Regarding:
We need to test if this is considered a breaking change.
It must be considered a breaking change for end users who set the affected values explicitly. However, I feel that's fine with a major version bump plus documentation.
…n in prometheus-community#4818 Signed-off-by: Johannes Schnatterer <johannes.schnatterer@cloudogu.com>
Hi @jkroepke and @GMartinez-Sisti,
I tried your suggestion and it seems to fix my problem. The default values fail, though.
This can be fixed by dropping the string comparison:

```yaml
{{- if not (eq .Values.prometheus.prometheusSpec.scrapeConfigSelector nil) }}
# instead of
{{- if not (or (eq .Values.prometheus.prometheusSpec.scrapeConfigSelector nil) (eq .Values.prometheus.prometheusSpec.scrapeConfigSelector "")) }}
```

Next is this:

Which could be fixed like so:

```yaml
scrapeConfigSelector:
{{- tpl (toYaml .Values.prometheus.prometheusSpec.scrapeConfigSelector | indent 4) . | nindent 2 }}
# instead of
scrapeConfigSelector: {{ tpl (toYaml .Values.prometheus.prometheusSpec.scrapeConfigSelector | indent 4) . }}
```

Now templating succeeds, but we have redundant quotes:

```console
$ helm template . | grep scrapeConfig -A2
  scrapeConfigSelector:
    matchLabels:
      release: '"release-name"'
$ helm template prometheus-community/kube-prometheus-stack --version 58.2.1 | grep scrapeConfig -A2
  scrapeConfigSelector:
    matchLabels:
      release: "release-name"
```

This could be fixed by removing the `| quote` from the default value:

```yaml
scrapeConfigSelector:
  matchLabels:
    release: "{{ $.Release.Name }}"
```

Then, however, the templated YAML contains single quotes instead of the double quotes it contained before. Would that matter?

```console
$ helm template . | grep scrapeConfig -A2
  scrapeConfigSelector:
    matchLabels:
      release: 'release-name'
```

To make sure we're all on the same page, I pushed the current state of the POC, see 739fe4c. Please have a look at it and tell me what you think. If we're all happy with the concept, I can implement the other selectors. I would also consider this a breaking change and I see its downsides. What would that mean for me as a contributor? Would you accept a PR from me that documents the change and bumps the major version?
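For what it's worth, a quick illustration (my example, not from the thread) of why the quoting style should not matter: for a simple value like this, plain, single-quoted, and double-quoted scalars all parse to the same string.

```yaml
# All three values are the identical string after YAML parsing.
a: release-name
b: 'release-name'
c: "release-name"
```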
Ah, I missed the test cases for setting other options, which do not seem to work as expected 😬

```console
$ helm template . --values - <<EOF | grep 'scrapeConfigSelector' -A2
prometheus:
  prometheusSpec:
    scrapeConfigSelector: {}
EOF
  scrapeConfigSelector:
    matchLabels:
      release: 'release-name'

$ helm template . --values - <<EOF | grep 'scrapeConfigSelector' -A3
prometheus:
  prometheusSpec:
    scrapeConfigSelector:
      a: b
EOF
  scrapeConfigSelector:
    a: b
    matchLabels:
      release: 'release-name'
```

I'm a bit at a loss here with my templating skills.
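A likely explanation (my summary, not a quote from the thread): Helm deep-merges user-supplied values into the chart's default values, so overriding the selector with `{}` or with extra keys never removes the default `matchLabels` key. The only Helm-native way to drop a default key is to set it to null explicitly:

```yaml
# Helm merges maps from values, so `scrapeConfigSelector: {}` keeps the default
# matchLabels; only an explicit null on the subkey removes it.
prometheus:
  prometheusSpec:
    scrapeConfigSelector:
      matchLabels: null
```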
As an alternative, could we agree on a less invasive fix, as originally proposed, but for all selectors? This way we would at least have consistency, and no breaking change. WDYT @jkroepke?
Could you try to implement the nil check based on Masterminds/sprig#53 (comment)?

Edit: I see, in Helm it is not possible to clear a map. After a quick test, a possible implementation could be this:

Template:

Values:

```yaml
prometheus:
  prometheusSpec:
    scrapeConfigSelector:
      matchLabels:
        release: "{{ $.Release.Name }}"
```

Case 1: Default values

Results in:

```yaml
spec:
  scrapeConfigSelector:
    matchLabels:
      release: 'release-name'
```

Case 2: Override with nil

Results in:

Case 3: Empty map (`{}`)

For the Prometheus Operator, a nil value and an empty map are different and have different behaviors. Due to Helm limitations, an override with an empty map will be ignored. The only known Helm-native solution would be overriding the subkey with a nil value.

Results in:
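For illustration, a hedged sketch of what the Case 2 override could look like in values, assuming the template omits the field when the value is nil (this mirrors the workaround from the PR description rather than quoting the comment verbatim):

```yaml
# Hypothetical Case 2 values: null the whole selector so the field
# is not rendered into the Prometheus spec at all.
prometheus:
  prometheusSpec:
    scrapeConfigSelector: null
```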
…stion in prometheus-community#4818 prometheus-community#4818 (comment) Signed-off-by: Johannes Schnatterer <johannes.schnatterer@cloudogu.com>
Interesting solution, @jkroepke 💡

Let me sum up the breaking change: if you used to set `scrapeConfigSelectorNilUsesHelmValues: false`, you now set `scrapeConfigSelector.matchLabels: null` instead. In other words:

```console
# What used to be:
$ helm template prometheus-community/kube-prometheus-stack --version 62.3.0 --values - <<EOF | grep scrapeConfigSelector
prometheus:
  prometheusSpec:
    scrapeConfigSelectorNilUsesHelmValues: false
EOF
  scrapeConfigSelector: {}

# Is now:
$ helm template . --values - <<EOF | grep 'scrapeConfigSelector' -A2
prometheus:
  prometheusSpec:
    scrapeConfigSelector:
      matchLabels: null
EOF
  scrapeConfigSelector:
    {}
```

And we can live with the additional line break after `scrapeConfigSelector:`, right?

We can also live with the single quotes in the default values 👇️, right?

```console
# What used to be:
$ helm template prometheus-community/kube-prometheus-stack --version 62.3.0 | grep scrapeConfigSelector -A3
  scrapeConfigSelector:
    matchLabels:
      release: "release-name"

# Is now:
$ helm template . | grep 'scrapeConfigSelector' -A2
  scrapeConfigSelector:
    matchLabels:
      release: 'release-name'
```

The same is true for all the other selectors. Plus, I can now set the selector to `null`, which was my original use case.

So what I will do is implement this for the remaining selectors, document the breaking change, and bump the major version.
And after some polishing during review, you will merge the PR? I'm asking because the number of changes will make this quite an investment for me, and I want to make sure we all agree on this path, so my time is not wasted.
It is at least valid YAML; that's what I checked. If it causes problems, I already have an idea how to fix that.
Kubernetes doesn't care whether single or double quotes are used. The YAML document will be converted to an internal structure anyway; that's why `kubectl get -o yaml` output always looks different from the input. If you work with
I would agree here; however, I also have a different opinion: if there is a breaking change which simplifies the maintenance, I would go for it. We are not writing enterprise software. That's why I feel it is important to hear a different opinion from another maintainer, e.g. @GMartinez-Sisti or @QuentinBisson.
Keep in mind, the selectors exist for Alertmanager and Thanos Ruler as well. I know, it's a ton of work.
Yes. I already got feedback quickly here. But hold the work until we have another opinion.
@GMartinez-Sisti @QuentinBisson Could you state your opinion on whether we should go forward with the breaking changes? This comment sums up what we plan, and jkroepke states his opinion in this comment. If you have doubts about a breaking change, we could go with my original approach in 7157047, which is minimally invasive but harder to maintain. I'd still be happy with that approach, because I only suggested a small fix and would be happy to get it faster and with less effort 🙂
I'm fine with the breaking change and this is most likely a better way forward :)
Thanks for the great work @schnatterer and @jkroepke 🚀
Was removed by accident. Signed-off-by: Johannes Schnatterer <johannes.schnatterer@cloudogu.com>
…-community#4818 Signed-off-by: Johannes Schnatterer <johannes.schnatterer@cloudogu.com>
Signed-off-by: Johannes Schnatterer <johannes.schnatterer@cloudogu.com>
Signed-off-by: Jan-Otto Kröpke <github@jkroepke.de>
As found by linter Signed-off-by: Johannes Schnatterer <johannes.schnatterer@cloudogu.com>
As found by linter Signed-off-by: Johannes Schnatterer <johannes.schnatterer@cloudogu.com>
As found by next linter Signed-off-by: Johannes Schnatterer <johannes.schnatterer@cloudogu.com>
Wow, this linting process is dragging on. For me as a contributor, it would make things a lot faster if the issue template contained a pointer to a doc that explains how to run the different linters locally, e.g. using Docker.
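For example, a hedged sketch of a local lint run, assuming the repository uses the chart-testing (`ct`) tool via its Docker image; the chart path, config, and image tag are illustrative and would need to match the repository's actual setup:

```bash
# Run chart-testing's lint locally via Docker; pin an appropriate image tag
# and adjust --charts/--config to the repository layout.
docker run --rm -it \
  -v "$(pwd):/charts" -w /charts \
  quay.io/helmpack/chart-testing:latest \
  ct lint --charts charts/kube-prometheus-stack
```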
Good point, could you create an issue for that?
LGTM. Tested locally.
@schnatterer Thanks for your contribution!
Yes thank you both for the great work |
🎉 Thanks for accepting my proposal and for the many discussions @jkroepke, @GMartinez-Sisti and @QuentinBisson! Great result!
This PR contains the following updates:

| Package | Update | Change |
|---|---|---|
| [kube-prometheus-stack](https://redirect.github.com/prometheus-operator/kube-prometheus) ([source](https://redirect.github.com/prometheus-community/helm-charts)) | major | `62.7.0` -> `63.0.0` |

Release notes, [`v63.0.0`](https://redirect.github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-63.0.0): kube-prometheus-stack collects Kubernetes manifests, Grafana dashboards, and Prometheus rules combined with documentation and scripts to provide easy-to-operate, end-to-end Kubernetes cluster monitoring with Prometheus using the Prometheus Operator. What's changed: [kube-prometheus-stack] Add downward compat for Prom CRD by [@schnatterer](https://redirect.github.com/schnatterer) in https://github.com/prometheus-community/helm-charts/pull/4818 (his first contribution).

This PR was generated by [Mend Renovate](https://mend.io/renovate/).

Co-authored-by: renovate[bot] <29139614+renovate[bot]@users.noreply.github.com>
Co-authored-by: Rouke Broersma <mobrockers@gmail.com>
For chart testing, the prometheus operator started to require (from version 63.0.0) the label `release` to equal `prometheus` in the default deployment done in our CI. The change from upstream: prometheus-community/helm-charts#4818
We use kube-prometheus-stack as a sub-chart. We are unable to set the selectors to `null` via our parent chart's values. Any suggestions? For us this is a breaking change.
Would allowing `false` as an alternative be sufficient?
Perfect
There is no known fix yet, only a proposal.
Not sure if this would work right now:

```yaml
kube-prometheus-stack:
  prometheus:
    prometheusSpec:
      additionalConfig:
        serviceMonitorNamespaceSelector: ~
```

I feel the best solution would be to convert all selector values to simple string values. It seems like map values are too magic for modern Helm.
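A hedged sketch of what string-typed selector values could look like; this is purely illustrative, not an agreed design:

```yaml
prometheus:
  prometheusSpec:
    # Hypothetical: the selector expressed as a literal YAML string that the
    # chart would template and insert verbatim; an empty string would then
    # unambiguously mean "no selector".
    scrapeConfigSelector: |
      matchLabels:
        release: "{{ $.Release.Name }}"
```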
…metheus-community#4818)" This reverts commit 25f32d6 Signed-off-by: Jan-Otto Kröpke <joe@cloudeteer.de>
…)" (#4883) Co-authored-by: Quentin Bisson <quentin.bisson@gmail.com>
This PR contains the following updates:

| Package | Type | Update | Change | Pending |
|---|---|---|---|---|
| [kube-prometheus-stack](https://github.com/prometheus-operator/kube-prometheus) ([source](https://github.com/prometheus-community/helm-charts)) | HelmChart | major | `62.7.0` -> `63.1.0` | `64.0.0` |

Release notes, [`v63.1.0`](https://github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-63.1.0): [kube-prometheus-stack] Add support for alertmanager cluster.label by [@mfinelli](https://github.com/mfinelli) in prometheus-community/helm-charts#4877 (first contribution). [`v63.0.0`](https://github.com/prometheus-community/helm-charts/releases/tag/kube-prometheus-stack-63.0.0): [kube-prometheus-stack] Add downward compat for Prom CRD by [@schnatterer](https://github.com/schnatterer) in prometheus-community/helm-charts#4818 (first contribution).

This PR has been generated by [Renovate Bot](https://github.com/renovatebot/renovate).

Reviewed-on: https://code.geekbundle.org/madic/git-ops-dev/pulls/174
Co-authored-by: renovate Bot <renovate@geekbundle.org>
Co-committed-by: renovate Bot <renovate@geekbundle.org>
What this PR does / why we need it
Running the latest prom-stack versions on legacy OpenShift clusters with no influence on the preinstalled CRDs results in errors such as this: `failed to create typed patch object (..): .spec.scrapeConfigNamespaceSelector: field not declared in schema`
This PR provides a workaround using this values.yaml:
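```yaml
# Workaround values from the commit message: null out the fields that are
# missing from the legacy CRDs so the chart does not render them.
prometheus:
  prometheusSpec:
    scrapeConfigNamespaceSelector: null
    scrapeConfigSelectorNilUsesHelmValues: null
    scrapeConfigSelector: null
```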
Yes, this is not an ideal solution, but that seems to be what the enterprise world requires 😐️
At runtime, the operator yields some warnings about the missing fields, but apparently everything else works.
Special notes for your reviewer
Checklist
- Title of the PR starts with chart name (e.g. `[prometheus-couchdb-exporter]`)