Cannot install prometheus-operator chart with helmfile due to helm-diff error #1124
I made an educated guess as to the
Hopefully that command is in the ballpark of the real command. My guess is that until the
Issue being tracked in
What version of prometheus-operator are you trying to install?
Probably your version is a bit outdated. Helm 3 works differently with CRDs: https://helm.sh/docs/topics/chart_best_practices/custom_resource_definitions/
I'm using helm v3.1.1 and helm-diff plugin v3.1.1, trying to install the `stable/prometheus-operator` chart. Does this chart not properly handle CRDs? Is there a different chart I should be using?
Thanks for the assistance, Andrew!
…On Thu, Feb 27, 2020 at 5:19 AM Andrew Nazarov ***@***.***> wrote:
Probably your version is a bit outdated. Helm 3 works differently with
CRDs:
https://helm.sh/docs/topics/chart_best_practices/custom_resource_definitions/
We are successfully deploying version 8.7.0 with Helm 3. Can you check the exact version of the prometheus-operator chart that is used?
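If it helps to check: the CHART column of `helm ls` shows the chart name with its version appended (e.g. `prometheus-operator-8.7.0`). A small sketch (the helper name is my own, for illustration) that pulls the version out of such a name:

```shell
# Hypothetical helper: extract the semver suffix from a chart name as it
# appears in the CHART column of `helm ls`, e.g. prometheus-operator-8.7.0.
chart_version() {
  # keep everything after the last '-' that is followed only by digits/dots
  echo "$1" | sed 's/.*-\([0-9][0-9.]*\)$/\1/'
}

chart_version "prometheus-operator-8.7.0"   # prints 8.7.0
```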
I'm using hooks as a workaround:
With helmfile (v0.104.1), helm-diff (v3.1.1), and helm (v3.1.2) I'm no
longer having this issue.
…On Tue, Mar 24, 2020 at 11:39 AM agmtr ***@***.***> wrote:
I'm using hooks as a workaround:
- events: ["prepare"]
  command: "/bin/sh"
  args: ["-c", "kubectl get crd alertmanagers.monitoring.coreos.com >/dev/null 2>&1 || \
    kubectl apply --validate=false -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.37/example/prometheus-operator-crd/monitoring.coreos.com_alertmanagers.yaml"]
- events: ["prepare"]
  command: "/bin/sh"
  args: ["-c", "kubectl get crd podmonitors.monitoring.coreos.com >/dev/null 2>&1 || \
    kubectl apply --validate=false -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.37/example/prometheus-operator-crd/monitoring.coreos.com_podmonitors.yaml"]
- events: ["prepare"]
  command: "/bin/sh"
  args: ["-c", "kubectl get crd prometheuses.monitoring.coreos.com >/dev/null 2>&1 || \
    kubectl apply --validate=false -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.37/example/prometheus-operator-crd/monitoring.coreos.com_prometheuses.yaml"]
- events: ["prepare"]
  command: "/bin/sh"
  args: ["-c", "kubectl get crd prometheusrules.monitoring.coreos.com >/dev/null 2>&1 || \
    kubectl apply --validate=false -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.37/example/prometheus-operator-crd/monitoring.coreos.com_prometheusrules.yaml"]
- events: ["prepare"]
  command: "/bin/sh"
  args: ["-c", "kubectl get crd servicemonitors.monitoring.coreos.com >/dev/null 2>&1 || \
    kubectl apply --validate=false -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.37/example/prometheus-operator-crd/monitoring.coreos.com_servicemonitors.yaml"]
- events: ["prepare"]
  command: "/bin/sh"
  args: ["-c", "kubectl get crd thanosrulers.monitoring.coreos.com >/dev/null 2>&1 || \
    kubectl apply --validate=false -f https://raw.githubusercontent.com/coreos/prometheus-operator/release-0.37/example/prometheus-operator-crd/monitoring.coreos.com_thanosrulers.yaml"]
set:
  - name: prometheusOperator.createCustomResource
    value: false
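Each hook above follows the same check-then-apply idiom. A generalized sketch (the function name and the overridable `KUBECTL` variable are my own, for illustration; overriding `KUBECTL` lets the logic be exercised without a cluster):

```shell
# Apply a CRD manifest only when the CRD is not already registered.
# KUBECTL is overridable so the logic can be tested without a cluster.
KUBECTL="${KUBECTL:-kubectl}"

ensure_crd() {
  crd="$1"; manifest="$2"
  if $KUBECTL get crd "$crd" >/dev/null 2>&1; then
    echo "crd $crd already present, skipping"
  else
    $KUBECTL apply --validate=false -f "$manifest"
  fi
}
```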
This is still happening as of helmfile (v0.106.3), helm-diff (v3.1.1), and helm (v3.1.2). What prometheus-operator chart version are you using?
@mojochao I seem to have the same issue with 8.12.3
helmfile: v0.109.0. Chart: The issue is not solved.
Similar, but with a different chart. Bash:
$ helmfile --version
helmfile version v0.109.1
$ helm version
version.BuildInfo{Version:"v3.2.0", GitCommit:"e11b7ce3b12db2941e90399e874513fbd24bcb71", GitTreeState:"clean", GoVersion:"go1.13.10"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.11-eks-af3caf", GitCommit:"af3caf6136cd355f467083651cc1010a499f59b1", GitTreeState:"clean", BuildDate:"2020-03-27T21:51:36Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}
$ helm plugin list
NAME VERSION DESCRIPTION
diff    3.1.1   Preview helm upgrade changes as a diff

helmfile:
repositories:
  - name: dynatrace
    url: https://raw.githubusercontent.com/Dynatrace/helm-charts/master/repos/stable
releases:
  - name: dynatrace-oneagent-operator
    namespace: dynatrace
    chart: dynatrace/dynatrace-oneagent-operator
    version: ~0.7.0
Isn't this a chart issue, rather than a helm or helmfile one? I mean, have those charts been migrated to internally use the new
I would rather say a Helm 3 issue. Since doing
In my opinion the best solution would be the one suggested in that issue, but it doesn't seem to be a priority atm. So another solution would be to have a way to disable diff in helmfile when the release isn't present (i.e. when installing), like a
Thanks. Interesting idea! Just curious, but how do you think Helmfile could determine whether it's an install or an upgrade?
I don't know how you would do it optimally, but I guess it could do
But does helmfile ever do |
It probably doesn't. The point I was trying to make is that the root cause is the same: as long as you try to validate an object that is not defined in the API server, the validation will fail. It can't be running that for an install, right? At least not without the
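For what it's worth, one way such an install-vs-upgrade check could be sketched is by probing for an existing release with `helm status` (the function name and the overridable `HELM` variable are my own assumptions, not Helmfile's actual approach):

```shell
# Decide whether applying a release would be an install or an upgrade by
# probing for an existing release. HELM is overridable for testing.
HELM="${HELM:-helm}"

release_action() {
  release="$1"; namespace="$2"
  if $HELM status "$release" -n "$namespace" >/dev/null 2>&1; then
    echo "upgrade"
  else
    echo "install"
  fi
}
```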
We're currently using the following workaround: ...
hooks:
  # Create CRDs separately in helmfile presync hooks
  # https://github.com/roboll/helmfile/issues/1124
  # https://github.com/helm/helm/issues/7449
  # https://github.com/cloudposse/helmfiles/blob/59490fd2599d6113a14103be919985f9fbcea73a/releases/prometheus-operator.yaml
  # Hooks associated with presync events are triggered before each release is applied to the remote cluster.
  # This is the ideal event for any commands that may mutate cluster state, as it
  # will not be run for read-only operations like lint, diff or template.
  # This hook installs the prometheuses.monitoring.coreos.com CustomResourceDefinition if needed
  - events: ["presync"]
    command: "/bin/sh"
    args: ["-c", "kubectl get crd prometheuses.monitoring.coreos.com >/dev/null 2>&1 || \
      { helm pull stable/prometheus-operator --version {{`{{ .Release.Version }}`}} && tar -Oxzf prometheus-operator-{{`{{ .Release.Version }}`}}.tgz prometheus-operator/crds/crd-prometheus.yaml | kubectl apply -f -; }"]
  # This hook installs the alertmanagers.monitoring.coreos.com CustomResourceDefinition if needed
  - events: ["presync"]
    command: "/bin/sh"
    args: ["-c", "kubectl get crd alertmanagers.monitoring.coreos.com >/dev/null 2>&1 || \
      { helm pull stable/prometheus-operator --version {{`{{ .Release.Version }}`}} && tar -Oxzf prometheus-operator-{{`{{ .Release.Version }}`}}.tgz prometheus-operator/crds/crd-alertmanager.yaml | kubectl apply -f -; }"]
  # This hook installs the prometheusrules.monitoring.coreos.com CustomResourceDefinition if needed
  - events: ["presync"]
    command: "/bin/sh"
    args: ["-c", "kubectl get crd prometheusrules.monitoring.coreos.com >/dev/null 2>&1 || \
      { helm pull stable/prometheus-operator --version {{`{{ .Release.Version }}`}} && tar -Oxzf prometheus-operator-{{`{{ .Release.Version }}`}}.tgz prometheus-operator/crds/crd-prometheusrules.yaml | kubectl apply -f -; }"]
  # This hook installs the servicemonitors.monitoring.coreos.com CustomResourceDefinition if needed
  - events: ["presync"]
    command: "/bin/sh"
    args: ["-c", "kubectl get crd servicemonitors.monitoring.coreos.com >/dev/null 2>&1 || \
      { helm pull stable/prometheus-operator --version {{`{{ .Release.Version }}`}} && tar -Oxzf prometheus-operator-{{`{{ .Release.Version }}`}}.tgz prometheus-operator/crds/crd-servicemonitor.yaml | kubectl apply -f -; }"]
  # This hook installs the podmonitors.monitoring.coreos.com CustomResourceDefinition if needed
  - events: ["presync"]
    command: "/bin/sh"
    args: ["-c", "kubectl get crd podmonitors.monitoring.coreos.com >/dev/null 2>&1 || \
      { helm pull stable/prometheus-operator --version {{`{{ .Release.Version }}`}} && tar -Oxzf prometheus-operator-{{`{{ .Release.Version }}`}}.tgz prometheus-operator/crds/crd-podmonitor.yaml | kubectl apply -f -; }"]
  # This hook installs the thanosrulers.monitoring.coreos.com CustomResourceDefinition if needed
  - events: ["presync"]
    command: "/bin/sh"
    args: ["-c", "kubectl get crd thanosrulers.monitoring.coreos.com >/dev/null 2>&1 || \
      { helm pull stable/prometheus-operator --version {{`{{ .Release.Version }}`}} && tar -Oxzf prometheus-operator-{{`{{ .Release.Version }}`}}.tgz prometheus-operator/crds/crd-thanosrulers.yaml | kubectl apply -f -; }"]
... It's similar to the one posted above by @agmtr, but takes the CRDs directly from the chart. This improves compatibility because it uses the same CRD version that is normally bundled with the chart, just deployed via a presync hook.
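The pull-and-extract step in those hooks relies on `tar -Oxzf` streaming a single file out of the chart archive to stdout for `kubectl apply -f -`. Isolated as a tiny helper (the function name is my own, for illustration), that step looks like:

```shell
# Stream one file out of a .tgz archive to stdout, as the presync hooks do
# with the CRD manifests bundled inside the pulled chart archive.
extract_file() {
  tgz="$1"; path="$2"
  tar -Oxzf "$tgz" "$path"
}
```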
As a workaround I installed helm-diff 3.0.0-rc.7 |
I had the exact same problem; using version 8.13.12 of prometheus-operator worked :)
FWIW using |
I thought helm-diff recently added
Can we perhaps enhance Helmfile to add a new option under
Apparently
But I've managed to make the prometheus-operator installation work with
`disableOpenAPIValidation: true` might be useful as a workaround for broken CRDs that are known to exist in older OpenShift versions, and `disableValidation: true` is confirmed to allow installing charts like prometheus-operator that try to install CRDs and CRs in the same chart. Strictly speaking, for the latter case I believe you only need `disableValidation: true` set during the first installation, but for ease of operation I suggest always setting it. Obviously, turning off validation mostly (disableOpenAPIValidation) or entirely (disableValidation) defers any real error until sync time. We need fully client-side validation that can read CRDs and use them to validate CRs, catching errors before sync. But that's worth another (big) issue. Fixes #1124
So after #1373 this should work
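Assuming the option landed as described, a release entry using it might look like this (a config sketch; the chart and names are illustrative, taken from the workarounds above):

```yaml
releases:
  - name: prometheus-operator
    namespace: monitoring
    chart: stable/prometheus-operator
    disableValidation: true   # skip validation during diff so the first install succeeds
```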
@mumoshu Thank you, man - as always you rock!!! |
Versions used:
Problem observed:
I can successfully install the stable/prometheus-operator chart with `helm` directly. I cannot install the same chart with `helmfile`.
I see that the error is coming out of the helm-diff plugin. What command(s) is helmfile using that cause these errors to surface? I will open an issue in the databus23/helm-diff plugin repo once I understand how to repro this with the `helm diff` command directly.
Many thanks in advance!