JsonPatches doesnt work #1434

Closed
4c74356b41 opened this issue Aug 27, 2020 · 22 comments

Comments

@4c74356b41

4c74356b41 commented Aug 27, 2020

line breaks added only for readability

Error: accumulating resources: accumulateFile "accumulating resources from 'helmx.1.rendered/cluster-crd/templates/01-prometheus-operator.yaml':
evalsymlink failure on '/tmp/chartify193562796/helmx.1.rendered/cluster-crd/templates/01-prometheus-operator.yaml' : lstat /tmp/chartify193562796/helmx.1.rendered: no such file or directory",
loader.New "Error loading helmx.1.rendered/cluster-crd/templates/01-prometheus-operator.yaml with git: url lacks host: helmx.1.rendered/cluster-crd/templates/01-prometheus-operator.yaml, dir: evalsymlink failure on '/tmp/chartify193562796/helmx.1.rendered/cluster-crd/templates/01-prometheus-operator.yaml' :
lstat /tmp/chartify193562796/helmx.1.rendered: no such file or directory, get: invalid source string: helmx.1.rendered/cluster-crd/templates/01-prometheus-operator.yaml"]

this is the helmfile piece that produces the error:

- name: prometheus-operator
  chart: localFolder
  jsonPatches:
  - target:
      version: v1
      kind: ServiceAccount
      name: prometheus-operator
    patch:
    - op: replace
      path: /metadata/namespace
      value: monitoring

The file that it's rendering is pretty much this file. Any pointers?

kustomize: {Version:kustomize/v3.8.1 GitCommit:0b359d0ef0272e6545eda0e99aacd63aef99c4d0 BuildDate:2020-07-16T00:58:46Z GoOs:linux GoArch:amd64}

helmfile: version v0.125.7

helm: version.BuildInfo{Version:"v3.3.0", GitCommit:"8a4aeec08d67a7b84472007529e8097ec3742105", GitTreeState:"dirty", GoVersion:"go1.14.7"}
@4c74356b41
Author

also, when running the helm x jsonPatch example I'm getting this:

running kustomize -o /tmp/chartify567350779/templates/kustomized.yaml build --load_restrictor=none --enable_alpha_plugins /tmp/chartify567350779
using the default chart version 1.0.0 due to that no ChartVersion is specified
using requirements.yaml:
{}

running helm dependency up /tmp/chartify958743838
2020/08/27 18:16:42 unable to find plugin root - tried: (''; homed in $KUSTOMIZE_PLUGIN_HOME), ('kustomize/plugin'; homed in $XDG_CONFIG_HOME), ('/root/.config/kustomize/plugin'; homed in default value of $XDG_CONFIG_HOME), ('/root/kustomize/plugin'; homed in home directory)

options: {false [] []   true}
running helm template --debug=false --include-crds --output-dir /tmp/chartify958743838/helmx.1.rendered manifests /tmp/chartify958743838
running helm fetch incubator/raw --untar -d /tmp/chartify390158057
running helm repo list
using requirements.yaml:
dependencies:
  - name: raw
    repository: https://kubernetes-charts-incubator.storage.googleapis.com
    condition: bar.enabled
    alias: bar
    version: '*'

running helm dependency up /tmp/chartify390158057/raw
Error: could not find protocol handler for: git+https

in ./helmfile.yaml: [exit status 1

COMMAND:
  kustomize -o /tmp/chartify567350779/templates/kustomized.yaml build --load_restrictor=none --enable_alpha_plugins /tmp/chartify567350779

OUTPUT:
  2020/08/27 18:16:42 unable to find plugin root - tried: (''; homed in $KUSTOMIZE_PLUGIN_HOME), ('kustomize/plugin'; homed in $XDG_CONFIG_HOME), ('/root/.config/kustomize/plugin'; homed in default value of $XDG_CONFIG_HOME), ('/root/kustomize/plugin'; homed in home directory)]

@4c74356b41
Author

So it appears it doesn't work at all xD

@mumoshu
Collaborator

mumoshu commented Aug 27, 2020

@4c74356b41 Hey! Thanks for reporting. Could you provide exact reproduction steps?

I tried to reproduce it myself but had no luck. Here's the structure of the sample project I tried:

$ tree .
.
├── helmfile.yaml
└── localFolder
    └── prometheus-operator.yaml

1 directory, 2 files
$ cat helmfile.yaml
releases:
- name: prometheus-operator
  chart: localFolder
  jsonPatches:
  - target:
      version: v1
      kind: ServiceAccount
      name: prometheus-operator
    patch:
    - op: replace
      path: /metadata/namespace
      value: monitoring
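
For completeness, localFolder/prometheus-operator.yaml in this test is just a plain manifest. A minimal stand-in (assumed content — the real file also carries the usual labels) looks like:

# localFolder/prometheus-operator.yaml (minimal stand-in; assumed content)
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus-operator
  namespace: default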

// Btw, I did find a bug where helmfile was ignoring jsonPatches when a non-kustomize, non-chart directory is specified for chart: localFolder. I'll fix it asap. However, it at least doesn't result in errors like the `Error: accumulating resources: accumulateFile "accumulating resources from` that you've seen, so yours might be a fundamentally different issue.
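
For context, jsonPatches is essentially handed off to kustomize as a JSON 6902 patch. Conceptually it corresponds to a generated kustomization along these lines (a rough sketch of the mechanism, not the exact file chartify writes):

# kustomization.yaml — conceptual equivalent of the jsonPatches entry above
resources:
- prometheus-operator.yaml
patchesJson6902:
- target:
    version: v1
    kind: ServiceAccount
    name: prometheus-operator
  patch: |-
    - op: replace
      path: /metadata/namespace
      value: monitoring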

mumoshu added a commit to helmfile/chartify that referenced this issue Aug 28, 2020
mumoshu added a commit that referenced this issue Aug 28, 2020
To fix the issue that adhoc json patches were not working on kustomize/raw manifests.

Note that a regular kustomize project was working. In other words, this only affects `chart: path/to/dir` combined with `jsonPatches: ...` when the `path/to/dir` points to a kustomize project or a local directory containing raw K8s manifests.

Ref #1434 (comment)
@4c74356b41
Author

4c74356b41 commented Aug 28, 2020

What might help is that I'm using the 4c74356b41/helmfile:azure docker image to test my stuff; maybe it's related to the environment? I don't have kustomize there, I just grab the latest binary and use it.

@mumoshu
Collaborator

mumoshu commented Aug 28, 2020

@4c74356b41 Maybe. Would you mind sharing the Dockerfile/Makefile that you used for building the image?

I tried to read https://hub.docker.com/layers/4c74356b41/helmfile/azure/images/sha256-c194fd5b089dd9a5013f73dcf848aa5f2a73ceb259cc30f359da54d6c59353f9?context=explore but it's too obfuscated to read...

@4c74356b41
Author

You can just pull the image, nothing harmful in there, but here's the Dockerfile:

# Setup build arguments with default versions
ARG TERRAFORM_VERSION=0.13.0
ARG KUBE_VERSION=v1.18.8
ARG HELM_VERSION=v3.3.0
ARG HELMFILE_VERSION=v0.125.7
ARG AZURE_CLI_VERSION=2.11.0
ARG PYTHON_MAJOR_VERSION=3.7

# Download Terraform\Kubectl\Helm binaries
FROM debian:buster-slim as binaries
ARG TERRAFORM_VERSION
ARG KUBE_VERSION
ARG HELM_VERSION
RUN apt-get update
RUN apt-get install --no-install-recommends -y curl=7.64.0-4+deb10u1
RUN apt-get install --no-install-recommends -y ca-certificates=20190110
RUN apt-get install --no-install-recommends -y unzip=6.0-23+deb10u1
RUN apt-get install --no-install-recommends -y gnupg=2.2.12-1+deb10u1
RUN apt-get install --no-install-recommends -y wget
RUN curl -Os https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_SHA256SUMS
RUN curl -Os https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip
RUN curl -Os https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_SHA256SUMS.sig
RUN wget -q https://storage.googleapis.com/kubernetes-release/release/${KUBE_VERSION}/bin/linux/amd64/kubectl -O /usr/local/bin/kubectl
RUN wget -q https://get.helm.sh/helm-${HELM_VERSION}-linux-amd64.tar.gz -O - | tar -xzO linux-amd64/helm > /usr/local/bin/helm
COPY hashicorp.asc hashicorp.asc
RUN gpg --import hashicorp.asc
RUN gpg --verify terraform_${TERRAFORM_VERSION}_SHA256SUMS.sig terraform_${TERRAFORM_VERSION}_SHA256SUMS
SHELL ["/bin/bash", "-o", "pipefail", "-c"]
RUN grep terraform_${TERRAFORM_VERSION}_linux_amd64.zip terraform_${TERRAFORM_VERSION}_SHA256SUMS | sha256sum -c -
RUN unzip -j terraform_${TERRAFORM_VERSION}_linux_amd64.zip

# Install az CLI using PIP
FROM debian:buster-slim as azure-cli
ARG AZURE_CLI_VERSION
ARG PYTHON_MAJOR_VERSION
RUN apt-get update
RUN apt-get install -y --no-install-recommends python3=${PYTHON_MAJOR_VERSION}.3-1
RUN apt-get install -y --no-install-recommends python3-pip=18.1-5
RUN pip3 install setuptools==47.1.1
RUN pip3 install azure-cli==${AZURE_CLI_VERSION}

# Layer to get helmfile stuff
FROM quay.io/roboll/helmfile:${HELMFILE_VERSION} as helmfile

# Build final image
FROM mcr.microsoft.com/powershell:lts-debian-buster-slim
WORKDIR /ci
ENV XDG_DATA_HOME=/home
ARG PYTHON_MAJOR_VERSION

RUN apt-get update \
  && apt-get install -y --no-install-recommends \
    git=1:2.20.1-2+deb10u3 \
    python3=${PYTHON_MAJOR_VERSION}.3-1 \
    python3-distutils=${PYTHON_MAJOR_VERSION}.3-1 \
    curl \
  && apt-get clean \
  && rm -rf /var/lib/apt/lists/* \
  && update-alternatives --install /usr/bin/python python /usr/bin/python${PYTHON_MAJOR_VERSION} 1

COPY --from=binaries /terraform /usr/local/bin/terraform
COPY --from=binaries /usr/local/bin/helm /usr/local/bin/helm
COPY --from=binaries /usr/local/bin/kubectl /usr/local/bin/kubectl
COPY --from=helmfile /usr/local/bin/helmfile /usr/local/bin
COPY --from=helmfile /root/.helm/cache/plugins/ /home/helm/plugins
COPY --from=azure-cli /usr/local/bin/az* /usr/local/bin/
COPY --from=azure-cli /usr/local/lib/python${PYTHON_MAJOR_VERSION}/dist-packages /usr/local/lib/python${PYTHON_MAJOR_VERSION}/dist-packages
COPY --from=azure-cli /usr/lib/python3/dist-packages /usr/lib/python3/dist-packages

RUN chmod +x /usr/local/bin/helmfile && chmod +x /usr/local/bin/helm && chmod +x /usr/local/bin/kubectl
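
As an aside: if kustomize were to be baked into this image instead of grabbed at runtime, one possible addition — a sketch only; the version and URL mirror the manual download shown later in this thread — is an extra download stage plus a COPY in the final stage:

# Hypothetical extra stage to fetch kustomize (version/URL are assumptions)
FROM debian:buster-slim as kustomize
RUN apt-get update \
  && apt-get install --no-install-recommends -y curl ca-certificates \
  && curl -fsSL https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv3.8.1/kustomize_v3.8.1_linux_amd64.tar.gz \
     | tar -xz -C /usr/local/bin kustomize

# ...and in the final stage:
# COPY --from=kustomize /usr/local/bin/kustomize /usr/local/bin/kustomize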

@4c74356b41
Author

I tried using your own docker image (quay.io/roboll/helmfile:helm3-v0.126.0) together with kustomize v3.8.1 (https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv3.8.1) and it throws the exact same errors:

accumulating resources: accumulateFile "accumulating resources from 'helmx.1.rendered/cluster-crd/templates/01-elastic-operator.yaml': evalsymlink failure on '/tmp/chartify694271450/helmx.1.rendered/cluster-crd/templates/01-elastic-operator.yaml' : lstat /tmp/chartify694271450/helmx.1.rendered: no such file or directory", loader.New "Error loading helmx.1.rendered/cluster-crd/templates/01-elastic-operator.yaml with git: url lacks host: helmx.1.rendered/cluster-crd/templates/01-elastic-operator.yaml, dir: evalsymlink failure on '/tmp/chartify694271450/helmx.1.rendered/cluster-crd/templates/01-elastic-operator.yaml' : lstat /tmp/chartify694271450/helmx.1.rendered: no such file or directory, get: invalid source string: helmx.1.rendered/cluster-crd/templates/01-elastic-operator.yaml"

btw, why does your image not have kustomize in it?

@4c74356b41
Author

@mumoshu bump

@mumoshu
Collaborator

mumoshu commented Sep 1, 2020

@4c74356b41 Could you share all the files, including helmfile.yaml and your local chart? I've had no luck reproducing this so far.

@mumoshu
Collaborator

mumoshu commented Sep 1, 2020

btw, why does your image not have kustomize in it?

I was just too lazy to add it and keep it updated :) I'm open to reviewing PRs to add/update it, though.

@4c74356b41
Author

4c74356b41 commented Sep 1, 2020

Okay, I figured it out. This happens when I try to override YAMLs from a chart, so it doesn't throw when I have:

├── helmfile.yaml
└── localFolder
    └── 01-prometheus-operator.yaml

but throws when I have it like this:

├── helmfile.yaml
└── localFolder
    ├── Chart.yaml
    └── templates
        └── 01-prometheus-operator.yaml
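
(For illustration only — the actual file wasn't pasted here — the Chart.yaml in that second layout is a minimal chart descriptor along these lines:)

# localFolder/Chart.yaml — hypothetical minimal chart metadata; the real one may differ
apiVersion: v2
name: localFolder
version: 0.1.0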

Unfortunately, it doesn't work even when it doesn't throw... I observe no changes to the YAMLs, using your exact snippet (from above). Am I misunderstanding something?

Or, to rephrase: is there any other way to change the namespace/tolerations/etc. of a static YAML with helmfile (say, I don't want to use a chart for prometheus-operator, or I'm using some service that doesn't yet have a chart)?

@4c74356b41
Author

Mate? Can you confirm it's something I'm doing, or is it really not rendering any changes? @mumoshu

@mumoshu
Collaborator

mumoshu commented Sep 2, 2020

@4c74356b41 I noticed I was using a very outdated version of kustomize (v3.2.1) so that might be the cause of your issue. Let me check this soon.

@mumoshu
Collaborator

mumoshu commented Sep 2, 2020

$ helmfile version
helmfile version v0.126.2

$ kustomize version
{Version:kustomize/v3.8.1 GitCommit:0b359d0ef0272e6545eda0e99aacd63aef99c4d0 BuildDate:2020-07-16T00:58:46Z GoOs:darwin GoArch:amd64}
$  tree .
.
├── helmfile.0.yaml
├── helmfile.1.yaml
└── localFolder
    └── prometheus-operator.yaml


1 directory, 3 files
$ diff --unified helmfile.{0,1}.yaml
--- helmfile.0.yaml     2020-09-02 20:41:20.000000000 +0900
+++ helmfile.1.yaml     2020-08-28 08:39:52.000000000 +0900
@@ -1,3 +1,12 @@
 releases:
 - name: prometheus-operator
   chart: localFolder
+  jsonPatches:
+  - target:
+      version: v1
+      kind: ServiceAccount
+      name: prometheus-operator
+    patch:
+    - op: replace
+      path: /metadata/namespace
+      value: monitoring
$ helmfile -f helmfile.0.yaml template > manifests.0.yaml
$ helmfile -f helmfile.1.yaml template > manifests.1.yaml
$  diff --unified manifests.{0,1}.yaml | grep -v ^-#

*snip*

 apiVersion: v1
 kind: ServiceAccount
 metadata:
@@ -9,10 +8,9 @@
     app.kubernetes.io/name: prometheus-operator
     app.kubernetes.io/version: v0.41.1
   name: prometheus-operator
-  namespace: default
+  namespace: monitoring

*snip*

So it does seem to change the SA's namespace to monitoring, as specified in the helmfile.yaml jsonPatches. What's the difference on your side? Am I missing something? 🤔

@mumoshu
Collaborator

mumoshu commented Sep 2, 2020

or can I rephrase, is there any other way to change the namespace\tolerations\etc of a static yaml with helmfile

There should be no way other than what I demonstrated above.

@mumoshu
Collaborator

mumoshu commented Sep 2, 2020

Same with the helmfile image 🤔

$  docker run -it -w $(pwd) -v $(pwd):$(pwd) --rm quay.io/roboll/helmfile:helm3-v0.126.0 bash
bash-5.0# curl -LO https://github.com/kubernetes-sigs/kustomize/releases/download/kustomize%2Fv3.8.1/kustomize_v3.8.1_linux_amd64.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   661  100   661    0     0   6064      0 --:--:-- --:--:-- --:--:--  6009
100 12.5M  100 12.5M    0     0  2340k      0  0:00:05  0:00:05 --:--:-- 2882k
bash-5.0# tar zxvf kustomize_v3.8.1_linux_amd64.tar.gz
kustomize
bash-5.0# mv kustomize /usr/local/bin
bash-5.0# kustomize version
{Version:kustomize/v3.8.1 GitCommit:0b359d0ef0272e6545eda0e99aacd63aef99c4d0 BuildDate:2020-07-16T00:58:46Z GoOs:linux GoArch:amd64}
bash-5.0# helmfile version
helmfile version v0.126.0
bash-5.0# helmfile -f helmfile.0.yaml template > manifests.0.yaml

bash-5.0# helmfile -f helmfile.1.yaml template > manifests.1.yaml

bash-5.0# diff manifests.{0,1}.yaml | grep -v ^#-

*snip*

 apiVersion: v1
 kind: ServiceAccount
 metadata:
@@ -9,10 +8,9 @@
     app.kubernetes.io/name: prometheus-operator
     app.kubernetes.io/version: v0.41.1
   name: prometheus-operator
-  namespace: default
+  namespace: monitoring


@4c74356b41
Author

Okay, this just started working with v0.126.2, no changes on my end. So I'm not sure how you claim it was working for you prior to that helmfile version :)

@mumoshu
Collaborator

mumoshu commented Sep 3, 2020

@4c74356b41 Glad to hear it worked for you! As I said before, it should work only after the fix mentioned in #1434 (comment), which is 94e01b7 and has been included since v0.125.9.
Sorry if it took too much time for you. But I'd appreciate not being blamed, as I'm trying my best :)

Also, if you tried any of v0.125.9, v0.126.0, or v0.126.1 before finally trying v0.126.2 and it still didn't work at that time, there may be some bug that I fixed unexpectedly. It's just that I tried it several times with v0.125.9 and later versions and it consistently worked for me. So I hope it was just 94e01b7.

@yurrriq
Contributor

yurrriq commented Apr 23, 2021

I'm now running into issues with prometheus-community/kube-prometheus-stack and jsonPatches; it's seemingly not fetching the sub-charts.

Error: found in Chart.yaml, but missing in charts/ directory: kube-state-metrics, prometheus-node-exporter, grafana
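
For reference, the setup is roughly the following — a sketch reconstructed from the description above; the repository entry and the patch target are assumptions:

# helmfile.yaml (sketch)
repositories:
- name: prometheus-community
  url: https://prometheus-community.github.io/helm-charts

releases:
- name: kube-prometheus-stack
  chart: prometheus-community/kube-prometheus-stack
  jsonPatches:
  - target:
      version: v1
      kind: ServiceAccount
      name: kube-prometheus-stack
    patch:
    - op: replace
      path: /metadata/namespace
      value: monitoring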

@mumoshu
Collaborator

mumoshu commented Apr 23, 2021

@yurrriq That's already fixed in the latest, unreleased version of helmfile. Could you try building helmfile from master and see if that fixes the issue for you, too?

@mumoshu
Collaborator

mumoshu commented Apr 23, 2021

@yurrriq Please see #1759

@mumoshu
Collaborator

mumoshu commented Apr 24, 2021

Well, closing as the original issue has been resolved. Please open a new issue for anything unrelated. Thanks!

@mumoshu mumoshu closed this as completed Apr 24, 2021