
feat: Allow suppress diff line output by regex #475

Merged
merged 8 commits into databus23:master from ignore-diff on Aug 21, 2023

Conversation

@jkroepke
Contributor

jkroepke commented Jun 28, 2023

This PR allows suppressing lines of the diff report by regex.
This option is aimed at power users and gives them full control over the output.

There is a new diff option, --suppress-output-line-regex, which can be passed multiple times.

If a line of the report matches one of the regexes, that line is removed from the report. If a diff entry then has no deltas left, the whole entry (file) is removed, except for its headline, e.g. default, nginx, Deployment (apps) has changed:

Since --suppress-output-line-regex applies only to the diff output, the behavior of --detailed-exit-code is untouched: if there are differences that are suppressed, the exit code is still 2.
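
Conceptually, the filtering behaves like the sketch below (a minimal illustration with invented names, not the actual code in diff/diff.go):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// suppressLines drops every body line that matches one of the regexes.
// If no "+"/"-" delta lines survive, the entry collapses to its headline.
func suppressLines(headline string, body []string, patterns []*regexp.Regexp) []string {
	kept := []string{headline}
	hasDelta := false
	for _, line := range body {
		suppressed := false
		for _, re := range patterns {
			if re.MatchString(line) {
				suppressed = true
				break
			}
		}
		if suppressed {
			continue // line removed from the report
		}
		if strings.HasPrefix(line, "-") || strings.HasPrefix(line, "+") {
			hasDelta = true
		}
		kept = append(kept, line)
	}
	if !hasDelta {
		return []string{headline} // no deltas left: only the headline remains
	}
	return kept
}

func main() {
	patterns := []*regexp.Regexp{regexp.MustCompile(`helm\.sh/chart`)}
	body := []string{
		"  labels:",
		"-     helm.sh/chart: foo-4.17.0",
		"+     helm.sh/chart: foo-4.18.0",
	}
	out := suppressLines("default, foo, Deployment (apps) has changed:", body, patterns)
	fmt.Println(strings.Join(out, "\n")) // both deltas suppressed, headline kept
}
```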

Feature in action:

helm diff upgrade prometheus-node-exporter prometheus-community/prometheus-node-exporter --version 4.18.0
default, prometheus-node-exporter, DaemonSet (apps) has changed:
  # Source: prometheus-node-exporter/templates/daemonset.yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: prometheus-node-exporter
    namespace: default
    labels:
-     helm.sh/chart: prometheus-node-exporter-4.17.0
+     helm.sh/chart: prometheus-node-exporter-4.18.0
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: metrics
      app.kubernetes.io/part-of: prometheus-node-exporter
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
-     app.kubernetes.io/version: "1.5.0"
+     app.kubernetes.io/version: "1.6.0"
  spec:
    selector:
      matchLabels:
        app.kubernetes.io/name: prometheus-node-exporter
        app.kubernetes.io/instance: prometheus-node-exporter
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 1
      type: RollingUpdate
    template:
      metadata:
        annotations:
          cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
        labels:
-         helm.sh/chart: prometheus-node-exporter-4.17.0
+         helm.sh/chart: prometheus-node-exporter-4.18.0
          app.kubernetes.io/managed-by: Helm
          app.kubernetes.io/component: metrics
          app.kubernetes.io/part-of: prometheus-node-exporter
          app.kubernetes.io/name: prometheus-node-exporter
          app.kubernetes.io/instance: prometheus-node-exporter
-         app.kubernetes.io/version: "1.5.0"
+         app.kubernetes.io/version: "1.6.0"
      spec:
        automountServiceAccountToken: false
        securityContext:
          fsGroup: 65534
          runAsGroup: 65534
          runAsNonRoot: true
          runAsUser: 65534
        serviceAccountName: prometheus-node-exporter
        containers:
          - name: node-exporter
-           image: quay.io/prometheus/node-exporter:v1.5.0
+           image: quay.io/prometheus/node-exporter:v1.6.0
            imagePullPolicy: IfNotPresent
            args:
              - --path.procfs=/host/proc
              - --path.sysfs=/host/sys
              - --path.rootfs=/host/root
              - --path.udev.data=/host/root/run/udev/data
              - --web.listen-address=[$(HOST_IP)]:9100
            securityContext:
              readOnlyRootFilesystem: true
            env:
              - name: HOST_IP
                value: 0.0.0.0
            ports:
              - name: metrics
                containerPort: 9100
                protocol: TCP
            livenessProbe:
              failureThreshold: 3
              httpGet:
                httpHeaders:
                path: /
                port: 9100
                scheme: HTTP
              initialDelaySeconds: 0
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            readinessProbe:
              failureThreshold: 3
              httpGet:
                httpHeaders:
                path: /
                port: 9100
                scheme: HTTP
              initialDelaySeconds: 0
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            volumeMounts:
              - name: proc
                mountPath: /host/proc
                readOnly:  true
              - name: sys
                mountPath: /host/sys
                readOnly: true
              - name: root
                mountPath: /host/root
                mountPropagation: HostToContainer
                readOnly: true
        hostNetwork: true
        hostPID: true
+       nodeSelector:
+         kubernetes.io/os: linux
        tolerations:
          - effect: NoSchedule
            operator: Exists
        volumes:
          - name: proc
            hostPath:
              path: /proc
          - name: sys
            hostPath:
              path: /sys
          - name: root
            hostPath:
              path: /
default, prometheus-node-exporter, Service (v1) has changed:
  # Source: prometheus-node-exporter/templates/service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: prometheus-node-exporter
    namespace: default
    labels:
-     helm.sh/chart: prometheus-node-exporter-4.17.0
+     helm.sh/chart: prometheus-node-exporter-4.18.0
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: metrics
      app.kubernetes.io/part-of: prometheus-node-exporter
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
-     app.kubernetes.io/version: "1.5.0"
+     app.kubernetes.io/version: "1.6.0"
    annotations:
+     prometheus.io/scrape: "true"
      prometheus.io/scrape: "true"
  spec:
    type: ClusterIP
    ports:
      - port: 9100
        targetPort: 9100
        protocol: TCP
        name: metrics
    selector:
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
default, prometheus-node-exporter, ServiceAccount (v1) has changed:
  # Source: prometheus-node-exporter/templates/serviceaccount.yaml
  apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: prometheus-node-exporter
    namespace: default
    labels:
-     helm.sh/chart: prometheus-node-exporter-4.17.0
+     helm.sh/chart: prometheus-node-exporter-4.18.0
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: metrics
      app.kubernetes.io/part-of: prometheus-node-exporter
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
-     app.kubernetes.io/version: "1.5.0"
+     app.kubernetes.io/version: "1.6.0"
helm diff upgrade prometheus-node-exporter prometheus-community/prometheus-node-exporter --version 4.18.0 --suppress-output-line-regex "helm.sh/chart" --suppress-output-line-regex "version"
default, prometheus-node-exporter, DaemonSet (apps) has changed:
  # Source: prometheus-node-exporter/templates/daemonset.yaml
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: prometheus-node-exporter
    namespace: default
    labels:
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: metrics
      app.kubernetes.io/part-of: prometheus-node-exporter
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
  spec:
    selector:
      matchLabels:
        app.kubernetes.io/name: prometheus-node-exporter
        app.kubernetes.io/instance: prometheus-node-exporter
    updateStrategy:
      rollingUpdate:
        maxUnavailable: 1
      type: RollingUpdate
    template:
      metadata:
        annotations:
          cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
        labels:
          app.kubernetes.io/managed-by: Helm
          app.kubernetes.io/component: metrics
          app.kubernetes.io/part-of: prometheus-node-exporter
          app.kubernetes.io/name: prometheus-node-exporter
          app.kubernetes.io/instance: prometheus-node-exporter
      spec:
        automountServiceAccountToken: false
        securityContext:
          fsGroup: 65534
          runAsGroup: 65534
          runAsNonRoot: true
          runAsUser: 65534
        serviceAccountName: prometheus-node-exporter
        containers:
          - name: node-exporter
-           image: quay.io/prometheus/node-exporter:v1.5.0
+           image: quay.io/prometheus/node-exporter:v1.6.0
            imagePullPolicy: IfNotPresent
            args:
              - --path.procfs=/host/proc
              - --path.sysfs=/host/sys
              - --path.rootfs=/host/root
              - --path.udev.data=/host/root/run/udev/data
              - --web.listen-address=[$(HOST_IP)]:9100
            securityContext:
              readOnlyRootFilesystem: true
            env:
              - name: HOST_IP
                value: 0.0.0.0
            ports:
              - name: metrics
                containerPort: 9100
                protocol: TCP
            livenessProbe:
              failureThreshold: 3
              httpGet:
                httpHeaders:
                path: /
                port: 9100
                scheme: HTTP
              initialDelaySeconds: 0
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            readinessProbe:
              failureThreshold: 3
              httpGet:
                httpHeaders:
                path: /
                port: 9100
                scheme: HTTP
              initialDelaySeconds: 0
              periodSeconds: 10
              successThreshold: 1
              timeoutSeconds: 1
            volumeMounts:
              - name: proc
                mountPath: /host/proc
                readOnly:  true
              - name: sys
                mountPath: /host/sys
                readOnly: true
              - name: root
                mountPath: /host/root
                mountPropagation: HostToContainer
                readOnly: true
        hostNetwork: true
        hostPID: true
+       nodeSelector:
+         kubernetes.io/os: linux
        tolerations:
          - effect: NoSchedule
            operator: Exists
        volumes:
          - name: proc
            hostPath:
              path: /proc
          - name: sys
            hostPath:
              path: /sys
          - name: root
            hostPath:
              path: /
default, prometheus-node-exporter, Service (v1) has changed:
  # Source: prometheus-node-exporter/templates/service.yaml
  apiVersion: v1
  kind: Service
  metadata:
    name: prometheus-node-exporter
    namespace: default
    labels:
      app.kubernetes.io/managed-by: Helm
      app.kubernetes.io/component: metrics
      app.kubernetes.io/part-of: prometheus-node-exporter
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
    annotations:
+     prometheus.io/scrape: "true"
      prometheus.io/scrape: "true"
  spec:
    type: ClusterIP
    ports:
      - port: 9100
        targetPort: 9100
        protocol: TCP
        name: metrics
    selector:
      app.kubernetes.io/name: prometheus-node-exporter
      app.kubernetes.io/instance: prometheus-node-exporter
default, prometheus-node-exporter, ServiceAccount (v1) has changed:

@jkroepke force-pushed the ignore-diff branch 2 times, most recently from e4d73c1 to e1d52fc on June 28, 2023 21:27
@jkroepke marked this pull request as ready for review on June 28, 2023 21:57
@jkroepke changed the title from "feat: Allow suppressing diff output by regex" to "feat: Allow suppress diff line output by regex" on Jun 28, 2023
@jkroepke
Contributor Author

jkroepke commented Jul 5, 2023

@yxxhero @databus23 Do you have the time to look into it?

@jkroepke
Contributor Author

jkroepke commented Jul 17, 2023

@mumoshu @yxxhero @databus23 Please let me know if I can assist here.

@jkroepke
Contributor Author

@mumoshu @yxxhero @databus23 I would appreciate a review here.

jkroepke and others added 4 commits July 25, 2023 20:00
@jkroepke requested a review from mumoshu on July 25, 2023 18:32
@jkroepke
Contributor Author

jkroepke commented Aug 4, 2023

@mumoshu @yxxhero @databus23 I would appreciate an additional review here.

@jkroepke
Contributor Author

Hi @mumoshu @yxxhero @databus23

I would appreciate an additional review here.

@mumoshu
Collaborator

mumoshu left a comment

LGTM. Thanks a lot for your patience and contribution @jkroepke!!

@mumoshu merged commit 94d90a5 into databus23:master on Aug 21, 2023
10 checks passed
@jkroepke deleted the ignore-diff branch on August 21, 2023 09:45
@jkroepke
Contributor Author

Thanks a lot! @mumoshu, do you plan a release which includes this change?

@mumoshu
Collaborator

mumoshu commented Aug 21, 2023

@jkroepke Indeed! For transparency: the last thing we need before cutting the next release is to modify another recently merged feature, #458, so that dry-run=server becomes an optional feature. That's to address @dudicoco's great insight shared in #449 (comment)

@jkroepke
Contributor Author

@yxxhero @mumoshu When do you plan the next release?

@timmilesdw

When will this feature be released, approximately? Desperately waiting for it.

@0xStarcat

I too will sing your praises when this is released.

@dudicoco

Hi @jkroepke.

Can you please document the feature in the readme?
It's unclear whether it can suppress multiple lines. For example, would a regex matching from ports to selector suppress the entire ports block, or can it only be used on single lines?

@jkroepke
Contributor Author

jkroepke commented Apr 18, 2024

Hi @dudicoco

in our setup, we are using this:

--suppress-output-line-regex="chart: [0-9]" \
--suppress-output-line-regex="app.kubernetes.io/version: [0-9]"

It omits single lines.

As for matching multiple lines with one regex, you have to be a regex pro; maybe this works for you: https://regex101.com/r/OHEFVb/1

@dudicoco

Thanks @jkroepke.

The example you provided did not work. I even tried a simpler example that just captures the first two lines of the ports block, and it also didn't work: (?m)ports:\s*-\s.*

So is the issue with the regex, or does the new feature simply not support multi-line regexes?

@jkroepke
Contributor Author

You are right, multi-line matching is not supported.

Reason: the underlying library generates a line-by-line diff, and the regexes are matched against each line individually. Technically a multi-line regex is accepted, but since the diff is line-by-line, a regex spanning multiple lines will never match anything.

The chances of supporting multi-line matching are super low, because helm-diff would first have to merge the individual lines back together, which would get far too complex.
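
To illustrate (a hypothetical Go snippet, not helm-diff's actual code): the same pattern matches against the joined text, but never when it is fed one line at a time, which is what the diff library does:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	// A pattern that needs to span two lines of the manifest.
	re := regexp.MustCompile(`ports:\n\s*- port`)

	block := "    ports:\n      - port: 9100"
	fmt.Println(re.MatchString(block)) // true: the joined text contains the newline

	// helm-diff matches each diff line individually, so the pattern
	// never sees the newline it requires.
	for _, line := range strings.Split(block, "\n") {
		fmt.Println(re.MatchString(line)) // false, false
	}
}
```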

@dudicoco

Thanks for the info @jkroepke.

@rbq

rbq commented May 29, 2024

Does this affect only the visible output, or exit codes/API results as well? I'm asking because Helmfile uses helm-diff under the hood to determine whether a release is outdated, and some charts will always be re-deployed due to suboptimal design choices (e.g. random DB passwords with no way to configure them via values).

@jkroepke
Contributor Author

jkroepke commented May 29, 2024

Only the visible output.

and some charts will always be re-deployed due to suboptimal design choices (e.g. random DB passwords with no way to configure them via values)

But maybe HELM_DIFF_USE_INSECURE_SERVER_SIDE_DRY_RUN=true can help avoid that.
