kustomize 5 build returns error when labels on a resource is null #5050

Closed
pavansokkenagaraj opened this issue Feb 16, 2023 · 12 comments
Labels
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
triage/needs-information: Indicates an issue needs more information in order to work on it.

Comments

@pavansokkenagaraj

pavansokkenagaraj commented Feb 16, 2023

What happened?

kustomize build (v5.0.0) with patches and a transformer on a resource whose labels field is null returns this error:

Error: considering field 'metadata/labels' of object Job.v1.batch/security-job.[noNs]: wrong node kind: expected MappingNode but got ScalarNode: node contents:
null

What did you expect to happen?

I expected kustomize build to merge the resources and return the resulting YAML.

kustomize v4.5.7 returned the expected output when labels: null was present.

If I remove labels: null from the resource, kustomize build works, but it should also work with labels: null.

How can we reproduce it (as minimally and precisely as possible)?

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resources.yaml
transformers:
- label-transformer.yaml
patches:
- path: pullsecrets.yaml
# patchesStrategicMerge:
#   - pullsecrets.yaml
# resources.yaml
apiVersion: batch/v1
kind: Job
metadata:
  labels: null
  name: security-job
spec:
  backoffLimit: 3
  template:
    metadata:
      name: security-test
    spec:
      containers:
      - name: hook-test
# label-transformer.yaml
apiVersion: builtin
kind: LabelTransformer
metadata:
  name: label-transformer
labels:
  test.io/name: label-transformer
fieldSpecs:
- path: metadata/labels
  create: true
# pullsecrets.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: security-job
spec:
  template:
    spec:
      imagePullSecrets:
      - name: multi-namespace-yeti-registry

Expected output

➜ kustomize4 build

apiVersion: batch/v1
kind: Job
metadata:
  labels:
    test.io/name: label-transformer
  name: security-job
spec:
  backoffLimit: 3
  template:
    metadata:
      name: security-test
    spec:
      containers:
      - name: hook-test
      imagePullSecrets:
      - name: multi-namespace-yeti-registry

Actual output

➜ kustomize build

Error: considering field 'metadata/labels' of object Job.v1.batch/security-job.[noNs]: wrong node kind: expected MappingNode but got ScalarNode: node contents:
null

Kustomize version

V5.0.0

Operating system

Linux

@pavansokkenagaraj pavansokkenagaraj added the kind/bug Categorizes issue or PR as related to a bug. label Feb 16, 2023
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Feb 16, 2023
@KnVerey
Contributor

KnVerey commented Feb 21, 2023

What is the intention of including labels: null in the base resource? This change is very likely caused by a fix we made to the handling of null values in base resources: #4890. Previously, they were incorrectly treated as a deletion directive. Now they are retained, and in this case the value is not the correct type for the field (the empty value for a map should be {}).
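
For illustration, here is a sketch of the base resource from the reproduction with a type-correct empty value, which should avoid the error per the explanation above (assuming the rest of the manifest is unchanged):

# resources.yaml (labels given the map-typed empty value {} instead of null)
apiVersion: batch/v1
kind: Job
metadata:
  labels: {}
  name: security-job
spec:
  backoffLimit: 3
  template:
    metadata:
      name: security-test
    spec:
      containers:
      - name: hook-test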

/triage needs-information

@k8s-ci-robot k8s-ci-robot added triage/needs-information Indicates an issue needs more information in order to work on it. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Feb 21, 2023
@sgalsaleh

sgalsaleh commented Feb 27, 2023

@KnVerey here's another example where Kustomize 5 isn't able to process null values that Kustomize 4 and Kubernetes (e.g. kubectl apply) are okay with: initContainers: null.

deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      initContainers: null
      containers:
      - name: nginx
        image: nginx

pull-secret.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  template:
    spec:
      imagePullSecrets:
      - name: example-pull-secret

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
patchesStrategicMerge:
- pull-secret.yaml
resources:
- deployment.yaml

Running kustomize build:

$ kustomize build kustomize-example/
# Warning: 'patchesStrategicMerge' is deprecated. Please use 'patches' instead. Run 'kustomize edit fix' to update your Kustomization automatically.
Error: updating name reference in 'spec/template/spec/initContainers/env/valueFrom/configMapKeyRef/name' field of 'Deployment.v1.apps/nginx.[noNs]': considering field 'spec/template/spec/initContainers/env/valueFrom/configMapKeyRef/name' of object Deployment.v1.apps/nginx.[noNs]: expected sequence or mapping node

@sgalsaleh

sgalsaleh commented Feb 27, 2023

For example, there are Helm charts out there that produce null values depending on the result of templating.
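
As a purely hypothetical illustration (not taken from any specific chart), a template like the following renders a bare labels: key when .Values.extraLabels is unset, and that bare key is parsed as null:

# templates/configmap.yaml (hypothetical chart template)
apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp
  labels:
    {{- with .Values.extraLabels }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
data:
  key: value

With extraLabels unset, the with block emits nothing, so the rendered manifest ends up with labels: and no value.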

@sgalsaleh

@KnVerey any plans to address this issue? This unfortunately breaks some of the Helm charts our customers are using in production, and it's not easy to work around without breaking existing functionality.

@obfuscurity

Any updates?

@obfuscurity

Is anyone able to look at this issue? This remains a blocker to our upgrading to 5.x.

@cbodonnell

cbodonnell commented Apr 7, 2023

Hi @KnVerey, I have some more interesting findings related to this issue. Apologies in advance for the lengthy comment...

It appears that kustomize will actually emit null for a field that is left empty. For example, some resources in community Helm charts may look like the following after rendering the templates (notice the empty labels field):

resource.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp
  labels:
data:
  key: value

If I simply run this resource through kustomize without any transformations:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resource.yaml

The build output is the following:

$ kustomize build
apiVersion: v1
data:
  key: value
kind: ConfigMap
metadata:
  labels: null
  name: myapp

If I then try to apply a label transformer and a patch to this output, it fails with the aforementioned error. However, it does not fail if I use the original resource with the empty labels field. Additionally, it seems to work fine if I leave out either the label transformer or the patch, which also seems odd.

new-resource.yaml:

apiVersion: v1
data:
  key: value
kind: ConfigMap
metadata:
  labels: null
  name: myapp

kustomization.yaml:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- new-resource.yaml
transformers:
- label-transformer.yaml
patches:
- path: patch.yaml

label-transformer.yaml:

apiVersion: builtin
kind: LabelTransformer
metadata:
  name: label-transformer
labels:
  test.io/name: label-transformer
fieldSpecs:
- path: metadata/labels
  create: true

patch.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: myapp
data:
  newkey: newvalue

Result:

$ kustomize build
Error: considering field 'metadata/labels' of object ConfigMap.v1.[noGrp]/myapp.[noNs]: wrong node kind: expected MappingNode but got ScalarNode: node contents:
null

After swapping out new-resource.yaml for the original resource.yaml with the empty field:

$ kustomize build
apiVersion: v1
data:
  key: value
  newkey: newvalue
kind: ConfigMap
metadata:
  labels:
    test.io/name: label-transformer
  name: myapp
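
Based on that observation, one possible (unofficial) workaround is to strip explicit nulls from the rendered manifests before running kustomize build, for example with yq v4. This is just a sketch and removes every null-valued key, not only metadata/labels:

# hypothetical pre-processing step (yq v4): drop every key whose value is null
$ yq eval 'del(.. | select(tag == "!!null"))' new-resource.yaml > resource.yaml
$ kustomize build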

@obfuscurity

Any updates from maintainers?

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 19, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Feb 18, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Mar 19, 2024
@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
