
Kustomize doesn't support metadata.generateName #641

Open
shimmerjs opened this issue Dec 17, 2018 · 78 comments
Labels
kind/bug, triage/accepted

Comments

@shimmerjs commented Dec 17, 2018

I am trying to use kustomize with https://github.com/argoproj/argo.

Example spec:

apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [cowsay]
      args: ["hello world"]

Argo Workflow CRDs don't require or use metadata.name, but I am getting the following error when I try to run kustomize build on an Argo Workflow resource:

Error: loadResMapFromBasesAndResources: rawResources failed to read Resources: Missing metadata.name in object {map[args:[hello world] kind:Workflow metadata:map[generateName:hello-world-] spec:map[entrypoint:whalesay templates:[map[container:map[command:[cowsay] image:docker/whalesay:latest] name:whalesay]]] apiVersion:argoproj.io/v1alpha1]}

Is there a way for me to override where kustomize looks for a name, pointing it at metadata.generateName instead?
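
For reference, this is the minimal setup that reproduces the error, assuming the Workflow above is saved as workflow.yaml next to the kustomization:

# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- workflow.yaml

Running kustomize build . on that directory fails with the Missing metadata.name error shown above.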

@Liujingfang1 (Contributor)

Similar issues #627, #586

@monopole (Contributor) commented Dec 30, 2018

#627 is about names, but currently I see it as a feature request.

This bug and #586 are noting that kustomize doesn't recognize the kubernetes API directive generateName, which is indeed a bug.

This directive is a kustomize-like feature introduced before kustomize existed... (complicating our lives).

We might try to allow it and work with it - or disallow it and provide an alternative mechanism.
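
For anyone unfamiliar with the directive: generateName is resolved by the API server at create time, so the object has no fixed name on the client side, which is exactly what trips kustomize up. A minimal sketch (hypothetical Job; the suffix varies per run):

apiVersion: batch/v1
kind: Job
metadata:
  generateName: demo-
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["true"]

kubectl create -f job.yaml   # -> job.batch/demo-x7k2p created (random suffix)
kubectl apply -f job.yaml    # fails: apply cannot use generateName; it needs a fixed name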

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Apr 27, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label May 27, 2019
@confiq commented Jun 21, 2019

/remove-lifecycle rotten

I wanted to use generateName with kustomize but I can't :(

k8s-ci-robot removed the lifecycle/rotten label Jun 21, 2019
@anarcher commented Jul 4, 2019

I wanted to use generateName with kustomize too.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Oct 2, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Nov 1, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

> Rotten issues close after 30d of inactivity.
> Reopen the issue with /reopen.
> Mark the issue as fresh with /remove-lifecycle rotten.
>
> Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@wpbeckwith

This issue should be reopened, unless it has been solved and the docs just don't show it.

@jarednielsen

Agreed, let's re-open and solve the issue.

@confiq commented Feb 8, 2020

Anybody can reopen it with a @k8s-ci-robot command. I've already done it once; I don't want to flood it :)

@haimberger

/reopen

I've just stumbled across this issue as well, and would appreciate a fix or an alternative mechanism (as mentioned by monopole above).

@k8s-ci-robot (Contributor)

@haimberger: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

> /reopen
>
> I've just stumbled across this issue as well, and would appreciate a fix or an alternative mechanism (as mentioned by monopole above).

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Datamance

/reopen

sigh.

@k8s-ci-robot (Contributor)

@Datamance: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

> /reopen
>
> sigh.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@stpierre

Can someone with The Power reopen this? Still outstanding AFAICT.

@Liujingfang1 (Contributor)

/remove-lifecycle rotten

Liujingfang1 reopened this Apr 22, 2020
@anthonyserious

I don't want to get anyone's hopes up, but here's a PR that works in local testing: #4981.

I don't understand kustomize enough to know if this is really sufficient, but I look forward to comments pointing me in the right direction.

@hrobertson

Looks like these questions on #4838 need answering and comprehensive tests need implementing.

It's not just a case of changing the validation to allow it.

@anthonyserious

Indeed, my PR was too naive there, oh well. I closed it and hope #4838 moves forward.

@tmsquill

I've also just run into this issue while attempting to use generateName on Jobs. It would be nice to see a fix or an official workaround communicated.

@natasha41575 (Contributor)

I am pasting my comment from #4838 (comment) for visibility:

I think before we can accept this PR, we need to agree on several details. Again, some more things that come to mind:

  • How generateName interacts with namePrefix/nameSuffix. I see above you suggest that generateName should not interact with namePrefix or nameSuffix. That is a valid opinion, and allowing name transformations on generateName would complicate name references, but I need to think a bit more about pros/cons.

  • How does generateName interact with patches? How does a patch target such a resource? For example, some options that I can think of:

    • Add a new field to the patch targets that allows selection based on the generateName field.
    • Keep patch targets as is, and only allow such resources to be targeted by their GVK, labels, and/or annotations.
    • We will also need to think about whether we should allow the patch to change the value of generateName. Patches are allowed to change the name field, so it may be expected that we eventually support this too.
  • Same as the above, but with replacements.

  • What should kustomize do if there are multiple objects with the same generateName? Should kustomize allow this, and if so, how can we differentiate between the identical resource IDs? For example, reswrangler doesn't let you add two resources with the same ID. I haven't looked carefully at your tests yet, but we should make sure that we have tests covering this.

  • This PR doesn't seem to touch reswrangler or resource at all. That surprises me, as that is where a lot of resource-identifying code is. Maybe you're right that we actually don't need to touch it at all, but I think it would be helpful to take a closer look at that code and see if it makes sense to add support for generateName there.

I plan to bring this PR up with the other kustomize maintainers but if you have thoughts on these points feel free to share here.

@KnVerey can you think of anything else we need to make sure we think about?

Edited to add: I talked with the other maintainers and I think we need to see tests that show the use of generateName with all the other generators/transformers fields of kustomize so that we can see how it would behave.

I think to move this issue forward we would need a mini in-repo KEP to fully flesh out all of these details.

@isarns (Contributor) commented Aug 30, 2023

Any updates on this?

The patch workaround works neither for op: move nor op: remove.

- op: remove
  path: /metadata/name
- op: move
  from: /metadata/name
  path: /metadata/generateName

this leads to:
panic: number of previous names, number of previous namespaces, number of previous kinds not equal

I'm using kustomize v5
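
In the meantime, one workaround is to keep a plain metadata.name in the resource so kustomize is happy, and move it to generateName by post-processing the build output outside of kustomize. A rough sketch with yq v4, assuming only Jobs need renaming:

kustomize build . \
  | yq e '(select(.kind == "Job") | .metadata.generateName) = .metadata.name + "-"' - \
  | yq e 'del(select(.kind == "Job") | .metadata.name)' - \
  | kubectl create -f -

Note this has to go through kubectl create rather than apply, since apply requires a fixed name.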

@shuheiktgw

/assign

@shuheiktgw

Sorry, the issue is a bit too complicated for me. I may come back later, but please take it if anyone else can 🙇
/unassign

@kasvith commented Oct 12, 2023

I also ran into the same issue with Argo CD.

@liskl commented Oct 31, 2023

The same issue happens if you try to kustomize the knative-operator, as it requires a Job:

apiVersion: batch/v1
kind: Job
metadata:
  generateName: storage-version-migration-operator-

This unfortunately limits the usability of kustomize for this use case, and since kustomize is built into kubectl I would expect it to at least match the rules defined for the built-in specs.

Is there anything "chop wood, carry water" that I can do to help move this forward?

@liskl commented Oct 31, 2023

> How generateName interacts with namePrefix/nameSuffix. I see above you suggest that generateName should not interact with namePrefix or nameSuffix. That is a valid opinion, and allowing name transformations on generateName would complicate name references, but I need to think a bit more about pros/cons.

It should be able to interact with namePrefix.
It should not interact with nameSuffix, as generateName handles the suffix internally.

But I'm 100% OK with it being ignored by both if we can get the functionality working.

> How does generateName interact with patches? How does a patch target such a resource? For example, some options that I can think of:

> Add a new field to the patch targets that allows selection based on the generateName field.

This ^

> Keep patch targets as is, and only allow such resources to be targeted by their GVK, labels, and/or annotations.

This is also a valid use case:

patches:
  - path: <relative path to file containing patch>
    target:
      group: batch
      version: v1
      kind: Job
      name: <optional name or regex pattern> # OR
      generateName: <the given .metadata.generateName prefix> # but not BOTH, and only for the kinds that accept them
      namespace: <optional namespace>
      labelSelector: <optional label selector>
      annotationSelector: <optional annotation selector>

> We will also need to think about whether we should allow the patch to change the value of generateName. Patches are allowed to change the name field, so it may be expected that we eventually support this too.

We should be able to modify the /metadata/generateName field with patches, the same way we can modify the /metadata/name field.

> Same as the above, but with replacements.

We should be able to modify the /metadata/generateName field with replacements, the same way we can modify the /metadata/name field.

> What should kustomize do if there are multiple objects with the same generateName? Should kustomize allow this, and if so, how can we differentiate between the identical resource IDs? For example, reswrangler doesn't let you add two resources with the same ID. I haven't looked carefully at your tests yet, but we should make sure that we have tests covering this.

kustomize should not allow multiple like-kind resources (e.g. batch/v1 Job objects) with the same generateName field, as that would be abnormal for how the spec is used; the generateName field for a given Kind should be unique within the kustomize output.

> This PR doesn't seem to touch reswrangler or resource at all. That surprises me, as that is where a lot of resource-identifying code is. Maybe you're right that we actually don't need to touch it at all, but I think it would be helpful to take a closer look at that code and see if it makes sense to add support for generateName there.

I don't know enough to state an opinion on this ^

@markhv-code

Also ran into this issue.

@ricardo-s-ferreira-alb

Also ran into this issue.

And 5 years later we have no solution.

@ashrafguitoni

Not sure what's the best way to get the developers' attention on this... Maybe opening a discussion? Is there an official Slack?

@nicl-dev commented Nov 17, 2023

We also just ran into this. All related PRs got closed, so what's the plan now? What exactly can we do to move this forward?

@mcharb commented Nov 23, 2023

I'm working on a project where Kustomize was selected for ArgoCD. We were hoping to extend its use to our Argo Workflows, but this is a significant impediment. Deleting previous workflow instances, as was suggested in one comment, does not align with our operational requirements. I'm sure there are plenty of use cases outside of Argo Workflows that are also affected by this issue.

@robinhuiser

Please provide a solution, or tell us how we can support the kustomize team in closing this issue. It is blocking our pipeline development in Argo Workflows at the moment, as we want to kustomize our pipelines.

@tkblack commented Dec 21, 2023

I ran into this. I tried to find a solution and initially failed, but the patch below worked:

patches: 
- patch: | 
    - op: replace
      path: /metadata
      value: 
        generateName: data-migration-
  target: 
    kind: Job

@tkblack commented Dec 21, 2023

> I ran into this. I tried to find a solution and initially failed, but the patch below worked:
>
> patches:
> - patch: |
>     - op: replace
>       path: /metadata
>       value:
>         generateName: data-migration-
>   target:
>     kind: Job

Oh no, it's only OK for kubectl kustomize; it fails with Argo CD, and running kustomize build fails too.
kustomize version: v4.4.0

@bluebrown

> I ran into this. I tried to find a solution and initially failed, but the patch below worked:
>
> patches:
> - patch: |
>     - op: replace
>       path: /metadata
>       value:
>         generateName: data-migration-
>   target:
>     kind: Job
>
> Oh no, it's only OK for kubectl kustomize; it fails with Argo CD, and running kustomize build fails too. kustomize version: v4.4.0

kubectl kustomize is literally kustomize, and so is the one in Argo CD. There is no difference, except for the version that is used.

@natasha41575 (Contributor)

Repeating what I wrote in #641 (comment):

> To move this issue forward we would need a mini in-repo KEP to fully flesh out all the design and implementation details of how this feature would be handled.

@sanmai-NL

The interesting question is who writes that proposal. And why would, for example, the community write this one while the maintainers write some other?

@shadiramadan commented Mar 2, 2024

For people looking for a solution: this uses an exec KRM plugin. It depends on yq and openssl rand.

Folder structure

base/
  job-name-generator.yaml
  job.yaml
  kustomization.yaml
plugins/
  job-name-generator.sh

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources: []
# The job is added by the generator
# - job.yaml
generators:
- job-name-generator.yaml

job.yaml

apiVersion: batch/v1
kind: Job
metadata:
  generateName: schema-migrate-
spec:
  template:
    spec: {}

job-name-generator.yaml

apiVersion: kustomize.example.com/v1
kind: JobNameGenerator
metadata:
  name: schema-migrate
  annotations:
    config.kubernetes.io/function: |
      exec:
        path: ../plugins/job-name-generator.sh
spec:
  resourcePath: ./job.yaml

job-name-generator.sh

#!/usr/bin/env bash

# read the `kind: ResourceList` from stdin
resourceList=$(cat)

# extract the resource path
export resourcePath=$(echo "${resourceList}" | yq e '.functionConfig.spec.resourcePath' - )

# generate the job hash
export job_hash=$(openssl rand -hex 3 | cut -c 1-5)

# dump the job into the output ResourceList, add name from generateName + the job hash, and delete generateName
echo "
kind: ResourceList
items: []
" | yq e 'load(env(resourcePath)) as $resource | .items += $resource | .items[0].metadata.name = .items[0].metadata.generateName + env(job_hash) | del(.items[0].metadata.generateName)' -
