Support generateName for application resources #1639

Open
jessesuen opened this issue May 22, 2019 · 15 comments
Labels
component:core Syncing, diffing, cluster state cache
enhancement New feature or request
type:usability Enhancement of an existing feature
workaround There's a workaround, might not be great, but exists

Comments

@jessesuen
Member

jessesuen commented May 22, 2019

A common request is to support generateName in resources. Although kubectl apply does not work with generateName, Argo CD could detect when a resource specifies generateName instead of name and perform a create instead of an apply.
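
For illustration, the difference on the command line looks roughly like this (a sketch; the file name is arbitrary and the exact error text varies by kubectl version):

# job.yaml sets metadata.generateName (e.g. my-job-) and omits metadata.name
kubectl create -f job.yaml   # works: the API server appends a random suffix to form the final name
kubectl apply -f job.yaml    # rejected: apply needs a concrete metadata.name to track the object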

Note that resources created in this manner would immediately cause the application to be OutOfSync, since Argo CD would consider them "extra" resources that need to be pruned. To mitigate this, the user could use this feature to prevent the extra resource from contributing to the overall OutOfSync condition of the application as a whole.
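
As a sketch of that mitigation (assuming the existing argocd.argoproj.io/compare-options annotation is the mechanism used), the generated resource could be excluded from the application's sync status:

metadata:
  generateName: my-job-
  annotations:
    argocd.argoproj.io/compare-options: IgnoreExtraneous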

With this feature, Argo CD could be used to trigger job runs by simply performing a sync.

Some areas of concern:

  1. How would auto-sync behave with this feature?
  2. If the resource is a Job or Workflow, the sync operation should probably not wait until those resources complete (unlike resource hooks).
  3. How would this work with hook weights?
  4. Diffing will not work on generateName objects.

Also to note: it is already possible to have Argo CD create resources using generateName, but those resources need to use the argocd.argoproj.io/hook annotation, e.g.:

metadata:
  generateName: my-job-
  annotations:
    argocd.argoproj.io/hook: Sync
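
For completeness, a minimal Job that follows this pattern might look like the sketch below (the image, command, and hook-delete-policy value are illustrative assumptions, not prescriptions):

apiVersion: batch/v1
kind: Job
metadata:
  generateName: my-job-
  annotations:
    argocd.argoproj.io/hook: Sync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation   # assumption: delete the previous run's Job before creating a new one
spec:
  backoffLimit: 1
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: alpine:3.19          # placeholder image
        command: ["echo", "hello"]  # placeholder command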

However, using resource hooks has the following limitations:

  1. Live hook objects are not considered part of the application, and thus are not candidates for pruning. They will still be presented in the UI.
  2. With resource hooks, Job/Workflow/Pod objects will block a Sync operation from completing until the Job/Workflow/Pod completes. So very long-lived jobs/pods/workflows would prevent new argocd app sync operations from occurring. This would be undesirable for someone who just wants to kick off the job asynchronously.
@jessesuen jessesuen added the enhancement New feature or request label May 22, 2019
@kwladyka

What are the reasons kubectl apply does not work with generateName? I mean, while it is not supported natively, maybe there is a good reason for it.

My use case for it:
I want to create an Argo CD Application which will have only one Job. This Job will manage the GitOps pipelines for Concourse (a CI/CD tool). So after each change to the pipelines is pushed to git, it will run and make sure all pipeline configurations in Concourse are updated. In this way I can achieve GitOps for pipelines.

Alternatively, I could run this in Concourse itself to update the Concourse pipelines ;) The boundary of where it should be done is blurry ;)

@jessesuen
Member Author

What are the reasons kubectl apply does not work with generateName? I mean, while it is not supported natively, maybe there is a good reason for it.

You can read the discussion here: kubernetes/kubernetes#44501

The resolution was to document this limitation rather than have kubectl apply handle generateName.

@jessesuen
Member Author

I want to create an Argo CD Application which will have only one Job. This Job will manage the GitOps pipelines for Concourse (a CI/CD tool). So after each change to the pipelines is pushed to git, it will run and make sure all pipeline configurations in Concourse are updated. In this way I can achieve GitOps for pipelines.

@kwladyka I think you can achieve your use case even today, by specifying a single Job with the Sync hook annotation, and no "normal" application resources.

@jessesuen
Member Author

Another important point for users interested in this feature is that if you are using kustomize to manage configs, kustomize does not support generateName well. See:

kubernetes-sigs/kustomize#586

@wmedlar
Contributor

wmedlar commented May 22, 2019

kustomize does not support generateName well

I've been able to work around this behavior, at least in Kustomize v1, by patching in generateName with the patchesJson6902 field:

# kustomization.yaml
resources:
- job.yaml

patchesJson6902:
- path: patches/job-generate-name.yaml
  target:
    group: batch
    version: v1
    kind: Job
    name: foo
# job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec: ...
# patches/job-generate-name.yaml
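# RFC 6902 "move" op: removes /metadata/name and re-adds its value at /metadata/generateName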
- op: move
  from: /metadata/name
  path: /metadata/generateName

and finally the compiled manifests:

$ kustomize build
apiVersion: batch/v1
kind: Job
metadata:
  generateName: foo
spec: ...

Works like a charm so long as you don't try to modify the Job spec after the patch.

@jessesuen
Member Author

Great tip! I'm going to reference your workaround in the original kustomize bug I filed.

@stale

stale bot commented Aug 13, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale stale bot added the wontfix This will not be worked on label Aug 13, 2019
@alexmt alexmt removed the wontfix This will not be worked on label Aug 19, 2019
@alexec alexec added the workaround There's a workaround, might not be great, but exists label Oct 2, 2019
@alexec alexec removed the other label Oct 10, 2019
@so0k

so0k commented Dec 12, 2019

As far as I can tell, Argo CD supports Jobs with generateName only if they have the special annotation, which tells Argo CD to use kubectl create instead of kubectl apply. It seems to be the only way to add such Jobs as part of your application.

NissesSenap added a commit to NissesSenap/argocd-ocp that referenced this issue Feb 23, 2020
@jannfis jannfis added component:core Syncing, diffing, cluster state cache type:usability Enhancement of an existing feature labels May 14, 2020
@lallinger-tech

For anybody stumbling across this and wondering which annotation you have to set, refer to this: https://argoproj.github.io/argo-cd/user-guide/resource_hooks/

@klausroo

klausroo commented May 4, 2021

kustomize does not support generateName well

I've been able to work around this behavior, at least in Kustomize v1, by patching in generateName with the patchesJson6902 field:

This doesn't seem to work for me; I still get:
resource name may not be empty

I verified that my config is similar to yours.

@huang195

huang195 commented May 27, 2021

@jessesuen Are there any updates on this issue? I just tried to create a Deployment using generateName, following your suggestion of adding the following annotation:

  annotations:
    argocd.argoproj.io/hook: Sync

I see Argo CD correctly uses kubectl create to create the Deployment in the cluster, but as you said, it's not a candidate for pruning, which is problematic. When we delete this Deployment resource in the repo, the expected behavior is to kubectl delete the Deployment from the cluster as well. Is there a workaround for this problem?

Instead of the above annotation, I've also tried the following pair

metadata:
  generateName: fortio-
  annotations:
    argocd.argoproj.io/sync-options: Replace=true
    argocd.argoproj.io/compare-options: IgnoreExtraneous

The Deployment was created in the cluster, but Argo CD was treating these as separate entities, so the IgnoreExtraneous annotation probably didn't take effect. However, I don't fully understand what these options do. Will any combination of these annotations solve the problem?

@dobesv

dobesv commented Jan 3, 2022

The workaround above works in kustomize 3.8.6 but not in the latest version 4.4.1, so I guess something changed in kustomize to break this.

@queil

queil commented Nov 15, 2022

@24601

24601 commented Nov 16, 2022

We have a workaround that we are using with relatively good success; it works with Argo CD and with all versions of kustomize that support nameSuffix. Check out kubernetes-sigs/kustomize#641 (comment) for details.
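
The linked comment has the actual recipe; purely as a sketch of the general nameSuffix idea (the suffix value below is an assumed, CI-injected placeholder), a kustomization might look like:

# kustomization.yaml
resources:
- job.yaml
nameSuffix: -build42   # assumption: a unique suffix set per commit/run by CI, giving each Job a distinct concrete name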

@alfsch

alfsch commented Nov 22, 2024

Any progress on this?
