
Do not add namespace to cluster-scoped CRD objects #552

Closed
mgoodness opened this issue Nov 15, 2018 · 12 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@mgoodness
Contributor

CRDs can be namespace- or cluster-scoped by setting spec.scope to Namespaced or Cluster. If a CRD is cluster-scoped, kustomize should not add metadata.namespace to objects of that kind.

Note that while there's no harm in adding metadata.namespace to a cluster-scoped CRD object (Kubernetes ignores it), it does show up as a "diff" in GitOps-style CD systems like Argo.
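For reference, the scope is declared in the CRD itself. A minimal sketch of a cluster-scoped CRD (the cert-manager `ClusterIssuer` is used purely as an illustrative example):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusterissuers.cert-manager.io
spec:
  group: cert-manager.io
  names:
    kind: ClusterIssuer
    plural: clusterissuers
  # Cluster-scoped: objects of this kind carry no metadata.namespace.
  scope: Cluster
```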

@jessesuen

This is an interesting problem. Kustomize does not currently talk to a Kubernetes cluster and has no facility to determine whether or not it is appropriate to add namespace to a custom resource object. I'm not sure how kustomize can decide this unless it sees the custom resource definition as part of the deployed manifests.

If kustomize is unable to determine whether or not to omit namespace during manifest generation, then diffing tools like Argo CD will probably need to add some special-case logic for metadata.namespace to basically ignore mismatches between empty and non-empty strings.
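One workaround on the Argo CD side, assuming the diff really does differ only in the namespace field, is the `ignoreDifferences` setting in the Application spec. A sketch (adjust `group`/`kind` to match your CRD; `my-app` is a placeholder name):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
spec:
  # source/destination omitted for brevity
  ignoreDifferences:
  - group: cert-manager.io
    kind: ClusterIssuer
    # Ignore the namespace kustomize injects on this cluster-scoped kind.
    jsonPointers:
    - /metadata/namespace
```

This only suppresses the diff; it does not stop kustomize from emitting the field.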

@mgoodness
Contributor Author

mgoodness commented Nov 16, 2018

Good point - it hadn't occurred to me that kustomize can't look into a CRD's spec to determine its scope. Hopefully transformer configurations can be extended to support not adding a namespace to cluster-scoped resources.

E.g.

namespace:
- path: metadata/namespace
  create: false
  kind: ClusterIssuer
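If something like this were supported, it could presumably be wired in through the kustomization's existing `configurations` field, which already allows overriding the builtin transformer configs. A sketch (the per-kind `create: false` filter is the proposed extension, not a confirmed shipped feature; file names are placeholders):

```yaml
# kustomization.yaml (sketch)
namespace: my-namespace
resources:
- cluster-issuer.yaml
configurations:
# File containing the namespace transformer config proposed above.
- namespace-transformer-config.yaml
```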

@primeroz

Any update on this?

We are having the same problem with a cluster-scoped CRD.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 23, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 22, 2019
@jbrette
Contributor

jbrette commented Sep 4, 2019

@mgoodness @primeroz @jessesuen

Please have a look at:

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@redbaron

Can someone reopen this issue? It is still relevant.

@psalaberria002

We just hit this issue. Any reason why it has not been reopened?

@gkarthiks

Hi, any updates on this scenario? I am facing this with a cluster-scoped CR, using Kustomize and Argo CD: kustomize adds a namespace to the resource, and Argo CD does not ignore it but throws an error.

@k8s-ci-robot
Contributor

@sigwinch28: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
