
Automatic creation for kustomization.yaml var section and varReference #1216

Closed
jbrette opened this issue Jun 20, 2019 · 10 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@jbrette
Contributor

jbrette commented Jun 20, 2019

Creating the entries in the var section becomes quite a long process as the number of variables increases, and it makes the kustomization.yaml hard to read.

For instance:

- name: SoftwareVersions.software-versions.spec.images.mysql.tag
  objref:
    apiVersion: my.group.org/v1alpha1
    kind: SoftwareVersions
    name: software-versions
  fieldref:
    fieldpath: spec.images.mysql.tag
- name: CommonAddresses.common-addresses.spec.dns.upstream_servers[2]
  objref:
    apiVersion: my.group.org/v1alpha1
    kind: CommonAddresses
    name: common-addresses
  fieldref:
    fieldpath: spec.dns.upstream_servers[2]

On the varReference side, the process gets complicated very quickly.
For K8s standard objects (if not part of the default configuration):

varReference:
- kind: Deployment
  path: spec/template/spec/containers/image
- kind: Deployment
  path: metadata/labels
- kind: Deployment
  path: spec/template/metadata/labels
- kind: Deployment
  path: spec/selector/matchLabels
- kind: Deployment
  path: spec/template/spec/initContainers

or

For CRD:

varReference:
- kind: Chart
  path: spec/values/endpoints/messaging/auth/user/password
- kind: Chart
  path: spec/source
- kind: Chart
  path: spec/values/images
- kind: Chart
  path: spec/values/labels

This proposal aims at the automatic creation of the var and varReference section by simply scanning for the resources:

If we take the following example:

apiVersion: my.group.org/v1alpha1
kind: Chart
metadata:
  name: wordpress
spec:
  source: $(SoftwareVersions.software-versions.spec.charts.wordpress)
  values:
    ......
    pod:
      replicas:
        api: 1

we can conclude that we need to create the following varReference

varReference:
- kind: Chart
  path: spec/source
...

and the following var

vars:
- name: SoftwareVersions.software-versions.spec.charts.wordpress
  objref:
    apiVersion: GROUP/VERSION
    kind: SoftwareVersions
    name: software-versions
  fieldref:
    fieldpath: spec.charts.wordpress

By turning on the autoconfig feature, the user would only have to create the entries for which the detection process failed.
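The detection step described above can be sketched roughly as follows. This is a Python illustration, not the PR's actual Go code: it assumes the `Kind.name.fieldpath` naming convention shown in the examples, and it leaves `GROUP/VERSION` as a placeholder because the apiVersion cannot be derived from the reference alone (a real implementation would look up the named resource).

```python
import re

# Hypothetical sketch of the proposed autoconfig detection: scan a rendered
# manifest for $(Kind.name.field.path) references and derive var entries.
VAR_RE = re.compile(r"\$\(([A-Za-z0-9]+)\.([a-z0-9-]+)\.([A-Za-z0-9_.\[\]]+)\)")

def detect_vars(manifest_text):
    """Return kustomize-style var entries for every $(...) reference found."""
    entries = []
    for kind, name, fieldpath in VAR_RE.findall(manifest_text):
        entries.append({
            "name": f"{kind}.{name}.{fieldpath}",
            "objref": {
                # Placeholder: the apiVersion must come from looking up
                # the referenced resource, not from the reference itself.
                "apiVersion": "GROUP/VERSION",
                "kind": kind,
                "name": name,
            },
            "fieldref": {"fieldpath": fieldpath},
        })
    return entries

manifest = "spec:\n  source: $(SoftwareVersions.software-versions.spec.charts.wordpress)\n"
for entry in detect_vars(manifest):
    print(entry["name"], "->", entry["fieldref"]["fieldpath"])
```

The matching varReference path (e.g. `spec/source`) would be derived from where in the scanned document each reference was found, which this string-level sketch omits.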

@monopole
Contributor

ack, will circulate -

@Liujingfang1 Liujingfang1 added the kind/feature Categorizes issue or PR as related to a new feature. label Jun 26, 2019
@jbrette
Contributor Author

jbrette commented Jun 26, 2019

@monopole @Liujingfang1 @ian-howell

  1. For info, we did not really change anything in that code in a week; we just rebase on a regular basis to ensure the PR is still working as expected. Going through the issues, we found multiple people who hit this problem or were confused because they forgot to change the varReference, even for standard objects. See this issue

  2. A key concept of this PR is that it assumes the user knows best: whatever was loaded through the varReference (kustomizeconfig.yaml, varReference.go) as well as the kustomization.yaml is treated as correct.
    This is really useful when the automatic discovery algorithm fails: the user just has to add the entry manually, as has always been done until now. This also ensures backward compatibility with the current kustomize 2.x.

  3. The real conceptual problem is that even without the current PR, varReference.go is not really up to date:
    We started to update it, for example here to test int as a variable, but conceptually every field in every K8s object could end up in that file, and it would become huge. With the current PR, we don't need to add anything to it. Understanding the varReference syntax is also really difficult, especially when navigating map/slice/map structures.

  4. We always check that those PRs work together on top of the latest version of kustomize, because we merge them into the allinone branch on a regular basis. We added a lot of tests in examples/allinone and examples/issues, but we have one test that pushes kustomize much harder than all the others: in treasuremap, run "make deploy-airsloop".

  5. Finally, the Airship community, which owns Treasuremap, really likes what kustomize can do. The structure of treasuremap is organized like a tree, which fits the current kustomize code perfectly.
    But during the latest meeting discussing the organization of those folders, the idea of using facets/services... to compose the overall document came up. This is really close to the multiple inheritance issues in C++, Java, or Go and how you are supposed to use interfaces. Still, it explains our interest in proposing a solution for composing a CRD or K8s native object from multiple kustomize base folders. See
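The map/slice/map navigation mentioned in point 3 can be illustrated with a small sketch (again Python for illustration, not kustomize's actual code). List nodes have no path segment of their own, so a walker for a varReference path such as `spec/template/spec/containers/image` has to fan out across every list element:

```python
def apply_path(obj, parts, fn):
    """Apply fn to every value addressed by a varReference-style path.

    Sketch only: when the walk meets a list, the remaining path is
    applied to every element, which is why a path like
    spec/template/spec/containers/image carries no index.
    """
    if isinstance(obj, list):
        for item in obj:
            apply_path(item, parts, fn)
    elif isinstance(obj, dict) and parts and parts[0] in obj:
        if len(parts) == 1:
            obj[parts[0]] = fn(obj[parts[0]])
        else:
            apply_path(obj[parts[0]], parts[1:], fn)

deployment = {"spec": {"template": {"spec": {"containers": [
    {"name": "db", "image": "mysql:$(TAG)"},
    {"name": "web", "image": "nginx"},
]}}}}
apply_path(deployment,
           "spec/template/spec/containers/image".split("/"),
           lambda v: v.replace("$(TAG)", "5.7"))
# Both containers are visited; only the one containing $(TAG) changes.
```

Writing such paths by hand for every field of every kind is exactly the burden the autoconfig proposal tries to remove.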

@huguesalary

Is there any plan to merge this PR? That's a really valuable feature.

@tkellen
Contributor

tkellen commented Sep 11, 2019

If not a merge, some confirmation that this feature is out of scope or otherwise undesirable to the kustomize team would be useful for folks who are presently depending on the fork that adds it. At the moment I'm perhaps foolishly depending on/socializing this feature with my team (with fingers crossed it will someday appear in kubectl proper).

I'm sensitive to the fact that my needs are far from a driving motivation in the development of this free software (thank you, by the way!) but it seems worth sharing if it might help nudge a review of this functionality into being.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 10, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 9, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Datamance

@tkellen looks like they abandoned all the work done for this #1217

:sadface:

@afirth
Contributor

afirth commented Feb 10, 2021

this closed a year ago, is there some other functionality that replaces it? cc @monopole ?

Projects
None yet
Development

No branches or pull requests

9 participants