
More extensible kustomize merging support #2339

Closed
pwittrock opened this issue Apr 6, 2020 · 16 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@pwittrock
Contributor

pwittrock commented Apr 6, 2020

Support merging kustomize patches / resources for extension types by 1) driving the merge from OpenAPI definitions, and 2) allowing OpenAPI definitions to be provided for extension types.

Support both of the following techniques:

  • Allow publishing OpenAPI definitions for extension types as a file (sketched below)
  • Allow publishing OpenAPI definitions directly on the resource configuration

Note: this requires migrating resource processing from apimachinery to the kyaml libraries.

Depends on #2340
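
For concreteness, a rough sketch of how the file-based technique (the first bullet) could look from the user's side, matching the openapi: path: form that later comments in this thread use; the file names are illustrative:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- my-extension.yaml
patchesStrategicMerge:
- my-extension-patch.yaml
openapi:
  # OpenAPI definitions for the extension type, published as a file
  path: my-extension-schema.json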

@pwittrock
Contributor Author

cc @lukadante

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 6, 2020
@jessesuen

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 29, 2020
@qcastel

qcastel commented Sep 4, 2020

Any idea when this will be fixed? It is currently blocking us from using Argo Rollouts with Kustomize.

@monopole
Contributor

The merge function in kyaml is being improved at the same time as the migration alluded to in #2506.

The OpenAPI work is not blocked by this issue.

@qcastel

qcastel commented Dec 4, 2020

Still no news on this one? We are very interested, as the business is starting to evaluate migrating to Helm instead.

@natasha41575
Contributor

This is being worked on.

@asmirnoff

This still does not work for Argo Rollouts; can anyone give an ETA?

@natasha41575
Contributor

You can now specify your own OpenAPI schema files with all your custom resource definitions to use with kustomize; see the example: https://github.com/kubernetes-sigs/kustomize/blob/master/examples/customOpenAPIschema.md
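
For quick reference, the workflow in the linked example is roughly: dump a schema with kustomize openapi fetch, add or adjust the definitions for your custom resources, and point the kustomization at the file (file name illustrative):

kustomize openapi fetch > mycrd-schema.json

Then, in kustomization.yaml:

openapi:
  path: mycrd-schema.json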

@sshiba

sshiba commented May 12, 2021

The example provided in https://github.com/kubernetes-sigs/kustomize/blob/master/examples/customOpenAPIschema.md works, but when I tried it on our own CR it didn't. I generated the schema as described (i.e., with kustomize openapi fetch) and then applied our patchesStrategicMerge manifest, but the schema seems to be ignored: instead of appending new elements to a list, the patch just replaces it.

A similar issue was reported in #3852.

@natasha41575
Contributor

natasha41575 commented May 13, 2021

@sshiba you have to edit the document produced by kustomize openapi fetch with the correct merge keys and patch strategy; see the extensions x-kubernetes-patch-merge-key and x-kubernetes-patch-strategy in the docs here: https://kubectl.docs.kubernetes.io/references/kustomize/kustomization/openapi/
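
For illustration, a minimal sketch of such an edit, assuming a list of objects that should merge item-by-item on a name field (the x-kubernetes-* extension names are the documented ones; the containers field is a placeholder):

"containers": {
  "items": { "type": "object" },
  "type": "array",
  "x-kubernetes-patch-merge-key": "name",
  "x-kubernetes-patch-strategy": "merge"
}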

@sshiba

sshiba commented May 14, 2021

@natasha41575 I updated the schema as recommended (by adding x-kubernetes-patch-merge-key and x-kubernetes-patch-strategy) and it works for one case (i.e., it appends new elements to the array), but for another field it just overwrites the previous array contents. I pruned the example below a bit so it focuses on the issues I am facing.

Btw, can you let me know what I am missing to get this working?

kustomization.yaml

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - kubeadm-controlplane.yaml

openapi:
  path: kubeadm-apischema.json

patchesStrategicMerge:
- kubeadm-patch-strategic.yaml

The manifest that I am trying to kustomize is kubeadm-controlplane.yaml

apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: controlplane
spec:
  kubeadmConfigSpec:
    files:
    - content: |
        [Service]
        Environment="HTTP_PROXY="
        Environment="HTTPS_PROXY="
        Environment="NO_PROXY="
      path: /etc/systemd/system/docker.service.d/http-proxy.conf
    - contentFrom:
        secret:
          key: tls.key
          name: ca-certificate-secret
      owner: root:root
      permissions: "0644"
      path: /etc/kubernetes/certs/ca.key
    preKubeadmCommands:
    - export HOME=/root
    - systemctl daemon-reload
    - systemctl restart docker
    - systemctl enable --now keepalived
    - systemctl restart keepalived

And the patch used for the kustomization is kubeadm-patch-strategic.yaml

apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: controlplane
spec:
  kubeadmConfigSpec:
    files:
    - contentFrom:
        secret:
          key: tls.crt
          name: ca-certificate-secret
      owner: root:root
      permissions: "0644"
      path: /etc/kubernetes/certs/ca.pem
    preKubeadmCommands:
      - echo '10.23.25.101 test.function1.local' | tee -a /etc/hosts
      - echo '10.23.25.102 test.function2.local' | tee -a /etc/hosts

And below you will find the openapi schema for the CR (kubeadm-apischema.json).

{
    "definitions": {
      "io.x-k8s.cluster.controlplane.v1alpha3.KubeadmControlPlane": {
        "description": "KubeadmControlPlane is the Schema for the KubeadmControlPlane API.",
        "properties": {
          "apiVersion": {
            "description": "APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources",
            "type": "string"
          },
          "kind": {
            "description": "Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds",
            "type": "string"
          },
          "metadata": {
            "$ref": "#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta",
            "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata"
          },
          "spec": {
            "description": "KubeadmControlPlaneSpec defines the desired state of KubeadmControlPlane.",
            "properties": {
              "kubeadmConfigSpec": {
                "description": "KubeadmConfigSpec is a KubeadmConfigSpec to use for initializing and joining machines to the control plane.",
                "properties": {
                  "files": {
                    "description": "Files specifies extra files to be passed to user_data upon creation.",
                    "items": {
                      "description": "File defines the input for generating write_files in cloud-init.",
                      "properties": {
                        "content": {
                          "description": "Content is the actual content of the file.",
                          "type": "string"
                        },
                        "contentFrom": {
                          "description": "ContentFrom is a referenced source of content to populate the file.",
                          "properties": {
                            "secret": {
                              "description": "Secret represents a secret that should populate this file.",
                              "properties": {
                                "key": {
                                  "description": "Key is the key in the secret's data map for this value.",
                                  "type": "string"
                                },
                                "name": {
                                  "description": "Name of the secret in the KubeadmBootstrapConfig's namespace to use.",
                                  "type": "string"
                                }
                              },
                              "required": [
                                "key",
                                "name"
                              ],
                              "type": "object"
                            }
                          },
                          "required": [
                            "secret"
                          ],
                          "type": "object"
                        },
                        "encoding": {
                          "description": "Encoding specifies the encoding of the file contents.",
                          "enum": [
                            "base64",
                            "gzip",
                            "gzip+base64"
                          ],
                          "type": "string"
                        },
                        "owner": {
                          "description": "Owner specifies the ownership of the file, e.g. \"root:root\".",
                          "type": "string"
                        },
                        "path": {
                          "description": "Path specifies the full path on disk where to store the file.",
                          "type": "string"
                        },
                        "permissions": {
                          "description": "Permissions specifies the permissions to assign to the file, e.g. \"0640\".",
                          "type": "string"
                        }
                      },
                      "required": [
                        "path"
                      ],
                      "type": "object"
                    },
                    "x-kubernetes-patch-merge-key": "files",
                    "x-kubernetes-patch-strategy": "merge",
                    "type": "array"
                  },
                  "postKubeadmCommands": {
                    "description": "PostKubeadmCommands specifies extra commands to run after kubeadm runs",
                    "items": {
                      "type": "string"
                    },
                    "x-kubernetes-patch-merge-key": "postKubeadmCommands",
                    "x-kubernetes-patch-strategy": "merge",
                    "type": "array"
                  },
                  "preKubeadmCommands": {
                    "description": "PreKubeadmCommands specifies extra commands to run before kubeadm runs",
                    "items": {
                      "type": "string"
                    },
                    "x-kubernetes-patch-strategy": "merge",
                    "type": "array"
                  }
                },
                "type": "object"
              }
            },
            "required": [
              "kubeadmConfigSpec"
            ],
            "type": "object"
          }
        },
        "type": "object",
        "x-kubernetes-group-version-kind": [
          {
            "group": "controlplane.cluster.x-k8s.io",
            "kind": "KubeadmControlPlane",
            "version": "v1alpha3"
          }
        ]
      }
    }
  }

NOTE: when I add "x-kubernetes-patch-merge-key": "preKubeadmCommands" to preKubeadmCommands in the schema, kustomize returns an error that I was not able to decipher. (See the note after this comment.)

When executing kustomize build <path/to/folder>, the rendered manifest handled spec.kubeadmConfigSpec.preKubeadmCommands correctly but not spec.kubeadmConfigSpec.files, as shown below.

apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: controlplane
spec:
  kubeadmConfigSpec:
    files:
    - contentFrom:
        secret:
          key: tls.key
          name: ca-certificate-secret
      owner: root:root
      path: /etc/kubernetes/certs/ca.key
      permissions: "0644"
    preKubeadmCommands:
    - echo '10.23.25.101 test.function1.local' | tee -a /etc/hosts
    - echo '10.23.25.102 test.function2.local' | tee -a /etc/hosts
    - export HOME=/root
    - systemctl daemon-reload
    - systemctl restart docker
    - systemctl enable --now keepalived
    - systemctl restart keepalived

Another strange behavior: when spec.kubeadmConfigSpec.files is removed from the patch, the manifest is still not rendered correctly.

Updated patch kubeadm-patch-strategic.yaml

apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: controlplane
spec:
  kubeadmConfigSpec:
    preKubeadmCommands:
      - echo '10.23.25.101 test.function1.local' | tee -a /etc/hosts
      - echo '10.23.25.102 test.function2.local' | tee -a /etc/hosts

The rendered manifest is missing an element from spec.kubeadmConfigSpec.files: in the output below, the http-proxy.conf entry from the original manifest is gone.

apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: controlplane
spec:
  kubeadmConfigSpec:
    files:
    - contentFrom:
        secret:
          key: tls.key
          name: ca-certificate-secret
      owner: root:root
      path: /etc/kubernetes/certs/ca.key
      permissions: "0644"
    preKubeadmCommands:
    - echo '10.23.25.101 test.function1.local' | tee -a /etc/hosts
    - echo '10.23.25.102 test.function2.local' | tee -a /etc/hosts
    - export HOME=/root
    - systemctl daemon-reload
    - systemctl restart docker
    - systemctl enable --now keepalived
    - systemctl restart keepalived

Instead, the correct rendering should be as shown below.

apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: controlplane
spec:
  kubeadmConfigSpec:
    files:
    - content: |
        [Service]
        Environment="HTTP_PROXY="
        Environment="HTTPS_PROXY="
        Environment="NO_PROXY="
      path: /etc/systemd/system/docker.service.d/http-proxy.conf
    - contentFrom:
        secret:
          key: tls.key
          name: ca-certificate-secret
      owner: root:root
      path: /etc/kubernetes/certs/ca.key
      permissions: "0644"
    preKubeadmCommands:
    - echo '10.23.25.101 test.function1.local' | tee -a /etc/hosts
    - echo '10.23.25.102 test.function2.local' | tee -a /etc/hosts
    - export HOME=/root
    - systemctl daemon-reload
    - systemctl restart docker
    - systemctl enable --now keepalived
    - systemctl restart keepalived

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 12, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 11, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue.

In response to this:

[the triage robot's /close comment above, quoted verbatim]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
