kustomize handling of CRD inconsistent with k8s handling starting in 1.16 #1510
/assign @Liujingfang1 @monopole
We are highly interested in a solution for kustomize to be able to perform strategic merge patching for CRDs. We are relying more and more on CRDs and are constantly fighting kustomize to be able to do what we want. A stop-gap solution we are starting to employ is using a plugin to perform the strategic merge patching for specific CRDs, but we are finding limits to the plugin approach. Given that kubernetes itself is moving in a direction where CRDs are being used more heavily in kubernetes proper, I think this will become more and more important even for vanilla kubernetes. I think it's great news that Kubernetes v1.16 is able to look at a CRD definition alone and figure out how to perform strategic merge patching. If the K8s API server is able to perform strategic merge patching based on the CRD definition alone, I see no reason why kustomize wouldn't be able to do the same. It could leverage the same library/techniques that the API server uses, even enabling a client-side solution provided only the CRD definitions. Our ideal solution would look something like:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
crds:
- github.com/argoproj/argo-rollouts//manifests/crds?ref=release-0.12
resources:
- my-rollout.yaml
patchesStrategicMerge:
- add-environment-variable.yaml
Notice that the crds section would support pointing to remote as well as local files. It is desirable to support remote references because the CRD definitions are often centrally defined/controlled, as they are tied to and upgraded with the clusters. Another thing to note is that we feel the
And then have the kustomization.yaml's crds section reference the generated file:
crds:
- rollout-crd.yaml
Is my proposal something Kustomize would consider?
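For illustration, a strategic merge patch like the add-environment-variable.yaml referenced in the kustomization above might look roughly as follows; the Rollout name, container name, and variable values here are assumptions chosen to mirror the output shown in a later comment, not taken from the original file:

```yaml
# add-environment-variable.yaml (hypothetical sketch)
# With SMP, the container entry below is merged into the existing containers
# list by its "name" key instead of replacing the whole list.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollout
spec:
  template:
    spec:
      containers:
      - name: guestbook
        env:
        - name: test
          value: bar
```

Under JSON merge patch semantics, the same patch would instead replace the entire containers list, dropping fields such as image that are only present in the base.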
@jessesuen @dthomson25 @ian-howell @monopole @Liujingfang1 This is the beginning of a fix: As a result, kustomize uses SMP, for instance here. The output seems to be correct:
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
name: rollout
spec:
template:
spec:
containers:
- env:
- name: test
value: bar
image: gcr.io/heptio-images/ks-guestbook-demo:0.1
name: guestbook
The example can be reproduced here: https://github.com/keleustes/kustomize/tree/armadacrd/examples/crds Don't forget to clone the right branch, and rerun
Great point and thanks for highlighting this. That said, I think kustomize's ability to SMP for the purposes of resource composition can be considered separately from SMP as part of kubectl apply. In other words, I don't necessarily agree that we must have consistent behavior for SMP in kustomize vs. SMP as part of kubectl apply, especially if it means waiting to change kustomize's behavior until kubectl apply (client-side) gets it.
Nice. What a clever way of doing this! However, I feel that a plugin approach to solving this still does not make for a good experience. It would require either getting the plugin upstreamed into kustomize core (which isn't scalable from a kustomize perspective), or having an external Go plugin and dealing with a distribution problem on clients, which would need to build different versions because of the requirement of matching Go library dependencies. IMO, the ideal experience would be to simply have a
Would love to hear from @Liujingfang1 @monopole about this, since we would be eager to work on this.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Any updates on this?
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle rotten
Any updates on this?
In the release where kustomize has been restructured into a library, and before the kstatus/kyaml/crawl/blackfriday code was dumped into that library code, it is actually pretty simple to register CRDs in such a way that SMP is supported. Have a look at:
It seems that another solution has been proposed here: #2105, but it has not been merged in two months... so who knows if it will ever be merged.
/remove-lifecycle rotten
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue.
Until Kubernetes 1.15, kustomize's merge patch strategy was aligned with that of Kubernetes.
When applying using mergePatch, kustomize and Kubernetes were quite aligned:
The side effect was that when using a CRD, for instance an argo Rollout instead of a Deployment, the merging behavior was different even though the fields were identical. But at least kustomize and kubectl were behaving the same way.
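To make that difference concrete, here is a minimal sketch (the container and image names are made up for illustration): the same patch gives different results depending on whether the list is merged strategically or with a JSON merge patch.

```yaml
# base resource (only the relevant fragment shown)
spec:
  template:
    spec:
      containers:
      - name: app
        image: example/app:1.0
---
# patch: add an environment variable to the existing container
spec:
  template:
    spec:
      containers:
      - name: app
        env:
        - name: LOG_LEVEL
          value: debug
# SMP (Deployment): containers are merged by the "name" key, so the image is kept.
# JMP (CRD without a known schema): the containers list is replaced wholesale,
# so the resulting container carries the env var but loses its image.
```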
Starting with 1.16, a major improvement has been made in CRD handling:
Server-side apply will now use the openapi provided in the CRD validation field to help figure out how to correctly merge objects and update ownership
(see the kubernetes PR). This means that kubectl apply/patch will end up effectively using SMP for both K8s native objects and CRDs, while kustomize will still use JMP, and will therefore mislead the user.
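As a rough illustration of the mechanism that quote refers to (the field names below are an assumed example, not taken from any real CRD), a v1 CRD's validation schema can mark a list as a map keyed by name so the server knows how to merge it per-item:

```yaml
# excerpt of a hypothetical CustomResourceDefinition validation schema
openAPIV3Schema:
  type: object
  properties:
    spec:
      type: object
      properties:
        containers:
          type: array
          # tell the server to merge list items as a map keyed by "name"
          x-kubernetes-list-type: map
          x-kubernetes-list-map-keys:
          - name
          items:
            type: object
            required:
            - name
            properties:
              name:
                type: string
```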
Using kustomize with CRDs (Istio, Argo, Prometheus, upcoming kubeadm) was already a quite tedious process for creating the configuration, even with the auto discovery of the CRD.json schema. But now it looks like the merge will also be inconsistent with the k8s one.
Will most likely have to write a KEP to help the kustomize user:
1. Potential solution: Create a procedure based on the kustomize external go plugin, with the plugin attempting to register the crd.go code into the overall scheme registry used by kustomize to figure out whether it needs to use SMP or JMP.
2. Potential solution: Follow the kubectl pattern, which runs most of the code on the server side.
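For reference, a minimal way to exercise the server-side path from solution 2 on a 1.16+ cluster (assuming the CRD's validation schema carries the merge annotations) is server-side apply:

```sh
# the API server, not the client, performs the merge using the CRD's schema
kubectl apply --server-side -f my-rollout.yaml
```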