
Kustomize 3.8.0 builds an invalid CRD manifest in K8s 1.17 #201

Closed

Zerpet opened this issue Jul 13, 2020 · 12 comments

Zerpet (Collaborator) commented Jul 13, 2020

make deploy failed with an error when deploying the CRD manifest:

kustomize build config/crd | kubectl apply -f -
# error: error validating "STDIN": error validating data: ValidationError(CustomResourceDefinition.status): missing required field "storedVersions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionStatus; if you choose to ignore these errors, turn validation off with --validate=false

However, kubectl apply -k config/crd works as expected. I tested this with kustomize 3.7.0 and it doesn't return this error during apply.

We should check if there are any breaking changes or any action required from our side to use the latest kustomize version, or document the limitation if there is no action.
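
For reference, kustomize 3.7.0 emitted an explicit empty status stanza at the end of the CRD (reconstructed here from the diff posted later in this thread). kubectl's client-side validation rejects a CustomResourceDefinition status block without storedVersions, which matches the error above. A minimal sketch of the stanza whose removal triggers the failure:

# Tail of the CRD as emitted by kustomize 3.7.0; kubectl validation expects
# storedVersions to be present, even if empty.
status:
  conditions: []
  storedVersions: []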

Zerpet changed the title from "Kustomize builds an invalid CRD manifest" to "Kustomize 3.8.0 builds an invalid CRD manifest" Jul 13, 2020
coro (Contributor) commented Jul 14, 2020

Kubectl bundles its own copy of kustomize, which is pinned at v2.0.3.

As for v3.8.0, the release notes say that the output of kustomize will change because of the switch from apimachinery to kyaml, but that this should only break brittle YAML validation tests. Without having looked into this further (yet), this could be a bug in 3.8.0.

coro (Contributor) commented Jul 22, 2020

Worth checking if 3.8.1 fixed this.

coro (Contributor) commented Jul 22, 2020

I've confirmed that this issue is fixed in 3.8.1:

$ kustomize version
{Version:3.8.0 GitCommit:6a50372dd5686df22750b0c729adaf369fbf193c BuildDate:2020-07-22T10:58:21+00:00 GoOs:linux GoArch:amd64}

$ makenv deploy
...
kustomize build config/crd | kubectl apply -f -
error: error validating "STDIN": error validating data: ValidationError(CustomResourceDefinition.status): missing required field "storedVersions" in io.k8s.apiextensions-apiserver.pkg.apis.apiextensions.v1beta1.CustomResourceDefinitionStatus; if you choose to ignore these errors, turn validation off with --validate=false
Makefile:43: recipe for target 'deploy-manager' failed
make: *** [deploy-manager] Error 1

$ brew switch kustomize 3.8.1
Cleaning /home/linuxbrew/.linuxbrew/Cellar/kustomize/3.8.1
Cleaning /home/linuxbrew/.linuxbrew/Cellar/kustomize/3.8.0
1 links created for /home/linuxbrew/.linuxbrew/Cellar/kustomize/3.8.1

$ makenv deploy
...
kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/rabbitmqclusters.rabbitmq.com created
kustomize build config/default/base | kubectl apply -f -
deployment.apps/rabbitmq-cluster-operator created

coro closed this as completed Jul 22, 2020
coro reopened this Jul 22, 2020
coro (Contributor) commented Jul 22, 2020

It seems that 3.8.1 still causes issues - while the deploy stage succeeds on 3.8.1, the operator hits an exception when attempting to create a RabbitmqCluster:

Reconciler error	{"controller": "rabbitmqcluster", "request": "rabbitmq-system/rabbitmqcluster-sample", "error": "RabbitmqCluster.rabbitmq.com \"rabbitmqcluster-sample\" is invalid: status.conditions: Invalid value: \"null\": status.conditions in body must be of type array: \"null\""}
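
The error suggests the RabbitmqCluster's status.conditions field is being serialized as null where the CRD schema expects an array. An illustrative sketch, reconstructed purely from the error text rather than from the actual object:

# Rejected: the schema declares status.conditions as an array, so null fails validation.
status:
  conditions: null
---
# An empty array would satisfy the same schema check.
status:
  conditions: []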

coro (Contributor) commented Jul 22, 2020

Here's a diff of the kustomize build output between 3.7.0 and 3.8.0. It's clear that the storedVersions property disappeared in 3.8.0, which is why we saw the validation failures. The field returns in 3.8.1.

$ diff /tmp/kustom37.yaml /tmp/kustom380.yaml
6d5
<   creationTimestamp: null
28,29c27
<   subresources:
<     status: {}
---
>   subresources: {}
4377,4378d4374
<   conditions: []
<   storedVersions: []

The issue we're hitting now appears to be the status subresource, which disappears between 3.7.0 and 3.8.1:

$ diff /tmp/kustom37.yaml /tmp/kustom381.yaml
6d5
<   creationTimestamp: null
28,29c27
<   subresources:
<     status: {}
---
>   subresources: {}
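
For context, in apiextensions.k8s.io/v1beta1 the empty status: {} map under spec.subresources is what enables a CRD's /status subresource; once kustomize prunes it down to subresources: {}, the subresource is effectively not declared, which lines up with the reconciler failing to write status. A minimal sketch of the relevant stanza (CRD name taken from the build output above, all other fields elided):

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: rabbitmqclusters.rabbitmq.com
spec:
  # group, names, versions and schema elided
  subresources:
    status: {}   # this empty map must survive the build for /status to be enabled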

coro (Contributor) commented Jul 22, 2020

Looks like others are hitting the same issue as us - kustomize 3.8+ is stripping out values that are nil or {}. So our template:

spec:
  subresources:
    status: {}

is getting stripped to:

spec:
  subresources: {}

Note it's not recursive. If I add a child key:value to status:

spec:
  subresources:
    status:
      foo: {}

It only removes the bottommost empty map:

spec:
  subresources:
    status: {}
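
A minimal reproduction sketch, assuming a bare kustomization pointing at a single CRD file is enough to trigger the pruning (file name hypothetical):

# kustomization.yaml - hypothetical minimal reproduction
resources:
- crd.yaml

Running kustomize build . against a crd.yaml containing the subresources/status stanza above should emit subresources: {} on 3.8.0 and 3.8.1, while 3.7.0 preserves the nested status: {}.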

monopole commented Jul 23, 2020

The information here is good; tracking in kubernetes-sigs/kustomize#2734

coro (Contributor) commented Aug 7, 2020

Looks like there's a PR now to fix this: kubernetes-sigs/kustomize#2805

ferozjilla (Contributor) commented
kubernetes-sigs/kustomize#2805 has been merged into master but not released yet.

MirahImage added the "blocked (Waiting on other work or on 3rd party)" label Aug 18, 2020
coro (Contributor) commented Sep 1, 2020

Kustomize 3.8.2 has been released, with our bug fix in: https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv3.8.2

ansd removed the "blocked (Waiting on other work or on 3rd party)" label Sep 1, 2020
mkuratczyk (Collaborator) commented
Please remember to remove the 1.18 incompatibility notice once this is delivered: https://www.rabbitmq.com/kubernetes/operator/install-operator.html#compatibility

ansd self-assigned this Sep 9, 2020
ansd (Member) commented Sep 9, 2020

I can reproduce this issue with kustomize 3.8.0 against K8s 1.17.

However, kustomize 3.8.0 works fine against K8s 1.18:

$ kustomize version
{Version:3.8.0 GitCommit:6a50372dd5686df22750b0c729adaf369fbf193c BuildDate:2020-07-05T17:55:53+01:00 GoOs:darwin GoArch:amd64}

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.8", GitCommit:"9f2892aab98fe339f3bd70e3c470144299398ace", GitTreeState:"clean", BuildDate:"2020-08-14T11:09:22Z", GoVersion:"go1.14.7", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.2", GitCommit:"52c56ce7a8272c798dbc29846288d7cd9fbae032", GitTreeState:"clean", BuildDate:"2020-04-30T20:19:45Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

$ kustomize build config/crd | kubectl apply -f -
customresourcedefinition.apiextensions.k8s.io/rabbitmqclusters.rabbitmq.com created

Given that this is fixed in kustomize 3.8.2, and since we don't pin the kustomize version in this repo, I don't see anything left to do here, so I'm closing this issue. @Zerpet @coro @ferozjilla @mkuratczyk let me know if you think something still needs to be done.

In the future we should all use the same kustomize version, so I'm updating #306 to include kustomize in the Go tools.

ansd closed this as completed Sep 9, 2020
ansd changed the title from "Kustomize 3.8.0 builds an invalid CRD manifest" to "Kustomize 3.8.0 builds an invalid CRD manifest in K8s 1.17" Sep 9, 2020