
kfctl generated yaml fails validation #2653

Closed
kkasravi opened this issue Mar 7, 2019 · 7 comments

kkasravi commented Mar 7, 2019

In kfctl (golang), the generate method creates `<deployment>/ks_app/default.yaml` and then calls `kubectl apply -f default.yaml`. If you run `kubectl apply -f default.yaml --validate --dry-run`, the following errors occur:

```
deployment.apps/ml-pipeline-ui created (dry run)
deployment.apps/ml-pipeline-viewer-controller-deployment created (dry run)
deployment.apps/mysql created (dry run)
deployment.extensions/ambassador created (dry run)
unable to recognize "default.yaml": no matches for kind "CompositeController" in version "metacontroller.k8s.io/v1alpha1"
error validating "default.yaml": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "readinessProbe" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
```

jlewi commented Mar 8, 2019

What is the default.yaml file?

Is this a dump of all the YAML via metacontroller?

Will we get rid of default.yaml when we fix #2391 and apply all the components directly?


kkasravi commented Mar 8, 2019

@jlewi

> What is the default.yaml file?

Earlier behavior was to call ksonnet.Apply for [metacontroller, application]. I think @gabrielwen switched to calling ksonnet.Show with the output dumped to `<deployment>/ks_app/default.yaml`, then exec'ing `kubectl apply -f default.yaml` (see below for what we'll move to). There were some validation errors that surfaced in ambassador.libsonnet, argo.libsonnet, and pipeline as a result (fixed in #2548).

> Is this a dump of all the YAML via metacontroller?

Basically, it's the dump of all YAML via ksonnet in the application component. The reason (as we've discussed before) is to label all namespaced components.

> Will we get rid of default.yaml when we fix #2391 and apply all the components directly?

kustomize will let us do this via a common label for ported components. For ksonnet - which is dumping to default.yaml - we should not call `kubectl apply -f ...` but instead have kustomize apply a common label to the ksonnet output and then call the REST API to submit it to the k8s API server. @kunmingg defined a function to do this within the UI which I had been using in kfctl (see bootstrap/pkg/utils/k8utils.go CreateResourceFromFile). In short, we dump the ksonnet output to YAML, apply a common label using kustomize, and then process all YAML (both ksonnet and kustomize) within kfctl rather than exec'ing kubectl.
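
As a rough sketch of that flow (the file layout and label key here are hypothetical, not the actual kfctl configuration), a kustomization placed next to the ksonnet dump could look like:

```yaml
# kustomization.yaml - illustrative sketch only
resources:
  - default.yaml                        # the YAML dumped by ksonnet.Show
commonLabels:
  app.kubernetes.io/name: kubeflow      # hypothetical common label for the Application CR selector
```

kfctl would then run the equivalent of `kustomize build` in-process and submit the labeled objects via the Kubernetes REST API instead of exec'ing kubectl.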

@gabrielwen - did you have any additional comments?

@swiftdiaries - any thoughts on processing the ksonnet generated yaml through kustomize to apply a common label needed by the Application CR? This way we could drop the ksonnet processing done in application.libsonnet.

@jlewi do we need a design doc for kfctl in the kubeflow drive? We have been using the README.md under bootstrap/cmd/kfctl but a design doc may be a better place to solicit feedback.


jlewi commented Mar 8, 2019

> do we need a design doc for kfctl in the kubeflow drive? We have been using the README.md under bootstrap/cmd/kfctl but a design doc may be a better place to solicit feedback.

Sounds like a good idea


jlewi commented Mar 8, 2019

> @swiftdiaries - any thoughts on processing the ksonnet generated yaml through kustomize to apply a common label needed by the Application CR? This way we could drop the ksonnet processing done in application.libsonnet.

This seems a little inelegant.

In line with #2391, I'd really like to have the deployment process follow standard processes. For ksonnet that means using `ks apply`. Using the metacontroller and jsonnet to generate a YAML file and apply a common label to all resources is highly unusual.

I'd like to follow common processes because lots of developers are working on this code, so the simpler we can make it, the better.

Likewise combining ksonnet and kustomize is very non-standard.

Per the discussion about Application CRs (https://bit.ly/2BukRoW), we are moving to a world where we will have a tree of applications:

  • 1 top-level Kubeflow App CR
  • Individual App CRs, e.g. 1 for the TFJob controller, 1 for the Jupyter controller, etc.
  • A label selector will be used to link the parent application to its child apps

As a workaround for ksonnet, can we just hardcode the label, e.g. set the label

kf-app=dummy

Since ksonnet is going away, why spend time figuring out how to consistently set the label to

kf-app=${KFAPPNAME}

where KFAPPNAME will be unique for each Kubeflow deployment?
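
To make the linkage concrete, here is a rough sketch of a parent Application CR (from sigs.k8s.io/application) selecting children by that hardcoded label; the metadata values are illustrative, not from the actual manifests:

```yaml
# Illustrative sketch only; assumes the sigs.k8s.io/application CRD is installed.
apiVersion: app.k8s.io/v1beta1
kind: Application
metadata:
  name: kubeflow                 # hypothetical parent app name
spec:
  selector:
    matchLabels:
      kf-app: dummy              # children carry this common label
```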

So my suggested priorities would be:

  1. Fix "kfctl should try and apply all components rather than just the application component" (#2391) - call ks apply directly rather than using the metacontroller to generate YAML

  2. Create App CRs for each micro-application

    • Do this in kustomize packages rather than the ksonnet versions of the packages

  3. Link child Apps to the parent KF App via labels

With the exception of #2391, I might focus all the App CR improvements on kustomize rather than continuing to invest in the ksonnet versions of the packages.


kkasravi commented Mar 8, 2019

@jlewi sure - I think we can drop the ksonnet application component then, since it was only doing client-side labeling and reparenting resources to the metacontroller. All 3 steps above can be done in kustomize without requiring the sigs.k8s.io/application controller, though we'll eventually need the controller for reparenting and health.

Should we have all components ported to kustomize for 0.5?


jlewi commented Mar 9, 2019

I think if we can have an option in 0.5 to deploy with kustomize, that would be great, but I don't want to plan on dropping ksonnet support until we have kustomize working, and I don't know that it will make it for 0.5.


kkasravi commented Mar 9, 2019

Understood, @swiftdiaries and I will see what we can do.
