WG ownership of kubeflow/manifests - relation to application owners #400
Comments
KFServing has been maintaining standalone installations: https://github.com/kubeflow/kfserving/tree/master/install. As I understand it (this needs hard data), the majority of our production customers install KFServing this way. In general, I think this practice helps applications decouple from a monolithic Kubeflow release process. I do see some drawbacks around shared dependencies (Istio), and I expect that the majority of integration work will involve compatibility testing due to those dependencies.
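For illustration, here is a minimal sketch of what consuming such a standalone install as a remote kustomize base could look like. The path and ref are assumptions for illustration, not the documented KFServing install procedure:

```yaml
# kustomization.yaml — a hypothetical way to pull KFServing's standalone
# manifests straight from its own repo instead of via kubeflow/manifests.
# The path and ref below are illustrative assumptions.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - github.com/kubeflow/kfserving/install?ref=master
```

Rendering this with `kustomize build .` would keep the application repo as the source of truth while letting consumers pin whichever ref they need.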
@jlewi that would be nice, because I understand that app teams already need to define some form of manifests for installation, e.g., for testing.
The details of how testing should be done should be the prerogative of the owners. So we need to figure out who owns and maintains kubeflow/manifests before we can resolve any question about how testing should work. I can think of two possible paths forward.
Once #401 goes in, I can draft a proposal. WDYT?
cc @yanniszark |
I would consider this WG as the owner of these repos. The point is how we can make that happen.
@swiftdiaries Feel free to bring it up in the community meetings and we can get some feedback from folks.
I'm a fan of this. In my world, kfctl and manifests are indeed closely related and it definitely makes sense to have some consistency in their maintenance.
Will the WG also handle releases? All applications and platforms need to be ready before we can cut a release in the manifests repo.
@Bobgy I think so; the WG needs to pick up the release responsibility. Otherwise, version releases and code contributions would happen separately, which would make the WG's responsibility unclear.
I would like to have a combined kfctl + manifests WG.
+1. And the Istio+Dex combination as well.
Proposal for a 'deployment' WG.
@swiftdiaries each working group involves meetings, sync-up infrastructure, Slack channels, and so on. The more we have, the lower the attendance for each. A control-plane WG covers the key repos (kfctl/manifests/istio+dex), with deployment and management of deployed Kubeflow as the key goal, and should also attract a critical mass of attendees. We can leave notebooks and the UI dashboard out of the scope of this WG.
I think that means we need to first take on a small scope of areas. Say, we can start from kfctl and manifests.
After that, we can gradually move forward with a larger scope, such as taking care of Jupyter.
I don't think Jupyter should be in scope for a control-plane or deployment WG. I believe there is thinking around having a WG focused either on notebooks or, more broadly, the data scientist user experience; see #379.
+1 on a deployment WG. To @animeshsingh's and @PatrickXYS's comments: I was under the impression that, since Istio+Dex is maintained underneath the kubeflow/manifests repo, it would already fall within this WG's scope.
Other than the name, I am fine with the scope, which is kfctl/manifests/istio+dex. The reason 'deployment' doesn't cover the nuance of istio+dex is that they are common services for load balancing/ingress/authentication/authz; hence the name 'control-plane' would be more fitting. If it's only kfctl/manifests, then the name 'deployment' captures it fine.
Another point to note is that kfctl now also includes the Operator, which handles not only 'deployment' but lifecycle management of deployed Kubeflow. That means it watches the deployed Kubeflow and takes corrective action when things go south.
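To make that concrete, here is a minimal sketch of the kind of KfDef resource the Operator watches and reconciles, assuming the kfdef.apps.kubeflow.org/v1 API; the application and repo entries below are illustrative assumptions, not a tested configuration:

```yaml
# A hypothetical KfDef — the custom resource the Kubeflow Operator watches.
# The Operator continuously reconciles the cluster toward this declared
# state, which is what makes it lifecycle management rather than a
# one-shot deployment. Application names and paths are assumptions.
apiVersion: kfdef.apps.kubeflow.org/v1
kind: KfDef
metadata:
  name: kubeflow
  namespace: kubeflow
spec:
  applications:
    - name: istio
      kustomizeConfig:
        repoRef:
          name: manifests
          path: istio/istio-install
  repos:
    - name: manifests
      uri: https://github.com/kubeflow/manifests/archive/master.tar.gz
  version: master
```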
@swiftdiaries I wouldn't over-index on current code locations. Where code is located is often due to organic evolution and may not be a good indication of appropriate ownership/governance. We should figure out appropriate ownership and then fix code location when needed. Istio and Dex are similar to PodPresets (#381) in that they are fairly generic K8s infrastructure, and it's not clear what the proper WG to own them would be. One option is to move some of these applications upstream or downstream of Kubeflow so that Kubeflow can focus more on the AI-specific aspects.
In order to deploy Kubeflow, I think the following are the minimum assets that need to be maintained in the long term, and we need a WG to take all of them. Manifests (Kubeflow doesn't own this code; these are upstream or third-party components):
Components (with corresponding manifests):
Jupyter is out of scope. I assume there's a separate WG taking care of notebooks.
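As a purely illustrative reading of that split, a top-level kustomization could keep the two layers separate; every path below is an assumption, not an agreed layout:

```yaml
# kustomization.yaml — hypothetical top-level layout separating upstream /
# third-party manifests from Kubeflow-owned components. All paths are
# illustrative assumptions.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  # Manifests Kubeflow doesn't own (upstream or third-party)
  - third-party/istio/base
  - third-party/dex/base
  # Kubeflow components, each with its corresponding manifest
  - apps/pipeline/base
  - apps/katib/base
```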
Here is a link to my proposal for the 'deployment' WG.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. |
Which working group will own the kubeflow/manifests repo?
How will it relate to application owners?
Right now kubeflow/manifests doesn't cleanly separate the responsibilities of platform and application owners. For example, in kubeflow/manifests#1498, who decides how testing is done for the repository?
One option would be to allow/encourage application owners to host the source of truth for the kustomize manifests inside their own repositories.
The platform owners could then aggregate these manifests, building automation to do so (see the sketch below).
I believe a lot of applications (e.g. pipelines, katib, kfserving) are already storing/developing their manifests inside their repositories.
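As one hedged sketch of what that aggregation automation could look like — repo paths, refs, and output locations are all illustrative assumptions, not an agreed design:

```yaml
# .github/workflows/sync-app-manifests.yaml — hypothetical nightly job that
# re-vendors each application's manifests from its own repo (the source of
# truth) into kubeflow/manifests. All names and paths are assumptions, and
# kustomize is assumed to be preinstalled on the runner.
name: sync-app-manifests
on:
  schedule:
    - cron: "0 4 * * *"   # nightly
jobs:
  sync:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: vendor kfserving manifests
        run: |
          kustomize build "github.com/kubeflow/kfserving/install?ref=master" \
            > apps/kfserving/kfserving.yaml
      # A real workflow would regenerate every application and open a PR
      # with the updated output for platform owners to review.
```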
cc @kubeflow/kfserving-owners
cc @yanniszark @swiftdiaries @animeshsingh
cc @Bobgy