Configuring the default artifact repository in argo.libsonnet #513
Conversation
/assign @IronPan
I don't think assigning the default artifact store to something ml-pipeline-specific would be the best option here, since Argo is a shared component in the Kubeflow cluster, not Pipelines-specific. https://github.com/kubeflow/kubeflow/tree/master/kubeflow
executorImage: argoproj/argoexec:v2.2.0
artifactRepository:
  s3:
    bucket: mlpipeline
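For context, here is a hedged sketch of what a complete default artifact repository block in Argo's workflow-controller ConfigMap could look like when pointed at an in-cluster MinIO; the Secret name, namespace, and endpoint below are illustrative assumptions, not values taken from this PR:

```yaml
# Sketch only: Secret name, namespace, and MinIO endpoint are assumptions for illustration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: kubeflow
data:
  config: |
    executorImage: argoproj/argoexec:v2.2.0
    artifactRepository:
      s3:
        bucket: mlpipeline
        endpoint: minio-service.kubeflow:9000   # assumed in-cluster MinIO Service address
        insecure: true                          # in-cluster MinIO without TLS
        accessKeySecret:
          name: mlpipeline-minio-artifact       # assumed Secret holding the access key
          key: accesskey
        secretKeySecret:
          name: mlpipeline-minio-artifact       # assumed Secret holding the secret key
          key: secretkey
```

Whether such a cluster-wide default is appropriate is exactly what the rest of this thread debates, since every Argo workflow in the cluster would pick it up.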
I don't think assigning mlpipeline as the default for Argo is a good option, given that Argo is not pipeline-specific. Kubeflow users could use Argo for various purposes.
Also, if you make further changes to the ksonnet config, make sure to do them in the Kubeflow ks registry first:
https://github.com/kubeflow/kubeflow/tree/master/kubeflow
before making any change to our repo. The ks registry in this repo is only used by the bootstrapper image, which is not officially supported anymore.
What if we only add this if there is no existing default argo artifact repo?
LOL. We've been debating whether to add this config for Pipelines only, while someone just went and added this ConfigMap for all Argo users: kubeflow/kubeflow#2238
Yes, all Kubeflow Argo installations since January have had a default artifact store that is slightly pipeline-specific =)
We can make the names more generic and only install the default repo if there is no existing one.
The MinIO instance is pipeline-specific. The name of the artifact store can be anything, but the endpoint will point to the pipeline-specific MinIO instance. Can't we inject the artifact manifest somewhere else, e.g. the backend?
We install both the Argo instance and the MinIO instance. Shouldn't our Argo instance point to our MinIO instance?
The Argo instance is not pipeline-specific, unfortunately.
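To make the coupling concrete: regardless of how the bucket or repository is named, the `endpoint` in the artifact repository config resolves to the MinIO Service that Pipelines deploys. A hedged sketch, with the Service name and namespace as assumptions:

```yaml
# Illustrative MinIO Service; the artifactRepository endpoint would be its
# in-cluster DNS name, e.g. endpoint: minio-service.kubeflow:9000
apiVersion: v1
kind: Service
metadata:
  name: minio-service   # assumed name
  namespace: kubeflow   # assumed namespace
spec:
  selector:
    app: minio
  ports:
    - port: 9000
      targetPort: 9000
```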
This is still relevant.
@Ark-kun: The following test failed, say /retest to rerun it.
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
As discussed offline, Argo will support a per-job ConfigMap in v2.4 or later.
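For illustration, per-workflow repository selection is exposed in later Argo releases through an `artifactRepositoryRef` field on the Workflow spec, which points at a ConfigMap entry instead of the controller-wide default. A hedged sketch, with the ConfigMap name and key as assumptions:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: artifact-ref-example-
spec:
  entrypoint: main
  # Use the repository defined under this ConfigMap key rather than the
  # controller-wide default; both names below are illustrative.
  artifactRepositoryRef:
    configMap: artifact-repositories
    key: pipelines-minio
  templates:
    - name: main
      container:
        image: alpine:3.9
        command: [echo, "hello"]
```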
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull request has been approved by: IronPan
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Closing this issue since the same change was implemented in kubeflow/kubeflow#2238, which got picked up in our…
…(kubeflow#513)
* Auto-deployed clusters are no longer recycling names; instead, each auto-deployed cluster will have a unique name.
* Use regexes to identify the appropriate auto-deployed cluster.
* Only consider clusters with a minimum age; this is a hack to ensure clusters are properly set up.
* Related to: kubeflow/testing#444
* KFP 1.5.0-rc.0 rebase
* Resolve backend and API conflicts
* Resolve UI conflicts
* Apply changes from KFP 1.5.0-rc.1
* Resolve backend and API conflicts from RC1
* Resolve UI conflicts from RC1
* Apply changes from KFP 1.5.0-rc.2
* Resolve backend conflicts from RC2
* Build SDK based on kfp RC2 from GitHub instead of PyPI
* Regenerate the unit tests' golden YAML files
This is part of the work to clean up the explicit artifact location specs from compiled workflows.
Right now the workflows are not portable; even the namespace and service name:port are hard-coded:
Before:
After:
This change is