
Configuring the default artifact repository in argo.libsonnet #513

Conversation

@Ark-kun (Contributor) commented Dec 11, 2018

This is part of the work to clean up the explicit artifact location specs in compiled workflows.

Right now the workflows are not portable: even the namespace and the service name:port are hard-coded.

Before:

  - container:
      args:
      - echo exit!
      command:
      - sh
      - -c
      image: python:3.5-jessie
    name: exiting
    outputs:
      artifacts:
      - name: mlpipeline-ui-metadata
        path: /mlpipeline-ui-metadata.json
        s3:
          accessKeySecret:
            key: accesskey
            name: mlpipeline-minio-artifact
          bucket: mlpipeline
          endpoint: minio-service.kubeflow:9000
          insecure: true
          key: runs/{{workflow.uid}}/{{pod.name}}/mlpipeline-ui-metadata.tgz
          secretKeySecret:
            key: secretkey
            name: mlpipeline-minio-artifact
      - name: mlpipeline-metrics
        path: /mlpipeline-metrics.json
        s3:
          accessKeySecret:
            key: accesskey
            name: mlpipeline-minio-artifact
          bucket: mlpipeline
          endpoint: minio-service.kubeflow:9000
          insecure: true
          key: runs/{{workflow.uid}}/{{pod.name}}/mlpipeline-metrics.tgz
          secretKeySecret:
            key: secretkey
            name: mlpipeline-minio-artifact

After:

  - container:
      args:
      - echo exit!
      command:
      - sh
      - -c
      image: python:3.5-jessie
    name: exiting
    outputs:
      artifacts:
      - name: mlpipeline-ui-metadata
        path: /mlpipeline-ui-metadata.json
      - name: mlpipeline-metrics
        path: /mlpipeline-metrics.json
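
With the explicit s3 blocks gone, the artifact location has to come from the Argo controller's default artifact repository instead. A minimal sketch of what that could look like in the workflow-controller configmap, reusing the values that were hard-coded in the "Before" snippet above (the configmap name follows Argo's workflow-controller convention; treat the exact layout as version-dependent):

```yaml
# Sketch: default artifact repository for the Argo workflow controller.
# Values mirror the hard-coded ones from the "Before" snippet; the exact
# config keys depend on the Argo version in use.
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: kubeflow
data:
  config: |
    artifactRepository:
      s3:
        bucket: mlpipeline
        endpoint: minio-service.kubeflow:9000
        insecure: true
        accessKeySecret:
          name: mlpipeline-minio-artifact
          key: accesskey
        secretKeySecret:
          name: mlpipeline-minio-artifact
          key: secretkey
```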


@Ark-kun (Contributor, Author) commented Dec 11, 2018

/assign @IronPan

@IronPan (Member) commented Dec 11, 2018

I don't think assigning the default artifact store to something ml-pipeline-specific would be the best option here, since Argo is a shared component in the Kubeflow cluster, not pipeline-specific.

https://github.com/kubeflow/kubeflow/tree/master/kubeflow

  executorImage: argoproj/argoexec:v2.2.0
  artifactRepository:
    s3:
      bucket: mlpipeline

@IronPan (Member) commented Dec 11, 2018 (review comment)

I don't think assigning mlpipeline as the default for Argo is a good option, given that Argo is not pipeline-specific. Kubeflow users could use Argo for various purposes.

Also, if you make further changes to the ksonnet config, make sure to do it in the Kubeflow ks registry first:
https://github.com/kubeflow/kubeflow/tree/master/kubeflow
before making any change to our repo. The ks registry here is only used by the bootstrapper image, which is not officially supported anymore.

@Ark-kun (Contributor, Author) replied:

What if we only add this if there is no existing default argo artifact repo?

@Ark-kun (Contributor, Author) replied:

LOL. We've been debating whether to add this config for Pipelines only, while someone just went and added this configMap for all Argo users: kubeflow/kubeflow#2238

Yes, all Kubeflow Argo installations since January have had default artifact storage that is slightly pipeline-specific =)

@Ark-kun (Contributor, Author) commented Dec 11, 2018

We can make the names more generic and only install the default repo if there is no existing one.
What do you think?
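
The "only install the default repo if there is no existing one" rule proposed here can be sketched as a small merge over the parsed controller config. This is illustrative only: `with_default_artifact_repository` is a hypothetical helper, and only the `artifactRepository` key comes from Argo's controller config; the rest is assumed for the example.

```python
# Sketch of the proposed merge rule: install a default artifact repository
# only when the existing controller config does not already define one.
# "artifactRepository" follows the Argo controller config key; the helper
# name and surrounding structure are illustrative assumptions.

def with_default_artifact_repository(config: dict, default_repo: dict) -> dict:
    """Return a copy of config with default_repo set only if none exists."""
    merged = dict(config)
    if not merged.get("artifactRepository"):
        merged["artifactRepository"] = default_repo
    return merged

minio_repo = {"s3": {"bucket": "mlpipeline",
                     "endpoint": "minio-service.kubeflow:9000"}}

# An empty config receives the default.
print(with_default_artifact_repository({}, minio_repo)
      ["artifactRepository"]["s3"]["bucket"])   # mlpipeline

# A user-provided repository is left untouched.
existing = {"artifactRepository": {"s3": {"bucket": "user-bucket"}}}
print(with_default_artifact_repository(existing, minio_repo)
      ["artifactRepository"]["s3"]["bucket"])   # user-bucket
```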

@IronPan (Member) commented Dec 11, 2018

The Minio instance is pipeline-specific. The name of the artifact store can be anything, but the endpoint will point to the pipeline-specific Minio instance. Can't we inject the artifact manifest somewhere else, e.g. the backend?

@Ark-kun (Contributor, Author) commented Dec 13, 2018

> The Minio instance is pipeline-specific. The name of the artifact store can be anything, but the endpoint will point to the pipeline-specific Minio instance.

We install both Argo instance and Minio instance. Shouldn't our Argo instance point to our Minio instance?

@IronPan (Member) commented Dec 21, 2018

The Argo instance is not pipeline-specific, unfortunately.

@Ark-kun (Contributor, Author) commented Apr 30, 2019

This is still relevant.

@Ark-kun Ark-kun reopened this Apr 30, 2019

@k8s-ci-robot (Contributor) commented:

@Ark-kun: The following test failed; say /retest to rerun them all:

Test name: kubeflow-pipeline-e2e-test
Commit: b2d4281
Rerun command: /test kubeflow-pipeline-e2e-test

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

@IronPan (Member) commented May 7, 2019

As discussed offline, Argo will support a per-job configmap in v2.4 or later.
Before that becomes available, it's OK to check in this change as a temporary solution, since we don't expect people to use Argo extensively outside of Pipelines in the near future.
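
For context on the per-job option mentioned above: later Argo releases expose a per-workflow repository reference via `artifactRepositoryRef`. A hedged sketch of the shape (the configmap name and key here are illustrative, and the exact release that introduced the field should be checked against Argo's documentation):

```yaml
# Sketch: a workflow pointing at its own artifact-repository configmap.
# artifactRepositoryRef landed in later Argo releases; the configmap
# name and key below are illustrative, not from this PR.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: my-pipeline-
spec:
  artifactRepositoryRef:
    configMap: my-artifact-repository   # configmap in the workflow's namespace
    key: default-repository             # key holding the repository definition
  entrypoint: main
```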

@IronPan (Member) commented May 7, 2019

/lgtm
/approve

@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: IronPan

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


@Ark-kun (Contributor, Author) commented May 14, 2019

Closing this issue since the same change was implemented in kubeflow/kubeflow#2238, which got picked up in our v0.1.9 release.

@Ark-kun Ark-kun closed this May 14, 2019
@Ark-kun Ark-kun deleted the Deployment---Default-artifact-repository branch May 14, 2019 00:15
Linchin pushed a commit to Linchin/pipelines that referenced this pull request Apr 11, 2023
…beflow#513)

* Auto deployed clusters are no longer recycling names; instead each
  auto deployed cluster will have a unique name

* Use regexes to identify the appropriate auto deployed cluster

* Only consider clusters with a minimum age; this is a hack to ensure
  clusters are properly setup.

* Related to: kubeflow/testing#444
HumairAK pushed a commit to red-hat-data-services/data-science-pipelines that referenced this pull request Mar 11, 2024
* KFP 1.5.0-rc.0 Rebase

* Resolve backend and API conflicts

* Resolve UI conflicts

* Apply changes from KFP 1.5.0-rc.1

* Resolve backend and API conflicts from RC1

* Resolve UI conflicts from RC1

* Apply changes from KFP 1.5.0-rc.2

* Resolve backend conflicts from RC2

* Build SDK based on kfp RC2 from Github instead of PyPI

* Regenerate unittest's Golden YAML files
4 participants