Support for non-docker based deployments #1654
Do you think it would be possible to support non-Docker-based clusters as well? I'm currently checking out the examples and see that they want to mount docker.sock into the container. We might achieve the same results using crictl. WDYT?

Comments
AFAIK, you can configure Argo to use other executors (e.g. k8sapi, kubelet, or pns) in the configmap: https://github.com/argoproj/argo/blob/ca1d5e671519aaa9f38f5f2564eb70c138fadda7/docs/workflow-controller-configmap.yaml#L78. Then pipelines should just work.
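For example, the executor is selected by a single top-level key of that config (a minimal sketch based on the linked file; pns shown here, but any documented value works):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: kubeflow
data:
  config: |
    # One of: docker (default), k8sapi, kubelet, pns
    containerRuntimeExecutor: pns
```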
Thanks for the help. I edited the configmap and also restarted the workflow controller pod (which seems not to be necessary). The config looks like this now:

```yaml
apiVersion: v1
data:
  config: |
    {
      executorImage: argoproj/argoexec:v2.3.0,
      artifactRepository:
      {
        s3: {
          bucket: mlpipeline,
          keyPrefix: artifacts,
          endpoint: minio-service.kubeflow:9000,
          insecure: true,
          accessKeySecret: {
            name: mlpipeline-minio-artifact,
            key: accesskey
          },
          secretKeySecret: {
            name: mlpipeline-minio-artifact,
            key: secretkey
          }
        },
        containerRuntimeExecutor: k8sapi
      }
    }
kind: ConfigMap
metadata:
  creationTimestamp: "2019-07-22T13:56:32Z"
  labels:
    kustomize.component: argo
  name: workflow-controller-configmap
  namespace: kubeflow
  resourceVersion: "1181725"
  selfLink: /api/v1/namespaces/kubeflow/configmaps/workflow-controller-configmap
  uid: 3144d234-101f-4031-94ce-b1aa258bfafd
```

I also tried
The cluster runs on top of Kubernetes 1.15, with CRI-O 1.15 as the container runtime. Is there anything else I can try?
Your `containerRuntimeExecutor` is nested inside `artifactRepository`; it needs to be a top-level key of the config.
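That is, something like this (the same config with only that key moved; a sketch, not tested):

```yaml
config: |
  {
    executorImage: argoproj/argoexec:v2.3.0,
    containerRuntimeExecutor: k8sapi,
    artifactRepository:
    {
      s3: {
        bucket: mlpipeline,
        keyPrefix: artifacts,
        endpoint: minio-service.kubeflow:9000,
        insecure: true,
        accessKeySecret: { name: mlpipeline-minio-artifact, key: accesskey },
        secretKeySecret: { name: mlpipeline-minio-artifact, key: secretkey }
      }
    }
  }
```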
Ah, thanks for the hint 🤦♂️. Now I'm encountering a different set of issues when running the example pipelines. With

With
You should probably look at the workflow controller logs and the wait container logs.

This is inconvenient, but can you try to satisfy that requirement? Mount an emptyDir volume (see pipelines/sdk/python/kfp/gcp.py, lines 44 to 58 at a4813ff).
Okay, if I run the

Whereas the exit handler logs contain:

Pod 1

Pod 2

Pod 3

Unfortunately, I can't find anything helpful in there. Do you? 🤔
Hm, I tried to create my own pipeline, but the big question is where to mount that emptyDir. For now I have something like this, which causes the same issue as mentioned:

```python
#!/usr/bin/env python3
import kfp
from kfp import dsl


def echo_op(text):
    return dsl.ContainerOp(name='echo',
                           image='library/bash:4.4.23',
                           command=['sh', '-c'],
                           arguments=['echo "$0"', text])


@dsl.pipeline(name='My pipeline', description='')
def pipeline():
    from kubernetes import client as k8s_client
    echo_task = echo_op('Hello world').add_volume(
        k8s_client.V1Volume(
            name='volume',
            empty_dir=k8s_client.V1EmptyDirVolumeSource())).add_volume_mount(
                k8s_client.V1VolumeMount(name='volume', mount_path='/output'))


if __name__ == '__main__':
    # An output package path is required by Compiler.compile().
    kfp.compiler.Compiler().compile(pipeline, 'pipeline.yaml')
```
It should have been mounted to the folder where you're storing the outputs you produce. But in the last example you're not producing any, so there should have been no issues. Ah, I forgot about the auto-added artifacts (#1422). Can you try the following two things:

Here we override the paths for the auto-added output artifacts so that they're stored under the mounted directory.
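Put together, that could look something like this on the echo pipeline from above (a sketch; the /tmp/outputs path and the "outputs" volume name are just illustrative choices):

```python
#!/usr/bin/env python3
import kfp
from kfp import dsl
from kubernetes import client as k8s_client

OUT_DIR = '/tmp/outputs'  # illustrative; any path covered by the mount works


def echo_op(text):
    return dsl.ContainerOp(
        name='echo',
        image='library/bash:4.4.23',
        command=['sh', '-c'],
        arguments=['echo "$0"', text],
        # (1) Redirect the auto-added artifacts into the mounted directory.
        output_artifact_paths={
            'mlpipeline-ui-metadata': OUT_DIR + '/mlpipeline-ui-metadata.json',
            'mlpipeline-metrics': OUT_DIR + '/mlpipeline-metrics.json',
        })


@dsl.pipeline(name='My pipeline', description='')
def pipeline():
    echo_task = echo_op('Hello world')
    # (2) Mount an emptyDir over that directory so the k8sapi/pns executor
    # can collect the artifacts without relying on docker cp.
    echo_task.add_volume(
        k8s_client.V1Volume(name='outputs',
                            empty_dir=k8s_client.V1EmptyDirVolumeSource()))
    echo_task.container.add_volume_mount(
        k8s_client.V1VolumeMount(name='outputs', mount_path=OUT_DIR))


if __name__ == '__main__':
    kfp.compiler.Compiler().compile(pipeline, 'pipeline.yaml')
```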
So I applied the example via

Alright, this seems to work now; the pipeline succeeds.
I've looked at the Argo source code. Maybe you do not even need the emptyDir volume.

Tracking the original issue
Hm, no, then I get this error message:
Hmm. Maybe it would work if the paths are in an existing base image dir like

Yes, I tried with
I also ran into this issue, and the above fix worked for me for the regular case:

```python
from kubernetes import client as k8s_client

op.add_volume(k8s_client.V1Volume(name='outputs',
                                  empty_dir=k8s_client.V1EmptyDirVolumeSource()))
op.container.add_volume_mount(
    k8s_client.V1VolumeMount(name='outputs', mount_path='/tmp/outputs'))
```
Hey, I now run into similar issues. How can I make it work with func_to_container_op?

```python
from os import path
from typing import Dict

from kubernetes import client as k8s

from kfp.components import func_to_container_op

OUT_DIR = '/tmp/outputs'
METADATA_FILE = 'mlpipeline-ui-metadata.json'
METRICS_FILE = 'mlpipeline-metrics.json'
METADATA_FILE_PATH = path.join(OUT_DIR, METADATA_FILE)
METRICS_FILE_PATH = path.join(OUT_DIR, METRICS_FILE)
BASE_IMAGE = 'my-image:latest'


def default_artifact_path() -> Dict[str, str]:
    return {
        path.splitext(METADATA_FILE)[0]: METADATA_FILE_PATH,
        path.splitext(METRICS_FILE)[0]: METRICS_FILE_PATH,
    }


def storage_op(func, *args):
    op = func_to_container_op(func, base_image=BASE_IMAGE)(*args)
    op.output_artifact_paths = default_artifact_path()  # I'm not able to overwrite the artifact path here
    op.add_volume(k8s.V1Volume(name='outputs',
                               empty_dir=k8s.V1EmptyDirVolumeSource()))\
      .add_volume_mount(k8s.V1VolumeMount(name='outputs', mount_path=OUT_DIR))
    return op
```
Good news: the 'mlpipeline-*' artifacts are no longer automatically added to every single pipeline task. (There are still some components that explicitly produce those.)

Side news: all outputs now produce artifacts. We need to investigate how to make Argo copy the artifacts when using PNS. They should be supporting this; otherwise it's a bug. I need to check the exact criteria for the "emptyDir" error.

BTW, what would be the easiest way to set up a temporary Docker-less Linux environment?
Sounds good, thanks for the update. I guess an easy way would be to use kubeadm with a natively supported distribution, like Ubuntu 18.04. Then you could use the Project Atomic PPA to install CRI-O and bootstrap the node, selecting the CRI-O socket.
@Ark-kun: As for setting up a Docker-less environment: I ran into this issue while using microk8s, which uses containerd.
/reopen

We need to stabilize the PNS executor in preparation for the next release.
@Bobgy: Reopened this issue.
For the next release you should update to Argo 3.1 and use the emissary executor, which works everywhere rootless: https://argoproj.github.io/argo-workflows/workflow-executors/. I already tested it successfully with Kubeflow 1.2.
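Assuming the same workflow-controller-configmap as earlier in this thread, the switch would be a one-line change (a sketch; emissary requires Argo >= 3.1):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: kubeflow
data:
  config: |
    containerRuntimeExecutor: emissary
```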
Please upvote #5718 if you want a proper solution to this bug.
Documentation: https://www.kubeflow.org/docs/components/pipelines/installation/choose-executor/

We are now recommending the emissary executor (Alpha, released in KFP 1.7.0). Feedback welcome!
How would I do 2. for a functional component? |
@zacharymostowsky the instructions you read are outdated. #1654 (comment) has our current recommendations.