Example is trying to mount hostPath for docker in docker #561
Comments
The docker socket is mounted by Argo so it can use "docker cp" to copy the artifact out from a container. I think blocking hostPath is the default behavior for OpenShift; the user needs to relax the security constraint explicitly: https://docs.okd.io/latest/admin_guide/manage_scc.html#use-the-hostpath-volume-plugin
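For context, the mount that trips OpenShift's security constraints looks roughly like this. This is an illustrative sketch of the hostPath volume Argo's docker executor injects into the workflow pod's wait sidecar; the volume and container names here are representative, not the exact generated spec.

```yaml
# Sketch (assumed names, not the exact Argo-generated spec): the docker
# executor mounts the node's docker socket into the wait sidecar so it
# can run `docker cp` against the main container.
volumes:
  - name: docker-sock
    hostPath:
      path: /var/run/docker.sock
      type: Socket
containers:
  - name: wait
    volumeMounts:
      - name: docker-sock
        mountPath: /var/run/docker.sock
        readOnly: true
```

It is this `hostPath` stanza that OpenShift's default SCCs, and hostPath-less environments like Fargate, reject.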
Thanks @hongye-sun. Do pipelines depend on this behavior of copying the artifact out using docker cp? Could pipelines instead just use a volume (e.g. emptyDir) to share data between containers?
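The emptyDir alternative suggested above can be sketched as follows; the pod, container names, and paths are hypothetical, purely to show the pattern of two containers sharing a scratch volume without touching the host.

```yaml
# Minimal sketch (hypothetical names/paths): the main container writes its
# output to an emptyDir volume, and a sidecar reads it back. No hostPath
# and no docker socket are required.
apiVersion: v1
kind: Pod
metadata:
  name: share-outputs
spec:
  volumes:
    - name: outputs
      emptyDir: {}
  containers:
    - name: main
      image: alpine
      command: ["sh", "-c", "echo result > /outputs/data.txt && sleep 30"]
      volumeMounts:
        - name: outputs
          mountPath: /outputs
    - name: collector
      image: alpine
      command: ["sh", "-c", "sleep 5 && cat /outputs/data.txt && sleep 30"]
      volumeMounts:
        - name: outputs
          mountPath: /outputs
```

Because emptyDir lives in the pod's own lifecycle rather than on the node filesystem, it works on any runtime (containerd, CRI-O) and on restricted platforms.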
Yes, we rely heavily on this behavior to get component outputs and upload pipeline artifacts. Currently, Argo doesn't support other ways to copy file content from the main container. We might consider using the k8s API to copy the file content by implementing the copy methods in Argo's k8s API executor, but that requires non-trivial work. Does this only affect OpenShift? From a web search, I don't see that other providers (AWS and Azure) have similar issues. /cc @Ark-kun
This is a more relevant bug in Argo: argoproj/argo-workflows#970
This also breaks all workflows that should be executed on a k8s cluster that doesn't use Docker. My current use case is running Argo inside k3s, which uses containerd as the pod executor.
We've now upgraded to Argo 2.3. AFAIK there are many improvements to the different executors. Let's check whether switching the executor fixes the problem.
I'm running Kubeflow v0.6.2. Pipelines are still trying to mount hostPath:
What Kubernetes environment do you use? Does this Argo sample work for you? https://github.com/argoproj/argo/blob/master/examples/artifact-passing.yaml If you're using a Docker-less environment, the first step would be to change the Argo workflow controller configuration to a non-Docker executor. See this thread: #1654
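Switching to a non-Docker executor is done through the workflow controller's ConfigMap. A minimal sketch, assuming the controller runs in the `kubeflow` namespace with the standard ConfigMap name; in Argo 2.x the `containerRuntimeExecutor` key accepts `docker`, `kubelet`, `k8sapi`, or `pns`.

```yaml
# Sketch: point Argo at the PNS (process namespace sharing) executor so
# workflow pods no longer need the docker socket. Namespace and ConfigMap
# name assumed to match a default Kubeflow install.
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: kubeflow
data:
  containerRuntimeExecutor: pns
```

After applying this, the workflow controller must be restarted for new workflows to pick up the executor change.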
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
This issue has been automatically closed because it has not had recent activity. Please comment "/reopen" to reopen it.
Hi @Ark-kun, I just had a look at this and the referenced Argo issue. Is my assumption correct that this ticket is not solved yet? We are currently deploying KFP 1.0 and it seems that
We are using k8s 1.14 with Docker.
Thanks in advance!
/reopen
@Jeffwan: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The Argo version in v1.1 still has this issue. This blocks a use case on EKS: we cannot deploy Kubeflow Pipelines on EKS Fargate, since Fargate doesn't support hostPath yet.
I am running a local cluster using kind and getting the same error. Here is what I get when I describe my pod using
|
I think I've run into this issue as well with Kubeflow 1.2 on Kubernetes 1.20 using containerd. Considering the announced deprecation of the dockershim, I think it might be a good idea to switch the on-prem kdef to use
…be updated. (kubeflow#561)
* update_kf_apps.py should create PipelineRuns for images that need to be updated.
* Determine whether an image is already up to date by comparing the desired image to the image listed in the manifest.
* If the image needs to be updated, create the PipelineRun to update the image.
* Related to kubeflow#450
* Remove commented out code.
* Address comments.
User reported this problem in this thread.
https://groups.google.com/forum/#!topic/kubeflow-discuss/5Y_7lhoQLIo
Example is failing because it is trying to mount the docker socket via hostPath.
They are running this example:
https://github.com/kubeflow/pipelines/blob/master/samples/notebooks/Lightweight%20Python%20components%20-%20basics.ipynb
The pod spec is below. The spec shows that it is trying to mount the docker socket. I'm guessing this is for docker in docker to build containers.
I'm not sure where this is coming from. The example in the notebook isn't explicitly building containers, so I'm not sure why it would need docker in docker.
Are Kubeflow pipelines always doing docker in docker?