Volumes Instead of Sidecars for the Artifact Repository #1024
Comments
/cc @jlewi
If I understand your question correctly: the sidecar ensures that the specified files are stored to a specific location in the artifact repository, and that specific files are fetched to a specific location in the container. Without a sidecar this could not be done through configuration alone; it would be up to each step's logic. It would also be possible for steps to modify or delete another step's artifacts, which removes what I believe to be a key feature of any workflow/pipeline manager: data provenance.
The inputs volume can be mounted in read-only mode.
You can mount any subpath of the inputs/outputs volume to any container location.
Ideally, containers should only use paths received from command-line arguments.
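As a concrete illustration of the points above, here is a hedged sketch of a pod spec that mounts one shared volume twice: the previous step's outputs read-only at one subPath, and this step's outputs writable at another. All names (`workflow-data`, `step-1-outputs`, etc.) are illustrative, not part of any Argo convention.

```yaml
# Hypothetical sketch: one shared volume, inputs mounted read-only,
# outputs mounted writable, each at its own subPath.
apiVersion: v1
kind: Pod
metadata:
  name: step-2
spec:
  containers:
  - name: main
    image: alpine:3
    # The container only uses paths passed as arguments:
    args: ["cp", "/in/data.txt", "/out/result.txt"]
    volumeMounts:
    - name: workflow-data
      mountPath: /in
      subPath: step-1-outputs
      readOnly: true          # protects step 1's artifacts (provenance)
    - name: workflow-data
      mountPath: /out
      subPath: step-2-outputs
  volumes:
  - name: workflow-data
    persistentVolumeClaim:
      claimName: workflow-data   # assumed pre-existing PVC
```

The `readOnly: true` mount is what prevents a later step from modifying or deleting an earlier step's artifacts.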
Let me reformulate a bit. Instead of the artifact repository being GCS/S3/MinioServer, would it be possible to have an option to store the data in a volume? Given the large number of volume implementations (NFS, GCP Cloud Filer, etc.), it seems that this would support a large number of use cases beyond object stores.
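For what it's worth, data can already be passed between steps through a shared volume rather than the artifact repository, using `volumeClaimTemplates` in the workflow spec. A minimal sketch (image and path names are illustrative):

```yaml
# Hedged sketch: passing data between steps through a shared volume
# instead of uploading/downloading via the artifact repository.
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: volume-passing-
spec:
  entrypoint: main
  volumeClaimTemplates:         # PVC created per workflow run
  - metadata:
      name: workdir
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
  templates:
  - name: main
    steps:
    - - name: produce
        template: produce
    - - name: consume
        template: consume
  - name: produce
    container:
      image: alpine:3
      command: [sh, -c]
      args: ["echo hello > /work/out.txt"]
      volumeMounts:
      - name: workdir
        mountPath: /work
  - name: consume
    container:
      image: alpine:3
      command: [sh, -c]
      args: ["cat /work/out.txt"]
      volumeMounts:
      - name: workdir
        mountPath: /work
```

The trade-off against sidecar-managed artifacts is that the volume approach gives up the declarative input/output tracking (and with it, provenance) discussed above.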
FEATURE REQUEST: Volumes Instead of Sidecars to upload/download data to the default Artifact Repository
Hi, I was wondering why Argo decided to use a sidecar to download/upload data to GCS/S3/etc when using the Default Artifact Repository.
Did we consider using the Volume abstraction in Kubernetes? It looks like there are volume types for many kinds of storage, which would make it easy to add a new storage backend for the Default Artifact Repository by implementing a new kind of volume.
https://ai.intel.com/kubernetes-volume-controller-kvc-data-management-tailored-for-machine-learning-workloads-in-kubernetes/
https://kubernetes.io/docs/concepts/storage/volumes/
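For context on what the sidecar approach looks like today: the default artifact repository is configured in the workflow controller's ConfigMap, and the executor sidecar handles uploads/downloads against it. A hedged sketch with illustrative bucket and endpoint values:

```yaml
# Sketch of configuring the default artifact repository (S3-compatible
# store, handled by the executor sidecar). Values are illustrative.
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  config: |
    artifactRepository:
      s3:
        bucket: my-bucket
        endpoint: s3.amazonaws.com
```

A volume-backed option would presumably slot in here as an alternative to the `s3` block, which is the substance of this feature request.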