mount "data volumes" from "generic nodes defaults" on KFP components #2760
Labels: component:pipeline-editor, component:pipeline-runtime, kind:enhancement
Background
Currently, we support mounting data volumes on "generic nodes" when running in a Kubeflow Pipeline.
I think we should also mount these volumes on the "KFP component nodes".
This would allow data to be exchanged between "generic" and "KFP" components through the mounted volumes (at least partially solving #1765), and would also facilitate data exchange between "KFP" components themselves.
Implementation
For "generic nodes" we currently pass
volume_mounts
to theExecuteFileOp
constructor, which are looped-over and added withself.add_volume()
.(NOTE:
self.add_volume()
works onExecuteFileOp
because it extends fromkfp.dsl.ContainerOp
)Similarly, we can add these volumes to the
ContainerOps
generated from KFP component definitions by looping over the list withadd_volume()
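A minimal sketch of that approach, assuming the KFP v1 `ContainerOp` API; the component file name, PVC claim name, and mount path are hypothetical placeholders, not values from Elyra's code:

```python
from kfp import components, dsl
from kubernetes.client import (
    V1PersistentVolumeClaimVolumeSource,
    V1Volume,
    V1VolumeMount,
)

@dsl.pipeline(name="volume-mount-sketch")
def pipeline():
    # Volume backed by an existing PVC (hypothetical claim name).
    volume = V1Volume(
        name="workspace",
        persistent_volume_claim=V1PersistentVolumeClaimVolumeSource(
            claim_name="workspace-pvc"
        ),
    )

    # ContainerOp produced from a KFP component definition (hypothetical file).
    component_factory = components.load_component_from_file("my_component.yaml")
    component_op = component_factory()

    # Same pattern ExecuteFileOp uses for generic nodes: loop over the
    # requested mounts and attach each one to the op.
    volume_mounts = [("/mnt/workspace", volume)]
    for mount_path, vol in volume_mounts:
        component_op.add_volume(vol)
        component_op.container.add_volume_mount(
            V1VolumeMount(name=vol.name, mount_path=mount_path)
        )
```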
Considerations
`container.add_env_variable()`
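For illustration only, a hedged sketch of the `container.add_env_variable()` call referenced above, applied to the same `ContainerOp` as in the previous snippet (the variable name and value are made up):

```python
from kubernetes.client import V1EnvVar

# Inside the pipeline function above: expose the mount location to the
# component through an environment variable (name/value are hypothetical).
component_op.container.add_env_variable(
    V1EnvVar(name="DATA_DIR", value="/mnt/workspace")
)
```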