[WIP] Mount another image's filesystem to a container #322
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jwforres. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing /approve in a comment.
@mrunalp this is the proposal we chatted about, if there are any gaps / better detail that you want to fill in.
I will make a pass tomorrow. Thanks!
## Open Questions [optional]
1. Should it be possible to swap out an image mount while a container is running, as we do for ConfigMaps and Secrets when their data changes? For example: my large file changed, I now have a new image available, and I want to hot swap that file. Unlike a ConfigMap or Secret, whose reference doesn't change on the container spec, making this possible for image mounts would require the container to reference a new image pullspec.
I think this depends on how we envision exposing this in a pod spec and the trigger that we can use to update the mount for the image. I think we want something like this:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-image-volume
spec:
  containers:
  - image: quay.io/fedora:32
    name: test-container
    volumeMounts:
    - mountPath: /data
      name: test-volume
  volumes:
  - name: test-volume
    imagePath:
      image: quay.io/mydata:1.3.0
      # optional field that specifies what subpath to mount.
      subPath: /image/subpath/to/mount
```
There could be a controller that periodically watches the image and then updates it as needed. What kind of latency will be acceptable for an update for our use cases?
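To make the hot-swap idea concrete, a minimal sketch (assuming the hypothetical `imagePath` volume type above, and a hypothetical new tag `1.4.0`): swapping the mounted content would amount to the controller, or the user, updating the volume to reference a new pullspec, which the kubelet would then have to pull and re-mount:

```yaml
volumes:
- name: test-volume
  imagePath:
    # updated pullspec; the kubelet (or a watching controller) would need to
    # notice this change, pull the new image, and swap the mount in place
    image: quay.io/mydata:1.4.0
    subPath: /image/subpath/to/mount
```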
## Motivation
There are many situations where it is beneficial to ship the main runtime image separately from a large binary file that the application will use at runtime. Putting this large binary inside another image makes it easy to use existing image pull/push semantics to move the content around. This pattern is used frequently, but to make the content available to the runtime image it must currently be copied from an initContainer into the shared filesystem of the Pod. For very large files this copy adds a significant startup cost, and it requires needlessly running the image containing the binary content for the sole purpose of moving the data.
Could we add a more concrete example, please? :)
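For illustration, the existing workaround described in the motivation looks roughly like this (a minimal sketch with hypothetical image, path, and file names, using an `emptyDir` shared volume):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: copy-from-init-container
spec:
  initContainers:
  - name: copy-data
    image: quay.io/mydata:1.3.0      # hypothetical image that only carries the large file
    command: ["cp", "/data/model.bin", "/shared/model.bin"]
    volumeMounts:
    - name: shared
      mountPath: /shared
  containers:
  - name: app
    image: quay.io/myapp:latest      # hypothetical runtime image
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    emptyDir: {}                     # the whole file is copied here at startup
```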
For the CSI driver there is some previous work in this space that it may be possible to build on: https://github.com/kubernetes-csi/csi-driver-image-populator
The CSI driver must not pull images into the same image filesystem as the one the kubelet uses; otherwise the image will be garbage collected by the kubelet even though its filesystem is in use by a container.
We could instantiate a separate image store, but then we would probably need some controller to garbage-collect the images in that store. If we end up going down that path, it would be useful for the buildah build-cache use case as well.
cc: @nalind
### Implementation Details/Notes/Constraints [optional]
For the CSI driver there is some previous work in this space that it may be possible to build on: https://github.com/kubernetes-csi/csi-driver-image-populator
This is interesting.
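For reference, csi-driver-image-populator exposes an image's filesystem as an inline CSI volume; usage is roughly as follows (a sketch from memory, so the driver name and attribute keys should be checked against that repository):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: csi-image-volume
spec:
  containers:
  - name: app
    image: quay.io/myapp:latest        # hypothetical runtime image
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    csi:
      driver: image.csi.k8s.io         # driver name used by csi-driver-image-populator
      volumeAttributes:
        image: quay.io/mydata:1.3.0    # image whose filesystem is mounted at /data
```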
Issues go stale after 90d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle stale. If this issue is safe to close now please do so with /close. /lifecycle stale
Stale issues rot after 30d of inactivity. Mark the issue as fresh by commenting /remove-lifecycle rotten. If this issue is safe to close now please do so with /close. /lifecycle rotten
Rotten issues close after 30d of inactivity. Reopen the issue by commenting /reopen. /close
@openshift-bot: Closed this PR.