
Add WCOW sandbox mount support #1087

Merged
merged 2 commits on Sep 24, 2021

Conversation

dcantah
Contributor

@dcantah dcantah commented Jul 28, 2021

This change adds sandbox-mount-like support for WCOW. Sandbox mounts were our LCOW
solution for something similar to a k8s emptyDir mount. We create a directory that houses
various subdirectories that a user can specify by supplying a sandbox:// prefix
for the host path of a mount. In the usual case the host path of a mount is in the context of the
physical host (e.g. I want C:\path on my machine mapped to C:\containerpath in my container),
but for sandbox mounts the host path is treated as relative to this directory we've
made in the VM. I've defined the root directory for these sandbox mounts as C:\SandboxMounts.

Example:

"mounts": [
    {
    "container_path": "C:\\test",
    "host_path": "sandbox:///test"
    }
]

The above will make a directory at C:\SandboxMounts\test in the Utility VM that will be mapped to
C:\test in the container. If another container in the same pod supplies the same mount, then
you end up "sharing" this directory with the other container, meaning you both
see anything placed or modified in this directory.
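
Mechanically, this boils down to prefix detection plus re-rooting the path under C:\SandboxMounts. A minimal sketch of that logic, with illustrative names (resolveSandboxMountPath and the constants are stand-ins, not hcsshim's actual identifiers):

package main

import (
	"fmt"
	"strings"
)

const (
	// Hypothetical names; the prefix and root-directory values come
	// from the description above.
	sandboxMountPrefix   = "sandbox://"
	wcowSandboxMountRoot = `C:\SandboxMounts`
)

// resolveSandboxMountPath re-roots a sandbox:// host path under the
// directory created in the utility VM; plain host paths pass through.
func resolveSandboxMountPath(hostPath string) (string, bool) {
	if !strings.HasPrefix(hostPath, sandboxMountPrefix) {
		return hostPath, false
	}
	// Trim the prefix and any leading slashes, then switch to Windows
	// separators, so "sandbox:///test" resolves to C:\SandboxMounts\test.
	rel := strings.TrimLeft(strings.TrimPrefix(hostPath, sandboxMountPrefix), "/")
	return wcowSandboxMountRoot + `\` + strings.ReplaceAll(rel, "/", `\`), true
}

func main() {
	resolved, _ := resolveSandboxMountPath("sandbox:///test")
	fmt.Println(resolved) // C:\SandboxMounts\test
}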

The backing storage for these mounts is the UVM's scratch disk, which currently is always 10 GB
(about 8.5 GB actually usable), as that's what's hardcoded in HCS for the call we use to generate the VHD.
For some reason the expand-VHD function from HCS doesn't work for the VM scratch disk, which needs
some investigation :(

Signed-off-by: Daniel Canter dcanter@microsoft.com

@dcantah dcantah requested a review from a team as a code owner July 28, 2021 09:07
@dcantah
Contributor Author

dcantah commented Jul 28, 2021

Should we set a certain mount type for wcow? We set the type to "bind" for Linux, but not sure if we want to use that for wcow. Right now I'm just checking the prefix instead.

@katiewasnothere
Contributor

> Should we set a certain mount type for wcow? We set the type to "bind" for Linux, but not sure if we want to use that for wcow. Right now I'm just checking the prefix instead.

You don't need this

@dcantah
Contributor Author

dcantah commented Jul 28, 2021

> > Should we set a certain mount type for wcow? We set the type to "bind" for Linux, but not sure if we want to use that for wcow. Right now I'm just checking the prefix instead.
>
> You don't need this

I know I don't need it, seeing as what I'm doing here works :P I was asking because for the other mount types we check whether the type is x, y, or z, but for these I'm checking whether the source has a prefix instead. I mainly wanted to know if anyone thought we might want to set a type like we do for physical-disk, virtual-disk, etc. to keep it more in line. Sounds like no though.
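
For illustration, here is roughly what the two detection styles being weighed look like side by side. The Mount struct and classify function are hypothetical; only the "physical-disk"/"virtual-disk" type strings and the prefix check come from this thread:

package main

import (
	"fmt"
	"strings"
)

// Mount is a hypothetical stand-in for the OCI mount fields involved.
type Mount struct {
	Type   string
	Source string
}

// classify contrasts the two approaches: other mount kinds are keyed
// off Type, while sandbox mounts are detected by a Source prefix.
func classify(m Mount) string {
	switch m.Type {
	case "physical-disk", "virtual-disk":
		return m.Type
	}
	if strings.HasPrefix(m.Source, "sandbox://") {
		return "sandbox"
	}
	return "bind"
}

func main() {
	fmt.Println(classify(Mount{Source: "sandbox:///test"})) // sandbox
}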

@dcantah dcantah force-pushed the sandboxmount-wcow branch from 8f03188 to 7bf7421 on July 28, 2021 21:26
@dcantah
Contributor Author

dcantah commented Sep 7, 2021

@katiewasnothere Added the test you recommended, but I'm going to wait for an answer on #1087 (comment) before addressing that. PTAL again when you get a chance.

@katiewasnothere katiewasnothere self-assigned this Sep 7, 2021
@dcantah
Contributor Author

dcantah commented Sep 21, 2021

@katiewasnothere @ambarve Here's the CRI PR kevpar/cri#1 to add a condition to check for these. Does this look alright to check in, if anyone can give it another look real quick?

@dcantah
Contributor Author

dcantah commented Sep 22, 2021

@katiewasnothere Updated the CRI PR as well (kevpar/cri#1), PTAL.

Contributor

@katiewasnothere katiewasnothere left a comment


lgtm

This change adds two cri-containerd tests for WCOW sandbox mounts.
One test verifies the general functionality by having two containers
supply the same sandbox://<path> mount and validating that each container
can see what's in the mount. The other test verifies that if we don't supply
the sandbox:// mount for another container in the pod, it doesn't have access
to the mount.

Signed-off-by: Daniel Canter <dcanter@microsoft.com>
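
For a sense of the shape of those tests, here is a hedged sketch of the shared-mount setup they describe; the types are illustrative stand-ins, not the actual cri-containerd test helpers:

package main

import "fmt"

// Mount and ContainerConfig loosely mirror the CRI fields the tests
// exercise; both are hypothetical for illustration.
type Mount struct {
	ContainerPath string
	HostPath      string
}

type ContainerConfig struct {
	Name   string
	Mounts []Mount
}

func main() {
	// Both containers supply the same sandbox:// mount, so each should
	// see files the other writes under C:\test (the first test).
	shared := Mount{ContainerPath: `C:\test`, HostPath: "sandbox:///test"}
	first := ContainerConfig{Name: "first", Mounts: []Mount{shared}}
	second := ContainerConfig{Name: "second", Mounts: []Mount{shared}}

	// A container that omits the mount must not see the directory's
	// contents, which is what the second test checks.
	third := ContainerConfig{Name: "third"}

	fmt.Println(first, second, third)
}
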
@dcantah
Copy link
Contributor Author

dcantah commented Sep 24, 2021

Squashed and rebased, thanks for the review!
