Adds support for Docker Desktop and rootless Docker #1331
Conversation
Force-pushed 2973ffa to f06f299
Force-pushed f06f299 to 215068b
After thinking about this for a bit, I've moved some things around. I'd appreciate any feedback, @freakboy3742, before I finish tests, etc.

tl;dr: Removed […]

Also, a failure mode I was thinking about, though probably not too important: since it is only reasonable for Briefcase to assess Docker's operating mode when it's actually going to use Docker, it would still be possible to roll out a template with an incompatible Dockerfile. This could happen in three situations I've considered: […]

#472 might be able to help with these, but I'm planning to leave them as "known issues" in this implementation... although maybe something should be done to detect this and alert the user.
This looks like it's on the right track to me - a couple of minor notes inline, but the general approach to the refactor makes sense.
src/briefcase/integrations/docker.py (outdated):

```python
containers. If the owning user is the host user, root should be used.
"""
write_test_filename = "container_write_test"
host_write_test_dir_path = Path.cwd()
```
Using a temp filename in a temp location would be preferable here - or, at the very least, using the `build` directory, so there's no chance of dirtying the project root.
Using the `build` directory probably won't be possible here, since this check can run before it exists.

I considered using a temporary file, but it presents a few questions of its own.

For starters, we can't use Python's `tempfile` to create the file, because the file needs to be created inside the container. However, we can use `tempfile.gettempdir()` as the host directory to mount into the container to perform the write test. First, though, this doesn't confirm that a Docker container will have write permissions inside the project....but that'll be problematic no matter what. Second, this still leaves the possibility of leaving a `root`-owned file in the user's tmp directory, and to avoid issues with a file persisting from a previous Briefcase run, we'd need to effectively recreate a method to manufacture a random, unique filename.

These aren't insurmountable, but they're why I went with this obvious approach. I can consider the temporary directory more.
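For what the "manufacture a random, unique filename" concern would involve, a minimal sketch (hypothetical helper, not part of the PR) might look like the following. Note that, as discussed above, the path is only *named* on the host; the file itself would have to be created by the container:

```python
import tempfile
import uuid
from pathlib import Path

def make_write_test_path() -> Path:
    """Build a unique host path for the container write test.

    The file is NOT created here -- it must be created from inside the
    container -- so we only manufacture a name unlikely to collide with
    leftovers from a previous run.
    """
    return Path(tempfile.gettempdir()) / f"briefcase_write_test_{uuid.uuid4().hex}"

path = make_write_test_path()
print(path.parent == Path(tempfile.gettempdir()))  # True
```

This sidesteps `tempfile.NamedTemporaryFile` (which would create the file on the host) while still avoiding collisions with stale `root`-owned files from earlier runs.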
FWIW, I think creating the build directory if it doesn't exist would be reasonable. I can't think of any edge case where we'd end up with a build directory in a weird location - worst case is we get a build directory that is empty because it's only used to create a temporary permission check file. All the other locations that use the build directory are set up as "create if doesn't exist"; this would just be one more usage of that pattern.
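The "create if it doesn't exist" pattern mentioned here is the usual `pathlib` idiom; a small sketch (hypothetical helper name) of how the build directory could be guaranteed before the permission check runs:

```python
import tempfile
from pathlib import Path

def ensure_build_dir(build_path: Path) -> Path:
    # Create the build directory (and any missing parents) if it doesn't
    # exist yet; a no-op if it already does.
    build_path.mkdir(parents=True, exist_ok=True)
    return build_path

with tempfile.TemporaryDirectory() as tmp:
    build = ensure_build_dir(Path(tmp) / "build" / "myapp")
    print(build.is_dir())  # True
```

Worst case, as noted, is an empty `build` directory whose only purpose was to host the temporary permission-check file.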
src/briefcase/integrations/docker.py (outdated):

```python
) from e

# if the file is not owned by `root`, then Docker is mapping usernames
self.is_userns_remap = 0 != self.tools.os.stat(host_write_test_file_path).st_uid
```
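Conceptually, the check is: have the container (as its in-container `root`) create a file in a bind-mounted host directory, then inspect the file's owner on the host. A simplified, hypothetical sketch of the two pieces (command construction and uid interpretation):

```python
from pathlib import Path

def docker_write_test_cmd(host_dir: Path, filename: str, image: str = "alpine") -> list[str]:
    # Have the container create a file inside a bind-mounted host directory.
    return [
        "docker", "run", "--rm",
        "--volume", f"{host_dir}:/host_write_test",
        image,
        "touch", f"/host_write_test/{filename}",
    ]

def is_userns_remap(host_file_uid: int) -> bool:
    # If the file the container created is NOT owned by root (uid 0) on
    # the host, Docker is remapping user IDs (Docker Desktop, rootless
    # Docker, or an Engine configured with userns-remap).
    return host_file_uid != 0
```

On a stock Docker Engine install, the file comes out owned by host `root` (uid 0); under any of the remapping modes it comes out owned by some other uid, which is exactly what the `st_uid != 0` test above detects.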
Two notes here:

- The naming here is unclear to me - `userns` is... user namespace?
- We're setting a stateful variable on the Docker instance, but there's no fallback - we're just assuming that `_determine_docker_mode()` has been invoked. Which it will be, because it's invoked right after the `Docker()` constructor... but something doesn't quite sit right about this.

I get the reason why this happens - the base Docker instance isn't bound to a particular image, but we need an image to evaluate whether Docker is running in rootless mode. I wonder if perhaps the Docker constructor should take the image as a "hint", which would mean that we can guarantee that every Docker instance will have `is_userns_remap` set.

Regardless - the docs for `verify_install` should probably also highlight that the image is a hint, not anything that will result in the base Docker tool being bound.
> the naming here is unclear to me - `userns` is... user namespace?

Yeah, sorry; I've renamed this variable 10 times. `userns_remap` is the name of the Docker daemon setting for this. I may yet consider a different name.

> We're setting a stateful variable on the Docker instance, but there's no fallback - we're just assuming that `_determine_docker_mode()` has been invoked.

I waffled on where to call `_determine_docker_mode()`. Putting it in `Docker`'s constructor works just as well for me. I'll clarify the docs as well.
Agreed that radio silence during startup isn't great, especially when startup takes time. I had mentally filed this lag as "CPython interpreter sometimes takes time to start up, especially when there's new byte code to compile", but if you've got evidence that there's a significant lag caused by the tool checks themselves, then I'm not opposed to addressing this. My preferred solution would be "make it faster" :-) - but given that some of the required checks Just Take Time, I think a tasteful "Checking tools..." waitbar might be called for.
I agree that using Alpine as a fallback is a reasonable approach. However, as I flagged in my review, I think we might be able to address this by treating the image as a "hint" - explicitly flagging it as something that isn't required and falls back to Alpine, but, if provided, any checks will use that image, as an optimization to avoid pulling unnecessary images. AFAICT, in practice, all uses of Docker will know the image they're going to use at the point they're instantiating Docker - an AppImage will be using the configured manylinux (or ubuntu:18.04 fallback); system packages will be using an explicitly provided image. I can't think of a situation where we need to configure Docker for use, but don't have an in-context image that will be used as soon as Docker has been configured.

Agreed that these edge cases all exist; however, as you say, short of addressing #472, I'm not sure there's much we can do about them. I'm comfortable putting these into "future work / #472" territory.
Unfortunately, this is only true for runs for Linux System, since it determines the […]. For Linux AppImage, it only ever actually derives the […]. Furthermore, though, we can't completely assume that […]. This is why I was defaulting to […].
Darn users... always messing up perfectly good plans... 😝
Given the alpine image is 10MB, I don't think this is worth extraordinary gymnastics. Unless using the "right" image falls out reliably and easily, I'm completely OK with an alpine fallback. Falling back to ubuntu:18.04 when the user actually has a custom base image would be much worse.
A fun twist: Arch's `makepkg` cannot be run as `root`.
- Perform check in constructor so the setting is always available
- Perform write test in `build` directory to help prevent project pollution
Force-pushed 6ee5205 to 5ed3010
This is in its final form. However, I still want to test on the different platforms to make sure I didn't miss anything. Still not sure what to do about the inability to run `makepkg` as `root`.
```python
) from e

try:
    self.tools.subprocess.run(
```
An optimization that I've considered but didn't implement is leveraging `docker exec`.

Here, as in other places, we're spinning up an entire container for one quick command, just to immediately destroy the whole thing. Alternatively, we could start the container at the beginning, call `docker exec` to run commands ad hoc, and destroy the container when the command is finished.

It may even be possible to allow multiple commands to use the same container.

I haven't done much assessment of what this would take, or of the consequences of reusing the same container....but in my virtualized environments and on my Pis, all this Docker setup and teardown definitely slows things down.
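The shape of that optimisation can be sketched as command construction only (hypothetical helper names; no container is actually started here):

```python
def run_once_cmd(image: str, command: list[str]) -> list[str]:
    # Current approach: a fresh container for every command,
    # torn down immediately afterwards (--rm).
    return ["docker", "run", "--rm", image, *command]

def start_session_cmd(image: str, name: str) -> list[str]:
    # Alternative: keep one container alive in the background...
    return ["docker", "run", "--detach", "--name", name, image, "sleep", "infinity"]

def exec_cmd(name: str, command: list[str]) -> list[str]:
    # ...run ad-hoc commands in it, skipping per-command setup/teardown...
    return ["docker", "exec", name, *command]

def stop_session_cmd(name: str) -> list[str]:
    # ...and remove it once the work is done.
    return ["docker", "rm", "--force", name]
```

The trade-off, as noted below, is that a long-lived container introduces session lifecycle concerns (cleanup on crash, stale state between commands) that the one-shot `docker run --rm` approach avoids entirely.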
Agreed this might be an optimisation worth exploring; but it seems like it's opening up a whole "Docker session" worldview that we'd want to get right. `docker run` isn't prohibitively slow, so I'm happy to live with it for now, at least.
- Arch's `makepkg` cannot be run as root
Force-pushed 3a331b2 to b31e51f
```python
if app.target_vendor_base == ARCH and self.tools.docker.is_users_mapped:
    raise BriefcaseCommandError(
```
This'll provide feedback when Arch package builds aren't feasible....but it also makes it clearer that macOS can no longer build Arch packages...
Well that's annoying... if I'm reading this right, this branch will also catch Linux users whose Docker config doesn't require user mapping.
Would a better approach be to always turn on the step-down user for arch builds, and only raise an error if the local configuration will break with a step down user?
(Mostly asking for self-serving purposes - I can't easily build Arch packages without this capability)
(We chatted a lot offline, but answering here for posterity.)

> Well that's annoying... if I'm reading this right, this branch will also catch Linux users whose Docker config doesn't require user mapping.

This will catch Linux users using Docker Desktop or rootless Docker, since they can't use the step-down user and would now be running as `root` in the container. Existing users with Docker Engine wouldn't be affected, since user mapping isn't (normally) enabled there.

> Would a better approach be to always turn on the step-down user for arch builds, and only raise an error if the local configuration will break with a step down user?
>
> (Mostly asking for self-serving purposes - I can't easily build Arch packages without this capability)

We can only use the step-down user in containers when Docker is not performing user mapping; this is a consequence of how user mapping works. If Docker is performing user mapping and we use a user in the container with ID 1000, then the files in the host file system end up with an owner ID that doesn't even exist on the host; the ID that shows up is determined by how Linux user namespaces map container IDs onto the host.

Nonetheless, this reveals another strategy that could allow us to accommodate this: we could set the GID of all the files to a GID the host user is also a part of. That way, while the files are owned by a "non-existent" user, the host user would still have permission to interact with them. I ultimately didn't pursue this, though, because it presents a lot of opportunity for issues one way or another. In general, many of the strategies to mitigate these file-permission problems require cooperation between the host environment and the containers that I thought would be onerous for users.
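For context on why the owner ID "doesn't exist": with userns-remap, a container uid appears on the host as the start of the subordinate-uid range plus that uid. A small illustration, assuming an `/etc/subuid` entry like `dockremap:100000:65536` (the values here are typical defaults, chosen for illustration):

```python
def host_uid_for(container_uid: int, subuid_start: int = 100000, subuid_count: int = 65536) -> int:
    # Container uids map linearly onto the host's subordinate-uid range:
    # container uid N shows up on the host as subuid_start + N.
    if not 0 <= container_uid < subuid_count:
        raise ValueError("uid outside the mapped range")
    return subuid_start + container_uid

print(host_uid_for(1000))  # 101000 -- usually no such user exists on the host
```

So a step-down user with container uid 1000 leaves files owned by host uid 101000, which is why the host user can't interact with them without the GID workaround described above.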
This all looks good; I've tweaked the pluralisation of `is_user_mapped`, and snuck in a cookiecutter version pin bump.

The only real issue I can see is the problem with macOS Arch builds.
src/briefcase/integrations/docker.py (outdated):

```python
at all bound to the instance.
"""
super().__init__(tools=tools)
self.is_users_mapped = self._is_user_mapping_enabled(image_tag)
```
The pluralisation on this rubs me the wrong way.... I'll likely tweak it before landing.
```python
if app.target_vendor_base == ARCH and self.tools.docker.is_users_mapped:
    raise BriefcaseCommandError(
```
I've just pushed an update to enable the step-down user on macOS; with that, I'm happy with this PR.
```diff
 try:
-    context["use_non_root_user"] = not self.tools.docker.is_user_mapped
+    context["use_non_root_user"] = self.use_docker and (
+        self.tools.host_os == "Darwin" or not self.tools.docker.is_user_mapped
+    )
 except AttributeError:
     pass  # ignore if not using Docker
```
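The rule in the changed line can be expressed as a small standalone function (names taken from the snippet above; sketch only), which makes the truth table easy to read:

```python
def use_non_root_user(use_docker: bool, host_os: str, is_user_mapped: bool) -> bool:
    # macOS (Darwin) always enables the step-down user; on Linux it is
    # enabled only when Docker isn't already remapping user IDs.
    return use_docker and (host_os == "Darwin" or not is_user_mapped)

print(use_non_root_user(True, "Darwin", True))    # True
print(use_non_root_user(True, "Linux", False))    # True
print(use_non_root_user(True, "Linux", True))     # False
print(use_non_root_user(False, "Linux", False))   # False
```

The last row is the case debated below: without Docker, this expression yields `False`, reversing the templates' default of `True`.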
If we check `self.use_docker` here, then `use_non_root_user` will be `False` when rolling out a template without using Docker. While I suppose this setting is arbitrary in that case, it would reverse the current default of `True` from the templates.

It also effectively removes the need for the `try/except AttributeError`, since `self.tools.docker` will exist if Docker is being used.

I'm not sure this setting is especially important....but throwing this out there.
That's a good point - the template default is "use root user", so it's safer to fall back to that than set it to `False` because there's no Docker. I'll tweak the logic.

(Of course, the real issue here is that `briefcase create` is sensitive to whether you used Docker, but that's a much bigger can of worms.)
GitHub won't let me approve my own PR...but I'm on board with your changes 👍🏼