profile.d and entrypoint.d #19
Comments
On shell startup scripts: the shell startup scripts shouldn't be sourced in a nested way (that would be a bug). They can, however, be sourced multiple times non-nested and need to support that (the suggested flag doesn't detect that case).

On using SUID binaries: I'm not sure that improves security over running the container as root. Using rootless might be the best option, although Docker's documentation used to say that a container is not a security boundary; we need to check how that changes with rootless. (Interesting read on why shell scripts don't support the SUID bit: https://unix.stackexchange.com/a/2910)
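For illustration, a sketch of what supporting repeated (non-nested) sourcing can look like in practice — the path and PATH entry are hypothetical, not from this thread. Rather than relying on a sourced-once flag, each script is written so that sourcing it a second time is harmless:

```bash
# Hypothetical feature script: /usr/local/etc/dev-container-profile.d/10-node.sh
# Idempotent by construction: only prepend to PATH if the entry is missing,
# so sourcing this twice (non-nested) changes nothing the second time.
case ":${PATH}:" in
    *":/usr/local/share/nvm/current/bin:"*) ;;  # already present - do nothing
    *) PATH="/usr/local/share/nvm/current/bin:${PATH}" ;;
esac
export PATH
```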
What running as non-root does, in combination with not providing sudo, is prevent the container OS from being modified or otherwise changed. What this approach does is isolate root access to well-known scripts that were themselves set up as root - much like /etc/init.d scripts. It then takes on the process start capabilities of something like systemd.

The Docker security boundary consideration is about the host rather than affecting the containers themselves. If you run the engine as root on Linux (directly, not in a VM or something), and you run in the container as root, there are risks of leaking to the host. As you said, you can run rootless there (though that also has issues - namely bind mounts), but that alone does not solve any code or executables being able to otherwise modify the OS in the container. There's also an element of "falling into the pit of success" here with default behaviors.

That said, for local scenarios on Linux, using rootless with volumes instead of bind mounts helps with the host concerns.
How so? Sourcing the script would bring any exported variables into the process being started up, so long as the scripts are run in sequence. Effectively, these are scripts that are intended to run from either interactive or login shells. Zsh provides a mechanism with zshenv for this kind of thing, but bash does not, and sh only has profile - and for consistency, putting things in zshenv can result in them being fired in different places than if you are using the other shells (namely non-interactive, non-login). The issue with sourcing twice is that this has overhead.
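A tiny sketch of that sequencing point (file names are hypothetical): because the scripts are sourced rather than executed, anything one script exports is visible to every script - and to the final command - that runs after it in the same shell:

```bash
#!/bin/sh
# Sketch: scripts sourced in sequence share one environment.
# 10-java.sh contains:  export JAVA_HOME=/usr/lib/jvm/default
# 20-maven.sh can then read $JAVA_HOME, because both are sourced
# into *this* shell before the real command starts.
. /usr/local/etc/dev-container-profile.d/10-java.sh
. /usr/local/etc/dev-container-profile.d/20-maven.sh
exec "$@"   # the started process inherits JAVA_HOME and friends
```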
SUID binaries need to be secured themselves. E.g., they need to sanitize the environment they inherit. […]

Startup scripts I have seen are written in a way that allows either […]
There is no universal file that fires for either -i or -l for bash/sh/ash/dash. The goal here is a bit different and isn't really about personal profile/rc files at all. What we need is a pure image-based way to include environment variables and sourced scripts in the image itself. That said, I'd love to hear other ideas if you have them! Either way, not having each feature/script go manually check and modify the profile/rc files would be a good thing, even if we just lock it into /etc/bash.bashrc and /etc/zsh/zshrc and ignore the login problem.
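A sketch of that "lock it into /etc/bash.bashrc and /etc/zsh/zshrc" idea - the hook path is hypothetical, and the grep keeps the step idempotent when several features run it:

```bash
#!/bin/sh
# Run once at image build time (e.g., from a feature's install script):
# append a single bootstrapper hook to the system-wide rc files instead of
# having each feature edit them individually.
HOOK='[ -f /usr/local/etc/dev-container-profile.sh ] && . /usr/local/etc/dev-container-profile.sh'
for rc in /etc/bash.bashrc /etc/zsh/zshrc; do
    grep -qsF 'dev-container-profile.sh' "$rc" || echo "$HOOK" >> "$rc"
done
```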
Yeah, this is fired during the entrypoint only. So, we should for sure check for PID 1 (added that above). We'd have to think about whether we could mandate a clean environment for execution, so that it wouldn't pick up the local env in place should someone go execute it manually rather than as PID 1 (this is why sudo has different PATH settings than non-sudo).
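A sketch of the PID 1 check plus one way to mandate a clean environment - the guard variable name is made up, and `env -i` is the standard "empty environment" flag in coreutils/busybox:

```bash
#!/bin/sh
# Refuse to run unless we actually are the container entrypoint (PID 1).
if [ "$$" != "1" ]; then
    echo "error: intended to run as the container entrypoint (PID 1)" >&2
    exit 1
fi
# Re-exec with a known-good, minimal environment so a stray local env
# cannot leak in (the same reason sudo resets PATH).
if [ -z "${DEV_CONTAINER_CLEAN_ENV:-}" ]; then
    exec env -i DEV_CONTAINER_CLEAN_ENV=1 \
        PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin \
        "$0" "$@"
fi
```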
My intuition here is that we should try to use the same approach shells are using. E.g., should we support both login and interactive startup scripts and source a feature's scripts from those? The situation with shell startup scripts seems overly complicated, but I'm not sure we can improve it by introducing our own scheme. E.g., sourcing a feature's […]

In which situation would we have an interactive shell without having run the login scripts in one of its parent processes? That seems counterintuitive. E.g., the login scripts typically start an ssh-agent (if not running already) and set SSH_AUTH_SOCK.
Yeah, that's fair - and since we're talking about files, you can symlink when you want both. The main challenge is that the flag ends up getting added to the script instead in those cases.
You can also change the default terminal setting in VS Code if you're used to a login shell. I know of a couple of major instances of this. Part of the problem is that the Linux default for the terminal is pure […] Any external use that is not interactive […]

The core issue is tools that require sourcing - which is a frustrating but common practice.
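For reference, that default-terminal change is a VS Code settings tweak along these lines - the profile name here is arbitrary, and `-l` asks bash for a login shell:

```json
{
  "terminal.integrated.profiles.linux": {
    "bash-login": { "path": "bash", "args": ["-l"] }
  },
  "terminal.integrated.defaultProfile.linux": "bash-login"
}
```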
I see an env variable set in […] If this is always the case, we could simplify our approach by only sourcing the feature setup scripts from the shell profiles.
@chrmarti In Remote - Containers? We have userEnvProbe set to login interactive there, so yes, it does get it from the parent process. If userEnvProbe was set to none, you wouldn't see login variables, because VS Code's default terminal is a non-login shell.
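For context, `userEnvProbe` is a devcontainer.json property; the configuration being contrasted above is roughly this (with `"none"` skipping the probe entirely):

```json
{
  "userEnvProbe": "loginInteractiveShell"
}
```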
I broke out the UID/GID update aspect that drives the need for […]
A persistent challenge in creating containers has been how to simply and easily add environment variables or source scripts as a part of various user shells (bash, zsh) and fire things on container startup - particularly if the thing being started requires root access.
Today we've dealt with this by:

- Setting `userEnvProbe` to `loginInteractiveShell` to ensure any tool gets these variables. There are several challenges here.
- Requiring `overrideCommand` to be set to `false`, which has similar challenges - though the label metadata proposal could help here (Dev container metadata in image labels #18).

Proposal
Rather than all of this, we can have a well-defined pattern when features are used. There could even be a simple feature that does nothing more than set up execution of this pattern. Instead of everything described above, we can introduce two well-known folder locations (a rough sketch of both follows below):

1. A `profile.d` folder (or a similar name). If a feature places a script in this location, it will be sourced by /etc/bash.bashrc, /etc/zsh/zshrc, /etc/profile.d, and /etc/zsh/zprofile. A single bootstrapper can then be sourced to ensure these scripts are executed with code similar to what you'd find in /etc/profile. However, it would check to see if the bootstrapper has already run to ensure it doesn't fire more than once, like in the case of a login interactive shell.

2. An `entrypoint.d` folder where you can place scripts that will automatically be executed as a part of the container startup. This eliminates the need for an explicit entrypoint property in `devcontainer-features.json`. However, execution is a bit different than `profile.d`. It should:
   - Like `/etc/profile.d`, allow numeric prefixes on script names to indicate when each script should fire.
   - Use `exec` for any arguments passed into it so that the process is replaced by the existing entrypoint command (keeping its PID).

Features then follow a pattern of placing contents in these folders rather than attempting to manually update configuration.
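To make the pattern concrete, here is a minimal sketch of both pieces. The paths, file names, and guard variable are hypothetical illustrations, not part of the proposal text:

```bash
#!/bin/sh
# profile.d bootstrapper (hypothetical path: /usr/local/etc/dev-container-profile.sh),
# sourced from /etc/bash.bashrc, /etc/zsh/zshrc, /etc/profile.d, and /etc/zsh/zprofile.
# The flag (not exported, so each new shell process runs this once) keeps a
# login *and* interactive shell from firing it twice - though, per the comments
# above, a flag like this cannot tell repeated sourcing apart from nested sourcing.
if [ -z "${DEV_CONTAINER_PROFILE_DONE:-}" ]; then
    DEV_CONTAINER_PROFILE_DONE=true
    for script in /usr/local/etc/dev-container-profile.d/*.sh; do
        [ -r "$script" ] && . "$script"
    done
fi
```

```bash
#!/bin/sh
# entrypoint.d runner (hypothetical path: /usr/local/bin/dev-container-entrypoint).
# Globbing sorts lexically, so numeric prefixes (10-..., 20-...) control ordering,
# just like /etc/profile.d.
for script in /usr/local/etc/dev-container-entrypoint.d/*.sh; do
    [ -x "$script" ] && "$script"
done
# exec any arguments passed in so the image's original command takes over PID 1.
exec "$@"
```

A Dockerfile would then set ENTRYPOINT to the runner and leave CMD as-is, so `exec "$@"` hands control to the image's original command.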
Alternatives
Part of the reason for `entrypoint.d` is that most containers do not have systemd or another init system that supports process startup. (The tiny init system that the --init argument adds does not manage process startup.) The challenge is that systemd is heavy and requires more permissions to the host than is ideal. So while it is an option, it's not something we could count on. An alternative would be to use something like supervisord. However, if the desire is just to fire something once at startup rather than keep a process running, it's overkill as well. We could also easily document how to wire in supervisord using `entrypoint.d`, if not provide a dev container feature for it.
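As a rough illustration of that last point - a sketch only, with a hypothetical file name and a standard supervisord invocation:

```bash
#!/bin/sh
# Hypothetical entrypoint.d script: 90-supervisord.sh
# supervisord daemonizes by default, so this returns and lets the runner
# continue on to exec the original entrypoint command. Use it only for
# things that must *stay* running; one-shot tasks can remain plain scripts.
if command -v supervisord >/dev/null 2>&1; then
    supervisord -c /etc/supervisor/supervisord.conf
fi
```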