Support ~/.config/toolbox with TOOLBOX_CREATE_ARGS #100

Closed

Conversation


@cgwalters cgwalters commented Apr 1, 2019

This way I can inject `TOOLBOX_CREATE_ARGS="-v /var/srv:/srv:rslave"`.
Most of my data (e.g. git repos) is in `/var/srv`.

Bigger picture, toolbox needs to be more configurable.

@@ -685,6 +689,7 @@ create()
--volume /media:/media:rslave \
--volume /mnt:/mnt:rslave \
--volume /run/media:/run/media:rslave \
${TOOLBOX_CREATE_ARGS:-} \
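
For reference, the `${TOOLBOX_CREATE_ARGS:-}` expansion above defaults to nothing when the variable is unset, so existing behaviour is unchanged. Usage would then look roughly like this (a hedged sketch; the exact `toolbox` invocation of the day may have differed):

```sh
# Extra podman-create arguments injected through the environment; the
# unquoted expansion word-splits the value into individual flags.
TOOLBOX_CREATE_ARGS="-v /var/srv:/srv:rslave" toolbox create
```
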
Member

Yes, this has been requested before. I wonder if we should use a command line flag instead of an environment variable - in case we rewrite the script as a proper Go frontend to libpod.

Collaborator Author

Also on this topic, I don't agree a CLI flag is right (and neither was my original change) - this stuff should "Just Work" really.

Member

Yes, I agree that it should just work. What else do you think should be bind mounted in from the host?

> Also on this topic, I don't agree a CLI flag is right (and neither was my original change) - this stuff should "Just Work" really.

FWIW, I'm interested in this issue because I had to bind-mount some crazy stuff to get a toolbox that allows me to run gnome-shell/mutter from jhbuild.

That's a very specialized use case, so I don't think those mounts are generally useful.

Member

What kind of things did you need?

/run/systemd, /run/udev, /dev/input and (for some reason) /tmp.

I also threw in /var/lib/flatpak to keep my apps in the jhbuild environment.

There may be some point in exposing the latter by default, but the mutter/shell specific bits are really rather specific.
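
For illustration, those mounts would map onto extra `--volume` flags along the lines of this sketch (mirroring the style of the diff above; the exact propagation options such as `:rslave` are an assumption):

```sh
# Hypothetical additions for the jhbuild mutter/gnome-shell use case above.
--volume /run/systemd:/run/systemd:rslave \
--volume /run/udev:/run/udev:rslave \
--volume /dev/input:/dev/input:rslave \
--volume /tmp:/tmp:rslave \
--volume /var/lib/flatpak:/var/lib/flatpak:rslave \
```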

Member

I filed https://github.com/debarshiray/toolbox/pull/256 for the /var/lib/flatpak bind mount.

/dev/input should already be available for any containers with a non-ancient Toolbox because we have been exposing the entire /dev for the past few releases.

Why did you need /run/systemd and /run/udev?

I don't remember the exact details/errors, but neither systemd nor udev were terribly surprising considering that running mutter/gnome-shell as a Wayland compositor requires logind (session/seat management) and udev (input device hotplug).

I may have some time later this week to recreate the toolbox from scratch, to check what customizations are still required.

@cgwalters
Collaborator Author

I never replied here - my bad. Basically what happened is that:

  1. The functionality here was critical for my use case - I try to keep my homedir separate from "public" data like public git repositories, VM images etc. that I store in /srv/walters (where /srv is a separate partition)
  2. The suggested change of having a CLI flag would have been a lot more work, and notably parsing arguments in bash is painful
  3. When I started looking at the design of toolbox at the time I felt it just needed a fundamental rewrite. In particular, the approach I took in https://github.com/cgwalters/coretoolbox is to propagate in a lot of /host from the host and symlink things, which is way more flexible (particularly for device nodes)
  4. As noted in that project's README at the time there were other design flaws and I wanted a place to try them out without breaking toolbox as it existed at the time

So I'm still interested in de-forking.
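
A minimal sketch of the /host propagation idea from point 3, assuming the whole host root is bind mounted once at `/host` and individual paths are symlinked back into place inside the container (the image name and paths here are illustrative, not what coretoolbox actually wires up):

```sh
# One bind mount for the entire host instead of one --volume flag per path.
podman create --name host-propagation-sketch \
    --volume /:/host:rslave \
    registry.fedoraproject.org/fedora:30

# Then, inside the running container, individual host paths become symlinks:
ln -s /host/var/srv /var/srv
ln -s /host/run/udev /run/udev
```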

@debarshiray
Member

debarshiray commented Jun 25, 2019

> 2. The suggested change of having a CLI flag would have been a lot more work, and notably parsing arguments in bash is painful

I wasn't really expecting you to do all the legwork. :) I was mostly thinking out loud.

> 3. When I started looking at the design of toolbox at the time I felt it just needed a fundamental rewrite. In particular, the approach I took in https://github.com/cgwalters/coretoolbox is to propagate in a lot of /host from the host and symlink things, which is way more flexible (particularly for device nodes)

> 4. As noted in that project's README at the time there were other design flaws and I wanted a place to try them out without breaking toolbox as it existed at the time

The coretoolbox README sometimes feels like it's reading too much into things. :)

For example...

The part about starting from a Dockerfile is really just about absorbing podman build, right? It has always been possible to replace the base toolbox image; you can already do this, but you'd need two steps instead of one. We can just absorb that if it's considered important for certain workflows.
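
The two-step flow being referred to would look roughly like this (a sketch; the `--image` flag name for the shell-based `toolbox create` of that era is an assumption):

```sh
# Step 1: build a customized image with podman.
podman build --tag my-custom-toolbox --file Dockerfile .

# Step 2: create the toolbox container from it instead of the default base image.
toolbox create --image my-custom-toolbox
```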

The bit about having a derived image versus using the container's entry-point has a lot less to do with any explicit design objective than it has to do with historical reasons. Note that we started prototyping a toolbox for Silverblue literally six months after rootless Podman was even a thing. Nobody had a clue.

Rootless Podman barely worked. Most of the time you'd have to use it with root to figure out what was going on. There was an inherent desire to keep things simple - to use the minimum amount of Podman flags to get the job done without running into half-baked code. Nobody was thinking about design. :)

In its earliest incarnation, the toolbox used to have a separate PID namespace, which meant that the entry-point process acted as PID 1 within the container. For a while we were discussing what that PID 1 should be - dumb-init and catatonit were discussed. While all this was up in the air, I didn't want to rely on the entry-point for anything important, and having various aspects of the toolbox container match the host is/was one of the most crucial things for Toolbox. Hence the use of Buildah to do the customization as an intermediate image.

It wasn't ideal. People used to be confused by the presence of the second image and it was hard to explain. But it worked. It also didn't prevent anybody from replacing the base image or using a Dockerfile. You'd just get a corresponding customized image from which the container would finally be created.

We finally replaced it before Fedora 30 to simplify the documentation, and for that the credit goes to you!

Same with the mount points.

As for the programming language, well, CoreOS was pretty clear about Python not being acceptable, and if our primary API is a bunch of executables, and we're stuck screen scraping output, then a shell isn't that bad a choice.

Go and Rust were bad choices when we started. We'd just get an inflated statically linked binary acting as a shell script, and we already had people complaining about requiring two fat Go binaries in /usr/bin/buildah and /usr/bin/podman. Also, unless we have a real API to work with, having a better language to fork and exec doesn't get us very far. We'd still be stuck scraping screen output (notice how fragile our list implementation is) and wouldn't be able to improve the user experience because the API is so primitive (notice how we can't show meaningful progress bars for image downloads).

That API literally came to fruition in lock step with Toolbox and has a direct bearing on the choice of programming language. It's not like I actually like writing POSIX shell. :)

@cgwalters
Collaborator Author

> CoreOS was pretty clear about Python not being acceptable

We need a more nuanced terminology about that.

> We'd just get an inflated statically linked binary acting as a shell script, and we already had people complaining about requiring two fat Go binaries in /usr/bin/buildah and /usr/bin/podman

$ ls -alh ~walters/.cargo/bin/coretoolbox
-rwxrwxr-x. 1 walters walters 1.3M Jun 25 15:59 /var/home/walters/.cargo/bin/coretoolbox*
$

For sure, Rust's lack of dynamic linking doesn't make for tiny binaries either, but it's a big difference from

$ ls -alh /usr/bin/podman
-rwxr-xr-x. 4 root root 50M Dec 31  1969 /usr/bin/podman*

Because...hm, I would be curious to dig into what goes into that size difference.

> Also, unless we have a real API to work with, having a better language to fork and exec doesn't get us very far.

That I completely disagree with. podman is happy to output JSON, and parsing JSON in Rust with Serde is awesome.
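
For example, podman's JSON output can be consumed without any library-level API at all; a minimal shell illustration, assuming `jq` is available on the host (`--format json` is supported by `podman images` and `podman ps`):

```sh
# Machine-readable output instead of screen scraping the tabular view;
# serde in Rust, encoding/json in Go, or jq here can all parse this.
podman images --format json | jq '.[0]'
```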

> (notice how we can't show meaningful progress bars for image downloads).

Eh, I don't think it's worth reimplementing what podman already does. I don't understand why you chose to hide/reimplement it in this project.

@debarshiray
Member

debarshiray commented Jun 25, 2019

>> (notice how we can't show meaningful progress bars for image downloads).
>
> Eh, I don't think it's worth reimplementing what podman already does. I don't understand why you chose to hide/reimplement it in this project.

Ultimately we want to control the standard output. Back when we used to have Buildah configure the intermediate image, the spew from Podman and Buildah was just too much. Also, note how over time the messages, both with and without errors, have evolved to guide the user as to what the next step might be. That tends to become less effective if you have lots of spew preceding them.

Ideally, toolbox enter ought to be like bash with all the container stuff hidden away unless you explicitly went looking for it. Sadly reality gets in the way.

Oh, this reminds me - error handling. Which is just horrendous with binaries as API. All we get is a single exit code and every failure looks just the same. Threading every major error condition inside /usr/bin/podman into a separate exit code is a mammoth undertaking.

In some ways this is a case of people running with rootless Podman and Toolbox before they were in any sense ready for prime time. However, early Silverblue adopters were desperate for something and here we are.

The good thing is that I am told the Go libpod API is now ready for non-Podman use.

@cgwalters
Collaborator Author

> Oh, this reminds me - error handling. Which is just horrendous with binaries as API. All we get is a single exit code and every failure looks just the same.

Dunno, I think parsing JSON can work pretty well here. A lot of projects do this.

@cgwalters
Collaborator Author

If it'd help convince you I'm happy to investigate starting to use varlink for coretoolbox!

@debarshiray
Member

> If it'd help convince you I'm happy to investigate starting to use varlink for coretoolbox!

If the goal is to use Rust, because it's Rust, then well, I don't know what to say. :)

As I have mentioned elsewhere, though it doesn't look like I said it here, I continue to prefer Go because that's what the entire OCI ecosystem is written in. Given how important things like Podman and runc are for Toolbox, regardless of the implementation details, nobody can meaningfully work on Toolbox without digging into libpod.git, runc.git, etc.

Therefore it makes a ton of sense to me to stay aligned to the surrounding OCI and Podman community and the programming language is an important aspect of that. If you want to play with Toolbox, you need to know Go, and off you go.

Otherwise, we are requiring potential contributors to know Go, Rust and Varlink, which significantly raises the barrier to entry.

I also don't think we use Varlink anywhere in Workstation / Silverblue. So that will be an entirely new dependency for something that's pretty critical for the Silverblue experience.

Over the last year of Toolbox's existence, the lion's share of Toolbox issues have been Podman issues. We have had bugs in the /usr/bin/toolbox script itself, but they have been relatively few and far between. Given that reality, the prospect of being able to vendor in a specific libpod version and knowing that it won't change underneath is very, very tempting.

On the flip side, vendoring will lead to a bigger binary, but given the other benefits, I am happy to live with it. Bigger binaries aren't a show stopper for Silverblue, but robustness is. Ironically, the people who really care about smaller sizes (i.e., the container world) also chose Go as their de-facto language, so it's not the first hill I want to die on.

Anyway, we have more or less settled on Go and libpod for the time being. We have been discussing this on and off with the Podman team for a few months now - most recently at Flock last month. Toolbox is a young project, and if this choice doesn't work out then we can always re-evaluate and try something else.

@cgwalters
Collaborator Author

> On the flip side, vendoring will lead to a bigger binary, but given the other benefits,

A lot bigger binary. Personally, I also think it's not a generally good idea to have two different versions of libpod operating on the same storage. I doubt it's even tested by upstream CI. Any time a "migration" needs to happen it would break an older vendored one.

@cgwalters
Collaborator Author

I understand how we ended up with e.g. cri-o and podman sharing an underlying vendored library. crio wants to do a lot of things there that probably aren't exposed by varlink. I guess it's actually somewhat intentional that the "crio world" is distinct from the "podman world".

But it's not like a toolbox command has deep, sophisticated dependencies on podman. And having a separate "toolbox world" would seem like an explicit anti-feature.

@returntrip

@debarshiray this PR has so much potential. Can it be merged as is, or is waiting for the toolbox rewrite mandatory?

Base automatically changed from master to main March 25, 2021 22:25
@debarshiray
Member

Closing.

Support for configuration files was added in #828
