build: Cleanup transient mount destinations with every RUN step #3525
Conversation
/hold
`/run` directory after every RUN step → `/run` directory with every RUN step
I think there are other RUN artifacts than just /run. We add some inodes in /etc as well.
@rhatdan I think
cc @nalind
This gets us closer to cleaning up everything we leave behind during RUN, but it doesn't get us all the way there.
So, just to make sure: let's assume we have a secret that contains database credentials. In the Containerfile there are three or more RUN statements, each of which relies on those credentials. Will the credentials be available for each RUN instance? I'm not sure we have a test for this; it would be nice to add one if not.
@TomSweeneyRedHat I believe each RUN is independent and will recreate /run/secrets. This patch is just removing them when the container completes, so that a
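The per-RUN secret behavior being discussed could be exercised with a small Containerfile along these lines; this is only a sketch, and the secret id `mysecret` and source file `creds.txt` are hypothetical names, not part of this PR:

```Containerfile
# Each RUN gets its own transient mount of the same secret.
FROM alpine
RUN --mount=type=secret,id=mysecret cat /run/secrets/mysecret
RUN --mount=type=secret,id=mysecret wc -c /run/secrets/mysecret
RUN --mount=type=secret,id=mysecret test -s /run/secrets/mysecret
```

Built with something like `buildah bud --secret id=mysecret,src=creds.txt .`, every RUN step should see the secret under /run/secrets, while the committed image should contain neither the file nor the leftover /run/secrets directory.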
Force-pushed d1753e4 to 82ff213
@TomSweeneyRedHat so it's exactly as @rhatdan stated: every
Force-pushed 82ff213 to b50d29b
My concern with this, without playing with it, is what happens if I want to add /run/foobar or /etc/resolv.conf to my image. Can we differentiate between these cases? E.g. starting FROM fedora, what happens in this case?
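For concreteness, the case being asked about might look like this hypothetical Containerfile (the file names are invented for illustration):

```Containerfile
FROM fedora
# Content the user deliberately wants committed under paths the cleanup touches.
COPY foobar /run/foobar
COPY my-resolv.conf /etc/resolv.conf
```

If the cleanup removed everything under /run indiscriminately, it would also delete the /run/foobar that the user explicitly added.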
Force-pushed b50d29b to ba01019
@rhatdan I have removed
@nalind @rhatdan what are some of the other dangling artifacts we leave behind? I can only see
How about:
@rhatdan this build fails for me with
So
I guess the best test would be to create a statically linked C program and put it into an image (`cat Containerfile`), then see if the image has anything in it except for a.out.
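A sketch of that test as a multi-stage Containerfile; the base image, stage name, and package set are assumptions, not from the PR:

```Containerfile
# Stage 1: build a statically linked C program.
FROM registry.fedoraproject.org/fedora AS build
RUN dnf -y install gcc glibc-static && \
    printf 'int main(void){return 0;}\n' > a.c && \
    gcc -static -o /a.out a.c

# Stage 2: start from scratch, so anything other than /a.out in the
# final image must have been left behind by the build machinery.
FROM scratch
COPY --from=build /a.out /a.out
```

Mounting the resulting image and running `find` over it should then list only /a.out if the cleanup is complete.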
PS: discussed this with @nalind, so we also have to take care of all the other mounts which are not getting cleaned up, e.g. bind mounts.
I may be misunderstanding the intention behind this PR. I ran this command, and expected null output (or at least only directories, no files). Instead:
$ printf "FROM quay.io/podman/stable:latest\nRUN podman pull quay.io/libpod/testimage:20210610\nRUN ls -lR /run\n" | ./bin/buildah bud -
STEP 1/3: FROM quay.io/podman/stable:latest
STEP 2/3: RUN podman pull quay.io/libpod/testimage:20210610
time="2021-09-21T20:22:17Z" level=warning msg="\"/\" is not a shared mount, this could cause issues or missing mounts with rootless containers"
Trying to pull quay.io/libpod/testimage:20210610...
Getting image source signatures
Copying blob sha256:9afcdfe780b4ea44cc52d22e3f93ccf212388a90370773571ce034a62e14174e
Copying blob sha256:9afcdfe780b4ea44cc52d22e3f93ccf212388a90370773571ce034a62e14174e
Copying config sha256:9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f
Writing manifest to image destination
Storing signatures
9f9ec7f2fdef9168f74e9d057f307955db14d782cff22ded51d277d74798cb2f
STEP 3/3: RUN ls -lR /run
/run:
total 0
drwxr-xr-x. 1 root root 0 Sep 21 08:06 console
drwx------. 1 root root 14 Sep 21 20:22 containers
drwxr-xr-x. 1 root root 0 Jan 26 2021 criu
drwx------. 1 root root 0 Sep 21 08:06 cryptsetup
drwxr-xr-x. 1 root root 0 Sep 21 08:06 faillock
drwxr-x--x. 1 root root 68 Sep 21 20:22 libpod
drwxr-xr-x. 1 root root 12 Sep 21 08:06 lock
drwxr-xr-x. 1 root root 0 Sep 21 08:06 log
-rw-r--r--. 1 root root 0 Sep 21 08:06 motd
....
/run/containers/storage/overlay-layers:
total 8
-rw-------. 1 root root 2 Sep 21 20:22 mountpoints.json
-rw-r--r--. 1 root root 64 Sep 21 20:22 mountpoints.lock
....

I tried with buildah @ main (without this patch) and the output looks identical.
@edsantiago during the run the files and directories will be there. The PR is attempting to make sure they do not get committed to the image. |
$ printf "FROM quay.io/podman/stable:latest\nRUN podman pull quay.io/libpod/testimage:20210610\n" | ./bin/buildah bud -t foo -
.....
$ podman run --rm foo ls -laR /run|grep '^-r'
-rw-r--r--. 1 root root 0 Sep 21 20:45 .containerenv
-rw-r--r--. 1 root root 0 Sep 21 08:06 motd
-rw-------. 1 root root 2 Sep 21 20:40 mountpoints.json
-rw-r--r--. 1 root root 64 Sep 21 20:40 mountpoints.lock
-rw-r--r--. 1 root root 0 Sep 21 20:40 alive
-rw-r--r--. 1 root root 0 Sep 21 20:40 alive.lck
-rw-------. 1 root root 2 Sep 21 20:40 pause.pid
-rwx------. 1 root root 229 Sep 21 20:40 events.log
-rw-r--r--. 1 root root 0 Sep 21 20:40 events.log.lock
Podman puts them back as well. We are just looking at what goes into the image. Not what a running container sees. |
Force-pushed ba01019 to 9576740
Force-pushed 0f36cb6 to 2d759b4
If that's a volume mounted file, then I wouldn't expect that content to show up in the image. |
@rhatdan that is safe
This PR makes it slightly better.
I see /etc/hosts and /etc/resolv.conf in the image. I also see the other main mount points: /proc, /sys, /run, /dev and /etc.
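One way to check what actually got committed, as opposed to what a running container sees, is buildah's mount workflow. A sketch: the image name `foo` is assumed, and this needs root:

```console
$ ctr=$(buildah from foo)
$ mnt=$(buildah mount "$ctr")
$ ls -la "$mnt/etc/hosts" "$mnt/etc/resolv.conf" "$mnt/run"
$ buildah umount "$ctr"
$ buildah rm "$ctr"
```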
@rhatdan the PR refrains from touching paths starting with
No, we wouldn't do that. If L is the lower layer, and U is the upper layer in a mounted overlay filesystem M, the whiteout is used to mark for the kernel that the file was removed at U and thus shouldn't appear when looking at M. L can be used as a lower layer for other mounts at the very same time, or it can be on read-only storage, but in either case the design of overlay is such that modifications made in the overlay mount are always, and only, recorded in the upper layer (U).
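That whiteout behavior can be seen directly with a manual overlay mount. A rough sketch with hypothetical directory names; it needs root and an overlay-capable kernel:

```console
$ mkdir lower upper work merged
$ touch lower/file
$ mount -t overlay overlay -o lowerdir=lower,upperdir=upper,workdir=work merged
$ rm merged/file
$ ls -l upper/file    # whiteout recorded in U as a 0/0 character device
$ ls lower/file       # L is untouched
```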
I am not sure we want to hit those other paths anyway, since read-only containers will rely on them.
This definitely needs a test. |
When testing, I've been doing e.g.
Force-pushed 7eb4c7e to 22a1489
Added tests
Just for reference: I also experimented with how docker handles cleanup of transient files in these paths. It looks like docker is not doing any cleanup for these paths either. In fact, the first thing they do is create
SGTM
The following commit ensures that we clean up the dangling `/run` after every RUN command and make sure that it does not persist in the physical image. Ensures parity with how docker behaves with `.dockerenv`. Signed-off-by: Aditya Rajan <arajan@redhat.com>
Force-pushed 22a1489 to 4cb4396
I'm always going to be uneasy about deleting data from a rootfs, but the tests pass, so LGTM. |
/lgtm
This PR ensures that we clean up the dangling `/run` directory after every RUN command and remount it with every RUN, in order to make sure that it does not persist in the physical image. Ensures parity with how docker behaves with `.dockerenv`.

Closes: #3523