Dockerfiles should not be derived from a rocker image on Docker Hub #754

Open · eitsupi opened this issue Jan 24, 2024 · 7 comments
Labels: enhancement, help wanted, pre-built images

eitsupi (Member) commented Jan 24, 2024

From #750 (comment)

e.g.

FROM ubuntu:20.04

COPY scripts/install_R_source.sh /rocker_scripts/install_R_source.sh
RUN /rocker_scripts/install_R_source.sh

COPY scripts/install_rstudio.sh /rocker_scripts/install_rstudio.sh
RUN /rocker_scripts/install_rstudio.sh

instead of

FROM rocker/r-ver:4.3.2

RUN /rocker_scripts/install_rstudio.sh
eitsupi added the enhancement, help wanted, and pre-built images labels on Jan 24, 2024

cboettig (Member) commented:

@eitsupi Can you outline the advantages of this proposal?

I think this would have significant disadvantages.

  • The current practice is the standard way to extend images in Docker.
  • The standard way means that images share layers. A user who pulls both rocker/r-ver and rocker/tidyverse does not have to download the contents of rocker/r-ver twice. The same goes for our CI when pushing to Docker Hub. Changing this could increase the disk space required from users, as well as pull and push times.
  • This approach adds substantial complexity and would increase the room for error. Specifically, I think this could be fragile with respect to setting the appropriate ENV settings in each image, which influence important behaviors of the stack, including version locking. Maybe that would not be an issue given that the actual Dockerfiles could be generated from the same template stack .json files, but I have mixed feelings about our use of that approach. It serves a need, yes, and like you I don't have the capacity right now to refactor it, but it is not a standard practice and creates a barrier to contributing and testing. This change would produce more complex and less easily understood Dockerfiles, which are, for most of the community, the natural entry point for understanding what is in an image.

Yes, I share the frustration that it is difficult to test changes to one image, especially a very downstream image like rocker/verse, when you have to rebuild the stack manually (as I sadly demonstrated in my recent PR mistake!). But I think that can be solved with some local automation in the Makefile, as sketched below. I do think we could use a cleaner / simpler way to do local tests, but I think we can come up with something different from this to help contributors, with less impact on existing users.
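
(For illustration, such automation could amount to rebuilding the chain in dependency order. A minimal shell sketch only, which a Makefile would wrap as targets; the r-ver Dockerfile name matches the CI log later in this thread, while the dockerfiles/ directory and the other file names are assumptions:)

# Sketch: rebuild the stack bottom-up so a change to a downstream
# image (e.g. verse) can be tested locally against fresh parents.
docker build -f dockerfiles/r-ver_4.3.2.Dockerfile -t rocker/r-ver:4.3.2 .
docker build -f dockerfiles/rstudio_4.3.2.Dockerfile -t rocker/rstudio:4.3.2 .
docker build -f dockerfiles/tidyverse_4.3.2.Dockerfile -t rocker/tidyverse:4.3.2 .
docker build -f dockerfiles/verse_4.3.2.Dockerfile -t rocker/verse:4.3.2 .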

eitsupi (Member, Author) commented Jan 24, 2024

@cboettig The layers are shared in my proposal as well.

# rocker/r-ver like base image
# Cached layers
FROM aaa
# New layers
COPY /bbb /bbb
RUN ./bbb
COPY /ccc /ccc

and

# rocker/rstudio like image
# Cached layers
FROM aaa
COPY /bbb /bbb
RUN ./bbb
# New layers
COPY /ddd /ddd
RUN ./ddd
COPY /ccc /ccc

and

# rocker/tidyverse like image
# Cached layers
FROM aaa
COPY /bbb /bbb
RUN ./bbb
COPY /ddd /ddd
RUN ./ddd
# New layers
COPY /eee /eee
RUN ./eee
COPY /ccc /ccc

The current implementation uses the rocker scripts baked into rocker/r-ver and does not use the rocker scripts in the repository.
This means that all layers after rocker/r-ver are rebuilt just to build a downstream image.
In other words, to update rocker/tidyverse via install_tidyverse.sh, all rocker/rstudio layers are discarded. My proposal does not cause that to happen.

# rocker/rstudio like image
# Cached layers
FROM rocker/r-ver
# New layers
RUN ./ddd

Even though only ./eee (install_tidyverse.sh) is updated here, the RUN ./ddd (RUN install_rstudio.sh) layer is not cached.

# rocker/tidyverse like image
# Cached layers
FROM rocker/rstudio
# New layers
RUN ./eee

cboettig (Member) commented:

@eitsupi Ha, okay, I see now that my mental model of caching from the old docker build is not quite up to date; buildx caching is more 'clever' in this respect and can identify that those lines are identical and so reuse them from the cache. That's only true if both images are built on the same machine, of course, and I don't think it used to be true at all on older docker.

  1. One issue concerns when rocker_scripts are copied into the image -- currently this occurs only in rocker/r-ver, and we do not use a COPY in derivative images, right? One part of the proposal is to add COPY commands in downstream Dockerfiles, which would mean they build from the scripts in the local scripts directory instead.

  2. Separately, you propose changing the FROM lines to always be FROM ubuntu:20.04.

I think historically docker build with two different Dockerfiles that started with some identical lines wasn't smart enough to reuse the cache from one Dockerfile on the second, but in testing I see it works now anyway.

So I agree this won't break caching, but it still seems like a substantial change. If we simply kept the FROM lines as they are, but had all images copy over only the rocker_scripts they need instead of copying over the entire directory in rocker/r-ver, wouldn't that accomplish precisely the same thing in terms of caching?
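
(For concreteness, a sketch of that alternative: keep the current FROM line, but COPY only the script the image actually runs, so changes to unrelated scripts don't invalidate the layer. A sketch only, using the script names from this thread:)

FROM rocker/r-ver:4.3.2

# Copy just the one script this image needs from the local build
# context, instead of relying on the /rocker_scripts directory
# baked into the base image.
COPY scripts/install_rstudio.sh /rocker_scripts/install_rstudio.sh
RUN /rocker_scripts/install_rstudio.sh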

Either way, I think there are implications to think through. I see why from a build perspective it is rather annoying that changing install_tidyverse.sh breaks the cache on rocker/r-ver build, but it also serves a purpose that impacts downstream use. If that cache is not broken, users who derive their own images from rocker/r-ver and rely on that script subsequently in their own Dockerfiles will have the old cached version of install_tidyverse.sh.

The whole original idea of having rocker_scripts in all images was to make it easy for downstream users to mix and match the order in their own builds. It seems like it would get very messy and hard for users to debug if different parts of the stack suddenly shipped very different versions of these scripts. I'd rather live with extra build times on GitHub Actions for us if it keeps things simple and consistent for downstream users, no?
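
(For reference, the mix-and-match pattern in question: because every image ships /rocker_scripts, a downstream user can invoke the scripts in whatever order they like. A minimal sketch using scripts mentioned in this thread:)

# A downstream user's Dockerfile, choosing its own order of scripts,
# relying on /rocker_scripts being present in the base image.
FROM rocker/r-ver:4.3.2

RUN /rocker_scripts/install_rstudio.sh
RUN /rocker_scripts/install_tidyverse.sh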

eitsupi (Member, Author) commented Jan 24, 2024

> That's only true if both images are built on the same machine of course

In fact, it is not. Since the bake file is written to pull the cache from the container registry (or the cache source can be specified in the CLI options), the cache is shared across machines.

Like

"cache-from": [
"docker.io/rocker/rstudio:4.3.2"
],

docker buildx bake \
-f bakefiles/"${{ matrix.tag }}".docker-bake.json \
--set=*.platform="${{ matrix.platforms }}" \
--set=*.cache-from=docker.io/rocker/r-ver:"${{ matrix.tag }}" \
--set=*.cache-from=type=gha,scope=r-ver-"${{ matrix.tag }}" \
--set=*.cache-to=type=gha,scope=r-ver-"${{ matrix.tag }}" \
--set=r-ver.tags=r-ver-test-"${{ matrix.tag }}" \
--load \
r-ver

cboettig (Member) commented:

That's clever! The buildx / bake system is still a whole new world to me. In any event, I agree that this preserves the cache (and thus also the matching layers, which impacts end users and not just build times). I just also think that preserving the layering and cache could be accomplished by changing the COPY pattern alone.

I'm still not sure that we want the rocker/r-ver image to ship with a potentially different copy of a build script like install_tidyverse.sh than the tidyverse image ships with (and so on), which seems like it could happen if we deviate from the current pattern of breaking the build cache (but not the layering) across the whole stack any time any script changes.

eitsupi (Member, Author) commented Jan 24, 2024

My suggestion is to copy the whole scripts directory at the very end of each Dockerfile.

COPY scripts/install_R_source.sh /rocker_scripts/install_R_source.sh
RUN /rocker_scripts/install_R_source.sh
ENV CRAN=https://p3m.dev/cran/__linux__/jammy/latest
ENV LANG=en_US.UTF-8
COPY scripts /rocker_scripts

Since this is only done in r-ver for now, rocker/r-ver can already be rebuilt in minutes on CI when unrelated scripts are updated and the base image has not changed.

Like in https://github.com/rocker-org/rocker-versioned2/actions/runs/7284214666/job/19849276138#step:7:8

docker buildx bake -f bakefiles/4.3.2.docker-bake.json --set=*.labels.org.opencontainers.image.revision=28085cc0b73cb4155574c16076cb60805ef149d2 --push r-ver
#0 building with "builder-33f74dfc-6add-4424-bec8-36237fe932dc" instance using docker-container driver

#1 [internal] load build definition from r-ver_4.3.2.Dockerfile
#1 transferring dockerfile: 681B done
#1 DONE 0.0s

#2 [auth] library/ubuntu:pull token for registry-1.docker.io
#2 DONE 0.0s

#3 [linux/arm64 internal] load metadata for docker.io/library/ubuntu:jammy
#3 DONE 0.6s

#4 [linux/amd64 internal] load metadata for docker.io/library/ubuntu:jammy
#4 DONE 0.6s

#5 [internal] load .dockerignore
#5 transferring context: 2B done
#5 DONE 0.0s

#6 [internal] load build context
#6 DONE 0.0s

#7 [linux/arm64 1/5] FROM docker.io/library/ubuntu:jammy@sha256:6042500cf4b44023ea1894effe7890666b0c5c7871ed83a97c36c76ae560bb9b
#7 resolve docker.io/library/ubuntu:jammy@sha256:6042500cf4b44023ea1894effe7890666b0c5c7871ed83a97c36c76ae560bb9b done
#7 DONE 0.0s

#8 [auth] rocker/r-ver:pull token for registry-1.docker.io
#8 DONE 0.0s

#9 importing cache manifest from docker.io/rocker/r-ver:4.3.2
#9 inferred cache manifest type: application/vnd.oci.image.index.v1+json done
#9 DONE 1.6s

#10 [linux/amd64 1/5] FROM docker.io/library/ubuntu:jammy@sha256:6042500cf4b44023ea1894effe7890666b0c5c7871ed83a97c36c76ae560bb9b
#10 resolve docker.io/library/ubuntu:jammy@sha256:6042500cf4b44023ea1894effe7890666b0c5c7871ed83a97c36c76ae560bb9b 0.0s done
#10 DONE 0.0s

#6 [internal] load build context
#6 transferring context: 83.97kB done
#6 DONE 0.0s

#11 [linux/amd64 2/5] COPY scripts/install_R_source.sh /rocker_scripts/install_R_source.sh
#11 CACHED

#12 [linux/arm64 2/5] COPY scripts/install_R_source.sh /rocker_scripts/install_R_source.sh
#12 CACHED

#13 [linux/amd64 3/5] RUN /rocker_scripts/install_R_source.sh
#13 sha256:86ec3535ca44847c05f338287f046b7c16708aacaa88518d0676d3d02d8c4515 1.81kB / 1.81kB 0.1s done
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 15.73MB / 269.42MB 0.2s
#13 sha256:a486411936734b0d1d201c8a0ed8e9d449a64d5033fdc33411ec95bc26460efb 19.92MB / 29.55MB 0.2s
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 45.61MB / 269.42MB 0.5s
#13 sha256:a486411936734b0d1d201c8a0ed8e9d449a64d5033fdc33411ec95bc26460efb 22.02MB / 29.55MB 0.3s
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 70.25MB / 269.42MB 0.6s
#13 sha256:a486411936734b0d1d201c8a0ed8e9d449a64d5033fdc33411ec95bc26460efb 29.55MB / 29.55MB 0.5s
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 93.32MB / 269.42MB 0.8s
#13 sha256:a486411936734b0d1d201c8a0ed8e9d449a64d5033fdc33411ec95bc26460efb 29.55MB / 29.55MB 0.9s done
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 116.39MB / 269.42MB 0.9s
#13 extracting sha256:a486411936734b0d1d201c8a0ed8e9d449a64d5033fdc33411ec95bc26460efb
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 136.31MB / 269.42MB 1.1s
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 161.48MB / 269.42MB 1.2s
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 177.21MB / 269.42MB 1.4s
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 195.04MB / 269.42MB 1.5s
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 218.10MB / 269.42MB 1.8s
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 242.22MB / 269.42MB 2.0s
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 265.29MB / 269.42MB 2.1s
#13 sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 269.42MB / 269.42MB 5.6s done
#13 ...

#14 [linux/arm64 3/5] RUN /rocker_scripts/install_R_source.sh
#14 sha256:dd44bae4bd2edd039b1f99bd72a2080e77b2fec1a0e101e397d9a8c550313b5c 260.90MB / 260.90MB 5.1s done
#14 sha256:005e2837585d0b391170fd9faf2e0c279d64ba0eb011cda8dedf28cb5839861e 27.36MB / 27.36MB 0.5s done
#14 extracting sha256:005e2837585d0b391170fd9faf2e0c279d64ba0eb011cda8dedf28cb5839861e 4.9s done
#14 DONE 5.8s

#13 [linux/amd64 3/5] RUN /rocker_scripts/install_R_source.sh
#13 extracting sha256:a486411936734b0d1d201c8a0ed8e9d449a64d5033fdc33411ec95bc26460efb 5.0s done
#13 extracting sha256:86ec3535ca44847c05f338287f046b7c16708aacaa88518d0676d3d02d8c4515 done
#13 extracting sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191
#13 ...

#14 [linux/arm64 3/5] RUN /rocker_scripts/install_R_source.sh
#14 extracting sha256:86ec3535ca44847c05f338287f046b7c16708aacaa88518d0676d3d02d8c4515 0.0s done
#14 extracting sha256:dd44bae4bd2edd039b1f99bd72a2080e77b2fec1a0e101e397d9a8c550313b5c 6.2s done
#14 DONE 12.0s

#15 [linux/arm64 4/5] COPY scripts /rocker_scripts
#15 ...

#13 [linux/amd64 3/5] RUN /rocker_scripts/install_R_source.sh
#13 extracting sha256:b6ae152ad9b011b2d37cb2fde1e3e440db035cc284969301980c02c08da33191 6.3s done
#13 DONE 12.2s

#16 [linux/amd64 4/5] COPY scripts /rocker_scripts
#16 DONE 3.3s

#15 [linux/arm64 4/5] COPY scripts /rocker_scripts
#15 DONE 3.4s

cboettig (Member) commented:

Ah, I follow, yes: with the COPY at the end of each Dockerfile, all the images get updated copies of all the scripts even when other layers come from cache. I hadn't followed that part before. I think that nicely avoids the images getting out of sync in which scripts they contain due to caching.

If the images inherit from each other linearly (the current use of FROM), I guess that causes the build cache to break. But with an identical FROM and repeated lines, buildx can leverage the cache. That's really different, but I like it! In a way it is nice that the Dockerfiles are more self-contained; a user looking at the Dockerfile for verse doesn't have to trace the inheritance chain either.

Ok, I'm convinced! Sorry I was so slow on this. buildx is more of an adjustment to my mental model than I anticipated!
