Fix caching for multi-step builds. #441
Conversation
img, err := layerCache.RetrieveLayer(ck)
if err != nil {
	logrus.Infof("No cached layer found for cmd %s", command.String())
	break
Sorry if I'm missing something, but won't the same problem from the issue happen here?
For this Dockerfile:
WORKDIR /dir
RUN something
we won't find the cached layer for WORKDIR so it'll break and never check the cache for RUN?
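To make the concern concrete, here is a toy sketch of the difference between breaking out of the cache probe at the first miss and skipping commands that never produce cached layers. The names are hypothetical and this is not the kaniko code under review:

```go
// Toy illustration of the concern above; names are made up for this sketch
// and are not taken from the kaniko source.
package main

import "fmt"

type step struct {
	text         string
	metadataOnly bool // e.g. WORKDIR only mutates the image config
}

func main() {
	dockerfile := []step{
		{text: "WORKDIR /dir", metadataOnly: true},
		{text: "RUN something", metadataOnly: false},
	}
	// Pretend only the RUN layer was pushed to the cache on a previous build.
	cached := map[string]bool{"RUN something": true}

	for _, s := range dockerfile {
		if s.metadataOnly {
			// Skipping metadata-only steps keeps probing the cache, so the
			// RUN step below still gets a chance to hit.
			continue
		}
		if !cached[s.text] {
			fmt.Println("cache miss, stop probing at:", s.text)
			break
		}
		fmt.Println("cache hit:", s.text)
	}
}
```

If the loop instead breaks unconditionally on the first command with no cached layer, the RUN step is never looked up, which is exactly the scenario the question describes.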
Left a question. Also, I think WORKDIR also needs to return false for MetadataOnly(), since a directory could be created.
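For context, a minimal sketch of what that suggestion would look like; the struct shape and field are assumptions, only the MetadataOnly() method name comes from this thread:

```go
// Hypothetical sketch; not the actual kaniko source.
package commands

// WorkdirCommand handles the WORKDIR Dockerfile instruction.
type WorkdirCommand struct {
	path string
}

// MetadataOnly reports whether the command only mutates the image config.
// Because WORKDIR may create the directory on disk, returning false makes
// the executor snapshot the filesystem so that change ends up in a real,
// cacheable layer rather than being lost.
func (w *WorkdirCommand) MetadataOnly() bool {
	return false
}
```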
Looks like the cache test failed in kokoro 😞
@@ -18,3 +18,5 @@
FROM gcr.io/google-appengine/debian9@sha256:1d6a9a6d106bd795098f60f4abb7083626354fa6735e81743c7f8cfca11259f0
RUN apt-get update && apt-get install -y make
COPY context/bar /context
Kind of lurking here, so apologies if it's a stupid suggestion, but it may help to add a WORKDIR command to this test Dockerfile, as it's something many Dockerfiles out in the wild will have.
Thanks! Good idea.
Hello @dlorenc - first of all, thanks so much for the quick fix ❤️ I've checked out this branch, built the executor docker image locally, and tried again with our template. I tried both with the original, and I still see that it pushes more stuff to the cache repository. To be clear, these are very small images (AWS ECR says "< 0.01 MiB"), but over time it can hit the default AWS ECR limit of 1,000 images per repository, especially if using one single cache repository per class of applications - unless kaniko has some kind of built-in cache cleanup mechanism which I'm not aware of.
Thanks - I do think this is a further optimization we can make, and I need to dig into where it's coming from still.
It looks like that was a bug with container-diff. I changed the Dockerfile to not trigger it.
> kaniko is a tool to build container images from a Dockerfile [...]
> [...]
> kaniko doesn't depend on a Docker daemon and executes each
> command within a Dockerfile completely in userspace.

Kaniko can cache docker layers by pushing these to a special docker repository and reusing them later on, which is ideal in the context of Concourse, where each task runs in an independent container. There is a bug in the kaniko caching mechanism at the moment, but there is a PR which seems to solve it (I tested it locally and it's now using the cached layers).

This task is using a resource which was already there; we'll see if this works (the resource code looks OK to me).

Resource: https://github.com/lxfontes/kaniko-resource
Kaniko: https://github.com/GoogleContainerTools/kaniko
Kaniko cache bugfix: GoogleContainerTools/kaniko#441
LGTM, just need to remove the -run from the tests and rerun kokoro :)
#397 broke caching for Dockerfiles that have more than a single command.
This change fixes that by properly "replaying" the Dockerfile and mutating the config when
calculating cache keys. Previously we were looking at the wrong cache key for each command
when there was more than one.
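To illustrate the "replaying" idea, here is a rough, self-contained sketch of per-command composite cache keys, where the key for each command folds in everything that came before it. The code is illustrative only and is not kaniko's actual implementation:

```go
// Hypothetical sketch of composite cache keys; not kaniko's actual API.
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	commands := []string{
		"FROM gcr.io/google-appengine/debian9",
		"RUN apt-get update && apt-get install -y make",
		"COPY context/bar /context",
	}

	// "Replay" the Dockerfile: fold each command into the running key so the
	// key for command N reflects every command before it, not just command N.
	key := ""
	for _, cmd := range commands {
		sum := sha256.Sum256([]byte(key + cmd))
		key = fmt.Sprintf("%x", sum)
		fmt.Printf("%-50s -> %s\n", cmd, key[:12])
	}
}
```

With keys built this way, changing an earlier command also changes the keys of every later command, which matches the invalidation behavior of Docker's own layer cache.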
I think this should fix #410