
kaniko unexpectedly reuses layers from unrelated image built by same executor #3310

Open
avaika opened this issue Sep 6, 2024 · 1 comment

Comments


avaika commented Sep 6, 2024

Actual behavior
Kaniko reuses cached layers from a previous image built by the same executor.

To shed a bit more light: a Jenkins pipeline spins up a kaniko container and runs two sequential builds inside it: image_1 and image_2.
The image_1 build goes as expected.
The image_2 build appears to contain layers from image_1 until the very end of the build. Then the image_1 layers are dropped, and image_2 is published to the registry without them.
Both image_1 and image_2 run pip install gunicorn. As a result, pip does not install gunicorn for image_2, since it is already present in the filesystem; however, the layer containing gunicorn is not published to the registry.

Expected behavior
image_2 does NOT use layers from image_1

To Reproduce
Steps to reproduce the behavior:

  1. start kaniko container similar to how jenkins does it: docker run --name kaniko -d --entrypoint sh gcr.io/kaniko-project/executor:v1.23.2-debug -c 'while true; do sleep 100 ; done'
  2. copy the Dockerfiles inside: docker cp Dockerfile-1 kaniko:/ && docker cp Dockerfile-2 kaniko:/
  3. build image_1: docker exec -it kaniko /kaniko/executor --context /kaniko --dockerfile /Dockerfile-1 --target base --cache=false --destination debug:latest --no-push
  4. build image_2: docker exec -it kaniko /kaniko/executor --context /kaniko --dockerfile /Dockerfile-2 --target base --cache=false --destination debug:latest --no-push
  5. Observe that after the last RUN command, files from both image_1 and image_2 are present:
INFO[0002] Running: [/bin/sh -c ls -l /tmp]
total 0
-rw-r--r--    1 root     root             0 Sep  6 21:00 image-1-layer-1
-rw-r--r--    1 root     root             0 Sep  6 21:00 image-1-layer-2
-rw-r--r--    1 root     root             0 Sep  6 21:00 image-2-layer-1
-rw-r--r--    1 root     root             0 Sep  6 21:00 image-2-layer-2
INFO[0002] Taking snapshot of full filesystem...

The behaviour is the same whether or not the cache is enabled. If you publish the image to a registry, the layers containing the image-1 files are excluded from the pushed image.
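For convenience, the reproduction steps above can be collected into one script (a sketch using the same flags and paths; it assumes Dockerfile-1 and Dockerfile-2 are in the current directory):

```shell
#!/bin/sh
# Consolidated sketch of repro steps 1-5 above.
set -eu
command -v docker >/dev/null 2>&1 || { echo "docker not found; skipping"; exit 0; }

# 1. Long-lived kaniko container, similar to how Jenkins starts it.
docker run --name kaniko -d --entrypoint sh \
  gcr.io/kaniko-project/executor:v1.23.2-debug -c 'while true; do sleep 100; done'

# 2. Copy both Dockerfiles into the container.
docker cp Dockerfile-1 kaniko:/
docker cp Dockerfile-2 kaniko:/

# 3-4. Two sequential builds with the same executor container.
for n in 1 2; do
  docker exec kaniko /kaniko/executor \
    --context /kaniko --dockerfile "/Dockerfile-$n" \
    --target base --cache=false --destination debug:latest --no-push
done
# 5. The "ls -l /tmp" output of the second build shows files from both images.

docker rm -f kaniko
```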

Additional Information

  • Dockerfile for image_1:
FROM bash:5.2 as base
RUN ls -l /tmp
RUN touch /tmp/image-1-layer-1
RUN ls -l /tmp
RUN touch /tmp/image-1-layer-2
RUN ls -l /tmp
  • Dockerfile for image_2:
FROM bash:5.2 as base
RUN ls -l /tmp
RUN touch /tmp/image-2-layer-1
RUN ls -l /tmp
RUN touch /tmp/image-2-layer-2
RUN ls -l /tmp
  • Kaniko Image v1.23.2-debug

Triage Notes for the Maintainers

Description: Yes/No
  • Please check if this is a new feature you are proposing: No
  • Please check if the build works in docker but not in kaniko: Yes
  • Please check if this error is seen when you use the --cache flag: Yes
  • Please check if your dockerfile is a multistage dockerfile: No

PS. I didn't find anything in the documentation stating that using the very same executor for multiple builds is unsupported. As a workaround, I now use separate containers for building different images. However, this should either be stated in the documentation or fixed.
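The workaround can be sketched as running each build in its own short-lived kaniko container, so no filesystem state can leak between unrelated images (container names and the bind-mounted context path here are illustrative assumptions):

```shell
#!/bin/sh
# Workaround sketch: one fresh kaniko container per image.
set -eu
command -v docker >/dev/null 2>&1 || { echo "docker not found; skipping"; exit 0; }

for n in 1 2; do
  # --rm discards the container (and its filesystem) after each build.
  docker run --rm \
    -v "$PWD:/workspace" \
    gcr.io/kaniko-project/executor:v1.23.2 \
    --context /workspace \
    --dockerfile "/workspace/Dockerfile-$n" \
    --target base --cache=false \
    --destination debug:latest --no-push
done
```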

@avaika avaika changed the title kaniko incorrectly reuses layers from unrelated image built by same executor kaniko unexpectedly reuses layers from unrelated image built by same executor Sep 6, 2024

liiight commented Sep 12, 2024

Interesting, I too have come across some weird behaviour that seems to match your (very detailed!) explanation.
If this is the case, I wonder if adding the --cleanup flag would solve this issue.

Any chance you can try that in your example?
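The suggested experiment would be the original repro with --cleanup added to each executor invocation, so the executor wipes its filesystem at the end of every build. Whether this actually prevents the leak is the open question here, not a claim (the script assumes the long-lived container named kaniko from the repro steps is already running with both Dockerfiles copied in):

```shell
#!/bin/sh
# Sketch: re-run the two sequential builds, adding --cleanup.
set -eu
command -v docker >/dev/null 2>&1 || { echo "docker not found; skipping"; exit 0; }

for n in 1 2; do
  docker exec kaniko /kaniko/executor \
    --context /kaniko --dockerfile "/Dockerfile-$n" \
    --target base --cache=false \
    --destination debug:latest --no-push \
    --cleanup
done
```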
