
Where did the built multi-platform image go? #166

Closed
Rory-Z opened this issue Oct 21, 2019 · 41 comments
Labels
kind/question Further information is requested

Comments


Rory-Z commented Oct 21, 2019

I really like buildx and want to use it in my project, but I have an issue.
I executed the `docker buildx build --platform=linux/amd64,linux/386 -f . --output=type=image` command and got the following output.

 => exporting to image                                                                                                                                                                                          0.1s
 => => exporting layers                                                                                                                                                                                         0.0s
 => => exporting manifest sha256:f70f46db5ab5126060072198f9fe4056240cf5ab9f0819a60bc141501d5b1198                                                                                                               0.0s
 => => exporting config sha256:af6012ceb069e31c852bb3e509eaab5cfdc1fca82d336e1a014cc7672989bcf6                                                                                                                 0.0s
 => => exporting manifest sha256:b71684386f12acf835cf00fe1c1de7be104535d6debca08cf7789430c1d53456                                                                                                               0.0s
 => => exporting config sha256:0847da7ba34f27312e9e16eb015a309b290322c5d99ee64d56a172a75c906423                                                                                                                 0.0s
 => => exporting manifest list sha256:4eb73fed7ba678c004b851cefcf2c8d9e5b60ce8bfceb3df09f32c08fbdd0296                                                                                                          0.0s

But I can't find my image. Where did it go?

@tonistiigi (Member)

Add `push=true` to the output, or use `--output type=registry`, to push the image to a registry during the build so you can access it.
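For illustration, a minimal sketch of both spellings; the image name and platform list are placeholders, and the commands are only printed here rather than executed:

```shell
# Placeholder values, not from the thread
TAG="registry.example.com/app:latest"
PLATFORMS="linux/amd64,linux/386"

# --push is shorthand for --output type=registry
echo docker buildx build --platform "$PLATFORMS" -t "$TAG" --push .

# Equivalent explicit form of the same output
echo docker buildx build --platform "$PLATFORMS" -t "$TAG" --output type=registry .
```

Either form makes the manifest list and all per-platform images retrievable afterwards by pulling the tag from the registry.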

@tonistiigi tonistiigi added the kind/question Further information is requested label Oct 21, 2019

Rory-Z commented Oct 22, 2019

@tonistiigi For some reason I don't want to push the image to the registry. Is there any other way?

@tonistiigi (Member)

There are other outputs as well: https://github.com/docker/buildx#-o---outputpath-typetypekeyvalue. I pointed you to registry because you are building a multi-platform image, so I assumed you need to distribute it to multiple machines.


Rory-Z commented Oct 22, 2019

@tonistiigi I tried to use `--output=tar`, but it produced a directory tree instead of a tarball. What should I do?

$ tree tar -L 2
tar
├── linux_386
│   ├── bin
│   ├── dev
│   ├── etc
│   ├── home
│   ├── lib
│   ├── media
│   ├── mnt
│   ├── opt
│   ├── proc
│   ├── root
│   ├── run
│   ├── sbin
│   ├── srv
│   ├── sys
│   ├── tmp
│   ├── usr
│   └── var
└── linux_amd64
    ├── bin
    ├── dev
    ├── etc
    ├── home
    ├── lib
    ├── media
    ├── mnt
    ├── opt
    ├── proc
    ├── root
    ├── run
    ├── sbin
    ├── srv
    ├── sys
    ├── tmp
    ├── usr
    └── var

@tonistiigi (Member)

What way do you wish to access the image?


Rory-Z commented Oct 22, 2019

I want to be able to see the built multi-platform images directly in `docker images`. If that's not possible, I want to be able to export the image to a file, similar to what `docker save` does. I guess `--output=tar` is equivalent to `docker build && docker save`, is that the case?

@tonistiigi (Member)

Docker does not support multi-platform images locally at the moment. A local image loaded into Docker can only be for the single platform the current node runs on. `--output type=oci` gives you an OCI transport tarball with the layers for all sub-images.
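A sketch of that export; the `dest=` file name is an assumption added so the result lands in a file (without it, the tarball is streamed to stdout), and the command is printed rather than executed:

```shell
# Placeholder output file name
DEST="app-oci.tar"

# Export the whole multi-platform build as one OCI-layout tarball
echo docker buildx build --platform linux/amd64,linux/386 \
  --output "type=oci,dest=$DEST" .

# The resulting tarball contains oci-layout, index.json, and blobs/sha256/...,
# with the manifest list referenced from index.json
```

Tools that understand the OCI layout (skopeo, for example) can then copy individual platform images back out of that tarball.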

@tonistiigi (Member)

I guess `--output=tar` is equivalent to `docker build && docker save`, is that the case?

No, that would be `--output type=docker` (with the limitations listed above)
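As a sketch of that equivalence for a single platform (the tar file name is an assumption, and the commands are printed rather than executed; with no `dest`, the `type=docker` output is loaded straight into the daemon):

```shell
# Placeholder archive name
OUT="app.tar"

# Roughly `docker build && docker save` in one step, single platform only
echo docker buildx build --platform linux/amd64 -t example/app:dev \
  --output "type=docker,dest=$OUT" .

# The saved archive can later be imported on any docker host with:
echo docker load -i "$OUT"
```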


Rory-Z commented Oct 22, 2019

I used `--output=oci`, but I don't see how it differs from `--output=tar`:

$ tree oci/ -L 2
oci/
├── linux_386
│   ├── bin
│   ├── dev
│   ├── etc
│   ├── home
│   ├── lib
│   ├── media
│   ├── mnt
│   ├── opt
│   ├── proc
│   ├── root
│   ├── run
│   ├── sbin
│   ├── srv
│   ├── sys
│   ├── tmp
│   ├── usr
│   └── var
└── linux_amd64
    ├── bin
    ├── dev
    ├── etc
    ├── home
    ├── lib
    ├── media
    ├── mnt
    ├── opt
    ├── proc
    ├── root
    ├── run
    ├── sbin
    ├── srv
    ├── sys
    ├── tmp
    ├── usr
    └── var

@tonistiigi (Member)

Post full commands of what you are running.


Rory-Z commented Oct 22, 2019

docker buildx build --platform=linux/amd64,linux/386 -t emqx/emqx:test -f deploy/docker/Dockerfile . --output=oci

@tonistiigi (Member)

`--output type=oci` https://github.com/docker/buildx#-o---outputpath-typetypekeyvalue
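The distinction matters: a bare `--output=oci` is parsed as a plain path, i.e. `type=local,dest=oci`, which is why the result was an extracted rootfs tree per platform. A sketch of both forms (only `dest=emqx.tar` is an assumption, the rest comes from the thread; commands are printed rather than executed):

```shell
# What --output=oci actually meant: a local-files export into a directory named "oci"
echo docker buildx build --output type=local,dest=oci .

# What was intended: a single OCI tarball covering both platforms
echo docker buildx build --platform=linux/amd64,linux/386 \
  -t emqx/emqx:test -f deploy/docker/Dockerfile \
  --output type=oci,dest=emqx.tar .
```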


wohali commented Dec 6, 2019

Hi @tonistiigi, I'm having the same issue, and may be trying to solve the same problem as @zhanghongtong.

Our goal is to export each of the separate built images locally to the docker daemon, and validate them locally (using https://github.com/multiarch/qemu-user-static and various test cases) before pushing them to the registry.

With --output type=oci I'm getting:

failed to solve: rpc error: code = Unknown desc = oci exporter cannot export named image

Is this possible at all without this kind of approach (given our Dockerfile is under a subdirectory named $VERSION):

for ARCH in amd64 arm64v8 ppc64le; do
  from="$(awk '$1 == toupper("FROM") { print $2 }' "$VERSION/Dockerfile")"
  docker pull "$ARCH/$from"
  docker tag "$ARCH/$from" "$from"
  docker build -t "apache/couchdb:$ARCH-$VERSION" "$VERSION"
done

allowing access to each separate platform image as apache/couchdb:$ARCH-$VERSION locally for validation, then assembling the manifest and pushing later?

Building all of the images at once, and bringing them over one at a time for testing would be acceptable, too.
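One buildx-native way to get that is a per-platform build exported to the local daemon; a sketch, with commands printed rather than executed, assuming the `$VERSION` directory layout from the snippet above and that `--load` works because only one platform is requested per build:

```shell
VERSION="3.0.0"   # placeholder version directory

for PLATFORM in linux/amd64 linux/arm64/v8 linux/ppc64le; do
  # Derive a per-platform tag, e.g. apache/couchdb:linux-arm64-v8-3.0.0
  TAG="apache/couchdb:$(echo "$PLATFORM" | tr '/' '-')-$VERSION"
  # --load (== --output type=docker) imports the single-platform image
  # into the local daemon, where it can be run under qemu for validation
  echo docker buildx build --platform "$PLATFORM" -t "$TAG" --load "$VERSION"
done
```

After validation, the separately pushed images can still be assembled into one manifest list with `docker manifest create` or a final multi-platform `--push` build.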

@DannyBoyKN

What would be the approach for pushing the results of `docker buildx build` (images + manifests) to a local registry, e.g. Nexus?
The --push argument seems to allow pushing only to Docker Hub.
I tried `--output=type=registry,ref=localhost:5000`, but that is not recognized and still requires authentication against Docker Hub:

failed to solve: rpc error: code = Unknown desc = server message: insufficient_scope: authorization failed

Or is there some other solution using intermediate folders or archives ?

@tonistiigi (Member)

localhost:5000 is not a valid image ref; it is probably interpreted as a Hub image.

@DannyBoyKN

You mean ref= exists for --output=type=registry? This is not documented.
I was thinking of adding the registry destination URL, similar to just giving the --push argument (shorthand for --output=type=registry, see documentation), which uses docker.io by default. localhost:5000 is the address of my local registry.
The image ref should be provided/added by the buildx build command.

@jl-massey

@DannyBoyKN have you tried just tagging your image as if you were going to push to your local repo? E.g.
docker buildx build -t localhost:5000/marchpkg:latest --platform linux/amd64,linux/ppc64le . --push

This is how docker knows the host. Admittedly, I'm trying to do the same for a multi-arch image of my own, but I'm not getting far enough to push yet. Good luck.

@DannyBoyKN

Honestly, I don't remember if I tried this; I think I did ...

Unfortunately, I'm stuck too!
The build process already fails during downloads because of DNS: buildx uses Google's servers (8.8.8.8 and 8.8.4.4) by default, which are not reachable behind my firewall. Setting the local DNS in the daemon.json file stops with 'connection refused'.

I'll try further ...

@DannyBoyKN

Well, I just retried (I was sure I had already tried this) with this Dockerfile:

FROM gcc:4.9
COPY main.c /usr/src/myapp/
WORKDIR /usr/src/myapp
RUN uname -m

and then

>$ docker buildx create --use --driver docker-container --name multiarch
...
>$  docker buildx build --platform linux/amd64,linux/arm64/v8,linux/arm/v7 --tag localhost:5000/multiarch:test --push  .

the error is

...
------
 > exporting to image:
------
failed to solve: rpc error: code = Unknown desc = failed to do request:
Head http://localhost:5000/v2/multiarch/blobs/sha256:39f6cc0761da5c1bc61d59c5cbe9188f22bc173d6f1038d6cccf1292f0b79594:
dial tcp localhost:5000: connect: connection refused


tonistiigi commented Feb 18, 2020

If you are pushing to localhost from a container driver, you need to use host networking for the container: https://github.com/docker/buildx#--driver-opt-options. Custom DNS can be set with the buildkitd config file.
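Putting both hints together, a sketch (the nameserver address and builder name are placeholders, and the `create` command is printed rather than run):

```shell
# Minimal buildkitd config with custom DNS for the build container;
# 10.0.0.2 is a placeholder for your internal nameserver
cat > buildkitd.toml <<'EOF'
[dns]
  nameservers = ["10.0.0.2"]
EOF

# docker-container builder with host networking (so a registry on
# localhost:5000 is reachable) and the config file above
echo docker buildx create --name mybuilder --driver docker-container \
  --driver-opt network=host --config buildkitd.toml --use
```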


barcus commented Feb 29, 2020

I also need to test my images and push them later:

for arch in amd64 arm64 arm  ; do 
    docker buildx build \
    --platform $arch \
    --output type=docker \
    --tag me/myimage:${version}-${arch} \
    $version/
done

It works well ❤️

@DannyBoyKN

@tonistiigi

Using --driver-opt network=host indeed works for pushing to localhost. But that's only for my local testing; the aim is to get the images pushed to our Nexus registry, and I still can't get the DNS configured.
With the "buildkitd config file", do you mean ~/.docker/config.json?
Whatever DNS is set there is correctly propagated into the container's /etc/resolv.conf. I tried several DNS settings with and without network=host and didn't succeed ... I don't have the exact error at hand at the moment ....

@barcus

That's interesting: tagging with localhost:5000/gcc-4.9:${arch}, pushing with --push, and running all worked:

FROM gcc:4.9
RUN uname -m
$ docker run --rm localhost:5000/gcc-4.9:arm uname -a
Unable to find image 'localhost:5000/gcc-4.9:arm' locally
arm: Pulling from gcc-4.9
e925dd4ffa2a: Pull complete 
c9bfbf7dfc78: Pull complete 
015138dd660d: Pull complete 
d88b2b5023e5: Pull complete 
4d0d77a38079: Pull complete 
996bfab2b29c: Pull complete 
d27243b445c7: Pull complete 
2f949e025be6: Pull complete 
d55a5da9fec4: Pull complete 
3976cacabfa7: Pull complete 
Digest: sha256:b8dcfe0a3bbf2dbcb49a5117d8dee8fd412da31663a8c9be745eb6909bebf4d2
Status: Downloaded newer image for localhost:5000/gcc-4.9:arm
Linux c07606921642 4.15.0-88-generic #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020 armv7l GNU/Linux


indraneelpatil commented Jun 18, 2020

@DannyBoyKN were you able to figure this out? I have the same problem: a multi-platform build using buildx, but I am not able to use the --push flag because I am trying to push to a private Nexus registry.

This is my command :
docker -D buildx build --platform linux/arm64,linux/amd64 -t private.repo.com/nav_2_0:multi_support_image --push .

@DannyBoyKN

Unfortunately not. I haven't had time so far to dig into how to provide the correct DNS information, as @tonistiigi pointed out above.

@ballerburg9005

Why is this issue closed? How can you do a multi-arch build with buildx now and save the image without pushing it to any registry?

triarius added a commit to buildkite/agent that referenced this issue Jan 10, 2023
There is some complexity with buildx tagging for multiarch builds: docker/buildx#166
The upshot of it is that we have to invoke `docker buildx build` three
times, once to build both archs and another time to tag just the current
arch. We use this tag in the test function invoked later in the build script.
Finally, for pushing to ECR, we need to invoke it again with the `--push` argument.

Fortunately the docker layer cache should ensure the 2nd and 3rd builds
are rather quick.
@shayneoneill

So what actually IS it doing if it's not being pushed and not being stored?

Wouldn't it be better to error out if the result is to be discarded upon completion? Or at least inform the user: "Hey, I threw the build artifact away, google the correct way to do this"?


MichaelVoelkel commented Jan 13, 2023

So, my five cents here...

Sometimes it makes sense to push later, for various reasons:

  • you could be without internet
  • you may want to create the image as a basis and add other images to the manifest later, before pushing
  • you may want to build first and decide later where to push; maybe you are just setting up your environment and are not ready yet, but it can still make sense to start a long-running build at that point

In general, for any build tool, building should be a distinct step from deploying. It's nice if the two can be combined, but that should be the optional part; it should not be that splitting them is optional. Maybe I have a different background, but this seems to be the general expectation for such tools, so breaking with it is inconsistent and violates the principle of least surprise (I might be stretching that a bit, but it did surprise me).

Gladly take this with a grain of salt; it's all just my opinion :)

Edit: That is, if storing the image locally is hard to achieve, then an error/warning that the result will be unusable is definitely better than nothing! So maybe go with that first if it's still unclear on your side whether you will offer local storage.

@tonistiigi (Member)

You can try the multi-platform load with https://docs.docker.com/desktop/containerd/
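On Docker Engine, that containerd image store is enabled via /etc/docker/daemon.json (a sketch of the documented flag; Docker Desktop exposes it as a settings toggle instead, and the daemon must be restarted afterwards):

```json
{
  "features": {
    "containerd-snapshotter": true
  }
}
```

With the containerd store active, `--load` is intended to be able to import a full manifest list rather than a single platform, though as later comments note, support was still incomplete at the time.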

matthewborden pushed a commit to buildkite/agent that referenced this issue Jan 16, 2023
@deeTEEcee

What way do you wish to access the image?

I'm a bit surprised by this question. Older images didn't need to deal with multiple architectures. Now that we've moved in that direction, we want to confirm that the images we've built correctly support multiple architectures before pushing them. (And I'm sure there are plenty of other reasons.)


tmm1 commented Apr 14, 2023

You can try the multi-platform load with https://docs.docker.com/desktop/containerd/

It doesn't make a difference after I enabled the containerd beta feature:

ERROR: docker exporter does not currently support exporting manifest lists

@lesomnus

What is the default output? I forgot to add --push.
In the meantime, the base image was updated, so the cache is no longer considered:
if I run docker build ..., the build starts from the base again, which takes over 12h.
How can I push the already-built image? And how can I list the images built by buildx if I didn't give --output?


lmarchione-r7 commented Nov 10, 2023

If anyone finds this, hope this helps...

We're creating multiple multi-arch images using docker buildx bake --file docker-bake.hcl in CI/CD and want to test them in the pipeline BEFORE we push them to ECR. As per this comment, we have multiple platforms, which isn't compatible with --load.

The key for us was to create a buildx builder using the docker-container driver (first try to see if one already exists).

buildx_builder=multiarch
docker buildx use $buildx_builder > /dev/null 2>&1 || docker buildx create --name $buildx_builder --driver docker-container --driver-opt network=host --use > /dev/null 2>&1

Then start a local registry.

docker run -d -p 5000:5000 --rm --name registry registry:2

As long as the images are tagged with the localhost:5000/name/repo:tag, it will work. We're using crane catalog localhost:5000 to verify the images.
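End to end, that workflow can be sketched as follows (the builder and image names are placeholders, and the commands are printed rather than executed):

```shell
REG="localhost:5000"

# 1. Throwaway local registry
echo docker run -d -p 5000:5000 --rm --name registry registry:2
# 2. Builder that can reach the registry on localhost
echo docker buildx create --name multiarch --driver docker-container \
  --driver-opt network=host --use
# 3. Multi-arch build pushed straight to the local registry
echo docker buildx build --platform linux/amd64,linux/arm64 \
  -t "$REG/myorg/app:test" --push .
# 4. Verify what landed there
echo crane catalog "$REG"
```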

kaos pushed a commit to pantsbuild/pants that referenced this issue Nov 13, 2023
…sing BuildKit (#20154)

Currently, the `publish` goal doesn't work with docker images when
buildkit is enabled, as by [default buildkit doesn't save the build
output locally](docker/buildx#166), and
`publish` expects that the images were saved.

This PR adds support for setting the output type, and defaults it
to `docker`, which is the legacy docker build behavior, i.e. saves to the
local image store.

However, we only want to set that when buildkit is enabled. I thought it
better to add an explicit option for that at the subsystem level; this
allows for validation of buildkit-only options.

This eliminates the need to set `DOCKER_BUILDKIT=1` in env vars - I need
to update the docs on that actually.

I have validated that with this change, docker images can be published
to a registry.

---------

Co-authored-by: Rhys Madigan <rhys.madigan@accenture.com>
WorkerPants pushed a commit to pantsbuild/pants that referenced this issue Nov 15, 2023
…sing BuildKit (#20154)
kaos pushed a commit to pantsbuild/pants that referenced this issue Nov 15, 2023
…sing BuildKit (Cherry-pick of #20154) (#20185)
@mjaggard

One of the worst parts of this bug is that it all works locally because buildx seems to behave differently on Mac, but then fails remotely on my Linux servers. @tonistiigi is it really correct that this bug is closed?
