Where did the built multi-platform image go? #166
@tonistiigi For some reason I don't want to push the image to the registry. Is there any other way?
There are other outputs as well: https://github.com/docker/buildx#-o---outputpath-typetypekeyvalue. I pointed you to the registry because you are building a multi-platform image, so I assumed you need to distribute it to multiple machines.
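For example (a sketch, assuming a Dockerfile in the current directory; `./out` and `out.tar` are placeholder destinations):

```shell
# Export the build result as a plain filesystem instead of an image;
# with multiple platforms, buildx writes one subdirectory per platform.
docker buildx build --platform linux/amd64,linux/arm64 \
  --output type=local,dest=./out .

# Or pack the same filesystem output into a tarball.
docker buildx build --platform linux/amd64,linux/arm64 \
  --output type=tar,dest=out.tar .
```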
@tonistiigi I tried to use …
What way do you wish to access the image?
I want to be able to access the multi-platform images I built directly in the local Docker daemon.
Docker does not support multi-platform images locally atm. A local image extracted into docker can only be for a single platform, the one the current node runs on.
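In practice that means loading one platform at a time into the local image store, e.g. (a sketch; `myimage` is a placeholder name):

```shell
# --load (shorthand for --output=type=docker) imports the result into
# the local "docker images" list, but only for a single platform.
docker buildx build --platform linux/amd64 -t myimage:test --load .
docker run --rm myimage:test uname -m   # prints x86_64 on an amd64 host
```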
No, that would be …
I use …
Post the full commands of what you are running.
Hi @tonistiigi, I'm having the same issue, and may be trying to solve the same problem as @zhanghongtong. Our goal is to export each of the separately built images locally to the docker daemon, and validate them locally (using https://github.com/multiarch/qemu-user-static and various test cases) before pushing them to the registry. With …
Is this possible at all without this kind of approach (given our Dockerfile is under a subdirectory named $VERSION): … allowing access to each separate platform image as …? Building all of the images at once, and bringing them over one at a time for testing, would be acceptable too.
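A minimal sketch of the loop being described, assuming qemu/binfmt emulation is already set up; `IMAGE` is a placeholder name, and the Dockerfile lives under the $VERSION subdirectory as stated:

```shell
IMAGE=myorg/myimage      # placeholder image name
VERSION=1.0              # subdirectory that holds the Dockerfile

for ARCH in amd64 arm64; do
  # Build a single platform and load it into the local daemon.
  docker buildx build --platform "linux/$ARCH" \
    -t "$IMAGE:$VERSION-$ARCH" --load "$VERSION"
  # Smoke-test the loaded image under emulation.
  docker run --rm "$IMAGE:$VERSION-$ARCH" uname -m
done
```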
What would be the approach for pushing the results of the … Or is there some other solution using intermediate folders or archives?
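One archive-based route (a sketch, not taken from the thread itself): load a single-platform build, then move it between machines with `docker save`/`docker load`:

```shell
# Build one platform and import it into the local daemon.
docker buildx build --platform linux/arm64 -t myimage:arm64 --load .

# Export it as a tar archive that can be copied anywhere.
docker save myimage:arm64 -o myimage-arm64.tar

# On the target machine, import the archive back into docker.
docker load -i myimage-arm64.tar
```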
You mean …?
@DannyBoyKN have you tried just tagging your image as if you were going to push to your local repo? E.g. … This is how docker knows the host. Admittedly, I tried to do this for a multi-arch image I'm trying to build, but I'm not getting far enough to push yet. Good luck.
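Something like this (a sketch; `localhost:5000` assumes a registry is already listening there, and `myimage` is a placeholder):

```shell
# The registry host/port is part of the tag; that is how docker
# knows where "push" should send the image.
docker tag myimage:latest localhost:5000/myimage:latest
docker push localhost:5000/myimage:latest
```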
Honestly, I don't remember if I tried this; I think I did... Unfortunately, I'm stuck, too! I'll try further...
Well, just retried - I was sure I did it already - with this Dockerfile: …
and then …
the error is …
If you are pushing to a registry on localhost, note that the push happens from the BuildKit container, so the registry name needs to resolve correctly from there (you have to provide the correct DNS information or network access).
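One way to deal with that, based on the workaround mentioned later in this thread, is to give the BuildKit container host networking so the registry address resolves as it does on the host (a sketch; builder and image names are placeholders):

```shell
# Create a builder whose BuildKit container shares the host network.
docker buildx create --name hostnet --driver docker-container \
  --driver-opt network=host --use

# Now "localhost:5000" inside the builder is the host's port 5000.
docker buildx build --platform linux/amd64,linux/arm64 \
  -t localhost:5000/myimage:latest --push .
```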
I also need to test my images and push them later. …
It works well ❤️
Using … That's interesting: tagging, pushing and running works with

```Dockerfile
FROM gcc:4.9
RUN uname -m
```
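Presumably the workflow looked roughly like this (a reconstruction, assuming a local registry on localhost:5000, qemu emulation, and that the base image provides both platforms):

```shell
# Build and push both variants of the two-line Dockerfile above.
docker buildx build --platform linux/amd64,linux/arm/v7 \
  -t localhost:5000/gcc-test --push .

# Run each variant; uname reports the (emulated) architecture.
docker run --rm --platform linux/amd64 localhost:5000/gcc-test uname -m   # x86_64
docker run --rm --platform linux/arm/v7 localhost:5000/gcc-test uname -m  # armv7l
```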
@DannyBoyKN were you able to figure this out? I have the same problem: a multi-platform build using buildx, but I am not able to use the --push flag because I am trying to push to a private Nexus registry. This is my command: …
Unfortunately not. I have not had time so far to dig into how to provide the correct DNS information, as @tonistiigi pointed out above.
Why is this issue closed? How can you do a multi-arch build with buildx now, and save the image, without pushing it to any registry? |
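One answer that works without any registry (a sketch; skopeo is just one example of a tool that could push the archive later):

```shell
# Save the full multi-platform result as an OCI image layout archive.
docker buildx build --platform linux/amd64,linux/arm64 \
  --output type=oci,dest=myimage-oci.tar .

# Later, push the archive with an external tool, for example:
# skopeo copy --all oci-archive:myimage-oci.tar docker://registry.example.com/myimage:latest
```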
There is some complexity with buildx tagging for multiarch builds: docker/buildx#166. The upshot of it is that we have to invoke `docker buildx build` three times: once to build both archs, another time to tag just the current arch (we use this tag in the test function invoked later in the build script), and finally, for pushing to ECR, again with the `--push` argument. Fortunately the docker layer cache should ensure the 2nd and 3rd builds are rather quick.
So what actually IS it doing if it's not being pushed and not being stored? Wouldn't it be better to error out if the result is to be discarded upon completion? Or at least inform the user: "Hey, I threw the build artifact away, google the correct way to do this"?
So, my 5 cents here... Sometimes it makes sense to push later, for various reasons: …
Just in general, for any build/CI tool out there, building should be a distinct step from deploying. It's nice if the two can be combined, but that should be the optional part; it should not be that splitting them is the optional one. Maybe I have a different background, but this seems to be the general expectation for such tools, so breaking with it feels inconsistent and violates the principle of least surprise (I might be stretching that a bit, but yes, it surprised me). Gladly take this with a grain of salt; it's all just my opinion :) Edit: that is, if storing the image locally is hard to achieve, then an error/warning that the result will be unusable is definitely better than nothing. So maybe start with that if it's still unclear on your side whether you will offer local storage?
You can try the multi-platform load with https://docs.docker.com/desktop/containerd/
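With the containerd image store enabled (a Docker Desktop setting; on Docker Engine it can be turned on via daemon.json), `--load` can keep all platforms locally. A sketch, with `myimage` as a placeholder:

```shell
# /etc/docker/daemon.json (Docker Engine) would contain:
#   { "features": { "containerd-snapshotter": true } }

# With the containerd store active, --load keeps every platform.
docker buildx build --platform linux/amd64,linux/arm64 \
  -t myimage:latest --load .

# Run a non-native platform under emulation.
docker run --rm --platform linux/arm64 myimage:latest uname -m  # aarch64
```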
I'm a bit surprised by this question. The older images didn't need to deal with multi-architecture. Now that we've moved in that direction, we want to confirm that the images we've built correctly support multiple architectures before pushing them up. (And I'm sure there are a lot of other reasons.)
Doesn't make a difference after I enabled the containerd beta feature: …
What is the default output? I forgot to add …
If anyone finds this, hope this helps... We're creating multiple multi-arch images using … The key for us was to create a buildx builder using the `network=host` driver option.
Then start a local registry.
As long as the images are tagged with the local registry's address, they can be pushed there and pulled back for testing.
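Putting the pieces together, the described workflow is roughly the following (a sketch; names and ports are placeholders, and some setups may additionally need BuildKit configured to allow the plain-HTTP localhost registry):

```shell
# Builder whose BuildKit container can reach the host's localhost.
docker buildx create --name local --driver docker-container \
  --driver-opt network=host --use

# Throwaway local registry.
docker run -d -p 5000:5000 --name registry registry:2

# Build all platforms and push them to the local registry.
docker buildx build --platform linux/amd64,linux/arm64 \
  -t localhost:5000/myimage:latest --push .

# Pull back the variant matching the current host for testing.
docker pull localhost:5000/myimage:latest
```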
…sing BuildKit (#20154) Currently, the `publish` goal doesn't work with docker images when buildkit is enabled, as by [default buildkit doesn't save the build output locally](docker/buildx#166), and `publish` expects that the images were saved. This PR adds support for setting the output type, and defaults it to `docker`, which is the legacy docker build behavior, i.e. saves to the local image store. However, we only want to set that when buildkit is enabled. I thought it better to add an explicit option for that at the subsystem level; this allows for validation of buildkit-only options. This eliminates the need to set `DOCKER_BUILDKIT=1` in env vars - I need to update the docs on that actually. I have validated that with this change, docker images can be published to a registry. Co-authored-by: Rhys Madigan <rhys.madigan@accenture.com>
One of the worst parts of this bug is that it all works locally, because buildx seems to work differently on Mac, but then fails remotely on my Linux servers. @tonistiigi is it really correct that this bug is closed?
I really like buildx and I want to use it in my project, but I have an issue. I executed the command

```shell
docker buildx build --platform=linux/amd64,linux/386 -f . --output=type=image
```

and it printed the following information: … But I can't find my image. Where did it go?