
Default image output in buildx v0.10 cannot be pushed to Yandex Cloud Container Registry #1513

Open
prvshvnth opened this issue Jan 12, 2023 · 38 comments


@prvshvnth

prvshvnth commented Jan 12, 2023

exporting to image
exporting manifest sha256:633baadaa74008615f4fd0d19eba63a696a847c6ec3b7058a883204418b68192 0.0s done
exporting config sha256:77db7689ffdb022a9a9d03ccf73f86aa17f2c4ecd7103e6440c5495df7fc20f6 done
exporting attestation manifest sha256:4d8160c1d02c9276d0a44f813f9ffe7006754e7b160da44c716b956d43b97a53 0.0s done
exporting manifest list sha256:c16a5d9f9dabeaf910344ed6a032bc53841b88dc44a51869ff94204ae53488e done
pushing layers
pushing layers 14.4s done
pushing manifest for registry.url/namspace/image:bb8432e8cce2b213e21251732c7d3edcb7c58d3@sha256:c16a5d99dabeaf910344ed6a032bc53841b838dc44a51869ff94204ae53488e 0.1s done
ERROR: failed to push registry.url/namspace/image:bb8432e8cce2b213e21251732c7d3edcb7cb58d3: failed commit on ref "manifest-sha256:633baadaa74008615f4fd0d19eba63a696a847c6ec3b7058a883204418b68": unexpected status: 400 Bad Request
buildx failed with: ERROR: failed to solve: failed to push <registry_url>/<namespace>/<imagename>:341aa798b8365346c3e32b2024bb62d99652f4a6: failed commit on ref "manifest-sha256:d6d1f20cd9c061daec67cf71af16544bae42f4f1652bb0771d78c6ad8cc8b336": unexpected status: 400 Bad Request

Below are the steps:

  - name: Checkout repository
    uses: actions/checkout@v3
  - name: Set up Docker Buildx
    uses: docker/setup-buildx-action@v2
  - name: Login to DockerHub
    uses: docker/login-action@v2
    with:
      registry: ${{ env.DOCKER_REGISTRY }}
      username: ${{ secrets.DOCKER_USERNAME }}
      password: ${{ secrets.DOCKER_PASSWORD }}
  - name: Build and push
    uses: docker/build-push-action@v3
    with:
      context: .
      file: ./tools/path/Dockerfile
      push: true
      tags: ${{ env.DOCKER_REGISTRY }}/${{ env.DOCKER_NAMESPACE }}/${{ env.DOCKER_IMAGE }}:${{ env.GITHUB_SHA }}
      outputs: type=image,oci-mediatypes=true,push=true
@prvshvnth prvshvnth changed the title Unable to build and push images to registry: buildx failed with: ERROR unexpected status: 400 Bad Request Unable to build and push images to registry in GHActions: buildx failed with unexpected status: 400 Bad Request Jan 12, 2023
@jedevc
Collaborator

jedevc commented Jan 12, 2023

Since you've redacted the <registry_url>, we can't tell where this error comes from - the 400 is coming from the registry, but it's not clear why.

I assume it's a self-hosted/internal registry? What registry software is it using?

@langovoi

langovoi commented Jan 20, 2023

After GitHub Actions updated the Ubuntu image (actions/runner-images/pull/6942, which bumped buildx to v0.10.0), I get the same error.

If I pin the buildx version to v0.9.1 via the docker/setup-buildx-action@v2 action, it works well.

I use the cr.yandex container registry (Yandex Cloud Container Registry).
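For reference, a minimal sketch of the version pin described above, using the `version` input of docker/setup-buildx-action (input name taken from that action's documentation; the rest of the step is as in the OP's workflow):

```yaml
# Workaround sketch: pin buildx to v0.9.1 until the registry
# handles the new default attestation manifests.
- name: Set up Docker Buildx
  uses: docker/setup-buildx-action@v2
  with:
    version: v0.9.1
```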

@tapokshot

@langovoi I use the same cloud and got the same error, and after your response I finally fixed the problem. Thanks!

@jedevc jedevc changed the title Unable to build and push images to registry in GHActions: buildx failed with unexpected status: 400 Bad Request Default image output in buildx v0.10 cannot be pushed to Yandex Cloud Container Registry Jan 20, 2023
@pimvandenbroek

The issue doesn't seem to be limited to Yandex alone; I'm having the same issue with GCR. @langovoi's solution fixed it for me.

@jedevc
Collaborator

jedevc commented Jan 20, 2023

@pimvandenbroek do you have log messages from the push to GCR? We've been successfully pushing the new provenance attestations to GCR without too much of a problem 👀

@pimvandenbroek

The initial error we received was: "registry cache exporter requires ref". Unfortunately, I don't have any other logs anymore.
After applying the version pin, everything worked again.

@jedevc
Collaborator

jedevc commented Jan 20, 2023

@pimvandenbroek I think this is unrelated to this issue; moby/buildkit@c5242ba looks like the source of the error message, which was introduced in the newest BuildKit. I think this is a generic caching issue rather than a GCR-specific one; could you open a separate issue/discussion or share on #buildkit in the community Slack?

@pimvandenbroek

Sure, I will check that out.
The reason I replied here is that up until yesterday it was working fine. Then the GitHub Actions runner was updated, which included buildx v0.10.0, and it got broken.
When pinning the version back to 0.9.1, it started working again. BuildKit wasn't updated as far as I know.
The BuildKit error message may or may not be related, but the fact that reverting to buildx 0.9.1 fixed the issue tells me that I am in the right place.

@crazy-max
Member

crazy-max commented Jan 20, 2023

@pimvandenbroek Doesn't just setting provenance: false with the build-push action (or --provenance false with the buildx command) solve the issue, instead of rolling back to 0.9.1?

@pimvandenbroek

pimvandenbroek commented Jan 20, 2023

@crazy-max Not sure, I will have to test.
You mean the following right:

    -  name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
            provenance: false

@crazy-max
Member

crazy-max commented Jan 20, 2023

@pimvandenbroek This is a build option so in build-push-action:

      -
        name: Build and push
        uses: docker/build-push-action@v3
        with:
          push: true
          provenance: false
          tags: user/app:latest

Or if you invoke buildx directly then docker buildx build --provenance false ....

Not sure, I will have to test.

Please do, that would help. Thanks!

@pimvandenbroek

Clear.
I'll try that block, as the buildx command is generated by code and I currently don't have access to that part. I'll keep you posted.

@langovoi

@crazy-max I tested cr.yandex with provenance: false in docker/build-push-action@v3 and removed pin to v0.9.1 in docker/setup-buildx-action@v2.

It works.

@pimvandenbroek

@crazy-max Tested it; however, apparently the environment where the buildx command is generated is using an older version of BuildKit, and we received the following error:
buildx failed with: ERROR: attestations are not supported by the current buildkitd
I'll have to dive into that part early next week.

@crazy-max
Member

crazy-max commented Jan 20, 2023

@crazy-max tested it, however apparently, the environment where the buildx command is generated is using an older version of buildkit and we received the following error: buildx failed with: ERROR: attestations are not supported by the current buildkitd I'll have to dive into that part early next week

Interesting. If you could add docker buildx inspect before the build command in your script, we could see why it's not supported. Maybe it's using the BuildKit backed by the Docker Engine (the default).

@violen

violen commented Jan 24, 2023

Same Issue here:
buildx 0.9.1 works, but the update by Docker Desktop to buildx 0.10.0 breaks pushing to the Sonatype Nexus Docker registry (v3.38.1-01).

Error shown in Nexus is:

org.sonatype.nexus.repository.docker.internal.V2Handlers - Error: PUT /v2/imagename/manifests/latest
java.lang.NullPointerException: Cannot get property 'digest' on null object

buildx 0.10.0 seems to break the manifest generation.

Any idea how to downgrade to buildx 0.9.1?

The suggestion of disabling provenance via docker buildx build --provenance false also seems to work with Sonatype Nexus.

Edit: @crazy-max here is the log output as requested.
buildx-out.log

I will try to get a nexus oss update to version 3.40.0+ as mentioned in the ticket you have linked.
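Regarding the downgrade question above: a rough sketch of replacing the standalone CLI plugin by hand. The release-asset name and paths are assumptions based on the docker/buildx GitHub releases page and a Linux Docker CLI setup; Docker Desktop may re-update the plugin on its own.

```shell
# Replace the buildx CLI plugin with the v0.9.1 release binary
# (asset name assumed from the docker/buildx releases page).
mkdir -p ~/.docker/cli-plugins
curl -fsSL -o ~/.docker/cli-plugins/docker-buildx \
  https://github.com/docker/buildx/releases/download/v0.9.1/buildx-v0.9.1.linux-amd64
chmod +x ~/.docker/cli-plugins/docker-buildx
docker buildx version   # should now report v0.9.1
```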

@crazy-max
Member

crazy-max commented Jan 24, 2023

@violen Can you post the logs of the build command please?

If you can also enable debug logs and post BuildKit logs that would be handy:

      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2
        with:
          buildkitd-flags: --debug

It seems Nexus has supported the OCI format since 3.23.0: https://issues.sonatype.org/browse/NEXUS-21087.

This also looks related to an issue with docker-app, which builds and pushes OCI-compliant images: https://github.com/docker/app#sharing-the-app-on-the-hub.

My best guess is that Nexus does not fully support the OCI spec (the Index spec in particular).

Edit: Yes, I can confirm Nexus does not fully support the OCI spec.

phdelodder pushed a commit to deconz-community/deconz-docker that referenced this issue Jan 25, 2023
Workaround to opt-out of unsupported manifest until it's supported. Fix is from: docker/buildx#1509 (comment)

Watchtower autoupdater should work again: containrrr/watchtower#1529
devzwf added a commit to devzwf/pihole-dot-doh that referenced this issue Jan 27, 2023
@pimvandenbroek

@crazy-max Finally had the time to do a bit of a deep dive, and got the following result.
docker buildx inspect returned the following:

pablo - Name:          pablo
pablo - Driver:        docker-container
pablo - Last Activity: 2023-01-31 14:02:37 +0000 UTC
pablo - Nodes:
pablo - Name:      pablo
pablo - Endpoint:  unix:///var/run/docker.sock
pablo - Status:    running
pablo - Buildkit:  v0.11.2
pablo - Platforms: linux/amd64, linux/amd64/v2, linux/amd64/v3, linux/amd64/v4, linux/386

Buildx / Buildkit:

BuildKit version
  builder-0f0b68c2-ebf1-440e-85e5-b8e5a95380060: moby/buildkit:buildx-stable-1 => buildkitd github.com/moby/buildkit v0.11.2 944939944ca4cc58a11ace4af714083cfcd9a3c7

Buildx version
  /usr/bin/docker buildx version
  github.com/docker/buildx 0.10.0+azure-1 876462897612d36679153c3414f7689626251501

We've added --provenance false to the buildx build command like so:
"docker" "buildx" "build" "--pull" "--output" "type=image,name=eu.gcr.io/REPO/IMAGE:TAG,push=true" "--provenance=false" "--build-arg" "COMMIT_HASH=c9253bd3a363134d9c45593e9d0ee2826885d552" "--cache-from" "type=registry,ref=eu.gcr.io/REPO/IMAGE:latest" "--cache-to" "type=registry,ref=eu.gcr.io/REPO/IMAGE:latest,mode=max" "-f" "k8s/Dockerfile" "."

However, unfortunately we are currently getting:
ERROR: failed to solve: error writing manifest blob: failed commit on ref "sha256:38d4502df1efa339d0b44820d3b561af63ea43fa353faad8229a0e81428a00e3": unexpected status: 400 Bad Request

Maybe it is really obvious, but I'm unable to find the solution.

@crazy-max
Member

crazy-max commented Jan 31, 2023

@pimvandenbroek I don't think this error is related to the image being pushed but the cache. Does it work if you remove

"--cache-to" "type=registry,ref=eu.gcr.io/REPO/IMAGE:latest,mode=max"?

@pimvandenbroek

@crazy-max Yes, without cache-to everything is running fine.

@pimvandenbroek

Does this mean that --cache-to isn't working in this version? Or do you perhaps know of a workaround so we can still use it?

@crazy-max
Member

Looks related to issues with Quay moby/buildkit#1440 and Harbor moby/buildkit#2479 (comment)

If you could post the BuildKit logs too, that would be handy, but my best guess is that Yandex does not properly support OCI media types. Can you try with:

"--cache-to" "type=registry,ref=eu.gcr.io/REPO/IMAGE:latest,mode=max,oci-mediatype=false"

@pimvandenbroek

@crazy-max Unfortunately, that didn't work either:

"docker" "buildx" "build" "--pull" "--output" "type=image,name=eu.gcr.io/REPO/IMAGE:tag,push=true" "--provenance" "false" "--build-arg" "COMMIT_HASH=feece02abfb339131ac1821ccad437862d293e3b" "--cache-from" "type=registry,ref=eu.gcr.io/REPO/IMAGE:latest" "--cache-to" "type=registry,ref=eu.gcr.io/REPO/IMAGE:latest,mode=min,oci-mediatype=false" "-f" "k8s/Dockerfile" "k8s"
#1 [internal] booting buildkit
#1 pulling image moby/buildkit:buildx-stable-1
#1 pulling image moby/buildkit:buildx-stable-1 0.5s done
#1 creating container buildx_buildkit_tag
#1 creating container buildx_buildkit_tag 0.5s done
#1 DONE 1.0s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 29B
#2 transferring dockerfile: 65B done
#2 DONE 0.0s
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s
#4 [internal] load metadata for docker.io/library/alpine:latest
#4 ...
#5 [auth] library/alpine:pull token for registry-1.docker.io
#5 DONE 0.0s
#4 [internal] load metadata for docker.io/library/alpine:latest
#4 DONE 0.8s
#6 [1/2] FROM docker.io/library/alpine@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a
#6 resolve docker.io/library/alpine@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a done
#6 DONE 0.0s
#7 importing cache manifest from eu.gcr.io/REPO/IMAGE:latest
#7 ...
#8 [auth] REPO/IMAGE:pull token for eu.gcr.io
#8 DONE 0.0s
#7 importing cache manifest from eu.gcr.io/REPO/IMAGE:latest
#7 DONE 3.0s
#6 [1/2] FROM docker.io/library/alpine@sha256:f271e74b17ced29b915d351685fd4644785c6d1559dd1f2d4189a5e851ef753a
#6 sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9 3.37MB / 3.37MB 0.2s done
#6 extracting sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9
#6 extracting sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9 0.1s done
#6 DONE 0.3s
#9 [2/2] RUN echo "hello"
#0 0.095 hello
#9 DONE 0.1s
#10 exporting to image
#10 exporting layers 0.0s done
#10 exporting manifest sha256:28a7fa76848982633add699ceb9298436b7b47739c7283305a90077db4cc085d done
#10 exporting config sha256:ee1628b850162fc8f713cfff44eaa1300163f1a47143c72163f96ee58503e629 done
#10 pushing layers
#10 ...
#11 [auth] REPO/IMAGE:pull,push token for eu.gcr.io
#11 DONE 0.0s
#10 exporting to image
#10 pushing layers 2.3s done
#10 pushing manifest for eu.gcr.io/REPO/IMAGE:tag@sha256:28a7fa76848982633add699ceb9298436b7b47739c7283305a90077db4cc085d
#10 pushing manifest for eu.gcr.io/REPO/IMAGE:tag@sha256:28a7fa76848982633add699ceb9298436b7b47739c7283305a90077db4cc085d 0.7s done
#10 DONE 3.1s
#12 exporting content cache
#12 preparing build cache for export
#12 writing layer sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
#12 writing layer sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1 0.2s done
#12 writing layer sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9
#12 writing layer sha256:8921db27df2831fa6eaa85321205a2470c669b855f3ec95d5a3c2b46de0442c9 0.2s done
#12 writing config sha256:9d4fd633816360389de7c5b214258bc238039775740843cfd1a8988335095b4d
#12 writing config sha256:9d4fd633816360389de7c5b214258bc238039775740843cfd1a8988335095b4d 2.0s done
#12 writing manifest sha256:82f8f3e3f3209c5179a044d71fee90746657150bc259c7b24173bb3ace61a773
#12 preparing build cache for export 2.5s done
#12 writing manifest sha256:82f8f3e3f3209c5179a044d71fee90746657150bc259c7b24173bb3ace61a773 0.2s done
#12 ERROR: error writing manifest blob: failed commit on ref "sha256:82f8f3e3f3209c5179a044d71fee90746657150bc259c7b24173bb3ace61a773": unexpected status: 400 Bad Request
------
 > exporting content cache:
------
ERROR: failed to solve: error writing manifest blob: failed commit on ref "sha256:82f8f3e3f3209c5179a044d71fee90746657150bc259c7b24173bb3ace61a773": unexpected status: 400 Bad Request

@crazy-max
Member

@pimvandenbroek Can you create another BuildKit container with --buildkitd-flags '--debug' flag and post the logs of the container please? Like in moby/buildkit#2479 (comment)

@pimvandenbroek

@crazy-max

time="2023-02-03T09:43:49Z" level=debug msg="do request" request.header.accept="application/vnd.docker.distribution.manifest.list.v2+json, */*" request.header.user-agent=buildkit/v0.11 request.method=HEAD spanID=cc7745aaeb3d9bd9 traceID=e7a2f5c820057c4947c93e8f9b473275 url="https://eu.gcr.io/v2/REPO/IMAGE/manifests/latest"
  time="2023-02-03T09:43:49Z" level=debug msg="fetch response received" response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000" response.header.content-length=855 response.header.content-type=application/vnd.oci.image.index.v1+json response.header.date="Fri, 03 Feb 2023 09:43:49 GMT" response.header.docker-content-digest="sha256:3e238896d240821df797f0f4d2f1d7577553e654eec9a66da8dc3cc7692d86ea" response.header.docker-distribution-api-version=registry/2.0 response.header.server="Docker Registry" response.header.x-frame-options=SAMEORIGIN response.header.x-xss-protection=0 response.status="200 OK" spanID=cc7745aaeb3d9bd9 traceID=e7a2f5c820057c4947c93e8f9b473275 url="https://eu.gcr.io/v2/REPO/IMAGE/manifests/latest"
  time="2023-02-03T09:43:49Z" level=debug msg="do request" request.header.content-type=application/vnd.docker.distribution.manifest.list.v2+json request.header.user-agent=buildkit/v0.11 request.method=PUT spanID=cc7745aaeb3d9bd9 traceID=e7a2f5c820057c4947c93e8f9b473275 url="https://eu.gcr.io/v2/REPO/IMAGE/manifests/latest"
  time="2023-02-03T09:43:49Z" level=debug msg="fetch response received" response.header.alt-svc="h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000" response.header.cache-control=private response.header.content-type=application/json response.header.date="Fri, 03 Feb 2023 09:43:49 GMT" response.header.docker-distribution-api-version=registry/2.0 response.header.server="Docker Registry" response.header.x-frame-options=SAMEORIGIN response.header.x-xss-protection=0 response.status="400 Bad Request" spanID=cc7745aaeb3d9bd9 traceID=e7a2f5c820057c4947c93e8f9b473275 url="https://eu.gcr.io/v2/REPO/IMAGE/manifests/latest"
  time="2023-02-03T09:43:49Z" level=debug msg="unexpected response" body="{\"errors\":[{\"code\":\"MANIFEST_INVALID\",\"message\":\"Failed to parse manifest for request \\\"/v2/REPO/IMAGE/manifests/latest\\\": Failed to deserialize application/vnd.docker.distribution.manifest.list.v2+json.\"}]}" resp="&{400 Bad Request 400 HTTP/1.1 1 1 map[Alt-Svc:[h3=\":443\"; ma=2592000,h3-29=\":443\"; ma=2592000] Cache-Control:[private] Content-Type:[application/json] Date:[Fri, 03 Feb 2023 09:43:49 GMT] Docker-Distribution-Api-Version:[registry/2.0] Server:[Docker Registry] X-Frame-Options:[SAMEORIGIN] X-Xss-Protection:[0]] {0xc000966940} -1 [chunked] false true map[] 0xc000547900 0xc0000eb130}" spanID=cc7745aaeb3d9bd9 traceID=e7a2f5c820057c4947c93e8f9b473275
  time="2023-02-03T09:43:49Z" level=error msg="/moby.buildkit.v1.Control/Solve returned error: rpc error: code = Unknown desc = error writing manifest blob: failed commit on ref \"sha256:5e1cf7823d402156c4e2edc2a27246dd41cf869dfa0840518206d212f8af6c74\": unexpected status: 400 Bad Request"
  error writing manifest blob: failed commit on ref "sha256:5e1cf7823d402156c4e2edc2a27246dd41cf869dfa0840518206d212f8af6c74": unexpected status: 400 Bad Request
  1 v0.11.2 buildkitd --debug
  github.com/moby/buildkit/cache/remotecache.(*contentCacheExporter).Finalize
  	/src/cache/remotecache/export.go:140
  github.com/moby/buildkit/solver/llbsolver.runCacheExporters.func1.1.1
  	/src/solver/llbsolver/solver.go:605
  github.com/moby/buildkit/solver/llbsolver.inBuilderContext.func1
  	/src/solver/llbsolver/solver.go:913
  github.com/moby/buildkit/solver.(*Job).InContext
  	/src/solver/jobs.go:611
  github.com/moby/buildkit/solver/llbsolver.inBuilderContext
  	/src/solver/llbsolver/solver.go:909
  github.com/moby/buildkit/solver/llbsolver.runCacheExporters.func1.1
  	/src/solver/llbsolver/solver.go:586
  golang.org/x/sync/errgroup.(*Group).Go.func1
  	/src/vendor/golang.org/x/sync/errgroup/errgroup.go:75
  runtime.goexit
  	/usr/local/go/src/runtime/asm_amd64.s:1594
  
  1 v0.11.2 buildkitd --debug
  main.unaryInterceptor.func1
  	/src/cmd/buildkitd/main.go:576
  github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1.1.1
  	/src/vendor/github.com/grpc-ecosystem/go-grpc-middleware/chain.go:25
  github.com/grpc-ecosystem/go-grpc-middleware.ChainUnaryServer.func1
  	/src/vendor/github.com/grpc-ecosystem/go-grpc-middleware/chain.go:34
  github.com/moby/buildkit/api/services/control._Control_Solve_Handler
  	/src/api/services/control/control.pb.go:2440
  google.golang.org/grpc.(*Server).processUnaryRPC
  	/src/vendor/google.golang.org/grpc/server.go:1340
  google.golang.org/grpc.(*Server).handleStream
  	/src/vendor/google.golang.org/grpc/server.go:1713
  google.golang.org/grpc.(*Server).serveStreams.func1.2
  	/src/vendor/google.golang.org/grpc/server.go:965
  runtime.goexit
  	/usr/local/go/src/runtime/asm_amd64.s:1594

@crazy-max
Member

Failed to deserialize application/vnd.docker.distribution.manifest.list.v2+json

Looks to be an issue with Yandex registry 😟

@pimvandenbroek

@crazy-max I'm not using Yandex, I'm using GCR

@crazy-max
Member

crazy-max commented Feb 3, 2023

Ah, I'm confused; I thought it was the same issue as the OP 😣

Maybe related to moby/buildkit#1143 (comment) in your case.

Also switching to GAR is recommended: https://cloud.google.com/artifact-registry/docs/transition/transition-from-gcr

@pimvandenbroek

That's odd, as the issues began when 0.10.0 was pushed.
As we have way too many builds at the moment, just migrating to GAR isn't possible yet; we'll have to plan a migration.

For now I think we'll disable caching, or maybe revert to 0.9.1 until we have time to migrate to GAR.

Thanks for your help so far. If you do have any ideas to solve this (without migrating to GAR), I'm all ears :)

@rdhatt

rdhatt commented Feb 4, 2023

As a data point: also reproducing this issue running a buildx build --push ... with the Artifactory container registry, version 6.23.38.

ERROR: failed to solve: failed to push ... failed commit on ref "manifest-sha256...": unexpected status: 400 Bad Request

We noted a new line in the output from buildx v0.10.0 while exporting the image: exporting attestation manifest ...

Looking at the docs for the --provenance option, there didn't appear to be any way to turn it off, so we reverted to buildx v0.9.1, which fixed our issue.

Returning to buildx v0.10.0 and using the undocumented --provenance false option also fixes our issue. Will this option be documented, or is it intentionally undocumented?

It appears OCI support came to Artifactory in 7.11.1. We're upgrading in a few weeks, but until then I can't confirm or deny whether Artifactory fully complies with the Index spec.

I've attached the buildkit log. One discrepancy I noted is the logs mention request.header.user-agent=buildkit/v0.11 while the output of docker buildx version is github.com/docker/buildx v0.10.0-docker 876462897612d36679153c3414f7689626251501

buildkitlog.txt

@jedevc
Collaborator

jedevc commented Feb 6, 2023

Returning to buildx v0.10.0 and using the undocumented --provenance false option also fixes our issue. Will this option be documented or is intentionally undocumented?

😱 oops, yup, that's an oversight, I've PRed some docs in.

I've attached the buildkit log. One discrepancy I noted is the logs mention request.header.user-agent=buildkit/v0.11 while the output of docker buildx version is github.com/docker/buildx v0.10.0-docker 8764628

Version mismatch is because buildkit and buildx releases do not necessarily line up. It's slightly unfortunate they're so close to each other, since it means it's easy to mistake one for the other - but this is expected, those are both part of the latest release series for their respective projects.

@ljy-life

ljy-life commented Jul 20, 2023

@pimvandenbroek Hello, you can set an environment variable to solve this problem; it's the same as --provenance false:

export BUILDX_NO_DEFAULT_ATTESTATIONS=1
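In a GitHub Actions workflow, a sketch of the equivalent would be to set the variable on the build step (step layout assumed from the earlier examples in this thread):

```yaml
# Same effect as --provenance false: disable the default attestations
# generated by buildx v0.10+.
- name: Build and push
  uses: docker/build-push-action@v3
  env:
    BUILDX_NO_DEFAULT_ATTESTATIONS: 1
  with:
    push: true
    tags: user/app:latest
```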

@sleeperss

sleeperss commented Oct 30, 2023

Hello

I have a similar issue on Gitea:

ERROR: failed to solve: failed to push ****gitea_url****/****repo_orga****/****image_name****:latest: unexpected status from HEAD request to https://****gitea_url****/v2/****repo_orga****/****image_name****/blobs/sha256:7c6fed81b558a4c5d4dacc0e9fb078dbc9fb8b789a485833408237f3c46dafff: 400 Bad Request

Unfortunately, provenance: false does not change anything :'(
If I remove the buildx step, everything works as expected.

Using the following pipeline (my server needs a client certificate):

name: Build docker-ci
on: 
  push:
    tags:
      - v*
  pull_request:
    branches:
      - master


defaults:
  run:
    shell: bash

jobs:
  Build:
    env:
      internal_address: ${{ secrets.internal_address }}
      PACKAGES_USER: ${{ secrets.PACKAGES_USER }}
      PACKAGES_ACCESS_TOKEN: ${{ secrets.PACKAGES_ACCESS_TOKEN }}
      CERT: ${{ secrets.CERT }}
      CERT_KEY: ${{ secrets.CERT_KEY }}
    name: ⛏️ Build
    steps:
      - name: 🔎 Checkout code
        uses: actions/checkout@v3   

      - name: 🪴 Setup environment
        uses: ./.gitea/actions/setup_env 
        
      - name: ⛏️ Build Dockerfile
        uses: ./.gitea/actions/build_docker
name: '🪴 Setup environment'
description: '🪴 Setup environment'
runs:
  using: 'composite'
  steps:
    - shell: bash
      name: '🪴 Setup environment'
      run: |
        apt update
        apt install -y nodejs npm ca-certificates curl gnupg
        install -m 0755 -d /etc/apt/keyrings

        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
        chmod a+r /etc/apt/keyrings/docker.gpg

        echo \
          "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
           focal stable" \
          > /etc/apt/sources.list.d/docker.list

        apt update -y

        apt install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
name: 'Build Dockerfile'
description: 'Build Dockerfile'
runs:
  using: 'composite'
  steps:
    - name: 🪴 Set BuildX
      uses: docker/setup-buildx-action@v0.9.1
    - name: 🪴 Set Docker credentials
      shell: bash
      run: | 
        mkdir -p /etc/docker/certs.d/****gitea_url****/
        echo "${{ env.CERT }}" > /etc/docker/certs.d/****gitea_url****/client.cert
        echo "${{ env.CERT_KEY }}" > /etc/docker/certs.d/****gitea_url****/client.key
    - name: 🔑 Login to Docker Registry
      uses: docker/login-action@v2
      with:
        registry: https://****gitea_url****
        username: ${{ env.PACKAGES_USER  }}
        password: ${{ env.PACKAGES_ACCESS_TOKEN  }}
    - uses: docker/build-push-action@master
      with:
        context: .
        file: "Dockerfile"
        tags: ****gitea_url****/****repo_orga****/****image_name****:latest
        push: ${{ github.ref_type == 'tag' }}

@vfiset

vfiset commented Oct 31, 2023

@sleeperss Have you tried a more recent version? The latest is 3.0.0. We had similar problems back when 0.10 came out, and since we upgraded past 1.1.0 (IIRC) we haven't had these problems anymore.

@sleeperss

@vfiset I tried v1, v2, v3, and master; unfortunately, each of them leads to the same result :(

@vfiset

vfiset commented Nov 1, 2023

Is it possible that the gitea_url does not support HEAD requests? Try to make a similar HEAD request on the same endpoint, then a GET.

@sleeperss

Do you have a curl example that I can try to run?
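For reference, a rough sketch of the HEAD-then-GET check suggested above, using placeholder values for the host, repository, credentials, and client-certificate paths (all hypothetical; the digest comes from the 400 error earlier in this thread):

```shell
# Placeholders: adjust REGISTRY/REPO/USER/TOKEN and the certificate
# paths for your setup; the digest is from the failing push above.
REGISTRY="gitea.example.com"   # stands in for ****gitea_url****
REPO="org/image"
DIGEST="sha256:7c6fed81b558a4c5d4dacc0e9fb078dbc9fb8b789a485833408237f3c46dafff"

# HEAD request (the request that fails with 400 in the error above)
curl -I -u "$USER:$TOKEN" --cert client.cert --key client.key \
  "https://$REGISTRY/v2/$REPO/blobs/$DIGEST"

# Same endpoint with GET, to compare status codes
curl -sS -o /dev/null -w '%{http_code}\n' -u "$USER:$TOKEN" \
  --cert client.cert --key client.key \
  "https://$REGISTRY/v2/$REPO/blobs/$DIGEST"
```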

@sleeperss

It is definitely linked to the client certificate.

If I disable ssl_verify_client on my reverse proxy, everything works correctly.
