"imagetools create" panics when pushing the created image (episode 2) #2232

Closed
3 tasks done
dunglas opened this issue Feb 1, 2024 · 10 comments · Fixed by moby/buildkit#4768

Comments

dunglas commented Feb 1, 2024

Contributing guidelines

I've found a bug and checked that ...

  • ... the documentation does not mention anything about my problem
  • ... there are no open or closed issues that are related to my problem

Description

#2230 fixes the initial bug (#2229) but sometimes triggers another panic related to OpenTelemetry (#2230 (comment)).

Expected behaviour

No crash.

Actual behaviour

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x11e7f95]

Buildx version

github.com/docker/buildx 95bdecc 95bdecc

Docker info

/usr/bin/docker version
  Client: Docker Engine - Community
   Version:           24.0.7
   API version:       1.43
   Go version:        go1.20.10
   Git commit:        afdd53b
   Built:             Thu Oct 26 09:07:41 2023
   OS/Arch:           linux/amd64
   Context:           default
  
  Server: Docker Engine - Community
   Engine:
    Version:          24.0.7
    API version:      1.43 (minimum version 1.12)
    Go version:       go1.20.10
    Git commit:       311b9ff
    Built:            Thu Oct 26 09:07:41 2023
    OS/Arch:          linux/amd64
    Experimental:     false
   containerd:
    Version:          1.6.27
    GitCommit:        a1496014c916f9e62104b33d1bb5bd03b0858e59
   runc:
    Version:          1.1.11
    GitCommit:        v1.1.11-0-g4bccb38
   docker-init:
    Version:          0.19.0
    GitCommit:        de40ad0
  /usr/bin/docker info
  Client: Docker Engine - Community
   Version:    24.0.7
   Context:    default
   Debug Mode: false
   Plugins:
    buildx: Docker Buildx (Docker Inc.)
      Version:  v0.12.1
      Path:     /usr/libexec/docker/cli-plugins/docker-buildx
    compose: Docker Compose (Docker Inc.)
      Version:  v2.23.3
      Path:     /usr/libexec/docker/cli-plugins/docker-compose
  
  Server:
   Containers: 0
    Running: 0
    Paused: 0
    Stopped: 0
   Images: 14
   Server Version: 24.0.7
   Storage Driver: overlay2
    Backing Filesystem: extfs
    Supports d_type: true
    Using metacopy: false
    Native Overlay Diff: false
    userxattr: false
   Logging Driver: json-file
   Cgroup Driver: cgroupfs
   Cgroup Version: 2
   Plugins:
    Volume: local
    Network: bridge host ipvlan macvlan null overlay
    Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
   Swarm: inactive
   Runtimes: io.containerd.runc.v2 runc
   Default Runtime: runc
   Init Binary: docker-init
   containerd version: a1496014c916f9e62104b33d1bb5bd03b0858e59
   runc version: v1.1.11-0-g4bccb38
   init version: de40ad0
   Security Options:
    apparmor
    seccomp
     Profile: builtin
    cgroupns
   Kernel Version: 6.2.0-1018-azure
   Operating System: Ubuntu 22.04.3 LTS
   OSType: linux
   Architecture: x86_64
   CPUs: 4
   Total Memory: 15.61GiB
   Name: fv-az1210-739
   ID: 9368eea6-35a7-407f-8893-7ddd489a1d43
   Docker Root Dir: /var/lib/docker
   Debug Mode: false
   Username: githubactions
   Experimental: false
   Insecure Registries:
    127.0.0.0/8
   Live Restore Enabled: false

Builders list

Created using the docker/setup-buildx-action GitHub Action.

Configuration

Bake definition: https://github.com/dunglas/frankenphp/blob/main/docker-bake.hcl (Dockerfiles in the same repo).

Build logs

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x18 pc=0x11e7f95]

goroutine 1985 [running]:
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace.(*clientTracer).end(0xc005c1da40, {0x23169ef, 0xc}, {0x0?, 0x0?}, {0xc006e63200?, 0x26a2980?, 0x4})
	go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace@v0.45.0/clienttrace.go:231 +0x795
go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace.(*clientTracer).gotConn(0x2699d58?, {{0x26a2980?, 0xc0057e7500?}, 0x0?, 0x50?, 0x3?})
	go.opentelemetry.io/contrib/instrumentation/net/http/httptrace/otelhttptrace@v0.45.0/clienttrace.go:288 +0x64d
net/http.http2traceGotConn(0xc00626e1a0?, 0xc006a74c00, 0x1)
	net/http/h2_bundle.go:10239 +0x1dd
net/http.(*http2Transport).RoundTripOpt(0xc000350510, 0xc002930000, {0x40?})
	net/http/h2_bundle.go:7648 +0x19c
net/http.(*http2Transport).RoundTrip(...)
	net/http/h2_bundle.go:7598
net/http.http2noDialH2RoundTripper.RoundTrip({0x3850580?}, 0xc002930000?)
	net/http/h2_bundle.go:10203 +0x16
net/http.(*Transport).roundTrip(0x3850580, 0xc002930000)
	net/http/transport.go:549 +0x39e
net/http.(*Transport).RoundTrip(0x3850300?, 0x2699d58?)
	net/http/roundtrip.go:17 +0x13
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*Transport).RoundTrip(0xc000343880, 0xc002a51c00)
	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp@v0.45.0/transport.go:116 +0x52b
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*Transport).RoundTrip(0xc002a26fc0, 0xc002a51b00)
	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp@v0.45.0/transport.go:116 +0x52b
net/http.send(0xc002a51b00, {0x2675de0, 0xc002a26fc0}, {0xc0043fe301?, 0xc001a3a5e0?, 0x0?})
	net/http/client.go:260 +0x606
net/http.(*Client).send(0xc002a60210, 0xc002a51b00, {0x7f9e18ebb888?, 0xc006aa9d30?, 0x0?})
	net/http/client.go:181 +0x98
net/http.(*Client).do(0xc002a60210, 0xc002a51b00)
	net/http/client.go:724 +0x912
net/http.(*Client).Do(...)
	net/http/client.go:590
github.com/containerd/containerd/remotes/docker.(*request).do(0xc001a2c900, {0x2699d58, 0xc001a421b0})
	github.com/containerd/containerd@v1.7.12/remotes/docker/resolver.go:591 +0x4e5
github.com/containerd/containerd/remotes/docker.(*request).doWithRetries(0xc001958fc0?, {0x2699d58, 0xc001a421b0}, {0x0, 0x0, 0x0})
	github.com/containerd/containerd@v1.7.12/remotes/docker/resolver.go:600 +0x45
github.com/containerd/containerd/remotes/docker.dockerPusher.push({0xc0001722d0, {0xc000776619, 0x18}, {0x7f9e18f094c8, 0xc00065e6c0}}, {0x2699d58, 0xc001a42180}, {{0xc00190eae0, 0x2b}, {0xc001115ae0, ...}, ...}, ...)
	github.com/containerd/containerd@v1.7.12/remotes/docker/pusher.go:120 +0xb50
github.com/containerd/containerd/remotes/docker.dockerPusher.Push({0xc0001722d0, {0xc000776619, 0x18}, {0x7f9e18f094c8, 0xc00065e6c0}}, {0x2699d58, 0xc001a42180}, {{0xc00190eae0, 0x2b}, {0xc001115ae0, ...}, ...})
	github.com/containerd/containerd@v1.7.12/remotes/docker/pusher.go:67 +0x10c
github.com/moby/buildkit/util/contentutil.(*pushingIngester).Writer(0xc00047e570, {0x2699d58, 0xc001a42180}, {0xc00041dc70, 0x2, 0x7f9e5fcefa68?})
	github.com/moby/buildkit@v0.13.0-beta1.0.20240126101002-6bd81372ad6f/util/contentutil/pusher.go:76 +0x2e8
github.com/containerd/containerd/content.OpenWriter({0x2699d58, 0xc001a42180}, {0x2676e80, 0xc00047e570}, {0xc00041dc70, 0x2, 0x2})
	github.com/containerd/containerd@v1.7.12/content/helpers.go:115 +0xc3
github.com/containerd/containerd/remotes.Fetch({0x2699d58, 0xc001a42180}, {0x2676e80, 0xc00047e570}, {0x267ba60, 0xc00065e768}, {{0xc00190eae0, 0x2b}, {0xc001115ae0, 0x47}, ...})
	github.com/containerd/containerd@v1.7.12/remotes/handlers.go:117 +0x26e
github.com/moby/buildkit/util/resolver/limited.FetchHandler.FetchHandler.func1({0x2699d90, 0xc000172280}, {{0xc00190eae0, 0x2b}, {0xc001115ae0, 0x47}, 0x60d4b86, {0x0, 0x0, 0x0}, ...})
	github.com/containerd/containerd@v1.7.12/remotes/handlers.go:104 +0x2fa
github.com/moby/buildkit/util/contentutil.CopyChain.New.func5({0x2699d90, 0xc000172280}, {{0xc00190eae0, 0x2b}, {0xc001115ae0, 0x47}, 0x60d4b86, {0x0, 0x0, 0x0}, ...})
	github.com/moby/buildkit@v0.13.0-beta1.0.20240126101002-6bd81372ad6f/util/resolver/retryhandler/retry.go:25 +0xb9
github.com/containerd/containerd/images.HandlerFunc.Handle(0x1000?, {0x2699d90?, 0xc000172280?}, {{0xc00190eae0, 0x2b}, {0xc001115ae0, 0x47}, 0x60d4b86, {0x0, 0x0, ...}, ...})
	github.com/containerd/containerd@v1.7.12/images/handlers.go:59 +0x63
github.com/moby/buildkit/util/contentutil.CopyChain.Handlers.func6({0x2699d90, 0xc000172280}, {{0xc00190eae0, 0x2b}, {0xc001115ae0, 0x47}, 0x60d4b86, {0x0, 0x0, 0x0}, ...})
	github.com/containerd/containerd@v1.7.12/images/handlers.go:69 +0x15e
github.com/containerd/containerd/images.HandlerFunc.Handle(0xc0007e3680?, {0x2699d90?, 0xc000172280?}, {{0xc00190eae0, 0x2b}, {0xc001115ae0, 0x47}, 0x60d4b86, {0x0, 0x0, ...}, ...})
	github.com/containerd/containerd@v1.7.12/images/handlers.go:59 +0x63
github.com/containerd/containerd/images.Dispatch.func1()
	github.com/containerd/containerd@v1.7.12/images/handlers.go:168 +0xd6
golang.org/x/sync/errgroup.(*Group).Go.func1()
	golang.org/x/sync@v0.4.0/errgroup/errgroup.go:75 +0x56
created by golang.org/x/sync/errgroup.(*Group).Go in goroutine 1106
	golang.org/x/sync@v0.4.0/errgroup/errgroup.go:72 +0x96

Additional info

Full logs: https://github.com/dunglas/frankenphp/actions/runs/7728429011/job/21072578318

dunglas (Author) commented Feb 2, 2024

I think I now have a better understanding of what is causing this bug and #2229.

I fixed a mistake in the CI that was making imagetools manipulate two different repositories at the same time (dunglas/frankenphp and dunglas/frankenphp-dev): dunglas/frankenphp@b61900e. The bug hasn't happened since this fix.
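
To illustrate the idea (a hypothetical sketch, not the exact commands from the FrankenPHP CI): each imagetools create invocation now targets a single repository instead of mixing dunglas/frankenphp and dunglas/frankenphp-dev in the same run, roughly like this:

  # Hypothetical example: merge per-platform tags into one multi-arch tag,
  # one repository per invocation (tag names are placeholders)
  docker buildx imagetools create \
    --tag dunglas/frankenphp:latest \
    dunglas/frankenphp:latest-amd64 \
    dunglas/frankenphp:latest-arm64

  docker buildx imagetools create \
    --tag dunglas/frankenphp-dev:latest \
    dunglas/frankenphp-dev:latest-amd64 \
    dunglas/frankenphp-dev:latest-arm64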

@crazy-max crazy-max added this to the v0.12.2 milestone Feb 5, 2024

crazy-max (Member) commented Feb 26, 2024

@dunglas

I fixed a mistake in the CI that was making imagetools manipulate two different repositories at the same time (dunglas/frankenphp and dunglas/frankenphp-dev): dunglas/frankenphp@b61900e. The bug hasn't happened since this fix.

Thanks for your feedback, but it should not panic anyway 😇. We updated the OTel dependencies in #2281, which might fix the OTel issue. If you can still reproduce it, you can try with:

      -
        name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
        with:
          version: "https://github.com/docker/buildx.git#5723ceefb6a52e16339acfed86871ffdabf240a7"

Otherwise feel free to close this issue, thanks!

alessfg commented Mar 1, 2024

I am not quite sure if it's the same issue, but I am running into a panic when using imagetools create to pull an image from Docker Hub and push it to GHCR, using the workflow file here: https://github.com/nginxinc/docker-nginx-unprivileged/pull/195/files#diff-6456e6e208552591349a09da1db573bb5de5fdfe76776686baf7558f547f4287.
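
For reference, the step boils down to a cross-registry copy along these lines (a rough sketch with placeholder image names, not the exact workflow commands):

  # Hypothetical example: re-push a multi-arch manifest list from Docker Hub to GHCR
  docker buildx imagetools create \
    --tag ghcr.io/example-org/example-image:latest \
    docker.io/example-org/example-image:latest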

I get a panic: send on closed channel error (https://github.com/nginxinc/docker-nginx-unprivileged/actions/runs/8111951034/job/22172780476#step:9:43). I used to get an OTel error the second time I reran the workflow, but that seems to have been fixed by using the latest RC for 0.13.

The underlying issue seems to be related to how buildx interacts with containerd, but that's about all I can parse from the error log:

panic: send on closed channel

goroutine 7797 [running]:
github.com/containerd/containerd/remotes/docker.(*pushWriter).setPipe(...)
	github.com/containerd/containerd@v1.7.13/remotes/docker/pusher.go:364
github.com/containerd/containerd/remotes/docker.dockerPusher.push.func1()
	github.com/containerd/containerd@v1.7.13/remotes/docker/pusher.go:286 +0x12a
github.com/containerd/containerd/remotes/docker.(*request).do(0xc002c4a240, {0x276fc58, 0xc000e7ce10})
	github.com/containerd/containerd@v1.7.13/remotes/docker/resolver.go:556 +0x162
github.com/containerd/containerd/remotes/docker.(*request).doWithRetries(0x0?, {0x276fc58, 0xc000e7ce10}, {0xc0006a10e0, 0x1, 0x1})
	github.com/containerd/containerd@v1.7.13/remotes/docker/resolver.go:600 +0x45
github.com/containerd/containerd/remotes/docker.(*request).doWithRetries(0x0?, {0x276fc58, 0xc000e7ce10}, {0x0, 0x0, 0x0})
	github.com/containerd/containerd@v1.7.13/remotes/docker/resolver.go:613 +0x12d
github.com/containerd/containerd/remotes/docker.dockerPusher.push.func2()
	github.com/containerd/containerd@v1.7.13/remotes/docker/pusher.go:292 +0x45
created by github.com/containerd/containerd/remotes/docker.dockerPusher.push in goroutine 1154
	github.com/containerd/containerd@v1.7.13/remotes/docker/pusher.go:291 +0x21ad
Error: Process completed with exit code 2.

crazy-max (Member) commented Mar 1, 2024

@alessfg The docker pusher issue might have been fixed by containerd/containerd#8379 (cc @jedevc), but I don't see it backported to the containerd 1.7 branch yet.

jedevc (Collaborator) commented Mar 4, 2024

Opened a backport in containerd/containerd#9921.

tonistiigi (Member)

@jedevc Is this related to this OTel panic? I also opened open-telemetry/opentelemetry-go-contrib#5187, but it does not seem to be moving.

jedevc (Collaborator) commented Mar 6, 2024

I think @alessfg's issue is likely separate; the upstream OTel PR looks like the way to go for now.

alessfg commented Mar 6, 2024

Yup, I ran into OTel issues a couple of times, but they seem to have gone away since I started using the latest changes on main instead of the 0.12.1 release (I have not tested the 0.13.0 release).

tonistiigi (Member)

I was looking at the trace from the first comment and did not notice that #2232 (comment) was different.

dunglas (Author) commented Mar 18, 2024

Thanks!
