
[Bug]: log producer panics if no logs are produced when using podman #946

Open
martin-sucha opened this issue Mar 21, 2023 · 5 comments · Fixed by #947
Labels
bug (An issue with the library), podman (Issues regarding podman)

Comments

@martin-sucha
Contributor

Testcontainers version

v0.19.0

Using the latest Testcontainers version?

Yes

Host OS

Linux

Host arch

x86_64

Go version

go1.20

Docker version

podman version
Client:       Podman Engine
Version:      4.4.2
API Version:  4.4.2
Go Version:   go1.19.6
Built:        Wed Mar  1 12:22:59 2023
OS/Arch:      linux/amd64


Docker info

```shell
podman info:

host:
  arch: amd64
  buildahVersion: 1.29.0
  cgroupControllers:
  - cpu
  - io
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.7-2.fc37.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.7, commit: '
  cpuUtilization:
    idlePercent: 94.41
    systemPercent: 0.97
    userPercent: 4.62
  cpus: 12
  distribution:
    distribution: fedora
    variant: workstation
    version: "37"
  eventLogger: journald
  hostname: fedora
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.1.18-200.fc37.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 1141927936
  memTotal: 33228570624
  networkBackend: netavark
  ociRuntime:
    name: crun
    package: crun-1.8.1-1.fc37.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.8.1
      commit: f8a096be060b22ccd3d5f3ebe44108517fbf6c30
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-8.fc37.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.3
  swapFree: 8272998400
  swapTotal: 8589930496
  uptime: 5h 36m 47.00s (Approximately 0.21 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /home/martin/.config/containers/storage.conf
  containerStore:
    number: 4
    paused: 0
    running: 4
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/martin/.local/share/containers/storage
  graphRootAllocated: 510389125120
  graphRootUsed: 55164096512
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 182
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/martin/.local/share/containers/storage/volumes
version:
  APIVersion: 4.4.2
  Built: 1677669779
  BuiltTime: Wed Mar  1 12:22:59 2023
  GitCommit: ""
  GoVersion: go1.19.6
  Os: linux
  OsArch: linux/amd64
  Version: 4.4.2
```

What happened?

The worker goroutine in DockerContainer.StartLogProducer panics:

panic: Get "http://%2Frun%2Fuser%2F1000%2Fpodman%2Fpodman.sock/v1.41/containers/17131d3a1a9090c17437423165cbaf4324140e875b5f69c0ff5c4ee0a78711c5/logs?follow=1&since=1679409204.942642010&stderr=1&stdout=1&tail=": context deadline exceeded

goroutine 66 [running]:
github.com/testcontainers/testcontainers-go.(*DockerContainer).StartLogProducer.func1()
        /home/martin/go/pkg/mod/github.com/testcontainers/testcontainers-go@v0.19.0/docker.go:604 +0x617
created by github.com/testcontainers/testcontainers-go.(*DockerContainer).StartLogProducer
        /home/martin/go/pkg/mod/github.com/testcontainers/testcontainers-go@v0.19.0/docker.go:586 +0x8a

This is because:

  • there is a hardcoded 5-second timeout
  • the since parameter is set to the current time
  • the container does not produce any log output after the since timestamp; it only has logs from before that time
  • podman does not return HTTP response headers until some log output is available

So the context deadline expires and the code panics.

It seems that instead of using a hardcoded 5-second timeout, the context should be cancelled when the producer is stopped.

Unless this is a bug in podman, in which case we can open a bug report there.
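
Below is a minimal sketch of that idea, assuming hypothetical names (followLogs, stopProducer and sink are not testcontainers-go API): the logs request's context is derived from the producer's lifetime rather than a fixed 5-second deadline, so stopping the producer cancels the in-flight HTTP call instead of letting a deadline fire.

```go
// Hypothetical sketch, not the library's actual implementation.
package main

import (
	"context"
	"fmt"
	"io"
	"time"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// followLogs streams container logs until ctx is canceled or stopProducer is
// closed. stopProducer stands in for whatever StopLogProducer would signal.
func followLogs(ctx context.Context, cli *client.Client, containerID string,
	stopProducer <-chan struct{}, sink io.Writer) error {

	ctx, cancel := context.WithCancel(ctx)
	defer cancel()

	// Cancel the outstanding /logs request as soon as the producer is
	// stopped, so a daemon that holds the response open (as podman does
	// when no new logs arrive) cannot trip a hardcoded deadline.
	go func() {
		select {
		case <-stopProducer:
			cancel()
		case <-ctx.Done():
		}
	}()

	rc, err := cli.ContainerLogs(ctx, containerID, types.ContainerLogsOptions{
		ShowStdout: true,
		ShowStderr: true,
		Follow:     true,
		Since:      fmt.Sprintf("%d", time.Now().Unix()),
	})
	if err != nil {
		return err // report the error instead of panicking
	}
	defer rc.Close()

	// Real code would demultiplex the stream (stdcopy) and feed consumers;
	// a plain copy keeps the sketch short.
	_, err = io.Copy(sink, rc)
	return err
}
```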

Relevant log output

No response

Additional information

No response

martin-sucha added the bug label Mar 21, 2023
martin-sucha added a commit to kiwicom/testcontainers-go that referenced this issue Mar 21, 2023
This removes the panic when the logs endpoint takes more than 5 seconds to respond. The panic happened at least with podman when no new logs appear while using the follow and since parameters.

We keep retrying until the context is canceled (the retry request would fail anyway with a canceled context) or the producer is stopped, whichever comes first. This makes the retry behavior consistent with the handling of closed connections.

This should fix testcontainers#946
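
A hedged sketch of the retry behavior that commit describes, again with hypothetical names (stopProducer, consume, retryDelay) rather than the actual code from #947: the producer keeps re-issuing the logs request until its context is canceled or it is stopped, and a closed connection simply triggers a reconnect.

```go
// Hypothetical retry loop, not the code merged in the pull request.
package main

import (
	"context"
	"io"
	"time"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// produceLogs keeps requesting logs until ctx is canceled or stopProducer is
// closed, whichever comes first.
func produceLogs(ctx context.Context, cli *client.Client, containerID string,
	stopProducer <-chan struct{}, consume func(io.Reader)) {

	opts := types.ContainerLogsOptions{ShowStdout: true, ShowStderr: true, Follow: true}
	const retryDelay = time.Second

	for {
		rc, err := cli.ContainerLogs(ctx, containerID, opts)
		if err != nil {
			select {
			case <-ctx.Done():
				return // canceled context: a retry would fail anyway
			case <-stopProducer:
				return // producer stopped: stop retrying
			case <-time.After(retryDelay):
				continue // transient error: retry the request
			}
		}

		consume(rc) // read until EOF or the connection is closed
		rc.Close()

		select {
		case <-ctx.Done():
			return
		case <-stopProducer:
			return
		default:
			// The daemon closed the connection: loop and reconnect,
			// matching the closed-connection handling mentioned above.
		}
	}
}
```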
@jdfoster

I have also encountered this issue when using podman and can confirm your fix (#947) resolves the bug. Any chance we could get this merged into main soon?

@mdelapenya
Member

Any chance we could get this merged into main soon?

I'm waiting for the author's feedback on #947 (comment), but I'll take the PR and try to resolve the conflicts myself.

martin-sucha added commits to kiwicom/testcontainers-go that referenced this issue Apr 24, 2023
mdelapenya pushed a commit that referenced this issue Apr 24, 2023
weeco pushed a commit to weeco/testcontainers-go that referenced this issue Apr 24, 2023
@mdelapenya
Member

Reopening for #1164

mdelapenya reopened this May 10, 2023
martin-sucha added a commit to kiwicom/testcontainers-go that referenced this issue Jun 8, 2023
This removes the panic when the logs endpoint takes more than 5 seconds to respond. The panic happened at least with podman when no new logs appear while using the follow and since parameters.

We keep retrying until the context is canceled (the retry request would fail anyway with a canceled context) or the producer is stopped, whichever comes first. This makes the retry behavior consistent with the handling of closed connections.

Outstanding HTTP calls for fetching logs are now interrupted when a producer is stopped. Previously, the consumer and StopProducer() waited for the HTTP call to complete.

This should fix testcontainers#946
mdelapenya added the podman label Jun 12, 2024
@stevenh
Collaborator

stevenh commented Aug 8, 2024

This is likely fixed by the logging changes in #2664

@mdelapenya
Member

@martin-sucha could you please take a look with the latest release, v0.33.0?

Thanks in advance! 🙏
