Missing healthcheck defined in image #12226

Closed
jwillikers opened this issue Nov 8, 2021 · 14 comments · Fixed by #12239
Assignees: vrothberg
Labels: kind/bug, locked - please file new issue/PR

Comments

jwillikers (Contributor) commented Nov 8, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

When creating a Syncthing container from the image docker.io/syncthing/syncthing, the included healthcheck is missing.
The Syncthing Dockerfile includes a healthcheck, and Docker Hub shows the healthcheck provided by the manifest.

Steps to reproduce the issue:

  1. Create and run a Syncthing container.
$ podman run \
           --detach \
           --env PGID=(id -g) \
           --env PUID=(id -u) \
           --env TZ=America/Chicago \
           --hostname syncthing \
           --label "io.containers.autoupdate=image" \
           --name syncthing \
           --publish 8384:8384/tcp \
           --publish 22000:22000/tcp \
           --publish 22000:22000/udp \
           --publish 21027:21027/udp \
           --rm \
           --userns keep-id \
           --volume syncthing-config:/var/syncthing/config:Z \
           docker.io/syncthing/syncthing:1.18.4
  2. Run the healthcheck.
$ podman healthcheck run syncthing

Describe the results you received:

Podman claims that no healthcheck is defined when running a healthcheck on the container.

$ podman healthcheck run syncthing
Error: container 001cfd46102d18b366b8278db62cd21febd63cca70eeeaf2920a8d4d1a8211d0 has no defined healthcheck

Describe the results you expected:

I expected Podman to run the healthcheck defined for the image.

Additional information you deem important (e.g. issue happens only occasionally):

$ podman inspect syncthing --format '{{.State.Healthcheck}}'
{ 0 []}
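
As an aside, the empty { 0 []} above is just Go's default formatting of a zero-valued health-status struct, i.e. no healthcheck state is recorded for the container. A minimal sketch that reproduces the formatting with a simplified stand-in type (not Podman's actual inspect types):

package main

import "fmt"

// healthStatus is a simplified stand-in for the structure behind
// '{{.State.Healthcheck}}': a status string, a failing-streak counter,
// and a log of previous healthcheck runs.
type healthStatus struct {
	Status        string
	FailingStreak int
	Log           []string
}

func main() {
	// A zero value prints as "{ 0 []}" with the default %v verb,
	// matching the inspect output shown above.
	fmt.Printf("%v\n", healthStatus{})
}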

Syncthing image info from podman inspect:

$ podman inspect aa10c0945a16
[
    {
        "Id": "aa10c0945a16a9aa34593914cefceffecc13f69d57946373ccb05ade124e465e",
        "Digest": "sha256:97fc6221819aceab9055d120a01d2981a30d3aef962f1ec5eda2b21cfaa883c8",
        "RepoTags": [
            "docker.io/syncthing/syncthing:latest"
        ],
        "RepoDigests": [
            "docker.io/syncthing/syncthing@sha256:8ead1a4b86ba94ca6232d16b6bfe12472faf6921001600ac6b5bfbb93384d8d3",
            "docker.io/syncthing/syncthing@sha256:97fc6221819aceab9055d120a01d2981a30d3aef962f1ec5eda2b21cfaa883c8"
        ],
        "Parent": "",
        "Comment": "",
        "Created": "2021-11-02T17:15:52.488768443Z",
        "Config": {
            "ExposedPorts": {
                "21027/udp": {},
                "22000/tcp": {},
                "22000/udp": {},
                "8384/tcp": {}
            },
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "PUID=1000",
                "PGID=1000",
                "HOME=/var/syncthing",
                "STGUIADDRESS=0.0.0.0:8384"
            ],
            "Entrypoint": [
                "/bin/entrypoint.sh",
                "/bin/syncthing",
                "-home",
                "/var/syncthing/config"
            ],
            "Volumes": {
                "/var/syncthing": {}
            }
        },
        "Version": "",
        "Author": "",
        "Architecture": "amd64",
        "Os": "linux",
        "Size": 31922967,
        "VirtualSize": 31922967,
        "GraphDriver": {
            "Name": "btrfs",
            "Data": null
        },
        "RootFS": {
            "Type": "layers",
            "Layers": [
                "sha256:e2eb06d8af8218cfec8210147357a68b7e13f7c485b991c288c2d01dc228bb68",
                "sha256:9fecf3fd98613682ac00b6a55075b0fe7fd51ef403bb01f59a2cc81ebff36c91",
                "sha256:10d58152cae4673a32880f2bd857b15efdc47cf90fc604e885e8c0927279957a",
                "sha256:e36dc21aa90a78a063b1c82c17fdafd363ac08782686cfefe6a9a49bba330bba"
            ]
        },
        "Labels": null,
        "Annotations": {},
        "ManifestType": "application/vnd.docker.distribution.manifest.v2+json",
        "User": "",
        "History": [
            {
                "created": "2021-08-27T17:19:45.553092363Z",
                "created_by": "/bin/sh -c #(nop) ADD file:aad4290d27580cc1a094ffaf98c3ca2fc5d699fe695dfb8e6e9fac20f1129450 in / "
            },
            {
                "created": "2021-08-27T17:19:45.758611523Z",
                "created_by": "/bin/sh -c #(nop)  CMD [\"/bin/sh\"]",
                "empty_layer": true
            },
            {
                "created": "2021-08-28T00:00:38.511298429Z",
                "created_by": "ARG TARGETARCH",
                "comment": "buildkit.dockerfile.v0",
                "empty_layer": true
            },
            {
                "created": "2021-08-28T00:00:38.511298429Z",
                "created_by": "EXPOSE map[21027/udp:{} 22000/tcp:{} 22000/udp:{} 8384/tcp:{}]",
                "comment": "buildkit.dockerfile.v0",
                "empty_layer": true
            },
            {
                "created": "2021-08-28T00:00:38.511298429Z",
                "created_by": "VOLUME [/var/syncthing]",
                "comment": "buildkit.dockerfile.v0",
                "empty_layer": true
            },
            {
                "created": "2021-08-28T00:00:38.511298429Z",
                "created_by": "RUN |1 TARGETARCH=amd64 /bin/sh -c apk add --no-cache ca-certificates su-exec tzdata # buildkit",
                "comment": "buildkit.dockerfile.v0"
            },
            {
                "created": "2021-11-02T17:15:52.472110565Z",
                "created_by": "COPY ./syncthing-linux-amd64 /bin/syncthing # buildkit",
                "comment": "buildkit.dockerfile.v0"
            },
            {
                "created": "2021-11-02T17:15:52.488768443Z",
                "created_by": "COPY ./script/docker-entrypoint.sh /bin/entrypoint.sh # buildkit",
                "comment": "buildkit.dockerfile.v0"
            },
            {
                "created": "2021-11-02T17:15:52.488768443Z",
                "created_by": "ENV PUID=1000 PGID=1000 HOME=/var/syncthing",
                "comment": "buildkit.dockerfile.v0",
                "empty_layer": true
            },
            {
                "created": "2021-11-02T17:15:52.488768443Z",
                "created_by": "HEALTHCHECK \u0026{[\"CMD-SHELL\" \"nc -z 127.0.0.1 8384 || exit 1\"] \"1m0s\" \"10s\" \"0s\" '\\x00'}",
                "comment": "buildkit.dockerfile.v0",
                "empty_layer": true
            },
            {
                "created": "2021-11-02T17:15:52.488768443Z",
                "created_by": "ENV STGUIADDRESS=0.0.0.0:8384",
                "comment": "buildkit.dockerfile.v0",
                "empty_layer": true
            },
            {
                "created": "2021-11-02T17:15:52.488768443Z",
                "created_by": "ENTRYPOINT [\"/bin/entrypoint.sh\" \"/bin/syncthing\" \"-home\" \"/var/syncthing/config\"]",
                "comment": "buildkit.dockerfile.v0",
                "empty_layer": true
            }
        ],
        "NamesHistory": [
            "docker.io/syncthing/syncthing:latest"
        ]
    }
]

Output of podman version:

Version:      3.4.1
API Version:  3.4.1
Go Version:   go1.16.8
Built:        Wed Oct 20 09:31:56 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.30-2.fc35.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.30, commit: '
  cpus: 12
  distribution:
    distribution: fedora
    variant: silverblue
    version: "35"
  eventLogger: journald
  hostname: precision
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.14.16-301.fc35.x86_64
  linkmode: dynamic
  logDriver: journald
  memFree: 16054222848
  memTotal: 33386582016
  ociRuntime:
    name: crun
    package: crun-1.2-1.fc35.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.2
      commit: 4f6c8e0583c679bfee6a899c05ac6b916022561b
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.12-2.fc35.x86_64
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 8589930496
  swapTotal: 8589930496
  uptime: 2h 27m 52.28s (Approximately 0.08 days)
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - docker.io
  - quay.io
store:
  configFile: /var/home/jordan/.config/containers/storage.conf
  containerStore:
    number: 59
    paused: 0
    running: 1
    stopped: 58
  graphDriverName: btrfs
  graphOptions: {}
  graphRoot: /var/home/jordan/.local/share/containers/storage
  graphStatus:
    Build Version: 'Btrfs v5.14.1 '
    Library Version: "102"
  imageStore:
    number: 87
  runRoot: /run/user/1000/containers
  volumePath: /var/home/jordan/.local/share/containers/storage/volumes
version:
  APIVersion: 3.4.1
  Built: 1634740316
  BuiltTime: Wed Oct 20 09:31:56 2021
  GitCommit: ""
  GoVersion: go1.16.8
  OsArch: linux/amd64
  Version: 3.4.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.4.1-1.fc35.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

Fedora IoT - aarch64
Fedora Silverblue - x86_64

openshift-ci bot added the kind/bug label Nov 8, 2021
mheon (Member) commented Nov 8, 2021

The output of podman inspect on that image does not show a healthcheck. There's a note about one in the comments but no actual healthcheck in the config.

jwillikers (Contributor, Author) commented Nov 8, 2021

> The output of podman inspect on that image does not show a healthcheck. There's a note about one in the comments but no actual healthcheck in the config.

I'm checking with the Syncthing project upstream to see if it's an issue with how they build their images. Building locally with --format docker, the health check works.

rhatdan (Member) commented Nov 8, 2021

This does not seem to be a Podman issue. Closing.

rhatdan closed this as completed Nov 8, 2021
jwillikers (Contributor, Author) commented Nov 8, 2021

@rhatdan Please re-open this issue.
I've conferred with the Syncthing developer and both of us have a working healthcheck out-of-the-box when using Docker directly.
It appears that Podman isn't registering the healthcheck for some reason.

$ docker run -it --rm syncthing/syncthing:1.18.4
$ docker ps | grep syncthing
0627293d6ccf   syncthing/syncthing:1.18.4 "/bin/entrypoint.sh …"   40 seconds ago   Up 39 seconds (health: starting)   8384/tcp, 21027/udp, 22000/tcp, 22000/udp                                              agitated_kare

rhatdan reopened this Nov 8, 2021
rhatdan (Member) commented Nov 8, 2021

The only reference to the HEALTHCHECK is in the image's History.
Could you ask the Syncthing developers how Docker knows to run a healthcheck if the image does not have a healthcheck field?

rhatdan (Member) commented Nov 8, 2021

OK, when I pull the image with Docker I see:

            "Healthcheck": {
                "Test": [
                    "CMD-SHELL",
                    "nc -z 127.0.0.1 8384 || exit 1"
                ],
                "Interval": 60000000000,
                "Timeout": 10000000000
            },

I don't see this in Podman.

Are there two images, one in OCI format and one in Docker format?

@mtrmac @vrothberg Ideas?

jwillikers (Contributor, Author) commented:

@rhatdan I've invited the Syncthing developers to respond to your question in this thread.

AudriusButkevicius commented Nov 8, 2021

It's just a single image, one for each architecture.

docker buildx build \
    --platform linux/amd64,linux/arm64,linux/arm/7 \
    -f Dockerfile.buildx \
    ${tags[*]}

The layers do have the health check:
https://hub.docker.com/layers/syncthing/syncthing/1.18/images/sha256-bc5810f54839d3976ac2558fd0ad4b4e2e9987282764151690c23ae8d415ebb2?context=explore

vrothberg (Member) commented:

The healthcheck is definitely in the image on the registry (and, once pulled, in the local containers storage):

~ $ skopeo inspect  --config --raw docker://docker.io/syncthing/syncthing@sha256:8ead1a4b86ba94ca6232d16b6bfe12472faf6921001600ac6b5bfbb93384d8d3 | jq ".config.Healthcheck"

{
  "Test": [
    "CMD-SHELL",
    "nc -z 127.0.0.1 8384 || exit 1"
  ],
  "Interval": 60000000000,
  "Timeout": 10000000000
}

I need to have a closer look at what's going on.

vrothberg (Member) commented:

I found the issue: BuildKit sets the health check in the image's config, while Docker and Podman set it in the image's container config.

I created a test image and will prepare a PR in containers/common.
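
A minimal sketch of the fallback described above, using simplified stand-in types rather than the real containers/common (libimage) API: prefer the container config's healthcheck and fall back to the image config's.

package main

import "fmt"

// healthConfig is a simplified stand-in for the Docker schema2 health
// config (Test, plus interval/timeout in nanoseconds).
type healthConfig struct {
	Test     []string
	Interval int64
	Timeout  int64
}

// imageConfig is a simplified stand-in for the two config sections of a
// Docker-format image blob: BuildKit writes the healthcheck into Config,
// the classic Docker builder (and Podman/Buildah) into ContainerConfig.
type imageConfig struct {
	Config          struct{ Healthcheck *healthConfig }
	ContainerConfig struct{ Healthcheck *healthConfig }
}

// resolveHealthcheck prefers the container config and falls back to the
// image config, which is the gist of the containers/common fix.
func resolveHealthcheck(img *imageConfig) *healthConfig {
	if hc := img.ContainerConfig.Healthcheck; hc != nil {
		return hc
	}
	return img.Config.Healthcheck
}

func main() {
	var img imageConfig
	// Simulate a BuildKit-built image: only Config carries the healthcheck.
	img.Config.Healthcheck = &healthConfig{
		Test:     []string{"CMD-SHELL", "nc -z 127.0.0.1 8384 || exit 1"},
		Interval: 60000000000,
		Timeout:  10000000000,
	}
	fmt.Printf("%+v\n", resolveHealthcheck(&img))
}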

vrothberg self-assigned this Nov 9, 2021
vrothberg added a commit to vrothberg/common that referenced this issue Nov 9, 2021
buildkit is setting the health check in the image's config while Docker
and Podman set it in the image's container config.  Hence, if the
container config's healthcheck is nil, have a look at the config.

Fixes: #containers/podman/issues/12226
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
vrothberg (Member) commented:

containers/common#822

rhatdan (Member) commented Nov 9, 2021

The containers/common fix only fixes the inspect output; it does not make Podman actually use the different location of the health check.
Also, since podman build and Buildah are embedding buildkit features, should we start to store the healthcheck in the new buildkit location?

vrothberg (Member) commented:

> Also, since podman build and Buildah are embedding buildkit features, should we start to store the healthcheck in the new buildkit location?

Both locations are fine. AFAIK buildah config ... allows for setting it in both locations.

> The containers/common fix only fixes the inspect output; it does not make Podman actually use the different location of the health check.

Yes, I will do the rest of the plumbing here.
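
For illustration, a builder or commit path that wants to satisfy both readers could simply populate the healthcheck in both sections of a Docker-format config blob. A sketch with simplified stand-in types (the JSON keys config and container_config match the blob layout, but this is not Buildah's actual code):

package main

import (
	"encoding/json"
	"fmt"
)

// healthConfig is a simplified stand-in for the Docker schema2 health config.
type healthConfig struct {
	Test     []string `json:"Test"`
	Interval int64    `json:"Interval"`
	Timeout  int64    `json:"Timeout"`
}

// configSection is the part shared by "config" and "container_config".
type configSection struct {
	Healthcheck *healthConfig `json:"Healthcheck,omitempty"`
}

// imageBlob is a simplified stand-in for a Docker-format image config blob.
type imageBlob struct {
	Config          configSection `json:"config"`
	ContainerConfig configSection `json:"container_config"`
}

// setHealthcheck stores the healthcheck in both locations so that tools
// reading either field (BuildKit-style or classic Docker-style) see it.
func setHealthcheck(img *imageBlob, hc *healthConfig) {
	img.Config.Healthcheck = hc
	img.ContainerConfig.Healthcheck = hc
}

func main() {
	img := &imageBlob{}
	setHealthcheck(img, &healthConfig{
		Test:     []string{"CMD-SHELL", "nc -z 127.0.0.1 8384 || exit 1"},
		Interval: 60000000000,
		Timeout:  10000000000,
	})
	out, _ := json.MarshalIndent(img, "", "  ")
	fmt.Println(string(out))
}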

vrothberg (Member) commented:

> The containers/common fix only fixes the inspect output; it does not make Podman actually use the different location of the health check.

Actually, it fixes it entirely. Just need to vendor in c/common.

vrothberg added a commit to vrothberg/libpod that referenced this issue Nov 9, 2021
Health checks may be defined in the container config or the config of an
image.  So far, Podman only looked at the container config.

The plumbing happened in libimage but add a regression test to Podman as
well to make sure the glue code will not regress.

Note that I am pinning github.com/onsi/gomega to v1.16.0 since v1.17.0
requires go 1.16 which in turn is breaking CI.

Fixes: containers#12226
Signed-off-by: Valentin Rothberg <rothberg@redhat.com>
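
A rough sketch of the kind of end-to-end regression check that commit describes, assuming a podman binary on PATH and network access to pull the image; Podman's real test lives in its Ginkgo e2e suite, so this is illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// podman runs the podman CLI and returns its combined output.
func podman(args ...string) (string, error) {
	out, err := exec.Command("podman", args...).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const image = "docker.io/syncthing/syncthing:1.18.4"
	const name = "healthcheck-regression" // hypothetical container name

	// Start a container from an image whose healthcheck only lives in .config.
	if out, err := podman("run", "--detach", "--name", name, image); err != nil {
		fmt.Println("podman run failed:", out)
		return
	}
	defer podman("rm", "-f", name)

	// Before the fix this failed with "has no defined healthcheck";
	// a non-zero exit may still mean "unhealthy", which is fine here.
	out, err := podman("healthcheck", "run", name)
	if strings.Contains(out, "has no defined healthcheck") {
		fmt.Println("regression: healthcheck not picked up from image config")
		return
	}
	fmt.Println("healthcheck executed; output:", out, "err:", err)
}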
github-actions bot added the locked - please file new issue/PR label Sep 21, 2023
github-actions bot locked as resolved and limited the conversation to collaborators Sep 21, 2023