
glibc 2.33 bug causes regressions #3021

Closed
polygamma opened this issue Feb 18, 2021 · 28 comments

Comments

@polygamma

polygamma commented Feb 18, 2021

Description

Since the release of Podman 3.0.0, we are no longer able to build images that we were able to build before.

EDIT: It's not Podman 3.0.0 that introduced the bug, but rather something that was changed by Ubuntu.

EDIT 2: It's glibc, sorry Ubuntu!

This happens using Ubuntu 20.04.2 LTS but not using Arch Linux.

To be very precise: We can still build those images on Arch Linux with Podman 3.0.0, but not on Ubuntu 20.04.2 LTS with Podman 3.0.0.

All systems are fully updated.

Steps to reproduce the issue:

  1. Use Ubuntu 20.04.2 LTS

  2. Use the following Containerfile:

# Use official `Arch Linux` image
FROM docker.io/archlinux/archlinux:base

# Init `pacman` keyring and populate it
RUN pacman-key --init && pacman-key --populate archlinux && rm -f /etc/pacman.d/gnupg/S.gpg-agent*

  3. Execute: sudo podman --runtime=crun build (see podman#9365 for why the --runtime specification is needed)

  4. Get an error:

STEP 1: FROM docker.io/archlinux/archlinux:base
STEP 2: RUN pacman-key --init && pacman-key --populate archlinux && rm -f /etc/pacman.d/gnupg/S.gpg-agent*
==> ERROR: pacman configuration file '/etc/pacman.conf' not found.
Error: error building at STEP "RUN pacman-key --init && pacman-key --populate archlinux && rm -f /etc/pacman.d/gnupg/S.gpg-agent*": error while running runtime: exit status 1

The file /etc/pacman.conf is, however, present during the build process: both ls and cat show it when included in the Containerfile.

Describe the results you expected:

Successful building of the image. This was achieved on Arch Linux.

STEP 1: FROM docker.io/archlinux/archlinux:base
STEP 2: RUN pacman-key --init && pacman-key --populate archlinux && rm -f /etc/pacman.d/gnupg/S.gpg-agent*
gpg: Generating pacman keyring master key...
gpg: key 28FD52C4F55E4B03 marked as ultimately trusted
gpg: revocation certificate stored as '/etc/pacman.d/gnupg/openpgp-revocs.d/39F7709009CBEF06B75AE0DE28FD52C4F55E4B03.rev'
gpg: Done
==> Updating trust database...
gpg: key 786C63F330D7CB92: no user ID for key signature packet of class 10
[... line repeated 28 times ...]
gpg: key 1EB2638FF56C0C53: no user ID for key signature packet of class 10
gpg: key 1EB2638FF56C0C53: no user ID for key signature packet of class 10
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   2  signed:   5  trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: depth: 1  valid:   5  signed:  83  trust: 0-, 0q, 0n, 5m, 0f, 0u
gpg: depth: 2  valid:  77  signed:  24  trust: 77-, 0q, 0n, 0m, 0f, 0u
gpg: next trustdb check due at 2021-08-02
==> Appending keys from archlinux.gpg...
==> Locally signing trusted keys in keyring...
  -> Locally signing key AB19265E5D7D20687D303246BA1DFB64FFF979E7...
  -> Locally signing key DDB867B92AA789C165EEFA799B729B06A680C281...
  -> Locally signing key 0E8B644079F599DFC1DDC3973348882F6AC6A4C2...
  -> Locally signing key D8AFDDA07A5B6EDFA7D8CCDAD6D055F927843F1C...
  -> Locally signing key 91FFE0700E80619CEB73235CA88E23E377514E00...
==> Importing owner trust values...
==> Disabling revoked keys in keyring...
  -> Disabling key 4A8B17E20B88ACA61860009B5CED81B7C2E5C0D2...
  -> Disabling key 684148BB25B49E986A4944C55184252D824B18E8...
  -> Disabling key 5357F3B111688D88C1D88119FCF2CB179205AC90...
  -> Disabling key 50F33E2E5B0C3D900424ABE89BDCF497A4BBCC7F...
  -> Disabling key 39F880E50E49A4D11341E8F939E4F17F295AFBF4...
  -> Disabling key F5A361A3A13554B85E57DDDAAF7EF7873CFD4BB6...
  -> Disabling key 40440DC037C05620984379A6761FAD69BA06C6A9...
  -> Disabling key FB871F0131FEA4FB5A9192B4C8880A6406361833...
  -> Disabling key 487EACC08557AD082088DABA1EB2638FF56C0C53...
  -> Disabling key 76B4192E902C0A52642C63C273B8ED52F1D357C1...
  -> Disabling key 40776A5221EF5AD468A4906D42A1DB15EC133BAD...
  -> Disabling key 0B20CA1931F5DA3A70D0F8D2EA6836E1AB441196...
  -> Disabling key 07DFD3A0BC213FA12EDC217559B3122E2FA915EC...
  -> Disabling key 34C5D94FE7E7913E86DC427E7FB1A3800C84C0A5...
  -> Disabling key B1F2C889CB2CCB2ADA36D963097D629E437520BD...
  -> Disabling key D4DE5ABDE2A7287644EAC7E36D1A9E70E19DAA50...
  -> Disabling key 44D4A033AC140143927397D47EFD567D4C7EA887...
  -> Disabling key 8F76BEEA0289F9E1D3E229C05F946DED983D4366...
  -> Disabling key 27FFC4769E19F096D41D9265A04F9397CDFD6BB0...
  -> Disabling key 4FCF887689C41B09506BE8D5F3E1D5C5D30DB0AD...
  -> Disabling key 5A2257D19FF7E1E0E415968CE62F853100F0D0F0...
  -> Disabling key 7FA647CD89891DEDC060287BB9113D1ED21E1A55...
  -> Disabling key 5E7585ADFF106BFFBBA319DC654B877A0864983E...
  -> Disabling key E7210A59715F6940CF9A4E36A001876699AD6E84...
  -> Disabling key 5559BC1A32B8F76B3FCCD9555FA5E5544F010D48...
  -> Disabling key BFA1ECFEF1524EE4099CDE971F0CD4921ECAA030...
  -> Disabling key 4D913AECD81726D9A6C74F0ADA6426DD215B37AD...
  -> Disabling key 8840BD07FC24CB7CE394A07CCF7037A4F27FB7DA...
  -> Disabling key BC1FBE4D2826A0B51E47ED62E2539214C6C11350...
  -> Disabling key 9515D8A8EAB88E49BB65EDBCE6B456CAF15447D5...
  -> Disabling key 779CD2942629B7FA04AB8F172E89012331361F01...
  -> Disabling key D921CABED130A5690EF1896E81AF739EC0711BF1...
  -> Disabling key 5696C003B0854206450C8E5BE613C09CB4440678...
  -> Disabling key 8CF934E339CAD8ABF342E822E711306E3C4F88BC...
  -> Disabling key 1A60DC44245D06FEF90623D6EEEEE2EEEE2EEEEE...
  -> Disabling key 81D7F8241DB38BC759C80FCE3A726C6170E80477...
  -> Disabling key 63F395DE2D6398BBE458F281F2DBB4931985A992...
  -> Disabling key 65EEFE022108E2B708CBFCF7F9E712E59AF5F22A...
  -> Disabling key 66BD74A036D522F51DD70A3C7F2A16726521E06D...
==> Updating trust database...
gpg: next trustdb check due at 2021-08-02
STEP 3: COMMIT
--> 3d0835d038c
3d0835d038cd601fada99979b5f45127bd30cede3bd7650adf6bcb13f56a3cdf

Output of rpm -q buildah or apt list buildah:

Listing... Done
buildah/unknown 100:1.19.4-3 amd64
buildah/unknown 100:1.19.4-3 arm64
buildah/unknown 100:1.19.4-3 armhf
buildah/unknown 100:1.19.4-3 s390x

Output of buildah version:

buildah: command not found

Output of podman version if reporting a podman build issue:

Version:      3.0.0
API Version:  3.0.0
Go Version:   go1.15.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Output of cat /etc/*release:

DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=20.04
DISTRIB_CODENAME=focal
DISTRIB_DESCRIPTION="Ubuntu 20.04.2 LTS"
NAME="Ubuntu"
VERSION="20.04.2 LTS (Focal Fossa)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 20.04.2 LTS"
VERSION_ID="20.04"
HOME_URL="https://www.ubuntu.com/"
SUPPORT_URL="https://help.ubuntu.com/"
BUG_REPORT_URL="https://bugs.launchpad.net/ubuntu/"
PRIVACY_POLICY_URL="https://www.ubuntu.com/legal/terms-and-policies/privacy-policy"
VERSION_CODENAME=focal
UBUNTU_CODENAME=focal

Output of uname -a:

Linux attk-VirtualBox 5.8.0-43-generic #49~20.04.1-Ubuntu SMP Fri Feb 5 09:57:56 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

But it happens on non-virtualized version of Ubuntu, too.

Output of cat /etc/containers/storage.conf:

# This file is the configuration file for all tools
# that use the containers/storage library.
# See man 5 containers-storage.conf for more information
# The "container storage" table contains all of the server options.
[storage]

# Default Storage Driver, Must be set for proper operation.
driver = "overlay"

# Temporary storage location
runroot = "/run/containers/storage"

# Primary Read/Write location of container storage
graphroot = "/var/lib/containers/storage"

# Storage path for rootless users
#
# rootless_storage_path = "$HOME/.local/share/containers/storage"

[storage.options]
# Storage options to be passed to underlying storage drivers

# AdditionalImageStores is used to pass paths to additional Read/Only image stores
# Must be comma separated list.
additionalimagestores = [
]

# Remap-UIDs/GIDs is the mapping from UIDs/GIDs as they should appear inside of
# a container, to the UIDs/GIDs as they should appear outside of the container,
# and the length of the range of UIDs/GIDs.  Additional mapped sets can be
# listed and will be heeded by libraries, but there are limits to the number of
# mappings which the kernel will allow when you later attempt to run a
# container.
#
# remap-uids = 0:1668442479:65536
# remap-gids = 0:1668442479:65536

# Remap-User/Group is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid or /etc/subgid file.  Mappings are set up starting
# with an in-container ID of 0 and then a host-level ID taken from the lowest
# range that matches the specified name, and using the length of that range.
# Additional ranges are then assigned, using the ranges which specify the
# lowest host-level IDs first, to the lowest not-yet-mapped in-container ID,
# until all of the entries have been used for maps.
#
# remap-user = "containers"
# remap-group = "containers"

# Root-auto-userns-user is a user name which can be used to look up one or more UID/GID
# ranges in the /etc/subuid and /etc/subgid file.  These ranges will be partitioned
# to containers configured to create automatically a user namespace.  Containers
# configured to automatically create a user namespace can still overlap with containers
# having an explicit mapping set.
# This setting is ignored when running as rootless.
# root-auto-userns-user = "storage"
#
# Auto-userns-min-size is the minimum size for a user namespace created automatically.
# auto-userns-min-size=1024
#
# Auto-userns-max-size is the maximum size for a user namespace created automatically.
# auto-userns-max-size=65536

[storage.options.overlay]
# ignore_chown_errors can be set to allow a non privileged user running with
# a single UID within a user namespace to run containers. The user can pull
# and use any image even those with multiple uids.  Note multiple UIDs will be
# squashed down to the default uid in the container.  These images will have no
# separation between the users in the container. Only supported for the overlay
# and vfs drivers.
#ignore_chown_errors = "false"

# Path to a helper program to use for mounting the file system instead of mounting it
# directly.
#mount_program = "/usr/bin/fuse-overlayfs"

# mountopt specifies comma separated list of extra mount options
mountopt = "nodev,metacopy=on"

# Set to skip a PRIVATE bind mount on the storage home directory.
# skip_mount_home = "false"

# Size is used to set a maximum size of the container image.
# size = ""

# ForceMask specifies the permissions mask that is used for new files and
# directories.
#
# The values "shared" and "private" are accepted.
# Octal permission masks are also accepted.
#
#  "": No value specified.
#     All files/directories, get set with the permissions identified within the
#     image.
#  "private": it is equivalent to 0700.
#     All files/directories get set with 0700 permissions.  The owner has rwx
#     access to the files. No other users on the system can access the files.
#     This setting could be used with networked based homedirs.
#  "shared": it is equivalent to 0755.
#     The owner has rwx access to the files and everyone else can read, access
#     and execute them. This setting is useful for sharing containers storage
#     with other users.  For instance have a storage owned by root but shared
#     to rootless users as an additional store.
#     NOTE:  All files within the image are made readable and executable by any
#     user on the system. Even /etc/shadow within your image is now readable by
#     any user.
#
#   OCTAL: Users can experiment with other OCTAL Permissions.
#
#  Note: The force_mask Flag is an experimental feature, it could change in the
#  future.  When "force_mask" is set the original permission mask is stored in
#  the "user.containers.override_stat" xattr and the "mount_program" option must
#  be specified. Mount programs like "/usr/bin/fuse-overlayfs" present the
#  extended attribute permissions to processes within containers rather than the
#  "force_mask"  permissions.
#
# force_mask = ""

[storage.options.thinpool]
# Storage Options for thinpool

# autoextend_percent determines the amount by which pool needs to be
# grown. This is specified in terms of % of pool size. So a value of 20 means
# that when threshold is hit, pool will be grown by 20% of existing
# pool size.
# autoextend_percent = "20"

# autoextend_threshold determines the pool extension threshold in terms
# of percentage of pool size. For example, if threshold is 60, that means when
# pool is 60% full, threshold has been hit.
# autoextend_threshold = "80"

# basesize specifies the size to use when creating the base device, which
# limits the size of images and containers.
# basesize = "10G"

# blocksize specifies a custom blocksize to use for the thin pool.
# blocksize="64k"

# directlvm_device specifies a custom block storage device to use for the
# thin pool. Required if you setup devicemapper.
# directlvm_device = ""

# directlvm_device_force wipes device even if device already has a filesystem.
# directlvm_device_force = "True"

# fs specifies the filesystem type to use for the base device.
# fs="xfs"

# log_level sets the log level of devicemapper.
# 0: LogLevelSuppress 0 (Default)
# 2: LogLevelFatal
# 3: LogLevelErr
# 4: LogLevelWarn
# 5: LogLevelNotice
# 6: LogLevelInfo
# 7: LogLevelDebug
# log_level = "7"

# min_free_space specifies the min free space percent in a thin pool required for
# new device creation to succeed. Valid values are from 0% - 99%.
# Value 0% disables
# min_free_space = "10%"

# mkfsarg specifies extra mkfs arguments to be used when creating the base
# device.
# mkfsarg = ""

# metadata_size is used to set the `pvcreate --metadatasize` options when
# creating thin devices. Default is 128k
# metadata_size = ""

# Size is used to set a maximum size of the container image.
# size = ""

# use_deferred_removal marks devicemapper block device for deferred removal.
# If the thinpool is in use when the driver attempts to remove it, the driver
# tells the kernel to remove it as soon as possible. Note this does not free
# up the disk space, use deferred deletion to fully remove the thinpool.
# use_deferred_removal = "True"

# use_deferred_deletion marks thinpool device for deferred deletion.
# If the device is busy when the driver attempts to delete it, the driver
# will attempt to delete device every 30 seconds until successful.
# If the program using the driver exits, the driver will continue attempting
# to cleanup the next time the driver is used. Deferred deletion permanently
# deletes the device and all data stored in device will be lost.
# use_deferred_deletion = "True"

# xfs_nospace_max_retries specifies the maximum number of retries XFS should
# attempt to complete IO when ENOSPC (no space) error is returned by
# underlying storage device.
# xfs_nospace_max_retries = "0"

@rhatdan
Member

rhatdan commented Feb 18, 2021

Could you add in the AUDIT_WRITE capability and see if it works?

@polygamma
Author

Could you add in the AUDIT_WRITE capability and see if it works?

attk@attk-VirtualBox:~/podman-container$ sudo podman --runtime=crun build --cap-add=CAP_AUDIT_WRITE -f Containerfile_x86_64 .
STEP 1: FROM docker.io/archlinux/archlinux:base
STEP 2: RUN pacman-key --init && pacman-key --populate archlinux && rm -f /etc/pacman.d/gnupg/S.gpg-agent*
==> ERROR: pacman configuration file '/etc/pacman.conf' not found.
Error: error building at STEP "RUN pacman-key --init && pacman-key --populate archlinux && rm -f /etc/pacman.d/gnupg/S.gpg-agent*": error while running runtime: exit status 1

@rhatdan
Member

rhatdan commented Feb 18, 2021

Does it work --privileged?

@polygamma
Author

Does it work --privileged?

Sorry, I am not sure how to execute podman build with --privileged.

http://docs.podman.io/en/latest/markdown/podman-build.1.html does not list --privileged as an option, and appending that flag indeed leads to a failure.

attk@attk-VirtualBox:~/podman-container$ sudo podman --runtime=crun build --privileged -f Containerfile_x86_64 .
Error: unknown flag: --privileged

@polygamma
Author

Does anyone have an idea what we could try next?

It's a big problem for us not being able to build this image on Ubuntu machines, and it would be great if we could avoid having to compile older versions of Podman from source.

The problem persists with Podman 3.0.1 by the way.

@vrothberg
Member

Sorry that you ran into this issue, @polygamma!

Does podman build --cap-add all ... work?

@vrothberg
Member

Were you using the same version of Podman on Ubuntu and Arch? It worked on Arch but not on Ubuntu?

@polygamma
Author

Does podman build --cap-add all ... work?

Unfortunately not.

Were you using the same version of Podman on Ubuntu and Arch? It worked on Arch but not on Ubuntu?

It works on Arch with both Podman < 3.0.0 and Podman >= 3.0.0, and it works on Ubuntu with Podman < 3.0.0 but NOT with Podman >= 3.0.0.

@vrothberg
Member

Can you share the output of podman info --debug?

@polygamma
Author

Can you share the output of podman info --debug?

host:
  arch: amd64
  buildahVersion: 1.19.4
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.26, commit: '
  cpus: 1
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: attk-VirtualBox
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.8.0-43-generic
  linkmode: dynamic
  memFree: 1840168960
  memTotal: 4127162368
  ociRuntime:
    name: crun
    package: 'crun: /usr/bin/crun'
    path: /usr/bin/crun
    version: |-
      crun version 0.17.7-5502-dirty
      commit: fd582c529489c0738e7039cbc036781d1d039014
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: true
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    selinuxEnabled: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 2147479552
  swapTotal: 2147479552
  uptime: 15m 42.64s
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 1
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.0.0
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 3.0.1

@vrothberg
Member

vrothberg commented Feb 22, 2021

Thanks!

I am going to spin up an Ubuntu VM to have a look at it myself. There must be something going on. Cc @lsm5

@polygamma
Author

@vrothberg If it would be of any help, I could also set up a new VM and try Ubuntu 20.10 instead of 20.04... Maybe that gives even more information about what's wrong?

@vrothberg
Member

@vrothberg If it would be of any help, I could also set up a new VM and try Ubuntu 20.10 instead of 20.04... Maybe that gives even more information about what's wrong?

Thanks, I appreciate your help! I can reproduce it in my new Ubuntu VM and am about to track the bug down.

@polygamma
Author

I have installed the new VM and the bug also exists on Ubuntu 20.10. For the time being I am going to build Podman from source at different commits; maybe I can find the specific commit that introduced the bug.

@vrothberg
Member

I have installed the new VM and the bug also exists on Ubuntu 20.10. For the time being I am going to build Podman from source at different commits; maybe I can find the specific commit that introduced the bug.

FWIW, building Podman v2.2.1 from source also fails. May be a packaging issue.

@polygamma
Author

FWIW, building Podman v2.2.1 from source also fails. May be a packaging issue.

Ha, it seems you had the exact same idea... I was also going to write that it fails with an older version (< 3.0.0) that I built from source.

Could it be a crun problem?

@vrothberg
Member

vrothberg@ubuntu:~/podman$ ./bin/podman run --runtime crun --rm docker.io/archlinux/archlinux pacman-key --init
==> ERROR: pacman configuration file '/etc/pacman.conf' not found.
vrothberg@ubuntu:~/podman$ ./bin/podman run --runtime /usr/lib/cri-o-runc/sbin/runc --rm docker.io/archlinux/archlinux pacman-key --init
==> ERROR: pacman configuration file '/etc/pacman.conf' not found.
vrothberg@ubuntu:~/podman$ ./bin/podman run --runtime /usr/lib/cri-o-runc/sbin/runc --rm docker.io/archlinux/archlinux ls /etc/pacman.conf
/etc/pacman.conf

hm ... I am still scratching my head a bit

@vrothberg
Member

Could it be a crun problem?

It doesn't look like crun since I can reproduce with runc (see above).

@vrothberg
Member

It works in the build container started by `buildah from`, though:

vrothberg@ubuntu:~/podman$ buildah run archlinux-working-container-5 pacman-key --init
gpg: Generating pacman keyring master key...
gpg: key 4A74A9EAA6604CB7 marked as ultimately trusted
gpg: revocation certificate stored as '/etc/pacman.d/gnupg/openpgp-revocs.d/D8657B48E59C440EDC2D1EB54A74A9EAA6604CB7.rev'

@polygamma
Author

Okay, so it doesn't seem to be Podman itself, nor crun or runc. Do I understand that correctly?

I'm going to boot Ubuntu 20.10 again and install Podman from the official Ubuntu repos instead of the Kubic PPA. Maybe that makes a difference.

@polygamma
Author

I'm going to boot Ubuntu 20.10 again and install Podman from the official Ubuntu repos instead of the Kubic PPA. Maybe that makes a difference.

It does not. Still fails. As one can see, the Podman version and the runc version are both very old and I get the same error.

STEP 1: FROM docker.io/archlinux/archlinux:base
Getting image source signatures
Copying config f6c4ddbfb8 done  
Writing manifest to image destination
Storing signatures
STEP 2: RUN pacman-key --init && pacman-key --populate archlinux && rm -f /etc/pacman.d/gnupg/S.gpg-agent*
==> ERROR: pacman configuration file '/etc/pacman.conf' not found.
attk@attk-VirtualBox:~/podman-container$ sudo podman info
host:
  arch: amd64
  buildahVersion: 1.15.2
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.20, commit: unknown'
  cpus: 1
  distribution:
    distribution: ubuntu
    version: "20.10"
  eventLogger: file
  hostname: attk-VirtualBox
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.8.0-43-generic
  linkmode: dynamic
  memFree: 1694015488
  memTotal: 4123676672
  ociRuntime:
    name: runc
    package: 'runc: /usr/sbin/runc'
    path: /usr/sbin/runc
    version: 'runc version spec: 1.0.1-dev'
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 2147479552
  swapTotal: 2147479552
  uptime: 8m 53.14s
registries:
  search:
  - quay.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 1
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 1
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.14.7
  OsArch: linux/amd64
  Version: 2.0.6

Phew, could it be something that Ubuntu itself changed that is causing this problem?

@vrothberg
Member

Still investigating. Docker is failing as well.

@polygamma
Author

So... I am going to correct my initial post in this thread: It does not seem to be Podman 3.0.0 that introduced the bug, but what I can say for sure is the following:

I was setting up devices for work on 01.02.21 (I just looked it up) and installed Ubuntu 20.04.2 LTS and fully updated the systems. Podman was installed from the Kubic PPA. Everything worked back then.

So it has to be something that changed in Ubuntu between 01.02.21 and 18.02.21.
And what we also know by now: the problem occurs on both Ubuntu 20.04.2 LTS and Ubuntu 20.10.

That gives us at least something to start with.

Docker is failing as well.

Ha, and people use Ubuntu because it's so stable, I see...

@vrothberg
Member

Ha, and people use Ubuntu because it's so stable, I see...

Bad things can happen. There are so many moving targets and we don't yet know exactly what's going on. FWIW, Buildah containers work.

@rhatdan
Member

rhatdan commented Feb 22, 2021

@giuseppe Any ideas?

@vrothberg
Member

I found it: https://bugs.archlinux.org/index.php?do=details&task_id=69563

It's a bug in glibc that causes the archlinux containers to fail on some hosts. @fatherlinux, that's a candidate for your "why hosts matter" conversations :)

I am closing since there's nothing Podman can do.
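For the curious: per the linked Arch bug, glibc 2.33 changed access()/faccessat() to try the new faccessat2 syscall first, and seccomp profiles that predate that syscall answer it with EPERM instead of ENOSYS, so glibc never falls back to the old code path. The following is a rough, illustrative Python sketch (assuming Linux on x86_64, where faccessat2 is syscall 439) that probes which of those outcomes a given environment produces:

```python
import ctypes
import errno
import os

# glibc 2.33 started using the new faccessat2 syscall for access()/faccessat().
# Seccomp profiles that predate the syscall return EPERM for it instead of
# ENOSYS, so glibc never falls back to the old code path -- pacman-key then
# fails even though /etc/pacman.conf exists.
SYS_faccessat2 = 439  # x86_64 only; other architectures use other numbers
AT_FDCWD = -100       # "relative to the current working directory"

def probe_faccessat2(path="/etc/passwd"):
    """Return how this environment responds to a raw faccessat2 call."""
    libc = ctypes.CDLL(None, use_errno=True)
    ret = libc.syscall(SYS_faccessat2, AT_FDCWD, path.encode(), os.R_OK, 0)
    if ret == 0:
        return "supported"   # kernel >= 5.8 and not filtered
    err = ctypes.get_errno()
    if err == errno.ENOSYS:
        return "ENOSYS"      # old kernel; glibc falls back cleanly
    if err == errno.EPERM:
        return "EPERM"       # seccomp is blocking it -- the failure mode here
    return "other"           # e.g. ENOENT for a missing path

if __name__ == "__main__":
    print(probe_faccessat2())
```

Inside an affected container this probe would presumably print "EPERM", while on the host (or with an updated runtime) it prints "supported" or "ENOSYS".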

@vrothberg
Member

vrothberg commented Feb 22, 2021

To resolve the last open question regarding Buildah: when I build it locally on Ubuntu 20.04, it fails as well. It seems the upstream packages of Podman and Buildah were built with slightly different versions of glibc, which would explain why one fails but not the other.

@vrothberg changed the title from "Since Podman 3.0.0 building of image fails" to "glibc 2.33 bug causes regressions" Feb 22, 2021
@polygamma
Author

@vrothberg Thank you very much :)

See: opencontainers/runc#2750

Building the latest runc version from source actually fixes the issue.

STEP 1: FROM docker.io/archlinux/archlinux:base
Getting image source signatures
Copying blob 81669aa534ad [--------------------------------------] 0.0b / 0.0b
Copying blob 79ed132fe747 [--------------------------------------] 0.0b / 0.0b
Copying config f6c4ddbfb8 done  
Writing manifest to image destination
Storing signatures
STEP 2: RUN pacman-key --init && pacman-key --populate archlinux && rm -f /etc/pacman.d/gnupg/S.gpg-agent*
gpg: Generating pacman keyring master key...
gpg: key 60EA4BCC772BC871 marked as ultimately trusted
gpg: revocation certificate stored as '/etc/pacman.d/gnupg/openpgp-revocs.d/069CE0D82E3598386F2C86B360EA4BCC772BC871.rev'
gpg: Done
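For reference, the runc change linked above (opencontainers/runc#2750) makes the seccomp profile answer unknown syscalls with ENOSYS rather than a blanket EPERM. In OCI runtime-spec terms that looks roughly like the fragment below; this is an illustrative sketch, not the exact profile runc ships, and the `defaultErrnoRet` field (errno 38 is ENOSYS on Linux) requires a runtime-spec/runc version new enough to support it:

```json
{
  "defaultAction": "SCMP_ACT_ERRNO",
  "defaultErrnoRet": 38,
  "syscalls": [
    {
      "names": ["faccessat", "faccessat2"],
      "action": "SCMP_ACT_ALLOW"
    }
  ]
}
```

Until an updated runtime lands in the distribution packages, disabling seccomp for the build (e.g. via `--security-opt seccomp=unconfined`, at the cost of losing syscall filtering) should presumably work around the problem as well.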

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 7, 2023