
Answers to common questions and examples of how to extend cross.

Container Engines

Custom Images

You can place a Cross.toml file in the root of your Cargo project or use a CROSS_CONFIG environment variable to tweak cross's behavior. cross provides default Docker images for the targets listed below. However, it can't cover every single use case out there.
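For example, to point cross at a configuration file outside the project root (the path and target are illustrative):

$ CROSS_CONFIG=/path/to/Cross.toml cross build --target aarch64-unknown-linux-gnu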

If you simply need to install a dependency available in Ubuntu's package manager, see target.TARGET.pre-build:

[target.x86_64-unknown-linux-gnu]
pre-build = [
    "dpkg --add-architecture $CROSS_DEB_ARCH",
    "apt-get update && apt-get install --assume-yes libssl-dev:$CROSS_DEB_ARCH"
]

For FreeBSD targets, a few helper scripts are available for use in target.TARGET.pre-build:

[target.x86_64-unknown-freebsd]
pre-build = ["""
export FREEBSD_MIRROR=$(/freebsd-fetch-best-mirror.sh) &&
/freebsd-setup-packagesite.sh &&
/freebsd-install-package.sh xen-tools
"""]

For other targets, or when the default image is not enough, you can use the target.{{TARGET}}.dockerfile field in Cross.toml to use a custom Docker image for a specific target:

[target.aarch64-unknown-linux-gnu]
dockerfile = "Dockerfile"

Or the target.{{TARGET}}.image field in Cross.toml to use an already-built image for a specific target:

[target.aarch64-unknown-linux-gnu]
image = "my/image:tag"

In the latter example, cross will use an image named my/image:tag instead of the default one. Normal Docker behavior applies, so:

  • Docker will first look for a local image named my/image:tag

  • If it doesn't find a local image, then it will look in Docker Hub.

  • If only image:tag is specified, then Docker won't look in Docker Hub.

  • If the tag is omitted, then Docker will use the latest tag.

It's recommended to base your custom image on the default Docker image that cross uses: ghcr.io/cross-rs/{{TARGET}}:{{VERSION}} (where {{VERSION}} is cross's version). This way you won't have to figure out how to install a cross C toolchain in your custom image. To make this easy, when using dockerfile = "Dockerfile", cross provides the environment variable CROSS_BASE_IMAGE (see https://github.com/cross-rs/cross/wiki/Configuration#builddockerfile).

An example using a standalone Dockerfile is shown below:

Dockerfile

FROM ghcr.io/cross-rs/aarch64-unknown-linux-gnu:latest

RUN dpkg --add-architecture arm64 && \
    apt-get update && \
    apt-get install --assume-yes libfoo:arm64

Building

$ docker build -t my/image:tag path/to/where/the/Dockerfile/resides

Cross.toml

[target.aarch64-unknown-linux-gnu]
image = "my/image:tag"

Docker in Docker

When running cross from inside a Docker container, cross needs access to the host's Docker daemon. This is normally achieved by mounting the Docker daemon's socket, /var/run/docker.sock. For example:

$ docker run -v /var/run/docker.sock:/var/run/docker.sock -v "$(pwd)":/project \
  -w /project my/development-image:tag cross build --target mips64-unknown-linux-gnuabi64

The image running cross requires the Rust development tools to be installed.

With this setup, cross must find and mount the correct host paths into the container used for cross-compilation. This includes the original project directory, as well as the root path of the parent container, to give access to the Rust build tools.

To inform cross that it is running inside a container, set CROSS_CONTAINER_IN_CONTAINER=true.

A development or CI container can be created like this:

FROM rust:1

# set CROSS_CONTAINER_IN_CONTAINER to inform `cross` that it is executed from within a container
ENV CROSS_CONTAINER_IN_CONTAINER=true

# install `cross`
RUN cargo install cross

...

Limitations: Finding the mount point for the container's root directory is currently only supported for the overlay, overlay2, and fuse-overlayfs storage drivers. In order to access the parent container's Rust setup, the child container mounts the parent's overlayfs. The parent must not be stopped before the child container, as the overlayfs cannot be unmounted correctly by Docker if the child container still accesses it. cross currently cannot find the mount point if it is using a container engine running on top of the Windows Subsystem for Linux (WSL2), since WSL2 uses atypical bind-mount paths for the overlay2 driver (see #728).

Note: The environment variable was previously called CROSS_DOCKER_IN_DOCKER, but has since been renamed to CROSS_CONTAINER_IN_CONTAINER.

Explicitly Choose the Container Engine

By default, cross tries to use Docker or Podman, in that order. If you want to choose a container engine explicitly, you can set the binary name (or path) using the CROSS_CONTAINER_ENGINE environment variable.

For example, if you want to use Podman, set CROSS_CONTAINER_ENGINE=podman.
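This can also be set per invocation (the target is illustrative):

$ CROSS_CONTAINER_ENGINE=podman cross build --target aarch64-unknown-linux-gnu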

Container Engine Issues

When debugging any container engine issues, please ensure you are using a recent version of Docker or Podman. An out-of-date version, particularly one provided by a distribution's package manager, is a likely source of the issue.
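You can check which version is installed with:

$ docker --version
$ podman --version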

macOS Host

There are a few known issues running cross on macOS. The simplest solution is to use Docker itself, which supports all desired features and has no known macOS-specific issues. If you wish to use another container engine, such as Podman or Lima, the following issues are present:

  1. Lima and Podman both cannot mount directories in /tmp by default.
  2. Podman cannot use bind mounts, so you must use remote cross, since data volumes do work.
  3. Podman and Lima do not support SELinux labels (the former errors, the latter warns), which we use with bind mounts.

Using Podman

By default, Podman only makes the $HOME directory available to the virtual machine when running on macOS or Windows. You must initialize your virtual machine with additional mount points for every directory you may need in your build.

# Add any other mount points required
$ podman machine init -v /tmp:/tmp -v /private:/private

Using Lima (NerdCTL)

Lima has a few specific issues due to a lack of compatibility with Docker's CLI interface. First, Lima by default only makes the /tmp/lima directory writable, meaning you must modify its configurations prior to running cross to ensure it works. Due to the beta nature of Lima and the risk of deleting your files, only make the specific directories you require for your project writable. To edit the Lima configurations, run:

$ limactl edit

Inside, ensure the following values are set:

mounts:
# this should be the directory provided by `CARGO_HOME`, or `~/.cargo`
- location: "~/.cargo"
  writable: true
# this should be the directory provided by `XARGO_HOME`, or `~/.xargo`
- location: "~/.xargo"
  writable: true

# Also, the project directory (the workspace root), and additional
# mounted volumes that need write access, and the target directory
# must all be writable. If not, `cross` will fail.

Next, Lima does not support user namespaces, so CROSS_CONTAINER_USER_NAMESPACE=none must be set to disable user namespace remapping. Using Lima (as of nerdctl 0.21) is non-trivial and therefore not recommended.
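For example (the target is illustrative):

$ CROSS_CONTAINER_USER_NAMESPACE=none cross build --target aarch64-unknown-linux-gnu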

Overlay Storage Driver

Currently, when using docker-in-docker, we can only support the overlay, overlay2, or fuse-overlayfs storage driver, so we can find our mount points.

Docker

Docker uses overlay2 by default and can be configured by stopping the daemon, changing the overlay type, and restarting it. For example, on a system using systemd, first stop the daemon:

$ sudo systemctl stop docker

Then edit the storage driver config file (/etc/docker/daemon.json):

{
  "storage-driver": "overlay2"
}

Valid values for cross are overlay, overlay2, or fuse-overlayfs. Finally, restart the Docker daemon:

$ sudo systemctl start docker

To check the changes have been registered, run:

$ docker info | grep 'Storage Driver'
 Storage Driver: overlay2

Podman

By default, podman uses the overlay driver, and can be reconfigured to use another. This might require changing the root data directory via the --root flag, or manually deleting the existing storage.

For example, first create a custom directory for our fuse-overlayfs driver:

$ mkdir -p $HOME/.local/share/containers/fuse-overlayfs-storage/

Next, export the storage driver type via environment variables, or use the CROSS_CONTAINER_OPTS environment variable, to ensure we use the correct storage driver. For example:

# Using environment variables
$ export STORAGE_DRIVER=fuse-overlayfs
$ CROSS_CONTAINER_OPTS="--root=\"$HOME/.local/share/containers/fuse-overlayfs-storage/\"" \
    cross build ...

# Using `CROSS_CONTAINER_OPTS`
$ CROSS_CONTAINER_OPTS="--storage-driver=fuse-overlayfs --root=\"$HOME/.local/share/containers/fuse-overlayfs-storage/\"" \
    cross build ...

You can check podman is using the correct storage driver via:

$ podman --storage-driver=fuse-overlayfs --root $HOME/.local/share/containers/fuse-overlayfs-storage/ info | grep graphDriverName
  graphDriverName: fuse-overlayfs

External Dependencies

Linking External Libraries

Some Rust projects depend on external libraries that are not provided in the pre-built containers. For information on how to create, build, and use custom images, see Custom Images. For example, a Dockerfile and config containing ALSA development files for an armv7-unknown-linux-gnueabihf target would be as follows:

# base pre-built cross image
ARG CROSS_BASE_IMAGE
FROM $CROSS_BASE_IMAGE

# add our foreign architecture and install our dependencies
RUN apt-get update && apt-get install -y --no-install-recommends apt-utils
RUN dpkg --add-architecture armhf
RUN apt-get update && apt-get -y install libasound2-dev:armhf

# add our linker search paths and link arguments
ENV CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_RUSTFLAGS="-L /usr/lib/arm-linux-gnueabihf -C link-args=-Wl,-rpath-link,/usr/lib/arm-linux-gnueabihf $CARGO_TARGET_ARMV7_UNKNOWN_LINUX_GNUEABIHF_RUSTFLAGS"

Cross.toml

[target.armv7-unknown-linux-gnueabihf]
dockerfile = "Dockerfile"

A complete project linking to libdbus can be found here.

Using Debian Repositories

More packages for other architectures can be found in Debian repositories than in Ubuntu's. To install packages from Debian repositories (buster in the example below), you can extend the pre-built Docker images as follows. First, save this file as install_deb.sh, and make sure it's executable (chmod +x install_deb.sh).
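The linked script is not reproduced here, but a rough, hypothetical sketch of what such a script does looks like the following (the real script is more robust; the repository details and host architecture are assumptions):

#!/usr/bin/env bash
# Hypothetical sketch of install_deb.sh. Usage: /install_deb.sh <dpkg-arch> <package>...
set -euo pipefail

arch="${1}"
shift

# Debian archives are signed with Debian keys, which Ubuntu images lack.
apt-get update && apt-get install --assume-yes --no-install-recommends \
    debian-archive-keyring

# Restrict existing Ubuntu sources to the host architecture (assumed amd64
# here), so `apt-get update` doesn't try to fetch foreign packages from Ubuntu.
sed -i 's/^deb /deb [arch=amd64] /' /etc/apt/sources.list

# Only fetch the foreign architecture from the Debian repository.
dpkg --add-architecture "${arch}"
echo "deb [arch=${arch}] http://deb.debian.org/debian buster main" \
    > /etc/apt/sources.list.d/debian-buster.list
apt-get update

# Install each requested package for the foreign architecture.
for pkg in "${@}"; do
    apt-get install --assume-yes "${pkg}:${arch}"
done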

Next, extend our Dockerfile with the custom logic (most architectures should work):

ARG CROSS_BASE_IMAGE

FROM $CROSS_BASE_IMAGE

COPY install_deb.sh /
RUN chmod +x /install_deb.sh
ARG CROSS_DEB_ARCH
# Change the packages to your dependencies.
RUN /install_deb.sh $CROSS_DEB_ARCH libgstreamer1.0-dev \
  libgstreamer-plugins-base1.0-dev \
  libssl-dev

# Update any environment variables required with `ENV`.
# ENV MYVAR=MYVALUE

We can then build our image and use it for our target as shown in Custom Images. For certain architectures, you may need to use other Debian repositories or more complex logic. See linux-image.sh for code to handle more complex cases.
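For example, pointing Cross.toml at the extended Dockerfile above, assuming it sits next to your Cargo.toml (the target is illustrative):

Cross.toml

[target.aarch64-unknown-linux-gnu]
dockerfile = "Dockerfile"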

Using Clang / Bindgen

In order to use bindgen with cross, you must install clang-<v> and libclang-<v>-dev in the image, for example via pre-build or by extending the Dockerfile. An example configuration is:

[build]
pre-build = ["apt-get update && apt-get install --assume-yes --no-install-recommends libclang-3.9-dev clang-3.9"]

If you're using the newer images for cross available on :main, this would be

[build]
pre-build = ["apt-get update && apt-get install --assume-yes --no-install-recommends libclang-10-dev clang-10"]

Bindgen can't find header file for lib*-dev:arch when compiling sys crate

We set BINDGEN_EXTRA_CLANG_ARGS_<target>=--sysroot=$CROSS_SYSROOT in our images, which means clang can't find headers in /usr/include: it looks inside $CROSS_SYSROOT/include instead.
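One possible workaround is a hypothetical sketch like the following, which appends a fallback include directory to bindgen's per-target clang arguments; it is only sensible when the missing headers are architecture-independent, and the variable name, sysroot path, and target are assumptions you must adapt:

[build.env]
# hypothetical: keeps the sysroot, but also searches /usr/include last
passthrough = ["BINDGEN_EXTRA_CLANG_ARGS_aarch64_unknown_linux_gnu=--sysroot=/usr/aarch64-linux-gnu -idirafter/usr/include"]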

Unable to generate bindings: ClangDiagnostic("wrapper.h:1:10: fatal error: 'mylib.h' file not found

Some -sys crates don't use pkg-config to probe for installed libraries, which means cross-compilation sometimes doesn't work correctly with bindgen. To solve this, see https://github.com/cross-rs/cross/issues/1389.

Installing Clang

When installing clang for a custom image, you must install it for the host architecture, not the target architecture. Installing it for the target will uninstall the GCC cross-compiler, and therefore prevent cross from compiling for that architecture. For example, the following code is wrong:

FROM ghcr.io/cross-rs/armv7-unknown-linux-gnueabihf:main

RUN dpkg --add-architecture armhf
RUN apt-get update && apt-get install --assume-yes --no-install-recommends libclang-3.9-dev:armhf clang-3.9:armhf
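A corrected version installs the packages for the host instead (a sketch; use whichever clang version your base image provides, as in the Using Clang / Bindgen section):

FROM ghcr.io/cross-rs/armv7-unknown-linux-gnueabihf:main

RUN apt-get update && apt-get install --assume-yes --no-install-recommends libclang-10-dev clang-10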

By default, newer images will error if you try to install packages conflicting with our toolchains, saying there is no valid installation candidate. If you are using bindgen, see the Using Clang / Bindgen section.

OpenSSL is Not Installed

Why isn't OpenSSL installed? Maintaining images with OpenSSL proved to be a source of numerous bugs (see #229 and #332), and only some images provided OpenSSL to begin with. Since the openssl crate provides a vendored copy, there are good ways of installing an OpenSSL dependency for Rust packages, and we provide a recipe for doing so. If OpenSSL is needed as a dependency for other C/C++ libraries, we document how to install and link to external libraries.
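For instance, the openssl crate's vendored feature builds OpenSSL from source during the build (the version number is illustrative):

[dependencies]
openssl = { version = "0.10", features = ["vendored"] }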

Cannot Find Package

For some targets, certain system packages can conflict with packages required to use the toolchain. For example, installing libclang may be required to use bindgen. However, libclang should be installed for the image architecture and not the target architecture, and many users assume clang is required as well. If users attempt to install clang for the target architecture (say, clang:armel for target arm-unknown-linux-gnueabi), it will uninstall the cross-compiler, rendering the Docker image useless.

To prevent users from installing packages that will negatively interfere with the build system, we use package pinning to stop apt from finding valid installation candidates for these packages. Each pin can be found in /etc/apt/preferences.d/, with the pin under the package name. In order to override our package pins, you can either delete our custom pin or provide your own pin with a higher priority. Package pinning is superior to package holding, since it works for packages that are not yet installed.
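For example, a hypothetical override pin (the file name, package, and priority are illustrative), saved to /etc/apt/preferences.d/clang-override:

Package: clang-10
Pin: release *
Pin-Priority: 1001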

We currently use package pinning to prevent the removal of binutils for any image that uses a cross-compiler installed via apt. In addition, we block the use of any package from the armhf foreign architecture for the arm-unknown-linux-gnueabihf target, since the target is an ARMv6 hard-float target, but armhf packages are built for ARMv7-A. This means that any installed armhf system packages will link successfully into the Rust package; however, the generated code will be unable to run on an ARMv6 CPU.

Other External Dependency Issues

Have another issue while building packages using external dependencies? See our documentation on additional external dependency issues.

CI Workflows

GitHub Workflows

NOTE: This section needs to be reworked, as actions-rs is unmaintained.

actions-rs/cargo provides built-in support for using cross with the use-cross key. For example, to test your crate on aarch64-unknown-linux-gnu and arm-unknown-linux-gnueabi when pushing new commits, you can use the following sample workflow:

on: [push]

name: Cross CI

jobs:
  cross:
    name: Rust ${{matrix.target}}
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        target:
          - aarch64-unknown-linux-gnu
          - arm-unknown-linux-gnueabi
    steps:
      - uses: actions/checkout@v2
      - uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
          target: ${{matrix.target}}
          override: true
      - uses: actions-rs/cargo@v1
        with:
          use-cross: true
          command: test
          args: --target ${{matrix.target}}

Gitlab CI

GitLab CI uses a remote Docker build, which requires the use of cross remote. A sample .gitlab-ci.yml file is as follows:

variables:
    # the host where the docker instance is running
    DOCKER_HOST: tcp://docker:2375/
    # use for much faster builds
    DOCKER_DRIVER: overlay2
    # ensure cross knows it's running remotely
    CROSS_REMOTE: 1

services:
    - docker:18.09-dind

armv6:
    script:
        - cross test --target arm-unknown-linux-gnueabihf

Missing Intrinsics

Undefined Reference with build-std

When using cross's build-std configuration or -Z build-std, cross can fail with numerous error messages such as:

undefined reference to `__addtf3'
undefined reference to `__netf2'
undefined reference to `__subtf3'
undefined reference to `__addtf3'
undefined reference to `__fixtfsi'
undefined reference to `__floatsitf'

Symbols starting with __ are reserved for compiler vendors, which suggests a missing symbol in a compiler intrinsic. For some missing intrinsics, this can be fixed by adding libgcc to your link arguments (only for *-linux-gnu and *-linux-musl targets):

# fails
$ cross +nightly build --target aarch64-unknown-linux-musl -Z build-std
# now this succeeds
$ export RUSTFLAGS="-C link-arg=-lgcc -Clink-arg=-static-libgcc"
$ cross +nightly build --target aarch64-unknown-linux-musl -Z build-std

This is because certain missing intrinsics are provided by libgcc, but are not present in compiler-builtins without its c feature. Not every target will have every intrinsic provided by libgcc, however:

$ cross +nightly build --target aarch64-unknown-linux-gnu -Z build-std
  = note: libstd-3d482a16e52503f2.rlib(std-3d482a16e52503f2.std.18753d0b-cgu.3.rcgu.o): In function `core::sync::atomic::atomic_add::h038154eda15b0d27':
    atomic.rs:3036: undefined reference to `__aarch64_ldadd8_relax'
    atomic.rs:3038: undefined reference to `__aarch64_ldadd8_rel'
    atomic.rs:3037: undefined reference to `__aarch64_ldadd8_acq'
    atomic.rs:3039: undefined reference to `__aarch64_ldadd8_acq_rel'
    atomic.rs:3040: undefined reference to `__aarch64_ldadd8_acq_rel'

To fix this, you must provide the C sources from the LLVM tree. First, add the following to your Cargo.toml:

[dependencies.compiler_builtins]
git = "https://github.com/rust-lang/compiler-builtins"
features = ["c"]

Next, clone the LLVM project and add the compiler-rt sources to your shared volumes and/or project:

$ git clone https://github.com/llvm/llvm-project --branch llvmorg-14.0.6 --depth 1

Next, add the compiler-rt subdirectory to your project or mount the volume, and add the RUST_COMPILER_RT_ROOT environment variable so compiler-builtins knows where to find these sources.

For example, using a mounted volume, in Cross.toml:

[build.env]
volumes = ["RUST_COMPILER_RT_ROOT=/path/to/compiler-rt"]

Or, adding the sources to your project subdirectory, in Cross.toml:

[build.env]
passthrough = ["RUST_COMPILER_RT_ROOT=/path/to/project/compiler-rt"]

This will allow the project to build all of the compiler-builtins intrinsics from source.

Error Adding Symbols: DSO Missing From Command Line

If you get an error saying missing symbols because libgcc was not provided (the error message below), you've likely found a bug in cross or compiler-builtins. Please file an issue and if applicable, we can patch this upstream.

  = note: ld: libc.a(strtod.lo): undefined reference to symbol '__trunctfsf2@@GCC_3.0'
libgcc_s.so.1: error adding symbols: DSO missing from command line

Other

Glibc Version Error

When using build scripts with cross, you might run into the following error:

Caused by:
  process didn't exit successfully: `/target/debug/build/hellopp-0a565c6e09d5b0ce/build-script-build` (exit status: 1)
  --- stderr
  /target/debug/build/hellopp-0a565c6e09d5b0ce/build-script-build: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by /target/debug/build/hellopp-0a565c6e09d5b0ce/build-script-build)
  /target/debug/build/hellopp-0a565c6e09d5b0ce/build-script-build: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.27' not found (required by /target/debug/build/hellopp-0a565c6e09d5b0ce/build-script-build)

This, unfortunately, is an issue that likely won't be fixed: changing targets when compiling with cargo does not invalidate build fingerprints, so stale build-script binaries are reused. There are two workarounds:

  1. Run cargo clean every time you change targets (not ideal).
  2. Use a custom target directory for each target.

To facilitate the latter, you can use a cargo wrapper similar to the following:

cross.sh

#!/usr/bin/env bash

export CARGO_BUILD_TARGET="${TARGET}"
export CARGO_TARGET_DIR=target/build/"${CARGO_BUILD_TARGET}"
cross "${@}"

You can then either add cross.sh to the path or execute it locally from the project:

$ TARGET=aarch64-unknown-linux-gnu ./cross.sh build -vv
$ ls target/build/aarch64-unknown-linux-gnu/aarch64-unknown-linux-gnu/debug/
build  deps  examples  hello  hello.d  incremental
$ TARGET=sparc64-unknown-linux-gnu ./cross.sh build -vv
$ ls target/build/sparc64-unknown-linux-gnu/sparc64-unknown-linux-gnu/debug
build  deps  examples  hello  hello.d  incremental

This way, cross uses a separate target directory for the artifacts of every build script, so you never have to clean the build just to switch between targets.

Procedural Macros

When using procedural macros, the code that consumes and produces Rust syntax runs on the host toolchain, that is, the toolchain of the container image architecture (generally x86_64). This means any dependencies required to process the token stream must be installed for the host, while any dependencies needed to compile the crate after the macro's expansion must be installed for the target. Often, the same libraries are required for both. An example is a project using sqlx, which requires OpenSSL for both the host and the target. The important part is in Cross.toml, where we install libssl-dev for both amd64 and $CROSS_DEB_ARCH:

[target.aarch64-unknown-linux-gnu]
pre-build = [
    "dpkg --add-architecture $CROSS_DEB_ARCH",
    "apt-get update && apt-get install --assume-yes libssl-dev:amd64 libssl-dev:$CROSS_DEB_ARCH"
]

Managing Images

Due to the large storage requirements of cross images, we provide utilities to list and remove images associated with cross. Images can be listed and removed with cross-util:

# list all images created by cross
$ cross-util images list
# a dry-run of removing all images created by cross
$ cross-util images remove
# remove all images created by cross
$ cross-util images remove --execute
# remove all images for the given target created by cross
$ cross-util images remove arm-unknown-linux-gnueabihf --execute

Customizing Runners

By default, cross runs native binaries without emulation, and non-native binaries using Qemu, WINE, or some other runner. However, it may not be desirable to run the binary natively in all cases: for example, i586 code that happens to run on an x86_64 host may not be valid on a real i586 CPU. Therefore, we support two different ways of customizing the runners:

Cross Target Runner

See configuration for more details. A custom runner for cross can be provided via target.(...).runner, and can be qemu-system, qemu-user, or native. This can also be provided via the CROSS_TARGET_${TARGET}_RUNNER environment variable.

[target.aarch64-unknown-linux-gnu]
runner = "qemu-user"
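The same runner can be selected through the environment (the variable name follows the pattern above; the target is illustrative):

$ CROSS_TARGET_AARCH64_UNKNOWN_LINUX_GNU_RUNNER=qemu-user cross test --target aarch64-unknown-linux-gnu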

Cargo Target Runner

However, this may not provide enough flexibility: you may wish to provide your own wrapper, or test on a specific CPU. In this case, you can set CARGO_TARGET_${TARGET}_RUNNER (which allows any command or sequence of arguments, and overrides CROSS_TARGET_${TARGET}_RUNNER).

# run binaries on the cortex-a72 when targeting `aarch64-unknown-linux-gnu`.
$ export CARGO_TARGET_AARCH64_UNKNOWN_LINUX_GNU_RUNNER="qemu-aarch64 -cpu cortex-a72"
$ cross run --target aarch64-unknown-linux-gnu

Running BSD Tests

Running FreeBSD tests cannot be done on Linux without full-system emulation via Qemu, or FreeBSD cloud services (such as CI images or virtual private servers). The best solution is to use Cirrus CI, as is done by Rust when testing libc. You can also use the VPS providers recommended by the FreeBSD project. Any VPS hosting provider supporting custom images will work for other BSD distros.

Android Test Support

cross does not ship with a full Android emulator, and therefore some tests on Android can fail. The best solution is to install Anbox on a Linux system (possibly a cross Docker image using custom images), and following all the post-install instructions. This has not been tested to work in a container, and may only work on the host machine.

Android Version Configuration

Cross allows you to specify the NDK, SDK, and Android versions when building Android images, via the ANDROID_NDK, ANDROID_SDK, and ANDROID_VERSION build arguments. These default to NDK r25b, SDK 28, and Android 9.0.0_r1. We support NDK versions r10e-r25b, SDK versions 21-33, and Android versions 5.0, 5.1, 6.0, 7.0, 8.0, 8.1, 9.0, 10.0, and 11.0. For example, to build for Android 11:

cargo build-docker-image aarch64-linux-android \
  --build-arg ANDROID_NDK=r25b \
  --build-arg ANDROID_SDK=30 \
  --build-arg ANDROID_VERSION=11.0.0_r48

You must provide compatible NDK, SDK, and Android versions, as described in the SDK Platform release notes. Note that all non-complete builds will use the bootstrap linker and not the APEX linker, due to the large number of dependencies needed to support APEX binaries.

If you would like to target a newer Android version that we do not currently support, you can either disable building the Android system or do a complete build. Providing ANDROID_SYSTEM_NONE=1 will disable running Android binaries, but will give faster image build times and smaller image sizes. Providing ANDROID_SYSTEM_COMPLETE=1 will do a full Android system build; note that this is currently untested, is very slow, requires large amounts of storage, and will produce large images. In order to do a complete system build, you will need more than 16GB of RAM and more than 400GB of storage space. This is more than the default WSL2 disk size of 256GB, requiring you to expand the size of the WSL2 virtual hard disk.
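For example, a sketch disabling the Android system build entirely, reusing the command form shown above:

cargo build-docker-image aarch64-linux-android \
  --build-arg ANDROID_SYSTEM_NONE=1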

Exec Format Error

If using a container image with a different architecture than the host container engine (such as using a linux/amd64 image on a linux/arm64 host), the container engine may not have the proper Qemu emulators to run the image with the foreign architecture.

To fix this, install binfmt support for the host kernel:

docker run --privileged --rm tonistiigi/binfmt --install all

This requires --privileged because it modifies the host kernel to register the executable file formats with the correct Qemu emulator.

Glacial Custom Image Builds for CentOS

When using yum on some hosts, there is a bug in Docker BuildKit that causes yum to run extremely slowly. The cause is the ulimit being set to unlimited; it can be fixed by manually setting the ulimit in every RUN command, either inline or within a script. A Dockerfile example is:

RUN ulimit -n 1024000 && yum install ... -y

Or, from your scripts, you can use:

install_packages() {
    ulimit -n 1024000
    yum install "${@}" -y
}