Releases: NVIDIA/nvidia-container-toolkit
v1.12.0-rc.1
- Improve injection of Vulkan configurations and libraries
- Add `nvidia-ctk info generate-cdi` command to generate CDI specifications for available devices

Changes for the container-toolkit container
- Update CUDA base images to 11.8.0

Changes from libnvidia-container v1.12.0-rc.1
- Add NVVM Compiler Library (`libnvidia-nvvm.so`) to list of compute libraries
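The new CDI generation command can be invoked as follows; this is a sketch assuming a host with the NVIDIA driver installed, and the output path is an illustrative choice (later releases renamed the command to `nvidia-ctk cdi generate`):

```shell
# Generate a CDI specification describing the GPUs visible on this host
# and store it where CDI-aware runtimes look for specs by default.
sudo nvidia-ctk info generate-cdi > /etc/cdi/nvidia.yaml
```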
v1.11.0
This is a promotion of the v1.11.0-rc.3 release to GA.
This release of the NVIDIA Container Toolkit v1.11.0 is primarily targeted at adding support for injection of GPUDirect Storage and MOFED devices into containerized environments.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
NOTE: This release does not include an update to nvidia-docker2 and is compatible with nvidia-docker2 2.11.0.
The packages for this release are published to the libnvidia-container package repositories.
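The GPUDirect Storage and MOFED injection described above is opt-in via container environment variables; a hypothetical invocation might look like the following (the image name is an assumption, and the corresponding host drivers, nvidia-fs and MOFED, must already be installed):

```shell
# Request GDS and MOFED device injection for a CUDA container.
docker run --rm --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_GDS=enabled \
  -e NVIDIA_MOFED=enabled \
  nvcr.io/nvidia/cuda:11.8.0-base-ubuntu20.04 nvidia-smi
```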
1.11.0-rc.3
- Build fedora35 packages
- Introduce an `nvidia-container-toolkit-base` package for better dependency management
- Fix removal of `nvidia-container-runtime-hook` on RPM-based systems
- Inject platform files into container on Tegra-based systems

NOTE: When upgrading from (or downgrading to) another 1.11.0-rc.* version it may be required to remove the `nvidia-container-toolkit` or `nvidia-container-toolkit-base` package(s) manually. This is due to the introduction of the `nvidia-container-toolkit-base` package, which now provides the configuration file for the NVIDIA Container Toolkit. Upgrades from or downgrades to older versions of the NVIDIA Container Toolkit (i.e. <= 1.10.0) should work as expected.

Changes for the container-toolkit container
- Update CUDA base images to 11.7.1
- Fix bug in setting of toolkit `accept-nvidia-visible-devices-*` config options introduced in v1.11.0-rc.2

Changes from libnvidia-container v1.11.0-rc.3
- Preload `libgcc_s.so.1` on arm64 systems
1.11.0-rc.2
Changes for the container-toolkit container
- Allow `accept-nvidia-visible-devices-*` config options to be set by toolkit container

Changes from libnvidia-container v1.11.0-rc.2
- Fix bug where the LDCache was not updated when the `--no-pivot-root` option was specified
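The `accept-nvidia-visible-devices-*` options referenced above are settings in the toolkit's config file; a sketch of how they appear in `/etc/nvidia-container-runtime/config.toml` (option names as used by the toolkit; the values shown are illustrative, and defaults may differ by distribution):

```toml
# Honour NVIDIA_VISIBLE_DEVICES requests from unprivileged containers.
accept-nvidia-visible-devices-envvar-when-unprivileged = true
# Alternatively, accept device requests expressed as volume mounts.
accept-nvidia-visible-devices-as-volume-mounts = false
```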
1.11.0-rc.1
- Add `cdi` mode to NVIDIA Container Runtime
- Add discovery of GPUDirect Storage (`nvidia-fs*`) devices if the `NVIDIA_GDS` environment variable of the container is set to `enabled`
- Add discovery of MOFED InfiniBand devices if the `NVIDIA_MOFED` environment variable of the container is set to `enabled`
- Fix bug in CSV mode where libraries listed as `sym` entries in the mount specification are not added to the LDCache
- Rename `nvidia-container-toolkit` executable to `nvidia-container-runtime-hook` and create `nvidia-container-toolkit` as a symlink to `nvidia-container-runtime-hook` instead
- Add `nvidia-ctk runtime configure` command to configure the Docker config file (e.g. `/etc/docker/daemon.json`) for use with the NVIDIA Container Runtime
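Running `sudo nvidia-ctk runtime configure` registers the NVIDIA runtime in `/etc/docker/daemon.json`; the resulting entry looks roughly like the following sketch (the exact keys and binary path emitted may differ):

```json
{
  "runtimes": {
    "nvidia": {
      "path": "nvidia-container-runtime"
    }
  }
}
```

After editing the file, the Docker daemon must be restarted for the new runtime to become available.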
v1.10.0
This is a promotion of the v1.10.0-rc.3 release to GA.
This release of the NVIDIA Container Toolkit v1.10.0 is primarily targeted at improving support for Tegra-based systems. It sees the introduction of a new mode of operation for the NVIDIA Container Runtime that makes modifications to the incoming OCI runtime specification directly instead of relying on the NVIDIA Container CLI.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
- libnvidia-container 1.10.0
- nvidia-container-toolkit 1.10.0
- nvidia-container-runtime 3.10.0
- nvidia-docker2 2.11.0

The packages for this release are published to the libnvidia-container package repositories.
- Update config files to include default settings for `nvidia-container-runtime.mode` and `nvidia-container-runtime.runtimes`
- Update `container-toolkit` base image to CUDA 11.7.0
- Switch to `ubuntu20.04` for default `container-toolkit` image
- Stop publishing all `centos8` and `arm64` `ubuntu18.04` `container-toolkit` images
1.10.0-rc.3
- Use default config instead of raising an error if config file cannot be found
- Ignore `NVIDIA_REQUIRE_JETPACK*` environment variables for requirement checks
- Fix bug in detection of Tegra systems where `/sys/devices/soc0/family` is ignored
- Fix bug where links to devices were detected as devices

Changes for the container-toolkit container
- Fix bug where runtime binary path was misconfigured for containerd when using v1 of the config file

Changes from libnvidia-container v1.10.0-rc.3
- Fix bug introduced when adding `libcudadebugger.so` to list of libraries in v1.10.0-rc.2
1.10.0-rc.2
- Add support for `NVIDIA_REQUIRE_*` checks for `cuda` version and `arch` to `csv` mode
- Switch to debug logging to reduce log verbosity
- Support logging to log files requested on the command line
- Fix bug when launching containers with relative root path (e.g. using containerd)
- Allow low-level runtime path to be set explicitly as `nvidia-container-runtime.runtimes` option
- Fix failure to locate low-level runtime if the PATH environment variable is unset
- Replace experimental option for NVIDIA Container Runtime with `nvidia-container-runtime.mode = "csv"` option
- Use `csv` as default mode on Tegra systems without NVML
- Add `--version` flag to all CLIs
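The `mode` and `runtimes` options above live in the toolkit's config file; a sketch of the relevant fragment of `/etc/nvidia-container-runtime/config.toml` (the values shown are illustrative and defaults may differ by platform):

```toml
[nvidia-container-runtime]
# "csv" is used by default on Tegra systems without NVML; elsewhere the
# NVIDIA Container CLI path remains the default.
mode = "csv"
# Candidate low-level OCI runtimes, searched in order on the PATH.
runtimes = ["docker-runc", "runc"]
```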
Changes from libnvidia-container v1.10.0-rc.2
- Bump `libtirpc` to `1.3.2` (libnvidia-container#168)
- Fix bug when running host `ldconfig` using `glibc` compiled with a non-standard prefix
- Add `libcudadebugger.so` to list of compute libraries
1.10.0-rc.1
- Add `nvidia-container-runtime.log-level` config option to control the level of logging in the NVIDIA Container Runtime
- Add `nvidia-container-runtime.experimental` config option that allows experimental features to be enabled
- Add `nvidia-container-runtime.discover-mode` config option to control how modifications are applied to the incoming OCI runtime specification in experimental mode
- Add support for the direct modification of the incoming OCI specification to the NVIDIA Container Runtime; this is targeted at Tegra-based systems with CSV-file based mount specifications

Changes from libnvidia-container v1.10.0-rc.1
- [WSL2] Fix segmentation fault on WSL2 systems with no adapters present (e.g. `/dev/dxg` missing)
- Ignore pending MIG mode when checking if a device is MIG enabled
- [WSL2] Fix bug where `/dev/dxg` is not mounted when `NVIDIA_DRIVER_CAPABILITIES` does not include `"compute"`
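The experimental options introduced in this rc can be sketched as a config fragment like the following (option names as listed above; the values are illustrative, and `experimental`/`discover-mode` were replaced by the single `mode` option in v1.10.0-rc.2):

```toml
[nvidia-container-runtime]
log-level = "info"
# Opt in to direct modification of the incoming OCI spec.
experimental = true
# Controls how modifications are discovered in experimental mode.
discover-mode = "auto"
```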
v1.9.0
This release of the NVIDIA Container Toolkit v1.9.0 is primarily targeted at adding multi-arch support for the container-toolkit images. It also includes enhancements for use on Tegra-based systems and some notable bugfixes.
NOTE: This release is a unified release of the NVIDIA Container Toolkit that consists of the following packages:
- libnvidia-container 1.9.0
- nvidia-container-toolkit 1.9.0
- nvidia-container-runtime 3.9.0
- nvidia-docker2 2.10.0
Changes from libnvidia-container 1.9.0
- Add additional check for Tegra in `/sys/.../family` file in CLI
- Update jetpack-specific CLI option to only load Base CSV files by default
- Fix bug (from `v1.8.0`) when mounting GSP firmware into containers without `/lib` to `/usr/lib` symlinks
- Update `nvml.h` to CUDA 11.6.1 NVML_DEV 11.6.55
- Update switch statement to include new brands from latest `nvml.h`
- Process all `--require` flags on Jetson platforms
- Fix long-standing issue with running `ldconfig` on Debian systems