
CUDA EULA patchelf exception #10

Open
SomeoneSerge opened this issue Oct 27, 2023 · 0 comments

Rewriting ELF

The "ELF patching", or rewriting of the .dynamic and .interp sections in shared libraries and executables, is a common step in preparing software for deployment in target environments. The records in these sections control the assumptions the program makes, during loading, about the environment it is executed in. The .interp section tells a dynamically linked program where to find ld.so, the "dynamic linker" responsible for program loading. The DT_NEEDED and DT_RUNPATH entries in the .dynamic section give the dynamic linker cues about which shared libraries provide the required dependencies and where in the filesystem they can be found.
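The role of these records can be illustrated with a toy model of the lookup the dynamic linker performs. This is a hypothetical sketch, not a real loader: the sonames and paths are made up, and real ld.so consults more sources (DT_RPATH, LD_LIBRARY_PATH, the ld.so cache, default directories).

```python
def resolve_needed(dt_needed, dt_runpath, available_files):
    """Mimic how ld.so uses DT_NEEDED and DT_RUNPATH: for every needed
    soname, probe each RUNPATH directory in order, taking the first hit."""
    resolved = {}
    for soname in dt_needed:
        for directory in dt_runpath.split(":"):
            candidate = f"{directory}/{soname}"
            if candidate in available_files:
                resolved[soname] = candidate
                break
        else:
            resolved[soname] = None  # ld.so would report "not found"
    return resolved

# A program linked against two libraries, with a two-entry RUNPATH:
deps = resolve_needed(
    dt_needed=["libm.so.6", "libfoo.so.1"],
    dt_runpath="/opt/app/lib:/usr/lib",
    available_files={"/usr/lib/libm.so.6", "/opt/app/lib/libfoo.so.1"},
)
print(deps)
```

Patching DT_RUNPATH (e.g. with patchelf) amounts to changing the `dt_runpath` argument above, without touching the program's code.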

For example, when one builds a native program using CMake, the built program stores DT_RUNPATH records with the locations of all of its dependencies in the development environment. This way, the developer may easily run local tests. However, when the program is installed, CMake strips[^1] these records and re-links the program, because the locations of the dependencies may be different on the user's machine.
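A sketch of the CMake variables that control this behavior (the install path shown is hypothetical):

```cmake
# Defaults made explicit: keep dependency locations in the build tree,
# and only rewrite the RUNPATH at install time.
set(CMAKE_SKIP_BUILD_RPATH FALSE)
set(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)
# Instead of stripping the records entirely, one may ask CMake to install
# a relocatable RUNPATH relative to the binary's own location:
set(CMAKE_INSTALL_RPATH "$ORIGIN/../lib")
```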

Conda and Spack package managers make extensive use[^2][^3] of the "dynamic string token" $ORIGIN in DT_RUNPATH entries in order to create relocatable binaries: executables and shared libraries that can be moved to and loaded from an arbitrary root in the file system, as long as one preserves the relative paths between the distributed files.
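The $ORIGIN mechanism can be sketched in a few lines: the loader substitutes the token with the directory containing the object being loaded, so the same RUNPATH works from any installation root. The paths below are made up for illustration.

```python
import os.path

def expand_origin(runpath, loaded_from):
    """Expand the $ORIGIN dynamic string token the way ld.so does:
    $ORIGIN stands for the directory containing the loaded object."""
    origin = os.path.dirname(loaded_from)
    return [
        os.path.normpath(entry.replace("$ORIGIN", origin))
        for entry in runpath.split(":")
    ]

# The same relative layout resolves correctly from two different prefixes:
print(expand_origin("$ORIGIN/../lib", "/opt/env-a/bin/python"))
print(expand_origin("$ORIGIN/../lib", "/home/user/env-b/bin/python"))
```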

Nix and GNU Guix make the loading of dynamic programs deterministic and provably correct by linking them directly against the concrete revisions of the dependencies they were built to run with. This is done by recording in the dynamic structure the absolute paths of the dependencies, which are always deployed to unique locations pre-computed in a deterministic fashion.
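A highly simplified model of such pre-computed locations: hash the package identity and its dependency paths to obtain a unique, deterministic prefix, then point DT_RUNPATH at the exact dependency inside it. Real Nix hashes the entire build recipe (the derivation); the names and the `/nix/store` layout here are only illustrative.

```python
import hashlib

def store_path(name, version, inputs, store="/nix/store"):
    """Toy content-addressed location: the digest depends on the package
    name, version, and the unique paths of its own dependencies."""
    digest = hashlib.sha256(
        "\n".join([name, version, *sorted(inputs)]).encode()
    ).hexdigest()[:32]
    return f"{store}/{digest}-{name}-{version}"

glibc = store_path("glibc", "2.38", [])
app = store_path("myapp", "1.0", [glibc])
# The application's DT_RUNPATH would name the exact glibc it was built with:
runpath = f"{glibc}/lib"
print(app)
print(runpath)
```

Because the hash changes whenever any input changes, two programs built against different dependency revisions can never accidentally pick up each other's libraries.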

ELF patching is used in more ad hoc ways too. For example, the official PyTorch wheels often package[^4] vendored copies of certain native libraries, including but not limited to libgomp.so and the CUDA and cuDNN shared libraries. In order to avoid conflicts with other versions of the same libraries, their path names are augmented with unique suffixes, and all of the DT_NEEDED attributes are updated to reflect this.
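The renaming scheme can be sketched as follows. This is a hypothetical simplification: the sonames and the suffix derivation are made up, and the actual wheel-building scripts also rewrite the files' own sonames and run patchelf on every vendored object.

```python
import hashlib

def mangle(soname):
    """Derive a unique, deterministic suffix for a vendored library name,
    e.g. 'libgomp.so.1' -> 'libgomp-<tag>.so.1'."""
    tag = hashlib.sha256(soname.encode()).hexdigest()[:8]
    stem, _, rest = soname.partition(".so")
    return f"{stem}-{tag}.so{rest}"

# Rename the vendored copies, then rewrite every DT_NEEDED that refers
# to them; system libraries like libc are left untouched.
vendored = ["libgomp.so.1", "libcudnn.so.8"]
renames = {name: mangle(name) for name in vendored}
dt_needed = ["libc.so.6", "libgomp.so.1", "libcudnn.so.8"]
patched = [renames.get(name, name) for name in dt_needed]
print(renames)
print(patched)
```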

Distributing software linked against CUDA

The CUDA and cuDNN End-User License Agreements (EULAs) specifically allow[^5][^6] redistributing certain NVidia-owned files together with the Licensees' applications, as required for deployment on the target hosts. The licenses also explicitly grant application developers the right to redistribute files whose path names reflect versioning and host architecture information, which are required for the dynamic linker to choose the right binary.

The EULAs do not mention the possibility of updating the dynamic structures (the .dynamic and .interp sections) of the NVidia-owned binaries in order to prepare them for execution on the users' machines. Thus the default assumption is that such updates may constitute a "modification", which would prohibit redistribution of the finalized artifacts. For this reason, NVidia had to grant[^7][^8] Anaconda and conda-forge an exclusive permission to patchelf the toolkit files, before CUDAToolkit and cuDNN could be published in the conda repositories, and before PyTorch with CUDA support could be made available to users.

This is also the underlying reason that distributions such as Nixpkgs and Spack currently choose not to provide a public binary cache for software that links against CUDA, cuDNN, or components of the NVidia HPC SDK. As a consequence, consumers interested in the unique correctness guarantees, or in the security and supply-chain inspectability these systems provide, have to invest in their own build infrastructure capable of handling builds that are at times prohibitively heavy, such as PyTorch and TensorFlow with CUDA support.

Proposal

NVidia could enable all of these new applications of the CUDA and cuDNN libraries by integrating the patchelf exception, tested and recognized by NVidia and Anaconda over the years, into the respective EULA texts. Concretely, this means updating the licenses to explicitly permit modifying the dynamic linker cues in the ELF dynamic structures, namely the .interp section and the DT_NEEDED, DT_RPATH, and DT_RUNPATH entries in the .dynamic section, for the sole purpose of communicating assumptions about the target environment.

FAQ

Please do suggest how to update this text!

Could such an exception be abused to bypass NVidia's software restrictions, e.g. to gain access to datacenter GPUs' features on consumer-grade devices?

Not in any meaningful way. This issue is specifically concerned with the cudatoolkit and cuDNN libraries, not with the libcuda.so userspace driver. Additionally, such an exception would not allow touching the .text sections, where the actual executable code resides.

My project or company is also affected!

Please consider issuing a public statement and linking it in the comments. Also consider reaching out to nvidia-compute-license-questions@nvidia.com, as suggested by the CUDA EULA, and linking this issue and your statement.

Edit history

  • 2023-10-30: Clarified that the libcuda.so driver is out of the scope for this issue

Links

CC #3 #5 NixOS/nixpkgs#76233

Footnotes

[^1]: https://gitlab.kitware.com/cmake/community/-/wikis/doc/cmake/RPATH-handling
[^2]: https://docs.conda.io/projects/conda-build/en/3.21.x/resources/use-shared-libraries.html#shared-libraries-in-macos-and-linux
[^3]: https://spack.readthedocs.io/en/latest/environments.html
[^4]: https://github.com/pytorch/builder/blob/c5e331c0858e37fedc047707466161dfe0cadff6/manywheel/build_common.sh#L326-L357
[^5]: https://docs.nvidia.com/cuda/eula/index.html#attachment-a
[^6]: "2. Distribution" in https://docs.nvidia.com/deeplearning/cudnn/sla/index.html#supplement
[^7]: https://github.com/conda-forge/cudatoolkit-feedstock/issues/15#issuecomment-717563755
[^8]: https://nvbugs/3052604
