This repository has been archived by the owner on Jan 22, 2024. It is now read-only.

GPU becomes unavailable after some time in Docker container #1469

Closed
tobigue opened this issue Mar 5, 2021 · 9 comments

@tobigue

tobigue commented Mar 5, 2021

1. Issue or feature description

Hello,

After updating the software on some of our workstations, we have the problem that GPUs become unavailable inside running Docker containers.

We first noticed this when PyTorch experiments failed on the second script called in the container with a RuntimeError: No CUDA GPUs are available.

While trying to debug this, we noticed that just starting a container with nvidia-docker run --rm -it nvidia/cuda:11.2.1-devel-ubuntu20.04 bash and running watch -n 1 nvidia-smi inside it also does not work as expected. At first the output is normal, but after some time (anywhere from a few seconds to several hours) it changes to Failed to initialize NVML: Unknown Error.

We could reproduce the error with different Docker images, such as nvidia/cuda:11.2.1-devel-ubuntu20.04 and images based on nvcr.io/nvidia/pytorch:20.12-py3 and pytorch/pytorch:1.7.1-cuda11.0-cudnn8-runtime.

We have reproduced this bug on different workstations with completely different hardware and GPUs (GTX 1080 Ti and RTX 3090).

Setups that do NOT work (GTX 1080 Ti and RTX 3090 workstations) are:
Ubuntu 20.04 (nvidia-docker2 2.5.0-1):

  • linux-image-5.4.0-65-generic + nvidia-headless-450 450.102.04-0ubuntu0.20.04.1
  • linux-image-5.8.0-44-generic + nvidia-headless-460 460.39-0ubuntu0.20.04.1

A setup that DOES WORK (on the same GTX 1080 Ti machine) is:
Ubuntu 16.04 (nvidia-docker2 2.0.3+docker18.09.2-1):

  • linux-image-4.4.0-194-generic + nvidia-430 430.26-0ubuntu0~gpu16.04.1

So we suspect that the problem lies in the newer versions of the kernel, the driver, or nvidia-docker on the host machine.

We are looking for advice on how to debug this further and fix the problem.
What could we run on the host and inside the container, while a container is in the erroneous state, to find out what exactly the problem is?
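
For example, checks along these lines (a rough sketch; these are standard diagnostics and <container> is a placeholder for the affected container):

# on the host: is the driver itself still healthy?
nvidia-smi
dmesg | grep -iE "nvrm|nvidia"

# inside the affected container: are the device nodes still visible and usable?
docker exec -it <container> sh -c 'ls -l /dev/nvidia*'
docker exec -it <container> nvidia-smi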

Thanks for any help!

2. Steps to reproduce the issue

E.g. run nvidia-docker run --rm -it nvidia/cuda:11.2.1-devel-ubuntu20.04 bash on a system with Ubuntu 20.04 and then watch -n 1 nvidia-smi inside the container (it might take minutes to several hours for the error to appear).
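
A minimal sketch of that reproduction loop (same image as above; the time until failure varies):

# on the host
nvidia-docker run --rm -it nvidia/cuda:11.2.1-devel-ubuntu20.04 bash

# inside the container
watch -n 1 nvidia-smi
# output is normal at first; after a few seconds to several hours it changes to:
#   Failed to initialize NVML: Unknown Error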

3. Information to attach (optional if deemed irrelevant)

  • Some nvidia-container information: nvidia-container-cli -k -d /dev/tty info
-- WARNING, the following logs are for debugging purposes only --               
                                                                                                     
I0302 15:59:42.287249 182208 nvc.c:372] initializing library context (version=1.3.3, build=bd9fc3f2b642345301cb2e23de07ec5386232317)
I0302 15:59:42.287282 182208 nvc.c:346] using root /                                              
I0302 15:59:42.287295 182208 nvc.c:347] using ldcache /etc/ld.so.cache                      
I0302 15:59:42.287298 182208 nvc.c:348] using unprivileged user 1013:1013                  
I0302 15:59:42.287321 182208 nvc.c:389] attempting to load dxcore to see if we are running under Windows Subsystem for Linux (WSL)
I0302 15:59:42.287405 182208 nvc.c:391] dxcore initialization failed, continuing assuming a non-WSL environment
W0302 15:59:42.288813 182209 nvc.c:269] failed to set inheritable capabilities           
W0302 15:59:42.288849 182209 nvc.c:270] skipping kernel modules load due to failure           
I0302 15:59:42.289058 182210 driver.c:101] starting driver service                 
I0302 15:59:42.784623 182208 nvc_info.c:680] requesting driver information with ''        
I0302 15:59:42.785590 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvoptix.so.460.39 
I0302 15:59:42.785642 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-tls.so.460.39
I0302 15:59:42.785672 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-rtcore.so.460.39
I0302 15:59:42.785703 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.460.39
I0302 15:59:42.785746 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so.460.39
I0302 15:59:42.785786 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.460.39
I0302 15:59:42.785815 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-ngx.so.460.39
I0302 15:59:42.785842 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.460.39
I0302 15:59:42.785886 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-ifr.so.460.39
I0302 15:59:42.785927 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.460.39
I0302 15:59:42.785955 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-glsi.so.460.39
I0302 15:59:42.785983 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-glcore.so.460.39
I0302 15:59:42.786011 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-fbc.so.460.39
I0302 15:59:42.786050 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-encode.so.460.39
I0302 15:59:42.786092 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.460.39
I0302 15:59:42.786124 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-compiler.so.460.39
I0302 15:59:42.786155 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.460.39
I0302 15:59:42.786196 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-cbl.so.460.39
I0302 15:59:42.786228 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.460.39
I0302 15:59:42.786269 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvcuvid.so.460.39 
I0302 15:59:42.786421 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libcuda.so.460.39
I0302 15:59:42.786507 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.460.39
I0302 15:59:42.786536 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libGLESv2_nvidia.so.460.39
I0302 15:59:42.786564 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.460.39
I0302 15:59:42.786594 182208 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.460.39
W0302 15:59:42.786614 182208 nvc_info.c:350] missing library libnvidia-fatbinaryloader.so                                                                                                                  
W0302 15:59:42.786618 182208 nvc_info.c:350] missing library libvdpau_nvidia.so
W0302 15:59:42.786624 182208 nvc_info.c:354] missing compat32 library libnvidia-ml.so
W0302 15:59:42.786628 182208 nvc_info.c:354] missing compat32 library libnvidia-cfg.so
W0302 15:59:42.786642 182208 nvc_info.c:354] missing compat32 library libcuda.so
W0302 15:59:42.786647 182208 nvc_info.c:354] missing compat32 library libnvidia-opencl.so
W0302 15:59:42.786652 182208 nvc_info.c:354] missing compat32 library libnvidia-ptxjitcompiler.so
W0302 15:59:42.786657 182208 nvc_info.c:354] missing compat32 library libnvidia-fatbinaryloader.so
W0302 15:59:42.786663 182208 nvc_info.c:354] missing compat32 library libnvidia-allocator.so
W0302 15:59:42.786672 182208 nvc_info.c:354] missing compat32 library libnvidia-compiler.so
W0302 15:59:42.786677 182208 nvc_info.c:354] missing compat32 library libnvidia-ngx.so
W0302 15:59:42.786684 182208 nvc_info.c:354] missing compat32 library libvdpau_nvidia.so
W0302 15:59:42.786689 182208 nvc_info.c:354] missing compat32 library libnvidia-encode.so
W0302 15:59:42.786693 182208 nvc_info.c:354] missing compat32 library libnvidia-opticalflow.so
W0302 15:59:42.786697 182208 nvc_info.c:354] missing compat32 library libnvcuvid.so
W0302 15:59:42.786701 182208 nvc_info.c:354] missing compat32 library libnvidia-eglcore.so
W0302 15:59:42.786706 182208 nvc_info.c:354] missing compat32 library libnvidia-glcore.so
W0302 15:59:42.786713 182208 nvc_info.c:354] missing compat32 library libnvidia-tls.so
W0302 15:59:42.786719 182208 nvc_info.c:354] missing compat32 library libnvidia-glsi.so
W0302 15:59:42.786724 182208 nvc_info.c:354] missing compat32 library libnvidia-fbc.so
W0302 15:59:42.786728 182208 nvc_info.c:354] missing compat32 library libnvidia-ifr.so
W0302 15:59:42.786732 182208 nvc_info.c:354] missing compat32 library libnvidia-rtcore.so
W0302 15:59:42.786737 182208 nvc_info.c:354] missing compat32 library libnvoptix.so
W0302 15:59:42.786743 182208 nvc_info.c:354] missing compat32 library libGLX_nvidia.so
W0302 15:59:42.786749 182208 nvc_info.c:354] missing compat32 library libEGL_nvidia.so
W0302 15:59:42.786754 182208 nvc_info.c:354] missing compat32 library libGLESv2_nvidia.so
W0302 15:59:42.786758 182208 nvc_info.c:354] missing compat32 library libGLESv1_CM_nvidia.so
W0302 15:59:42.786765 182208 nvc_info.c:354] missing compat32 library libnvidia-glvkspirv.so
W0302 15:59:42.786771 182208 nvc_info.c:354] missing compat32 library libnvidia-cbl.so
I0302 15:59:42.786945 182208 nvc_info.c:276] selecting /usr/bin/nvidia-smi
I0302 15:59:42.786961 182208 nvc_info.c:276] selecting /usr/bin/nvidia-debugdump
I0302 15:59:42.786976 182208 nvc_info.c:276] selecting /usr/bin/nvidia-persistenced
I0302 15:59:42.786990 182208 nvc_info.c:276] selecting /usr/bin/nvidia-cuda-mps-control
I0302 15:59:42.787006 182208 nvc_info.c:276] selecting /usr/bin/nvidia-cuda-mps-server
I0302 15:59:42.787026 182208 nvc_info.c:438] listing device /dev/nvidiactl
I0302 15:59:42.787031 182208 nvc_info.c:438] listing device /dev/nvidia-uvm
I0302 15:59:42.787035 182208 nvc_info.c:438] listing device /dev/nvidia-uvm-tools
I0302 15:59:42.787039 182208 nvc_info.c:438] listing device /dev/nvidia-modeset
I0302 15:59:42.787065 182208 nvc_info.c:317] listing ipc /run/nvidia-persistenced/socket
W0302 15:59:42.787079 182208 nvc_info.c:321] missing ipc /tmp/nvidia-mps
I0302 15:59:42.787084 182208 nvc_info.c:745] requesting device information with ''
I0302 15:59:42.792893 182208 nvc_info.c:628] listing device /dev/nvidia0 (GPU-9ebb44d4-b6d8-37f3-4a5d-8717b752a71f at 00000000:08:00.0)
NVRM version:   460.39
CUDA version:   11.2

Device Index:   0
Device Minor:   0
Model:          GeForce RTX 3090
Brand:          GeForce
GPU UUID:       GPU-9ebb44d4-b6d8-37f3-4a5d-8717b752a71f
Bus Location:   00000000:08:00.0
Architecture:   8.6
I0302 15:59:42.792935 182208 nvc.c:427] shutting down library context
I0302 15:59:42.793175 182210 driver.c:156] terminating driver service
I0302 15:59:42.793491 182208 driver.c:196] driver service terminated successfully
  • Kernel version from uname -a

Linux ws-3090-enterprise 5.4.0-65-generic #73-Ubuntu SMP Mon Jan 18 17:25:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

  • Any relevant kernel output lines from dmesg
[Feb25 06:27] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
[  +0,000093] caller os_map_kernel_space.part.0+0x77/0xa0 [nvidia] mapping multiple BARs
[  +1,849531] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
[  +0,000093] caller os_map_kernel_space.part.0+0x77/0xa0 [nvidia] mapping multiple BARs
[  +1,851079] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
[  +0,000096] caller os_map_kernel_space.part.0+0x77/0xa0 [nvidia] mapping multiple BARs
[  +1,848132] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
[  +0,000092] caller os_map_kernel_space.part.0+0x77/0xa0 [nvidia] mapping multiple BARs
[  +1,851395] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
[  +0,000092] caller os_map_kernel_space.part.0+0x77/0xa0 [nvidia] mapping multiple BARs
[  +1,840237] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
[  +0,000093] caller os_map_kernel_space.part.0+0x77/0xa0 [nvidia] mapping multiple BARs
[  +1,840526] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
[  +0,000094] caller os_map_kernel_space.part.0+0x77/0xa0 [nvidia] mapping multiple BARs
[  +1,849611] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
[  +0,000164] caller os_map_kernel_space.part.0+0x77/0xa0 [nvidia] mapping multiple BARs
[  +1,853780] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
[  +0,000094] caller os_map_kernel_space.part.0+0x77/0xa0 [nvidia] mapping multiple BARs
[  +1,853746] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
[  +0,000093] caller os_map_kernel_space.part.0+0x77/0xa0 [nvidia] mapping multiple BARs
[  +1,849458] resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
[  +0,000157] caller os_map_kernel_space.part.0+0x77/0xa0 [nvidia] mapping multiple BARs
  • Driver information from nvidia-smi -a
==============NVSMI LOG==============

Timestamp                                 : Tue Mar  2 17:02:59 2021
Driver Version                            : 460.39
CUDA Version                              : 11.2

Attached GPUs                             : 1
GPU 00000000:08:00.0
    Product Name                          : GeForce RTX 3090
    Product Brand                         : GeForce
    Display Mode                          : Disabled
    Display Active                        : Disabled
    Persistence Mode                      : Disabled
    MIG Mode
        Current                           : N/A
        Pending                           : N/A
    Accounting Mode                       : Disabled
    Accounting Mode Buffer Size           : 4000
    Driver Model
        Current                           : N/A
        Pending                           : N/A
    Serial Number                         : N/A
    GPU UUID                              : GPU-9ebb44d4-b6d8-37f3-4a5d-8717b752a71f
    Minor Number                          : 0
    VBIOS Version                         : 94.02.26.48.5A
    MultiGPU Board                        : No
    Board ID                              : 0x800
    GPU Part Number                       : N/A
    Inforom Version
        Image Version                     : G001.0000.03.03
        OEM Object                        : 2.0
        ECC Object                        : N/A
        Power Management Object           : N/A
    GPU Operation Mode
        Current                           : N/A
        Pending                           : N/A
    GPU Virtualization Mode
        Virtualization Mode               : None
        Host VGPU Mode                    : N/A
    IBMNPU
        Relaxed Ordering Mode             : N/A
    PCI
        Bus                               : 0x08
        Device                            : 0x00
        Domain                            : 0x0000
        Device Id                         : 0x220410DE
        Bus Id                            : 00000000:08:00.0
        Sub System Id                     : 0x38841462
        GPU Link Info
            PCIe Generation
                Max                       : 4
                Current                   : 4
            Link Width
                Max                       : 16x
                Current                   : 16x
        Bridge Chip
            Type                          : N/A
            Firmware                      : N/A
        Replays Since Reset               : 0
        Replay Number Rollovers           : 0
        Tx Throughput                     : 0 KB/s
        Rx Throughput                     : 0 KB/s
    Fan Speed                             : 30 %
    Performance State                     : P0
    Clocks Throttle Reasons
        Idle                              : Active
        Applications Clocks Setting       : Not Active
        SW Power Cap                      : Not Active
        HW Slowdown                       : Not Active
            HW Thermal Slowdown           : Not Active
            HW Power Brake Slowdown       : Not Active
        Sync Boost                        : Not Active
        SW Thermal Slowdown               : Not Active
        Display Clock Setting             : Not Active
    FB Memory Usage
        Total                             : 24267 MiB
        Used                              : 0 MiB
        Free                              : 24267 MiB
    BAR1 Memory Usage
        Total                             : 256 MiB
        Used                              : 2 MiB
        Free                              : 254 MiB
    Compute Mode                          : Default
    Utilization
        Gpu                               : 0 %
        Memory                            : 0 %
        Encoder                           : 0 %
        Decoder                           : 0 %
    Encoder Stats
        Active Sessions                   : 0
        Average FPS                       : 0
        Average Latency                   : 0
    FBC Stats
        Active Sessions                   : 0
        Average FPS                       : 0
        Average Latency                   : 0
    Ecc Mode
        Current                           : N/A
        Pending                           : N/A
    ECC Errors
        Volatile
            SRAM Correctable              : N/A
            SRAM Uncorrectable            : N/A
            DRAM Correctable              : N/A
            DRAM Uncorrectable            : N/A
        Aggregate
            SRAM Correctable              : N/A
            SRAM Uncorrectable            : N/A
            DRAM Correctable              : N/A
            DRAM Uncorrectable            : N/A
    Retired Pages
        Single Bit ECC                    : N/A
        Double Bit ECC                    : N/A
        Pending Page Blacklist            : N/A
    Remapped Rows                         : N/A
    Temperature
        GPU Current Temp                  : 36 C
        GPU Shutdown Temp                 : 98 C
        GPU Slowdown Temp                 : 95 C
        GPU Max Operating Temp            : 93 C
        GPU Target Temperature            : 83 C
        Memory Current Temp               : N/A
        Memory Max Operating Temp         : N/A
    Power Readings
        Power Management                  : Supported
        Power Draw                        : 61.27 W
        Power Limit                       : 370.00 W
        Default Power Limit               : 370.00 W
        Enforced Power Limit              : 370.00 W
        Min Power Limit                   : 100.00 W
        Max Power Limit                   : 380.00 W
    Clocks
        Graphics                          : 1785 MHz
        SM                                : 1785 MHz
        Memory                            : 9751 MHz
        Video                             : 1575 MHz
    Applications Clocks
        Graphics                          : N/A
        Memory                            : N/A
    Default Applications Clocks
        Graphics                          : N/A
        Memory                            : N/A
    Max Clocks
        Graphics                          : 2115 MHz
        SM                                : 2115 MHz
        Memory                            : 9751 MHz
        Video                             : 1950 MHz
    Max Customer Boost Clocks
        Graphics                          : N/A
    Clock Policy
        Auto Boost                        : N/A
        Auto Boost Default                : N/A
    Processes                             : None
  • Docker version from docker version
Client: Docker Engine - Community
 Version:           20.10.3
 API version:       1.40
 Go version:        go1.13.15
 Git commit:        48d30b5
 Built:             Fri Jan 29 14:33:21 2021
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          19.03.14
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.15
  Git commit:       5eb3275d40
  Built:            Tue Dec  1 19:18:53 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.4.3
  GitCommit:        269548fa27e0089a8b8278fc4fc781d7f65a939b
 nvidia:
  Version:          1.0.0-rc92
  GitCommit:        ff819c7e9184c13b7c2607fe6c30ae19403a7aff
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683
  • NVIDIA packages version from dpkg -l '*nvidia*' or rpm -qa '*nvidia*'
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name                          Version                    Architecture Description
+++-=============================-==========================-============-=========================================================
un  libgldispatch0-nvidia         <none>                     <none>       (no description available)
ii  libnvidia-cfg1-460:amd64      460.39-0ubuntu0.20.04.1    amd64        NVIDIA binary OpenGL/GLX configuration library
un  libnvidia-cfg1-any            <none>                     <none>       (no description available)
un  libnvidia-common              <none>                     <none>       (no description available)
ii  libnvidia-common-460          460.32.03-0ubuntu0.20.04.1 all          Shared files used by the NVIDIA libraries
ii  libnvidia-compute-460:amd64   460.39-0ubuntu0.20.04.1    amd64        NVIDIA libcompute package
ii  libnvidia-container-tools     1.3.3-1                    amd64        NVIDIA container runtime library (command-line tools)
ii  libnvidia-container1:amd64    1.3.3-1                    amd64        NVIDIA container runtime library
un  libnvidia-decode              <none>                     <none>       (no description available)
ii  libnvidia-decode-460:amd64    460.39-0ubuntu0.20.04.1    amd64        NVIDIA Video Decoding runtime libraries
un  libnvidia-encode              <none>                     <none>       (no description available)
ii  libnvidia-encode-460:amd64    460.39-0ubuntu0.20.04.1    amd64        NVENC Video Encoding runtime library
un  libnvidia-extra               <none>                     <none>       (no description available)
ii  libnvidia-extra-460:amd64     460.39-0ubuntu0.20.04.1    amd64        Extra libraries for the NVIDIA driver
un  libnvidia-fbc1                <none>                     <none>       (no description available)
ii  libnvidia-fbc1-460:amd64      460.39-0ubuntu0.20.04.1    amd64        NVIDIA OpenGL-based Framebuffer Capture runtime library
un  libnvidia-gl                  <none>                     <none>       (no description available)
ii  libnvidia-gl-460:amd64        460.39-0ubuntu0.20.04.1    amd64        NVIDIA OpenGL/GLX/EGL/GLES GLVND libraries and Vulkan ICD
un  libnvidia-ifr1                <none>                     <none>       (no description available)
ii  libnvidia-ifr1-460:amd64      460.39-0ubuntu0.20.04.1    amd64        NVIDIA OpenGL-based Inband Frame Readback runtime library
un  libnvidia-ml1                 <none>                     <none>       (no description available)
un  nvidia-384                    <none>                     <none>       (no description available)
un  nvidia-390                    <none>                     <none>       (no description available)
ii  nvidia-compute-utils-460      460.39-0ubuntu0.20.04.1    amd64        NVIDIA compute utilities
ii  nvidia-container-runtime      3.4.2-1                    amd64        NVIDIA container runtime
un  nvidia-container-runtime-hook <none>                     <none>       (no description available)
ii  nvidia-container-toolkit      1.4.2-1                    amd64        NVIDIA container runtime hook
ii  nvidia-dkms-460               460.39-0ubuntu0.20.04.1    amd64        NVIDIA DKMS package
un  nvidia-dkms-kernel            <none>                     <none>       (no description available)
un  nvidia-docker                 <none>                     <none>       (no description available)
ii  nvidia-docker2                2.5.0-1                    all          nvidia-docker CLI wrapper
ii  nvidia-driver-460             460.39-0ubuntu0.20.04.1    amd64        NVIDIA driver metapackage
un  nvidia-driver-binary          <none>                     <none>       (no description available)
un  nvidia-kernel-common          <none>                     <none>       (no description available)
ii  nvidia-kernel-common-460      460.39-0ubuntu0.20.04.1    amd64        Shared files used with the kernel module
un  nvidia-kernel-source          <none>                     <none>       (no description available)
ii  nvidia-kernel-source-460      460.39-0ubuntu0.20.04.1    amd64        NVIDIA kernel source package
ii  nvidia-modprobe               460.27.04-0ubuntu1         amd64        Load the NVIDIA kernel driver and create device files
un  nvidia-opencl-icd             <none>                     <none>       (no description available)
un  nvidia-persistenced           <none>                     <none>       (no description available)
un  nvidia-prime                  <none>                     <none>       (no description available)
ii  nvidia-settings               460.27.04-0ubuntu1         amd64        Tool for configuring the NVIDIA graphics driver
un  nvidia-settings-binary        <none>                     <none>       (no description available)
un  nvidia-smi                    <none>                     <none>       (no description available)
un  nvidia-utils                  <none>                     <none>       (no description available)
ii  nvidia-utils-460              460.39-0ubuntu0.20.04.1    amd64        NVIDIA driver support binaries
ii  xserver-xorg-video-nvidia-460 460.39-0ubuntu0.20.04.1    amd64        NVIDIA binary Xorg driver
  • NVIDIA container library version from nvidia-container-cli -V
version: 1.3.3
build date: 2021-02-05T13:29+00:00
build revision: bd9fc3f2b642345301cb2e23de07ec5386232317
build compiler: x86_64-linux-gnu-gcc-7 7.5.0
build platform: x86_64
build flags: -D_GNU_SOURCE -D_FORTIFY_SOURCE=2 -DNDEBUG -std=gnu11 -O2 -g -fdata-sections -ffunction-sections -fstack-protector -fno-strict-aliasing -fvisibility=hidden -Wall -Wextra -Wcast-align -Wpointer-arith -Wmissing-prototypes -Wnonnull -Wwrite-strings -Wlogical-op -Wformat=2 -Wmissing-format-attribute -Winit-self -Wshadow -Wstrict-prototypes -Wunreachable-code -Wconversion -Wsign-conversion -Wno-unknown-warning-option -Wno-format-extra-args -Wno-gnu-alignof-expression -Wl,-zrelro -Wl,-znow -Wl,-zdefs -Wl,--gc-sections

/var/log/nvidia-container-runtime.log

2021/03/02 17:15:21 Running /usr/bin/nvidia-container-runtime
2021/03/02 17:15:21 Using bundle file: /run/containerd/io.containerd.runtime.v1.linux/moby/782dd3fed134c82c4057c09cc2a54c45925065eadadb023044afbfad926d524c/config.json
2021/03/02 17:15:21 prestart hook path: /usr/bin/nvidia-container-runtime-hook
2021/03/02 17:15:21 Prestart hook added, executing runc
2021/03/02 17:15:21 Looking for "docker-runc" binary
2021/03/02 17:15:21 "docker-runc" binary not found
2021/03/02 17:15:21 Looking for "runc" binary
2021/03/02 17:15:21 Runc path: /usr/bin/runc
2021/03/02 17:15:22 Running /usr/bin/nvidia-container-runtime
2021/03/02 17:15:22 Command is not "create", executing runc doing nothing
2021/03/02 17:15:22 Looking for "docker-runc" binary
2021/03/02 17:15:22 "docker-runc" binary not found
2021/03/02 17:15:22 Looking for "runc" binary
2021/03/02 17:15:22 Runc path: /usr/bin/runc
2021/03/02 17:15:36 Running /usr/bin/nvidia-container-runtime
2021/03/02 17:15:36 Command is not "create", executing runc doing nothing
2021/03/02 17:15:36 Looking for "docker-runc" binary
2021/03/02 17:15:36 "docker-runc" binary not found
2021/03/02 17:15:36 Looking for "runc" binary
2021/03/02 17:15:36 Runc path: /usr/bin/runc
2021/03/02 17:33:05 Running /usr/bin/nvidia-container-runtime
2021/03/02 17:33:05 Command is not "create", executing runc doing nothing
2021/03/02 17:33:05 Looking for "docker-runc" binary
2021/03/02 17:33:05 "docker-runc" binary not found
2021/03/02 17:33:05 Looking for "runc" binary
2021/03/02 17:33:05 Runc path: /usr/bin/runc
2021/03/02 17:33:05 Running /usr/bin/nvidia-container-runtime
2021/03/02 17:33:05 Command is not "create", executing runc doing nothing
2021/03/02 17:33:05 Looking for "docker-runc" binary
2021/03/02 17:33:05 "docker-runc" binary not found
2021/03/02 17:33:05 Looking for "runc" binary
2021/03/02 17:33:05 Runc path: /usr/bin/runc

/var/log/nvidia-container-toolkit.log

-- WARNING, the following logs are for debugging purposes only --

I0302 16:15:22.027395 187479 nvc.c:372] initializing library context (version=1.3.3, build=bd9fc3f2b642345301cb2e23de07ec5386232317)
I0302 16:15:22.027437 187479 nvc.c:346] using root /
I0302 16:15:22.027443 187479 nvc.c:347] using ldcache /etc/ld.so.cache
I0302 16:15:22.027448 187479 nvc.c:348] using unprivileged user 65534:65534
I0302 16:15:22.027462 187479 nvc.c:389] attempting to load dxcore to see if we are running under Windows Subsystem for Linux (WSL)
I0302 16:15:22.027556 187479 nvc.c:391] dxcore initialization failed, continuing assuming a non-WSL environment
I0302 16:15:22.028667 187483 nvc.c:274] loading kernel module nvidia
I0302 16:15:22.028869 187483 nvc.c:278] running mknod for /dev/nvidiactl
I0302 16:15:22.028901 187483 nvc.c:282] running mknod for /dev/nvidia0
I0302 16:15:22.028927 187483 nvc.c:286] running mknod for all nvcaps in /dev/nvidia-caps
I0302 16:15:22.038198 187483 nvc.c:214] running mknod for /dev/nvidia-caps/nvidia-cap1 from /proc/driver/nvidia/capabilities/mig/config
I0302 16:15:22.038284 187483 nvc.c:214] running mknod for /dev/nvidia-caps/nvidia-cap2 from /proc/driver/nvidia/capabilities/mig/monitor
I0302 16:15:22.040309 187483 nvc.c:292] loading kernel module nvidia_uvm
I0302 16:15:22.040381 187483 nvc.c:296] running mknod for /dev/nvidia-uvm
I0302 16:15:22.040441 187483 nvc.c:301] loading kernel module nvidia_modeset
I0302 16:15:22.040532 187483 nvc.c:305] running mknod for /dev/nvidia-modeset
I0302 16:15:22.040701 187484 driver.c:101] starting driver service
I0302 16:15:22.725958 187479 nvc_container.c:388] configuring container with 'compute utility supervised'
I0302 16:15:22.740545 187479 nvc_container.c:236] selecting /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/local/cuda-11.2/compat/libcuda.so.460.32.03
I0302 16:15:22.740636 187479 nvc_container.c:236] selecting /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/local/cuda-11.2/compat/libnvidia-ptxjitcompiler.so.460.32.03
I0302 16:15:22.740775 187479 nvc_container.c:408] setting pid to 187446
I0302 16:15:22.740782 187479 nvc_container.c:409] setting rootfs to /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df
I0302 16:15:22.740787 187479 nvc_container.c:410] setting owner to 0:0
I0302 16:15:22.740793 187479 nvc_container.c:411] setting bins directory to /usr/bin
I0302 16:15:22.740798 187479 nvc_container.c:412] setting libs directory to /usr/lib/x86_64-linux-gnu
I0302 16:15:22.740804 187479 nvc_container.c:413] setting libs32 directory to /usr/lib/i386-linux-gnu
I0302 16:15:22.740809 187479 nvc_container.c:414] setting cudart directory to /usr/local/cuda
I0302 16:15:22.740815 187479 nvc_container.c:415] setting ldconfig to @/sbin/ldconfig.real (host relative)
I0302 16:15:22.740821 187479 nvc_container.c:416] setting mount namespace to /proc/187446/ns/mnt
I0302 16:15:22.740828 187479 nvc_container.c:418] setting devices cgroup to /sys/fs/cgroup/devices/system.slice/docker-782dd3fed134c82c4057c09cc2a54c45925065eadadb023044afbfad926d524c.scope
I0302 16:15:22.740837 187479 nvc_info.c:680] requesting driver information with ''
I0302 16:15:22.741885 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvoptix.so.460.39
I0302 16:15:22.741937 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-tls.so.460.39
I0302 16:15:22.741967 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-rtcore.so.460.39
I0302 16:15:22.741999 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.460.39
I0302 16:15:22.742043 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-opticalflow.so.460.39
I0302 16:15:22.742086 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.460.39
I0302 16:15:22.742117 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-ngx.so.460.39
I0302 16:15:22.742147 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.460.39
I0302 16:15:22.742201 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-ifr.so.460.39
I0302 16:15:22.742245 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-glvkspirv.so.460.39
I0302 16:15:22.742276 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-glsi.so.460.39
I0302 16:15:22.742305 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-glcore.so.460.39
I0302 16:15:22.742335 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-fbc.so.460.39
I0302 16:15:22.742378 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-encode.so.460.39
I0302 16:15:22.742420 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-eglcore.so.460.39
I0302 16:15:22.742451 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-compiler.so.460.39
I0302 16:15:22.742481 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.460.39
I0302 16:15:22.742523 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-cbl.so.460.39
I0302 16:15:22.742553 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.460.39
I0302 16:15:22.742594 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libnvcuvid.so.460.39
I0302 16:15:22.742771 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libcuda.so.460.39
I0302 16:15:22.742869 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libGLX_nvidia.so.460.39
I0302 16:15:22.742900 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libGLESv2_nvidia.so.460.39
I0302 16:15:22.742930 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libGLESv1_CM_nvidia.so.460.39
I0302 16:15:22.742960 187479 nvc_info.c:169] selecting /usr/lib/x86_64-linux-gnu/libEGL_nvidia.so.460.39
W0302 16:15:22.742979 187479 nvc_info.c:350] missing library libnvidia-fatbinaryloader.so
W0302 16:15:22.742985 187479 nvc_info.c:350] missing library libvdpau_nvidia.so
W0302 16:15:22.742990 187479 nvc_info.c:354] missing compat32 library libnvidia-ml.so
W0302 16:15:22.742996 187479 nvc_info.c:354] missing compat32 library libnvidia-cfg.so
W0302 16:15:22.743001 187479 nvc_info.c:354] missing compat32 library libcuda.so
W0302 16:15:22.743006 187479 nvc_info.c:354] missing compat32 library libnvidia-opencl.so
W0302 16:15:22.743012 187479 nvc_info.c:354] missing compat32 library libnvidia-ptxjitcompiler.so
W0302 16:15:22.743017 187479 nvc_info.c:354] missing compat32 library libnvidia-fatbinaryloader.so
W0302 16:15:22.743022 187479 nvc_info.c:354] missing compat32 library libnvidia-allocator.so
W0302 16:15:22.743028 187479 nvc_info.c:354] missing compat32 library libnvidia-compiler.so
W0302 16:15:22.743033 187479 nvc_info.c:354] missing compat32 library libnvidia-ngx.so
W0302 16:15:22.743039 187479 nvc_info.c:354] missing compat32 library libvdpau_nvidia.so
W0302 16:15:22.743044 187479 nvc_info.c:354] missing compat32 library libnvidia-encode.so
W0302 16:15:22.743049 187479 nvc_info.c:354] missing compat32 library libnvidia-opticalflow.so
W0302 16:15:22.743054 187479 nvc_info.c:354] missing compat32 library libnvcuvid.so
W0302 16:15:22.743060 187479 nvc_info.c:354] missing compat32 library libnvidia-eglcore.so
W0302 16:15:22.743065 187479 nvc_info.c:354] missing compat32 library libnvidia-glcore.so
W0302 16:15:22.743070 187479 nvc_info.c:354] missing compat32 library libnvidia-tls.so
W0302 16:15:22.743076 187479 nvc_info.c:354] missing compat32 library libnvidia-glsi.so
W0302 16:15:22.743081 187479 nvc_info.c:354] missing compat32 library libnvidia-fbc.so
W0302 16:15:22.743086 187479 nvc_info.c:354] missing compat32 library libnvidia-ifr.so
W0302 16:15:22.743092 187479 nvc_info.c:354] missing compat32 library libnvidia-rtcore.so
W0302 16:15:22.743097 187479 nvc_info.c:354] missing compat32 library libnvoptix.so
W0302 16:15:22.743102 187479 nvc_info.c:354] missing compat32 library libGLX_nvidia.so
W0302 16:15:22.743108 187479 nvc_info.c:354] missing compat32 library libEGL_nvidia.so
W0302 16:15:22.743113 187479 nvc_info.c:354] missing compat32 library libGLESv2_nvidia.so
W0302 16:15:22.743123 187479 nvc_info.c:354] missing compat32 library libGLESv1_CM_nvidia.so
W0302 16:15:22.743128 187479 nvc_info.c:354] missing compat32 library libnvidia-glvkspirv.so
W0302 16:15:22.743133 187479 nvc_info.c:354] missing compat32 library libnvidia-cbl.so
I0302 16:15:22.743370 187479 nvc_info.c:276] selecting /usr/bin/nvidia-smi
I0302 16:15:22.743387 187479 nvc_info.c:276] selecting /usr/bin/nvidia-debugdump
I0302 16:15:22.743405 187479 nvc_info.c:276] selecting /usr/bin/nvidia-persistenced
I0302 16:15:22.743422 187479 nvc_info.c:276] selecting /usr/bin/nvidia-cuda-mps-control
I0302 16:15:22.743438 187479 nvc_info.c:276] selecting /usr/bin/nvidia-cuda-mps-server
I0302 16:15:22.743460 187479 nvc_info.c:438] listing device /dev/nvidiactl
I0302 16:15:22.743466 187479 nvc_info.c:438] listing device /dev/nvidia-uvm
I0302 16:15:22.743471 187479 nvc_info.c:438] listing device /dev/nvidia-uvm-tools
I0302 16:15:22.743476 187479 nvc_info.c:438] listing device /dev/nvidia-modeset
I0302 16:15:22.743499 187479 nvc_info.c:317] listing ipc /run/nvidia-persistenced/socket
W0302 16:15:22.743513 187479 nvc_info.c:321] missing ipc /tmp/nvidia-mps
I0302 16:15:22.743519 187479 nvc_info.c:745] requesting device information with ''
I0302 16:15:22.749248 187479 nvc_info.c:628] listing device /dev/nvidia0 (GPU-9ebb44d4-b6d8-37f3-4a5d-8717b752a71f at 00000000:08:00.0)
I0302 16:15:22.749308 187479 nvc_mount.c:344] mounting tmpfs at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/proc/driver/nvidia
I0302 16:15:22.750224 187479 nvc_mount.c:112] mounting /usr/bin/nvidia-smi at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/bin/nvidia-smi
I0302 16:15:22.750338 187479 nvc_mount.c:112] mounting /usr/bin/nvidia-debugdump at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/bin/nvidia-debugdump
I0302 16:15:22.750406 187479 nvc_mount.c:112] mounting /usr/bin/nvidia-persistenced at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/bin/nvidia-persistenced
I0302 16:15:22.750471 187479 nvc_mount.c:112] mounting /usr/bin/nvidia-cuda-mps-control at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/bin/nvidia-cuda-mps-control
I0302 16:15:22.750536 187479 nvc_mount.c:112] mounting /usr/bin/nvidia-cuda-mps-server at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/bin/nvidia-cuda-mps-server
I0302 16:15:22.751306 187479 nvc_mount.c:112] mounting /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.460.39 at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/lib/x86_64-linux-gnu/libnvidia-ml.so.460.39
I0302 16:15:22.751402 187479 nvc_mount.c:112] mounting /usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.460.39 at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/lib/x86_64-linux-gnu/libnvidia-cfg.so.460.39
I0302 16:15:22.751475 187479 nvc_mount.c:112] mounting /usr/lib/x86_64-linux-gnu/libcuda.so.460.39 at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/lib/x86_64-linux-gnu/libcuda.so.460.39
I0302 16:15:22.751543 187479 nvc_mount.c:112] mounting /usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.460.39 at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/lib/x86_64-linux-gnu/libnvidia-opencl.so.460.39
I0302 16:15:22.751611 187479 nvc_mount.c:112] mounting /usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.460.39 at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/lib/x86_64-linux-gnu/libnvidia-ptxjitcompiler.so.460.39
I0302 16:15:22.751677 187479 nvc_mount.c:112] mounting /usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.460.39 at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/lib/x86_64-linux-gnu/libnvidia-allocator.so.460.39
I0302 16:15:22.751755 187479 nvc_mount.c:112] mounting /usr/lib/x86_64-linux-gnu/libnvidia-compiler.so.460.39 at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/lib/x86_64-linux-gnu/libnvidia-compiler.so.460.39
I0302 16:15:22.751779 187479 nvc_mount.c:524] creating symlink /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/usr/lib/x86_64-linux-gnu/libcuda.so -> libcuda.so.1
I0302 16:15:22.752029 187479 nvc_mount.c:239] mounting /run/nvidia-persistenced/socket at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/run/nvidia-persistenced/socket
I0302 16:15:22.752078 187479 nvc_mount.c:208] mounting /dev/nvidiactl at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/dev/nvidiactl
I0302 16:15:22.752105 187479 nvc_mount.c:499] whitelisting device node 195:255
I0302 16:15:22.752146 187479 nvc_mount.c:208] mounting /dev/nvidia-uvm at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/dev/nvidia-uvm
I0302 16:15:22.752164 187479 nvc_mount.c:499] whitelisting device node 511:0
I0302 16:15:22.752193 187479 nvc_mount.c:208] mounting /dev/nvidia-uvm-tools at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/dev/nvidia-uvm-tools
I0302 16:15:22.752209 187479 nvc_mount.c:499] whitelisting device node 511:1
I0302 16:15:22.752254 187479 nvc_mount.c:208] mounting /dev/nvidia0 at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/dev/nvidia0
I0302 16:15:22.752319 187479 nvc_mount.c:412] mounting /proc/driver/nvidia/gpus/0000:08:00.0 at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df/proc/driver/nvidia/gpus/0000:08:00.0
I0302 16:15:22.752339 187479 nvc_mount.c:499] whitelisting device node 195:0
I0302 16:15:22.752355 187479 nvc_ldcache.c:360] executing /sbin/ldconfig.real from host at /var/lib/docker/zfs/graph/df8e27baab33d74851cf61b79ccee3c7813f61556721d9e133ba4010576343df
I0302 16:15:22.818866 187479 nvc.c:427] shutting down library context
I0302 16:15:22.917351 187484 driver.c:156] terminating driver service
I0302 16:15:22.917605 187479 driver.c:196] driver service terminated successfully
  • Docker command, image and tag used

nvidia-docker run --rm -it nvidia/cuda:11.2.1-devel-ubuntu20.04 bash -> watch nvidia-smi

nvidia-docker run --rm -it nvcr.io/nvidia/pytorch:20.12-py3 bash -> watch nvidia-smi or

python  -c 'import torch; print(torch.__version__, "device:", torch.cuda.current_device(), torch.cuda.get_device_name(torch.cuda.current_device()))'
@klueska
Contributor

klueska commented Mar 5, 2021

Is your issue possibly related to this:
NVIDIA/nvidia-container-toolkit#138

If you call docker update on your container from any external source, it's likely to run into this.

We are in the process of rearchitecting the container stack to avoid problems like these in the future. But that work is still a few months out.
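
For illustration, something as simple as the following can trigger it (a sketch; the container ID and quota value are placeholders):

# terminal 1: start a GPU container that keeps polling nvidia-smi
nvidia-docker run --rm -it nvidia/cuda:11.2.1-devel-ubuntu20.04 watch -n 1 nvidia-smi

# terminal 2: any out-of-band change to the container's cgroup settings, e.g.
docker update --cpu-quota 100000 <container-id>
# after this, nvidia-smi inside the container reports:
#   Failed to initialize NVML: Unknown Error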

@tobigue
Author

tobigue commented Mar 5, 2021

Hey, thanks a lot for the fast answer @klueska!

I'll check whether the bug can be reproduced this way.

@klueska
Contributor

klueska commented Mar 5, 2021

No worries. The underlying issue is summarized here:
#966 (comment)

Whether it's a call to docker update or something else, something out of band is likely re-syncing the cgroup settings known to docker (or runc) and thus undoing what libnvidia-container has done under the hood.

It's a fundamental flaw in the way libnvidia-container and the rest of the NVIDIA container stack is architected, and one we are (finally) actively working to address.
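
One way to confirm this from inside a container that is in the broken state is to look at the devices cgroup (a sketch, assuming cgroup v1 as on the hosts in this issue):

# inside the container: which device nodes does the cgroup still allow?
cat /sys/fs/cgroup/devices/devices.list

# In a healthy container the list includes entries for the NVIDIA device majors
# (195, and 511 for nvidia-uvm on the host above, per the toolkit log). Once
# something re-applies docker's own cgroup settings those entries disappear and
# NVML can no longer open /dev/nvidiactl, hence "Failed to initialize NVML: Unknown Error".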

@tobigue
Author

tobigue commented Apr 14, 2021

In the end we found a working configuration by downgrading the machines to Ubuntu 18.04, which gave us a combination of the old, working versions of the NVIDIA container libraries we had used under 16.04 and up-to-date driver packages.

# dpkg -l | grep nvidia
ii  libnvidia-cfg1-460:amd64               460.56-0ubuntu0.18.04.1                         amd64        NVIDIA binary OpenGL/GLX configuration library
ii  libnvidia-compute-460:amd64            460.56-0ubuntu0.18.04.1                         amd64        NVIDIA libcompute package
ii  libnvidia-container-tools              1.0.0-1                                         amd64        NVIDIA container runtime library (command-line tools)
ii  libnvidia-container1:amd64             1.0.0-1                                         amd64        NVIDIA container runtime library
ii  nvidia-compute-utils-460               460.56-0ubuntu0.18.04.1                         amd64        NVIDIA compute utilities
ii  nvidia-container-runtime               2.0.0+docker18.09.2-1                           amd64        NVIDIA container runtime
ii  nvidia-container-runtime-hook          1.4.0-1                                         amd64        NVIDIA container runtime hook
ii  nvidia-dkms-460                        460.56-0ubuntu0.18.04.1                         amd64        NVIDIA DKMS package
ii  nvidia-docker2                         2.0.3+docker18.09.2-1                           all          nvidia-docker CLI wrapper
ii  nvidia-headless-460                    460.56-0ubuntu0.18.04.1                         amd64        NVIDIA headless metapackage
ii  nvidia-headless-no-dkms-460            460.56-0ubuntu0.18.04.1                         amd64        NVIDIA headless metapackage - no DKMS
ii  nvidia-kernel-common-460               460.56-0ubuntu0.18.04.1                         amd64        Shared files used with the kernel module
ii  nvidia-kernel-source-460               460.56-0ubuntu0.18.04.1                         amd64        NVIDIA kernel source package

Thanks again for pointing us in the direction of the nvidia container libraries @klueska.

tobigue closed this as completed Apr 14, 2021
@tobigue
Author

tobigue commented Apr 15, 2021

PS: I was able to reproduce the Failed to initialize NVML: Unknown Error by changing the CPU quota on a docker container (docker update --cpu-quota 640000 <id>; see NVIDIA/nvidia-container-toolkit#138 and #966). However, doing that on a system with a working configuration still results in the error. So it seems likely that the trigger in our case was something else.

@klueska
Contributor

klueska commented May 10, 2021

@tobigue Just an update

I believe the underlying issue you are experiencing is related to this:
opencontainers/runc#2366 (comment)

I have proposed the following patch to upstream K8s to help work around this and will backport it to 1.19, 1.20, and 1.21 once it is merged: kubernetes/kubernetes#101771

It is not a fix for the root cause (for that you will need to update to a newer runc once one is available), but in the meantime it should resolve the issues you are seeing.
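
For plain Docker hosts like the ones in this issue, a mitigation that is sometimes used while waiting for a fixed runc is to stop letting libnvidia-container edit the devices cgroup and pass the device nodes to the container explicitly instead (a sketch; whether this is acceptable depends on your setup):

# /etc/nvidia-container-runtime/config.toml: under [nvidia-container-cli], set
#   no-cgroups = true

# then pass the device nodes explicitly when starting the container
nvidia-docker run --rm -it \
  --device /dev/nvidiactl --device /dev/nvidia-uvm \
  --device /dev/nvidia-uvm-tools --device /dev/nvidia0 \
  nvidia/cuda:11.2.1-devel-ubuntu20.04 bash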

@nloewe

nloewe commented May 19, 2021

@klueska
do you plan to backport this to 1.21? We are also interested in this fix, and I am a little out of my depth implementing it myself on the 1.21 release.

@vincenzoml

I still have this problem on a machine running containers based on nvcr.io/nvidia/pytorch:23.08-py3 (and on several other machines). Has this been addressed, and do I perhaps have to update the host in some way? Or do I still have to wait? The host is Ubuntu 22.04; should I upgrade it to solve this?

@frankjoshua

If it's helpful: the systems I have with version 530.41.03 of the NVIDIA driver are fine; the ones recently upgraded to 535.129.03 are having the issue.
