
Merlin 21.12 NGC release broke extended forward compatibility on older NVIDIA drivers #88

Closed
mengdong opened this issue Jan 10, 2022 · 4 comments · Fixed by #89

Comments

@mengdong

If the user invokes a command in Docker without calling bash, forward compatibility is broken. If the user invokes a command through bash, forward compatibility works as expected.

Observed with T4/A100 on driver 450.119.04.

docker run --gpus '"device=2,3,4,5"' -it --rm --network host \
>     --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
>     -v ~/workspace_dong:/scripts nvcr.io/nvidia/merlin/merlin-training:21.12 nvidia-smi
Tue Jan  4 17:55:26 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.119.04   Driver Version: 450.119.04   CUDA Version: 11.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:08:00.0 Off |                    0 |
| N/A   28C    P8    10W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla T4            Off  | 00000000:09:00.0 Off |                    0 |
| N/A   29C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  Tesla T4            Off  | 00000000:84:00.0 Off |                    0 |
| N/A   27C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  Tesla T4            Off  | 00000000:85:00.0 Off |                    0 |
| N/A   28C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+

lab@login01:~/workspace_dong/merlin$ docker run --gpus '"device=2,3,4,5"' -it --rm --network host \
>     --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
>     -v ~/workspace_dong:/scripts nvcr.io/nvidia/merlin/merlin-training:21.12
root@login01:/workspace# nvidia-smi
Tue Jan  4 17:57:16 2022       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 450.119.04   Driver Version: 450.119.04   CUDA Version: 11.5     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla T4            Off  | 00000000:08:00.0 Off |                    0 |
| N/A   28C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   1  Tesla T4            Off  | 00000000:09:00.0 Off |                    0 |
| N/A   29C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   2  Tesla T4            Off  | 00000000:84:00.0 Off |                    0 |
| N/A   27C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
|   3  Tesla T4            Off  | 00000000:85:00.0 Off |                    0 |
| N/A   28C    P8     9W /  70W |      0MiB / 15109MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
@albert17
Contributor

@mengdong Try now with the nightly image:

docker run --gpus '"device=0,1,2,3"' -it --rm nvcr.io/nvidia/merlin/merlin-pytorch-training:nightly nvidia-smi

albert17 linked a pull request Jan 19, 2022 that will close this issue
@cliffwoolley

@mengdong - it's actually the ENTRYPOINT change rather than the SHELL change that has an impact here. As a workaround for 21.12, you can just add this to your docker run arguments: --entrypoint /opt/nvidia/nvidia_entrypoint.sh
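For reference, a sketch of that workaround applied to the reproduction command from the top of this issue (device list and mount are taken from the report above; only the --entrypoint flag is new, adjust the rest for your setup):

docker run --gpus '"device=2,3,4,5"' -it --rm --network host \
    --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
    --entrypoint /opt/nvidia/nvidia_entrypoint.sh \
    -v ~/workspace_dong:/scripts nvcr.io/nvidia/merlin/merlin-training:21.12 nvidia-smi

With the entrypoint script back in place, nvidia-smi should again report the container's CUDA version (11.5) rather than falling back to the driver's native 11.0.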

@albert17
Contributor

@cliffwoolley Correct, it was the ENTRYPOINT that changed. Initially I tried SHELL, but it didn't have any effect, so I didn't add it back since it didn't seem to add any value. Any comments about SHELL?

@cliffwoolley

@albert17 You do not need to set SHELL, no. We already set it for you in the base image.
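For anyone building on top of the 21.12 image before the fix lands, a minimal sketch of a derived image that restores the entrypoint (the path is the one from the workaround above; SHELL is intentionally not repeated since, per the comment above, the base image already sets it):

# Hypothetical derived image: restore the NVIDIA entrypoint so CUDA forward
# compatibility is set up before the user's command runs.
FROM nvcr.io/nvidia/merlin/merlin-training:21.12
ENTRYPOINT ["/opt/nvidia/nvidia_entrypoint.sh"]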
