This repository has been archived by the owner on Jan 22, 2024. It is now read-only.

Do I have to build my images from the NVIDIA ones? #410

Closed
tomjaguarpaw opened this issue Jun 27, 2017 · 9 comments
@tomjaguarpaw

I have built an image based on an arbitrary non-NVIDIA image (it just so happens to be bamos/openface, if that's relevant). However, when I run it with

$ nvidia-docker run  -t -i ca6883e46289 /bin/bash

I still cannot see libcuda.so.1 in the image. In fact, find . -name libcuda.so.1 returns no results.

Do I have to build my image on one of the NVIDIA ones to make libcuda.so.1 accessible inside my container? If so, why? If not, what is going on?

@flx42
Member

flx42 commented Jun 27, 2017

Our official images have a special label that nvidia-docker will detect and then mount the driver files. This is explained on our wiki.
Try adding the following to your Dockerfile:

LABEL com.nvidia.volumes.needed="nvidia_driver"
ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64
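
Putting those lines in context, a minimal Dockerfile might look like the following (a sketch only: the bamos/openface base image is taken from the question above, and the label and environment variables are the ones nvidia-docker 1.x looks for):

```dockerfile
# Hypothetical sketch: a non-NVIDIA base image made visible to nvidia-docker 1.x
FROM bamos/openface

# nvidia-docker checks for this label before mounting the host driver files
LABEL com.nvidia.volumes.needed="nvidia_driver"

# Make the mounted driver binaries and libraries discoverable at runtime
ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64
```

After rebuilding with these lines, running the image via nvidia-docker should make libcuda.so.1 appear under the mounted driver volume rather than inside the image itself.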

@flx42 flx42 added the question label Jun 27, 2017
@tomjaguarpaw
Author

Interesting. Why do you require the presence of this label? Isn't the fact that the user ran nvidia-docker instead of docker evidence enough that you should link in the NVIDIA stuff?

@flx42
Member

flx42 commented Jun 27, 2017

Not really. To integrate into the ecosystem, you might need to swap the docker binary with nvidia-docker for all containers being run. In that case, we want to be a no-op if the image is not a CUDA image.
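
The decision described here can be sketched roughly as follows (a hypothetical shell reimplementation of the check, not the actual wrapper code):

```shell
# Hypothetical sketch of nvidia-docker 1.x's decision: mount the driver
# volume only when the image carries the expected label value, and behave
# as a no-op (plain docker) otherwise.
needs_nvidia_driver() {
    # $1: value of the image's com.nvidia.volumes.needed label (may be empty)
    [ "$1" = "nvidia_driver" ]
}

# In the real wrapper the value would come from inspecting the image, e.g.:
#   docker inspect --format \
#     '{{ index .Config.Labels "com.nvidia.volumes.needed" }}' IMAGE
if needs_nvidia_driver "nvidia_driver"; then
    echo "CUDA image: mounting driver files"
else
    echo "not a CUDA image: plain docker run (no-op)"
fi
```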

@flx42 flx42 closed this as completed Jun 27, 2017
@tomjaguarpaw
Author

How about a flag to force mounting the driver files even if it is not detected to be a CUDA image?

@flx42
Member

flx42 commented Jun 27, 2017

This will be possible with nvidia-docker 2.0.

@tomjaguarpaw
Author

OK thanks.

@gemfield

Please note that this is no longer true with the latest Docker and nvidia-docker.

@KyleRAnderson

@gemfield do you happen to know what this would look like with the latest nvidia-docker2? I'm trying to get PyTorch working in a custom-built image. It seems https://github.com/pytorch/pytorch/blob/master/Dockerfile uses the com.nvidia.volumes.needed label, and I wanted to make sure that is still the recommended way of handling this.
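
For context: with nvidia-docker2, the selection moved from an image label to the container runtime. The runtime injects the driver files based on the NVIDIA_VISIBLE_DEVICES environment variable, so any image can be used without the label. A typical invocation looks like the following (a sketch; it requires a host with the NVIDIA runtime installed and a GPU, so it cannot run elsewhere):

```shell
# nvidia-docker2: the runtime, not an image label, decides what to inject.
# NVIDIA_VISIBLE_DEVICES selects which GPUs are exposed to the container.
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all \
    ubuntu:16.04 nvidia-smi
```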
