Do I have to build my images from the NVIDIA ones? #410
Comments
Our official images have a special label that tells nvidia-docker they are CUDA images and need the driver volumes mounted. Images without this label are treated as regular images and nothing is mounted.
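As a sketch of what that detection keys on, here is a minimal Dockerfile carrying the volume label used by NVIDIA's official images (the label name is an assumption based on nvidia-docker 1.x conventions; verify it on your base image with `docker inspect`):

```dockerfile
# Hypothetical minimal image that nvidia-docker 1.x would treat as a CUDA image.
# The label name/value are assumed from NVIDIA's official images; check your
# actual base image with: docker inspect --format '{{.Config.Labels}}' <image>
FROM ubuntu:16.04
LABEL com.nvidia.volumes.needed="nvidia_driver"
```

An image built `FROM nvidia/cuda` inherits this label, which is why rebasing on the official images makes the driver files appear.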
Interesting. Why do you require the presence of this flag? Isn't the fact that the user ran nvidia-docker instead of plain docker a strong enough signal?
Not really. To integrate into the ecosystem, you might need to swap the docker binary for nvidia-docker entirely, in which case every container would go through it, so we can't rely on the user having opted in at the command line.
How about a flag to force mounting the driver files even if it is not detected to be a CUDA image?
This will be possible with a future version of nvidia-docker.
OK, thanks.
Please note that this is no longer true with the latest docker and nvidia-docker.
@gemfield do you happen to know what this would look like if we did want this working with the latest nvidia-docker2? I'm trying to get PyTorch working in a custom-built image; it seems like https://github.com/pytorch/pytorch/blob/master/Dockerfile uses the official NVIDIA base images.
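With nvidia-docker2 the image label no longer matters: the `nvidia` container runtime injects the driver files into any container, and the `NVIDIA_VISIBLE_DEVICES` environment variable controls which GPUs are exposed. A minimal sketch (not runnable without an NVIDIA GPU and the runtime installed; `bamos/openface` stands in for an arbitrary non-NVIDIA image):

```shell
# Run an arbitrary image with the nvidia runtime (nvidia-docker2).
# NVIDIA_VISIBLE_DEVICES=all exposes every GPU and triggers driver injection,
# so libcuda.so.1 appears inside the container even without the CUDA label.
docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=all --rm bamos/openface \
    find / -name 'libcuda.so.1'
```

Note that CUDA toolkit libraries are still the image's responsibility; the runtime only injects the driver-side files.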
I have built an image based on an arbitrary non-NVIDIA image (it just so happens to be bamos/openface, if that's relevant). However, when I run it with nvidia-docker, I still cannot see `libcuda.so.1` in the image. In fact, `find . -name libcuda.so.1` returns no results.

Do I have to build my image on one of the NVIDIA ones to make `libcuda.so.1` accessible inside my container? If so, why? If not, what is going on?