Tensorflow missing cuDNN drivers #122

Closed
ca-scribner opened this issue Sep 17, 2020 · 1 comment

@ca-scribner
Contributor

Our GPU Dockerfile stack is based on an NVIDIA image (nvidia/cuda:10.1-base-ubuntu18.04) which does not include the CUDA Deep Neural Network library (cuDNN) that TensorFlow requires. This is discussed in iot-salzburg/gpu-jupyter#25.

iot-salzburg/gpu-jupyter#25 resolves this by basing off a different NVIDIA image; using the current release of iot-salzburg/gpu-jupyter (or changing our own NVIDIA base image) would resolve this issue as well.

Current work on #114 has a solution mocked up for this issue; a rough sketch of the base-image change is below.
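
For illustration only, a minimal sketch of the base-image swap described above, assuming one of NVIDIA's cuDNN-enabled CUDA 10.1 tags (here nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04) suits the rest of the stack; the exact tag used in #114 may differ:

# Before: the -base flavor ships only the minimal CUDA runtime, no cuDNN
# FROM nvidia/cuda:10.1-base-ubuntu18.04

# After (assumed tag, for illustration): the -cudnn7-runtime flavor bundles
# the CUDA runtime libraries plus libcudnn7, which TensorFlow needs at runtime
FROM nvidia/cuda:10.1-cudnn7-runtime-ubuntu18.04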

@ca-scribner
Contributor Author

Solution tested with the following check:

import tensorflow as tf

# Confirm TensorFlow can see the GPU
print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))
tf.config.list_physical_devices('GPU')
# Should list at least one `PhysicalDevice(name='/physical_device:GPU:*', device_type='GPU')`

# Log which device each op runs on, then run a small matmul
tf.debugging.set_log_device_placement(True)

# Create some tensors
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)
print(c)

# Should show something like (note the device used for the work should be GPU):
# ```
# Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0
# tf.Tensor(
# [[22. 28.]
#  [49. 64.]], shape=(2, 2), dtype=float32)
# ```
