
cannot find libdevice #989

Closed
murphyk opened this issue Jul 7, 2019 · 35 comments

@murphyk

murphyk commented Jul 7, 2019

Hi

Jax cannot find libdevice.
I'm running Python 3.7 with CUDA 10.0 on my personal laptop with a GeForce RTX 2080.
I installed jax using pip.

I made a little test script, shown below:

import os
# XLA reads these at backend initialization, so set them before importing jax.
os.environ["XLA_FLAGS"]="--xla_gpu_cuda_data_dir=/home/murphyk/miniconda3/lib"
os.environ["CUDA_HOME"]="/usr"

import jax
import jax.numpy as np
print("jax version {}".format(jax.__version__))
from jax.lib import xla_bridge
print("jax backend {}".format(xla_bridge.get_backend().platform))

# Trigger an actual GPU compilation.
from jax import random
key = random.PRNGKey(0)
x = random.normal(key, (5,5))
print(x)

The output is shown below.

jax version 0.1.39
jax backend gpu
2019-07-07 16:44:03.905071: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/nvptx_backend_lib.cc:105] Unknown compute capability (7, 5) .Defaulting to libdevice for compute_20
Traceback (most recent call last):

  File "<ipython-input-15-e39e42274024>", line 1, in <module>
    runfile('/home/murphyk/github/pyprobml/scripts/jax_debug.py', wdir='/home/murphyk/github/pyprobml/scripts')

  File "/home/murphyk/miniconda3/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 827, in runfile
    execfile(filename, namespace)

  File "/home/murphyk/miniconda3/lib/python3.7/site-packages/spyder_kernels/customize/spydercustomize.py", line 110, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "/home/murphyk/github/pyprobml/scripts/jax_debug.py", line 18, in <module>
    x = random.normal(key, (5,5))

  File "/home/murphyk/miniconda3/lib/python3.7/site-packages/jax/random.py", line 389, in normal
    return _normal(key, shape, dtype)

  File "/home/murphyk/miniconda3/lib/python3.7/site-packages/jax/api.py", line 123, in f_jitted
    out = xla.xla_call(flat_fun, *args_flat, device_values=device_values)

  File "/home/murphyk/miniconda3/lib/python3.7/site-packages/jax/core.py", line 663, in call_bind
    ans = primitive.impl(f, *args, **params)

  File "/home/murphyk/miniconda3/lib/python3.7/site-packages/jax/interpreters/xla.py", line 606, in xla_call_impl
    compiled_fun = xla_callable(fun, device_values, *map(abstractify, args))

  File "/home/murphyk/miniconda3/lib/python3.7/site-packages/jax/linear_util.py", line 208, in memoized_fun
    ans = call(f, *args)

  File "/home/murphyk/miniconda3/lib/python3.7/site-packages/jax/interpreters/xla.py", line 621, in xla_callable
    compiled, result_shape = compile_jaxpr(jaxpr, consts, *abstract_args)

  File "/home/murphyk/miniconda3/lib/python3.7/site-packages/jax/interpreters/xla.py", line 207, in compile_jaxpr
    backend=xb.get_backend()), result_shape

  File "/home/murphyk/miniconda3/lib/python3.7/site-packages/jaxlib/xla_client.py", line 535, in Compile
    return backend.compile(self.computation, compile_options)

  File "/home/murphyk/miniconda3/lib/python3.7/site-packages/jaxlib/xla_client.py", line 118, in compile
    compile_options.device_assignment)

RuntimeError: Not found: ./libdevice.compute_20.10.bc not found
@murphyk
Author

murphyk commented Jul 8, 2019

I think I want it to find this file

/home/murphyk/miniconda3/lib/libdevice.10.bc

I tried

 export XLA_FLAGS="--xla_gpu_cuda_data_dir=/home/murphyk/miniconda3/lib"

to no avail.
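A quick way to list every libdevice file on the system, and hence find a directory to point XLA at, is a plain filesystem search (a sketch; the scan can take a while on large disks):

find / -name 'libdevice*.bc' 2>/dev/null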

@murphyk
Author

murphyk commented Jul 8, 2019

or maybe this file?

/usr/lib/nvidia-cuda-toolkit/libdevice/libdevice.10.bc

@hawkinsp
Collaborator

hawkinsp commented Jul 8, 2019

How did you install CUDA? What operating system and what release is this?

In fact, did you install CUDA at all?

@murphyk
Author

murphyk commented Jul 8, 2019 via email

@murphyk
Author

murphyk commented Jul 8, 2019 via email

@murphyk
Author

murphyk commented Jul 8, 2019 via email

@murphyk
Author

murphyk commented Jul 8, 2019 via email

@murphyk
Author

murphyk commented Jul 16, 2019 via email

@iamlemec

iamlemec commented Oct 1, 2019

I'm getting this same error with python3.7 and CUDA 10.0. It seems like it doesn't actually check CUDA_DIR? Symlinking my CUDA_DIR to /usr/local/cuda solved the problem.

@lhk

lhk commented Dec 5, 2019

Same problem here. My CUDA installation is not in /usr/local/cuda but in /usr/lib/cuda. After symlinking, it works.
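For reference, the symlink fix is a one-liner (a sketch assuming CUDA lives in /usr/lib/cuda as above; adjust the source path to your system):

# make the default path XLA searches point at the real CUDA install
sudo ln -s /usr/lib/cuda /usr/local/cuda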

@martinosorb

murphyk's fix worked for me, but it's rather ugly. I hope a better solution can be found soon :) Thanks!

@KeAWang

KeAWang commented Apr 24, 2020

I tried murphyk's fix but I still get RuntimeError: Internal: libdevice not found at ./libdevice.10.bc

@skye
Member

skye commented Apr 24, 2020

Here's my understanding of this issue:

jax depends on XLA, which is built as part of TF and bundled up into the jaxlib package. By default, TF is compiled to look for cuda and cudnn in /usr/local/cuda: https://github.com/tensorflow/tensorflow/blob/master/third_party/gpus/cuda_configure.bzl#L14

So symlinking your cuda install to /usr/local/cuda should work. Make sure libdevice actually exists... I always have a hard time figuring out which Nvidia downloads contain libraries, but I think libdevice is shipped as part of https://developer.nvidia.com/cuda-toolkit.

Alternatively, setting the environment variable XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda should work. I recommend exporting this outside the Python interpreter to be sure it's being picked up when jaxlib is loaded (there's probably a more targeted way to do it, but this will limit mistakes).
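Concretely, the environment-variable approach looks like this (a sketch; /path/to/cuda and your_script.py are placeholders):

# export before launching Python so jaxlib sees the flag when it loads
export XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda
python your_script.py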

Is anyone still having problems after trying these methods?

We should also potentially make a jax-specific environment variable to set a custom cuda install path, or at least document the XLA_FLAGS one more clearly... I can do that once we verify this actually works.

@KeAWang

KeAWang commented Apr 24, 2020

Maybe I have a different issue then... I tried both symlinking and setting XLA_FLAGS, and my libdevice.10.bc is located at /usr/local/cuda/nvvm/libdevice/libdevice.10.bc, but I'm still getting the same RuntimeError: Internal: libdevice not found at ./libdevice.10.bc

Full stack trace:

>>> import jax.numpy as np
>>> np.sin(3)
2020-04-24 15:42:25.181013: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:70] Can't find libdevice directory ${CUDA_DIR}/nvvm/libdevice. This may result in compilation or runtime failures, if the program we try to run uses routines from libdevice.
2020-04-24 15:42:25.181030: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:71] Searched for CUDA in the following directories:
2020-04-24 15:42:25.181035: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:74]   ./cuda_sdk_lib
2020-04-24 15:42:25.181038: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:74]   /usr/local/cuda-10.2
2020-04-24 15:42:25.181041: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:74]   .
2020-04-24 15:42:25.181043: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:76] You can choose the search directory by setting xla_gpu_cuda_data_dir in HloModule's DebugOptions.  For most apps, setting the environment variable XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda will work.
2020-04-24 15:42:25.181949: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/llvm_gpu_backend/gpu_backend_lib.cc:311] libdevice is required by this HLO module but was not found at ./libdevice.10.bc
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/alex/miniconda3/envs/dev/lib/python3.8/site-packages/jax/numpy/lax_numpy.py", line 413, in fn
    return lax_fn(x)
  File "/home/alex/miniconda3/envs/dev/lib/python3.8/site-packages/jax/lax/lax.py", line 161, in sin
    return sin_p.bind(x)
  File "/home/alex/miniconda3/envs/dev/lib/python3.8/site-packages/jax/core.py", line 199, in bind
    return self.impl(*args, **kwargs)
  File "/home/alex/miniconda3/envs/dev/lib/python3.8/site-packages/jax/interpreters/xla.py", line 166, in apply_primitive
    compiled_fun = xla_primitive_callable(prim, *map(arg_spec, args), **params)
  File "/home/alex/miniconda3/envs/dev/lib/python3.8/site-packages/jax/interpreters/xla.py", line 197, in xla_primitive_callable
    compiled = built_c.Compile(compile_options=options, backend=backend)
  File "/home/alex/miniconda3/envs/dev/lib/python3.8/site-packages/jaxlib/xla_client.py", line 576, in Compile
    return backend.compile(self.computation, compile_options)
  File "/home/alex/miniconda3/envs/dev/lib/python3.8/site-packages/jaxlib/xla_client.py", line 152, in compile
    return _xla.LocalExecutable.Compile(c_computation,
RuntimeError: Internal: libdevice not found at ./libdevice.10.bc

@KeAWang

KeAWang commented Apr 24, 2020

OK, upon looking at the stack trace again, it looks like XLA is searching in /usr/local/cuda-10.2 instead of /usr/local/cuda. Making another symlink fixed this issue for me.

Anyone know why it's searching for cuda-10.2? I installed using the automatic method in the README:

pip install --upgrade https://storage.googleapis.com/jax-releases/`nvidia-smi | sed -En "s/.* CUDA Version: ([0-9]*)\.([0-9]*).*/cuda\1\2/p"`/jaxlib-0.1.45-`python3 -V | sed -En "s/Python ([0-9]*)\.([0-9]*).*/cp\1\2/p"`-none-linux_x86_64.whl jax
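For what it's worth, the two backticked pieces just derive the wheel name from the driver's reported CUDA version and the local Python version; running them on their own shows what they expand to (example outputs are illustrative):

nvidia-smi | sed -En "s/.* CUDA Version: ([0-9]*)\.([0-9]*).*/cuda\1\2/p"   # e.g. cuda102
python3 -V | sed -En "s/Python ([0-9]*)\.([0-9]*).*/cp\1\2/p"               # e.g. cp38

Since nvidia-smi reports the driver's CUDA version, the wheel ends up matched to 10.2 even if the toolkit actually lives elsewhere, which would explain the /usr/local/cuda-10.2 search path above.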

@iamlemec

@skye Both those methods are working for me. As @KeAWang documented, I'm also seeing that in the absence of XLA_FLAGS info it will look in /usr/local/cuda-XXX, depending on the CUDA version. It would be great if the XLA folks could either actually check CUDA_DIR or simply not have an error message claiming to do so.

@skye
Member

skye commented Apr 24, 2020

Thanks @KeAWang and @iamlemec. Agreed this could be much clearer. I've filed an internal bug against XLA with your suggestions and some of my own :) These are the suggestions:

  1. The actionable information from the WARNING log could be included directly in the error message (e.g. which path(s) to symlink, XLA_FLAGS=...).
  2. The warning message mentions ${CUDA_DIR}/nvvm/libdevice, but it appears $CUDA_DIR isn't actually used. I'm not sure if this is a standard-ish env var to use, so we could either actually use it or not mention it at all.
  3. We could provide installation instructions for where CUDA Toolkit should be installed. I can do this for the JAX instructions, but maybe https://www.tensorflow.org/install/gpu should be updated as well.
  4. Should we add more default paths to check? e.g. /usr/lib/nvidia-cuda-toolkit?

Please comment if I should correct anything or you have other suggestions!

@martinosorb

I personally have cuda in /opt/cuda, not sure why. Also, $CUDA_DIR does not seem to be defined by default; I don't know if it's defined while the script runs, but that seems unlikely (I have very little understanding of these issues, though).

@KeAWang

KeAWang commented Apr 26, 2020

I also have it in /opt/cuda because my installation is through the Arch User Repository package.

@stevensslee

Hi All,
I've tried murphyk's fix and symlinking the cuda directory to /usr/local/cuda, but have received this possible error:

2020-04-29 11:18:32.823934: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:77] Searched for CUDA in the following directories:
2020-04-29 11:18:32.823950: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:80] /home/steven/xla
2020-04-29 11:18:32.823961: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:80] /usr/local/cuda
2020-04-29 11:18:32.823973: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:80] .
2020-04-29 11:18:32.823985: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:82] You can choose the search directory by setting xla_gpu_cuda_data_dir in HloModule's DebugOptions. For most apps, setting the environment variable XLA_FLAGS=--xla_gpu_cuda_data_dir=/path/to/cuda will work.
2020-04-29 11:18:32.872919: W external/org_tensorflow/tensorflow/compiler/xla/service/gpu/nvptx_compiler.cc:76] Can't find ptxas binary in ${CUDA_DIR}/bin. Will back to the GPU driver for PTX -> sass compilation. This is OK so long as you don't see a warning below about an out-of-date driver version. Custom ptxas location can be specified using $PATH.

I was wondering if anyone has any more information about this output? Running jax code seems to "work" as it does not throw an error, but I'm not sure if the GPU is actually being used.
For further information, I have used conda to create an environment for both CUDA and JAX.

Thanks for reading!

@skye
Member

skye commented May 1, 2020

Does /usr/local/cuda/bin/ptxas exist? You may need to install the CUDA toolkit if not.
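A quick check (a sketch):

ls -l /usr/local/cuda/bin/ptxas   # exists if the toolkit is installed there
which ptxas                       # or check whether it's on $PATH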

@tomweingarten

For people running into this problem after an install of Ubuntu 20.04 with Ubuntu's CUDA toolkit package, KeAWang's suggestion works, but you need cuda-10.1 instead:

sudo ln -s /usr/lib/cuda /usr/local/cuda-10.1

@tigerneil
Contributor

tigerneil commented May 11, 2020

(jax3) ubuntu@ip-172-31-13-179:~$ python
Python 3.7.0 (default, Oct  9 2018, 10:31:47)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import jax
>>> from jax.lib import xla_bridge
>>> print(xla_bridge.get_backend().platform)
gpu
>>>

Thanks @murphyk, it worked for me.

[screenshot of a run configuration editor]
Use the run configuration editor to add the environment variable; this makes it work remotely as well.

@tigerneil
Contributor

Hi All,
I've tried murphyk's fix and symlinking the cuda directory to /usr/local/cuda, but have received this possible error: […]

You may try my code above to check whether the GPU is used.

@refraction-ray

@skye, setting XLA_FLAGS works for me. I believe this is a very important piece of information that should be in the README's installation section as soon as possible: in most setups the CUDA installation is not in the default path XLA looks in, and the error is confusing unless you find this issue :)

@skye
Member

skye commented May 22, 2020

Hi, sorry for the delay on this. I've created a PR with updated installation instructions: #3190. Please comment if you have any suggestions. We can do even more to address this situation (@hawkinsp suggested bundling libdevice with jaxlib), but hopefully this will help for now.

@v-i-s-h

v-i-s-h commented Jun 5, 2020

I have a conda environment for jax with cudatoolkit and cuDNN installed from the anaconda channel. I am on Manjaro Linux and use optirun to enable the GPU. I get the same error when executing the first example in the README:

XLA_FLAGS=--xla_gpu_cuda_data_dir=<conda-env-path>/lib/ optirun python gp.py

even after setting XLA_FLAGS.

My CUDA version is

cudatoolkit               10.1.243             h6bb024c_0    anaconda
cudnn                     7.6.5                cuda10.1_0    anaconda

Nvidia driver version is 418.113.
PS: optirun with tensorflow and pytorch runs fine.

@skye
Member

skye commented Jun 5, 2020

Do you get the same error without optirun? Also, can you try creating a symlink as described above and in the README? This will help narrow down where the problem is.

@coded5282

@v-i-s-h Were you ever able to resolve this issue by setting the anaconda path? That's what I've been trying to do but it hasn't been working.

@v-i-s-h

v-i-s-h commented Jul 4, 2020

@coded5282 Nope. I am still getting the same error.
I didn't try the symlink, as it changes system-wide settings and I'm a bit afraid it may break some of my other configurations.

@grantmcdermott

Just to add: same error as everyone else. Tried setting the XLA_FLAGS environment variable; didn't work. However, adding the symlink did (for me: $ sudo ln -s /opt/cuda /usr/local/cuda-10.2).

Like @KeAWang and several others, I'm on Arch and installed CUDA through the AUR.

@kmario23

kmario23 commented Mar 9, 2021

First check where your CUDA installation resides.

$ whereis -b cuda
cuda: /usr/lib/cuda

For instance, if the above command spits out /usr/lib/cuda, then that's where your CUDA installation is.
So, we now need to get the version number.

$ cat /usr/lib/cuda/version.txt
CUDA Version 11.0.228

But JAX, by default, looks for the CUDA installation in /usr/local/cuda-<version>. So we need to create a symlink for the specific version; this redirects JAX to the actual CUDA installation when it searches /usr/local/cuda-<version>.

$ sudo ln -s /usr/lib/cuda /usr/local/cuda-11.0

The above steps, in that order, solved the issue, at least for me.

Note: Please keep in mind that the jaxlib binary installed on your machine should be built for the same CUDA version (e.g., 11.0) as the installation that /usr/local/cuda-<version> points to.
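After the symlink, a quick sanity check that JAX actually compiles on the GPU (a sketch, mirroring the snippets earlier in this thread):

python -c "from jax.lib import xla_bridge; print(xla_bridge.get_backend().platform)"   # expect: gpu
python -c "import jax.numpy as np; print(np.sin(3.0))"   # should run without the libdevice error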

@kayhan-batmanghelich

Hi Everyone,

I still have an issue with this. I get this error:

[...]
RuntimeError: Internal: libdevice not found at ./libdevice.10.bc

cuda is installed and I made a soft link. Here is some more info:

(jax) kayhan@lambda-dual:~$ whereis -b cuda
cuda: /usr/include/cuda.h /usr/include/cuda

(jax) kayhan@lambda-dual:~$ ls -lt /usr/local/cuda-11.1
lrwxrwxrwx 1 root root 39 Mar 11 15:16 /usr/local/cuda-11.1 -> /usr/lib/nvidia-cuda-toolkit/libdevice/

(jax) kayhan@lambda-dual:~$ ls -lt /usr/lib/nvidia-cuda-toolkit/libdevice/
total 464
lrwxrwxrwx 1 root root     13 Apr 15 15:21 cuda -> /usr/lib/cuda
-rw-r--r-- 1 root root 471124 Oct 16 13:42 libdevice.10.bc

I tried setting the XLA flag, but I still have the same issue:

(jax) kayhan@lambda-dual:~$ export XLA_FLAGS="--xla_gpu_cuda_data_dir=/usr/lib/nvidia-cuda-toolkit/libdevice/"
(jax) kayhan@lambda-dual:~$ ipython
[...]
RuntimeError: Internal: libdevice not found at ./libdevice.10.bc

@kayhan-batmanghelich

This problem is addressed here:

#6479 (comment)

The instructions need to clarify that one should create an nvvm folder inside the CUDA directory for this to work.
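In other words, XLA wants the ${CUDA_DIR}/nvvm/libdevice/libdevice.10.bc layout under whatever directory it searches. A sketch of what that means for the listing above (paths are taken from that listing and may differ on your system; assumes /usr/local/cuda-11.1 should be a real directory, so the earlier symlink is removed first):

sudo rm /usr/local/cuda-11.1                       # drop the symlink pointing straight at the libdevice folder
sudo mkdir -p /usr/local/cuda-11.1/nvvm/libdevice
sudo cp /usr/lib/nvidia-cuda-toolkit/libdevice/libdevice.10.bc /usr/local/cuda-11.1/nvvm/libdevice/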

@hawkinsp
Collaborator

Good news! As of jaxlib 0.1.66, which was just released yesterday, we now bundle libdevice inside the jaxlib CUDA wheels. JAX should now always find it successfully. Hope that helps!
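So picking up the fix should just be a matter of upgrading jaxlib to 0.1.66 or newer; a sketch (GPU users should install the CUDA-specific jaxlib wheel per the README rather than the plain PyPI one):

pip install --upgrade jax jaxlib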
