AttributeError: module 'warprnnt_pytorch.warp_rnnt' has no attribute 'gpu_rnnt' #81

Open
li563042811 opened this issue Dec 21, 2020 · 1 comment

Comments

@li563042811

Please help me with this issue.
I built the targets following the installation instructions. I set $CUDA_HOME in my environment before building and ran:

cmake -DCUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME ..
make

When I run test.py, the problem below still happens, and I don't know why.

python test/test.py
CPU Tests passed!
Traceback (most recent call last):
File "test/test.py", line 170, in
small_test()
File "test/test.py", line 60, in small_test
cost, grads = wrap_and_call(acts, labels)
File "test/test.py", line 44, in wrap_and_call
costs = fn(acts, labels, lengths, label_lengths)
File "/data/jsa/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/data/jsa/miniconda3/lib/python3.7/site-packages/warprnnt_pytorch-0.1-py3.7-linux-x86_64.egg/warprnnt_pytorch/init.py", line 100, in forward
return self.loss(acts, labels, act_lens, label_lens, self.blank, self.reduction)
File "/data/jsa/miniconda3/lib/python3.7/site-packages/warprnnt_pytorch-0.1-py3.7-linux-x86_64.egg/warprnnt_pytorch/init.py", line 23, in forward
loss_func = warp_rnnt.gpu_rnnt if is_cuda else warp_rnnt.cpu_rnnt
AttributeError: module 'warprnnt_pytorch.warp_rnnt' has no attribute 'gpu_rnnt'

-- cuda found TRUE
-- Building shared library with GPU support
-- Configuring done
-- Generating done
-- Build files have been written to: /lustr/jsa/env/download/warp-transducer-master/build
[ 7%] Linking CXX shared library libwarprnnt.so
[ 14%] Built target warprnnt
[ 21%] Linking CXX executable test_gpu
[ 35%] Built target test_gpu
[ 42%] Linking CXX executable test_time_gpu
[ 57%] Built target test_time_gpu
[ 64%] Linking CXX executable test_time
[ 78%] Built target test_time
[ 85%] Linking CXX executable test_cpu
[100%] Built target test_cpu

echo $CUDA_HOME
/data/ren/install_dir/cuda-10.1
echo $WARP_RNNT_PATH
/lustr/jsa/env/download/warp-transducer-master/build
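
A quick way to check what the installed extension actually exposes (illustrative; the module path is taken from the traceback above) — gpu_rnnt is only present when the extension was compiled with CUDA support:

# Illustrative check: if only the cpu_rnnt entry point is listed, the Python
# extension was built without CUDA and needs to be rebuilt and reinstalled.
python -c "import warprnnt_pytorch.warp_rnnt as m; print([s for s in dir(m) if 'rnnt' in s])"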

fd873630 commented Nov 2, 2021

(quoting the original issue report above)

I had the same issue in my Docker environment.

I solved it by building the CMake targets with GPU support

and by exporting CPATH like this:

export CPATH=/usr/local/cuda-10.1/targets/x86_64-linux/include:$CPATH
export LD_LIBRARY_PATH=/usr/local/cuda-10.1/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
export PATH=/usr/local/cuda-10.1/bin:$PATH

https://stackoverflow.com/questions/13167598/error-cuda-runtime-h-no-such-file-or-directory/43389168
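
A sketch of the full rebuild after setting those variables, assuming the standard warp-transducer layout (a top-level CMakeLists.txt plus a pytorch_binding/ directory with setup.py); the CUDA path below is illustrative, adjust it to your install:

export CUDA_HOME=/usr/local/cuda-10.1
export CPATH=$CUDA_HOME/targets/x86_64-linux/include:$CPATH
export LD_LIBRARY_PATH=$CUDA_HOME/targets/x86_64-linux/lib:$LD_LIBRARY_PATH
export PATH=$CUDA_HOME/bin:$PATH

cd warp-transducer-master
rm -rf build && mkdir build && cd build        # start from a clean build directory
cmake -DCUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME ..    # should report "Building shared library with GPU support"
make

export WARP_RNNT_PATH=$(pwd)                   # the PyTorch binding locates libwarprnnt.so here
cd ../pytorch_binding
python setup.py install                        # recompile the Python extension against CUDA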
