Please help me with this issue.

I built the targets following the installation instructions. I set `$CUDA_HOME` in my environment before building and ran:

```shell
cmake -DCUDA_TOOLKIT_ROOT_DIR=$CUDA_HOME ..
make
```

When I run `test.py`, the problem below still happens, and I don't know why.
```
python test/test.py
CPU Tests passed!
Traceback (most recent call last):
  File "test/test.py", line 170, in <module>
    small_test()
  File "test/test.py", line 60, in small_test
    cost, grads = wrap_and_call(acts, labels)
  File "test/test.py", line 44, in wrap_and_call
    costs = fn(acts, labels, lengths, label_lengths)
  File "/data/jsa/miniconda3/lib/python3.7/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/data/jsa/miniconda3/lib/python3.7/site-packages/warprnnt_pytorch-0.1-py3.7-linux-x86_64.egg/warprnnt_pytorch/__init__.py", line 100, in forward
    return self.loss(acts, labels, act_lens, label_lens, self.blank, self.reduction)
  File "/data/jsa/miniconda3/lib/python3.7/site-packages/warprnnt_pytorch-0.1-py3.7-linux-x86_64.egg/warprnnt_pytorch/__init__.py", line 23, in forward
    loss_func = warp_rnnt.gpu_rnnt if is_cuda else warp_rnnt.cpu_rnnt
AttributeError: module 'warprnnt_pytorch.warp_rnnt' has no attribute 'gpu_rnnt'
```
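The `AttributeError` itself just means the compiled `warp_rnnt` extension module never got a `gpu_rnnt` symbol, i.e. it was built CPU-only. A minimal stand-in sketch of the dispatch line from the traceback (using `types.SimpleNamespace` to mimic a CPU-only extension; everything except the names `gpu_rnnt`/`cpu_rnnt` is illustrative):

```python
import types

# Mimic a warp_rnnt extension compiled without GPU support:
# cpu_rnnt exists, gpu_rnnt was never defined at build time.
warp_rnnt = types.SimpleNamespace(cpu_rnnt=lambda *args: 0.0)

is_cuda = True  # the acts tensor lives on the GPU
try:
    loss_func = warp_rnnt.gpu_rnnt if is_cuda else warp_rnnt.cpu_rnnt
except AttributeError as err:
    # The message ends with "has no attribute 'gpu_rnnt'", as in the report.
    print(err)
```

So the error is raised the moment a CUDA tensor is passed in, regardless of how the C++ library itself was built.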
```
-- cuda found TRUE
-- Building shared library with GPU support
-- Configuring done
-- Generating done
-- Build files have been written to: /lustr/jsa/env/download/warp-transducer-master/build
[  7%] Linking CXX shared library libwarprnnt.so
[ 14%] Built target warprnnt
[ 21%] Linking CXX executable test_gpu
[ 35%] Built target test_gpu
[ 42%] Linking CXX executable test_time_gpu
[ 57%] Built target test_time_gpu
[ 64%] Linking CXX executable test_time
[ 78%] Built target test_time
[ 85%] Linking CXX executable test_cpu
[100%] Built target test_cpu
```
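The CMake log above shows the GPU targets building fine, yet the Python module lacks `gpu_rnnt`. That combination usually means the C++ library was built with CUDA but the Python binding was compiled without it (e.g. `CUDA_HOME` was not visible when `setup.py` ran). A hedged sketch of a full rebuild, assuming the standard warp-transducer checkout layout with a `pytorch_binding` directory; the paths are taken from this report and must be adjusted to your setup:

```shell
# Run from the warp-transducer source root. Paths are illustrative.
export CUDA_HOME=/data/ren/install_dir/cuda-10.1
export WARP_RNNT_PATH="$PWD/build"

rm -rf build && mkdir build && cd build
cmake -DCUDA_TOOLKIT_ROOT_DIR="$CUDA_HOME" ..
make

# Rebuild and reinstall the Python binding in the same environment,
# so that it is compiled with GPU support this time.
cd ../pytorch_binding
python setup.py install
```

Reinstalling the binding matters because the egg under `site-packages` keeps whatever was compiled the first time, even after the C++ library is rebuilt.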
```shell
$ echo $CUDA_HOME
/data/ren/install_dir/cuda-10.1
$ echo $WARP_RNNT_PATH
/lustr/jsa/env/download/warp-transducer-master/build
```
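One quick sanity check (a hypothetical diagnostic, not part of the project) is to confirm both variables are actually visible to the Python process that builds the binding, since an unset `CUDA_HOME` at `setup.py` time is a common way to end up with a CPU-only extension:

```python
import os

# Print the environment variables the build relies on; None means the
# variable is not set in the process that Python was launched from.
for var in ("CUDA_HOME", "WARP_RNNT_PATH"):
    print(var, "=", os.environ.get(var))
```

If either prints `None` here but `echo` shows a value in the shell, the variable was likely exported in a different session (or not exported at all) before the install step.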