Flash Attention 2.0 doesn't work: undefined symbol: _ZNK3c1010TensorImpl27throw_data_ptr_access_errorEv #451
The error also persists on a small test:
import torch
from flash_attn import flash_attn_qkvpacked_func

# qkv must be packed as (batch, seqlen, 3, nheads, headdim), fp16/bf16, on GPU
qkv = torch.randn(1, 196, 3, 8, 64, dtype=torch.float16, device="cuda")
res = flash_attn_qkvpacked_func(qkv)
print(res.shape)
print(res)
I think you compiled with 2.0.7 and I am using torch 2.1.0.
Using Python 3.10, CUDA 12.1 and torch 2.1 (see the NVIDIA docs) with the matching wheel (https://github.com/Dao-AILab/flash-attention/releases/download/v2.0.7/flash_attn-2.0.7+cu121torch2.1cxx11abiTRUE-cp310-cp310-linux_x86_64.whl) still results in the same error:
ImportError: /usr/local/lib/python3.10/dist-packages/flash_attn_2_cuda.cpython-310-x86_64-linux-gnu.so: undefined symbol: _ZNK3c1010TensorImpl27throw_data_ptr_access_errorEv
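As an aside, the torch tag baked into a wheel filename can be checked against the installed torch before installing. This is a hypothetical helper, not part of flash-attn; it only parses filenames shaped like the wheel linked above:

```python
def torch_tag_matches(installed_torch: str, wheel_name: str) -> bool:
    """Return True if the torchX.Y tag in a flash-attn wheel filename
    matches the installed torch major.minor version."""
    # e.g. "flash_attn-2.0.7+cu121torch2.1cxx11abiTRUE-..." -> "2.1"
    required = wheel_name.split("torch")[1].split("cxx11abi")[0]
    # e.g. "2.1.0+cu121" -> "2.1"
    installed = ".".join(installed_torch.split("+")[0].split(".")[:2])
    return installed == required

print(torch_tag_matches(
    "2.1.0", "flash_attn-2.0.7+cu121torch2.1cxx11abiTRUE-cp310"))  # True
print(torch_tag_matches(
    "2.0.1", "flash_attn-2.0.7+cu121torch2.1cxx11abiTRUE-cp310"))  # False
```

Note that, as this thread shows, a matching tag is necessary but not sufficient: container PyTorch builds taken from nightly snapshots can still lack symbols added between releases.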
Thanks for the report. I saw this error on nvcr 23.06 as well. nvcr 23.07 should work, can you try?
Thanks a lot, will try as soon as I get back to my PC. Out of curiosity, could you give me more details about it?
Oh, it's a low-level change in error handling. PyTorch added this "throw_data_ptr_access_error" function on May 11. nvcr 23.06 uses a PyTorch build from May 2, while nvcr 23.07 uses one from June 7.
Try a
Hi @tridao! I have some trouble with Python 3.10, so I need to use nvcr 23.04.
You can compile from source with
Thank you for the fast response! I will try it.
Thank you. It seems to be working now.
Hi @tridao, what is nvcr and how do I change its version?
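(For context: "nvcr" in this thread refers to NVIDIA's NGC container registry, nvcr.io; numbers like 23.07 are monthly releases of the NGC PyTorch container. Changing the version just means pulling a different image tag, e.g.:)

```shell
# Pull and run a specific NGC PyTorch container release (tag format: YY.MM-py3)
docker pull nvcr.io/nvidia/pytorch:23.07-py3
docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:23.07-py3
```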
import flash_attn_2_cuda as flash_attn_cuda
I am still facing this error. I changed the torch version from 2.1.2 to 2.1.0, and it is still not working.
Also facing this error, in Databricks.
Also facing this. |
Also facing this issue with nvcr 24.01-py3 |
For nvcr 23.12 and 24.01, please use flash-attn 2.5.1.post1.
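Collecting the container/flash-attn pairings reported in this thread as a quick lookup (this mapping is assembled from the comments here only and is not an official compatibility matrix):

```python
# NGC container tag -> flash-attn version reported working in this thread.
KNOWN_GOOD = {
    "23.07": "2.0.7",        # per the maintainer's reply above
    "23.12": "2.5.1.post1",
    "24.01": "2.5.1.post1",
}

def flash_attn_for(nvcr_tag: str) -> str:
    # Tags not covered in the thread: fall back to building from source
    return KNOWN_GOOD.get(nvcr_tag, "build from source")

print(flash_attn_for("24.01"))  # 2.5.1.post1
print(flash_attn_for("23.04"))  # build from source
```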
'preciate the immediate response 🙌🏼
Really helpful!
Thank you. That works for me.
Hello, how can I use that version in nvcr 23.07? Any details?
Is this generally useful for different nvcr versions?
I got a similar error. I think this error is caused by the CUDA version. I added: to the end of ~/.bashrc
Doing this:
and also forcing:
Fixed the issue for me. UPDATE: it didn't; more problems down the line with missing torch operations.
Hi there,
cloning the repo and running
pytest tests/test_flash_attn.py
gives
maybe somebody else has encountered this
Thanks a lot,
Fra