Need to swap layer norm op for Triton-based layer norm? #57
It looks like on this line, you check whether the custom layer norm op is installed; if so, this param is set to true. Following the call stack, that sets the corresponding param in the Flash-Attention package. That implementation (here) has since moved to a Triton implementation. However, later in the original hyena-DNA code, we are using the non-Triton function. Does that need to be swapped out? Relevant PR: Dao-AILab/flash-attention@abbc131
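For reference, a minimal sketch of the availability-check pattern described above, assuming the older flash-attention layout where the fused CUDA op is exposed as dropout_add_layer_norm under flash_attn.ops.layer_norm; the flag name is illustrative rather than copied from the hyena-DNA source:

```python
# Hedged sketch: exact import path and flag name in hyena-DNA may differ.
try:
    # Older flash-attention exposed the fused CUDA layer norm op here; it is
    # only importable if the csrc/layer_norm extension was pip-installed.
    from flash_attn.ops.layer_norm import dropout_add_layer_norm
except ImportError:
    dropout_add_layer_norm = None

# The flag that, per the call stack above, ends up enabling the fused
# dropout-add-layer-norm path inside the Flash-Attention block.
fused_dropout_add_ln = dropout_add_layer_norm is not None
```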
In case the answer is yes, I think this should do it: #58
In the Flash-Attention repo (here), there is now a note that the fused CUDA op has been replaced with a Triton op. In light of that, is it now reasonable to remove the suggestion to pip install the layer norm op from the dependencies section of this readme? A rough sketch of what the Triton-based import might look like follows below.
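In case it helps, here is a rough sketch of what switching to the Triton-based op might look like. The module path flash_attn.ops.triton.layer_norm and the function layer_norm_fn come from recent flash-attention releases, and the fallback helper is hypothetical, so please check both against the version pinned by hyena-DNA:

```python
# Hedged sketch: swap the fused CUDA extension for the Triton implementation
# that newer flash-attention ships. The Triton kernel is bundled with the
# package itself, so no separate pip install of csrc/layer_norm is needed.
import torch.nn as nn

try:
    from flash_attn.ops.triton.layer_norm import layer_norm_fn
except ImportError:
    layer_norm_fn = None


def apply_layer_norm(x, norm: nn.LayerNorm):
    """Use the Triton layer norm when available, else fall back to PyTorch."""
    if layer_norm_fn is not None:
        # Positional (x, weight, bias) plus eps; check the installed
        # flash-attention version for the full signature (residual, dropout, ...).
        return layer_norm_fn(x, norm.weight, norm.bias, eps=norm.eps)
    return norm(x)
```

If something along these lines works, the separate pip install step for the layer norm extension in the readme would indeed become unnecessary, since the Triton kernel ships with the flash-attention package itself.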