CUTLASS Fused multi-head attention #1112
❓ Questions and Help

Hello, I am looking at the fused multi-head attention example in 3rdparty/cutlass. In cutlass/examples, the fused multi-head attention kernel was upstreamed to xformers, and CUTLASS describes this example as being the same as FlashAttention-2. Is it true that the CUTLASS fused multi-head attention kernel and the FlashAttention-2 kernel are the same thing?
Thank you.

Comments

I believe those are not the same thing. Where did you see that?

Thank you for the reply. So my understanding now is that xformers uses a custom CUTLASS kernel and tunes that kernel's settings itself toward the best ("oracle") configuration.
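To make the distinction concrete, here is a minimal sketch (not from the thread) showing that xformers exposes the CUTLASS-based kernel and the FlashAttention kernel as two separate, explicitly selectable backends. It assumes a recent xformers build on a CUDA machine with the flash-attn package installed; the tensor shapes are illustrative.

```python
import torch
import xformers.ops as xops
from xformers.ops import fmha

# Illustrative shapes: [batch, seq_len, num_heads, head_dim], fp16 on GPU.
q = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)
k = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)
v = torch.randn(1, 1024, 8, 64, device="cuda", dtype=torch.float16)

# Backend 1: the CUTLASS-based kernel, i.e. the one upstreamed from the
# cutlass/examples fused multi-head attention example.
out_cutlass = xops.memory_efficient_attention(
    q, k, v, op=(fmha.cutlass.FwOp, fmha.cutlass.BwOp)
)

# Backend 2: the FlashAttention kernel, a distinct implementation that
# xformers dispatches to when the flash-attn package is available.
out_flash = xops.memory_efficient_attention(
    q, k, v, op=(fmha.flash.FwOp, fmha.flash.BwOp)
)

# Both backends compute the same attention, so the outputs should agree
# up to floating-point tolerance, even though the kernels differ.
print(torch.allclose(out_cutlass, out_flash, atol=1e-2, rtol=1e-2))
```

That the two backends are selected by different `op` classes is consistent with the reply above: they are related but not the same kernel.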