The current workaround is to install xformers from source, which is slow and painful. It would be great if the binaries could be updated to include the H100 arch in TORCH_CUDA_ARCH_LIST.
Expected behavior
Binaries include SM90 arch as well.
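Until the published binaries include SM90, a source build needs `9.0` in the CUDA arch list. A minimal sketch of preparing `TORCH_CUDA_ARCH_LIST` before invoking the build; the helper `with_sm90` is illustrative, not part of xformers, and the default arch list is an assumption:

```python
import os

def with_sm90(arch_list: str) -> str:
    """Return the TORCH_CUDA_ARCH_LIST value with 9.0 (SM90 / H100) appended."""
    archs = [a for a in arch_list.split(";") if a]
    if "9.0" not in archs:
        archs.append("9.0")
    return ";".join(archs)

# Typical usage before running the source build (e.g. `pip install -v .`):
os.environ["TORCH_CUDA_ARCH_LIST"] = with_sm90(
    os.environ.get("TORCH_CUDA_ARCH_LIST", "8.0;8.6")  # default archs are an assumption
)
```

The semicolon-separated format matches what `pkg_helpers.bash` and PyTorch's build tooling expect for `TORCH_CUDA_ARCH_LIST`.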
Hi,
Yes, we hope to support that in the next release. Two things are blocking us so far:
we will need to support CUDA 12 for best performance on SM90; this has implications for the build process, because CUDA 12 binaries may not be compatible with CUDA 11.x PyTorch
SM90 works fine on CUDA 11.8
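The version cutoff mentioned here can be sketched as a small check. The helper name is hypothetical; the cutoff reflects the discussion above (nvcc gained `sm_90` compilation support in CUDA 11.8, while CUDA 12 is needed for the best SM90 performance):

```python
def cuda_supports_sm90(cuda_version: str) -> bool:
    """True if this CUDA toolkit version can compile sm_90 kernels.

    Assumes a "major.minor" version string such as "11.8" or "12.1";
    sm_90 support landed in CUDA 11.8.
    """
    major, minor = (int(x) for x in cuda_version.split(".")[:2])
    return (major, minor) >= (11, 8)
```

A build script could use this to decide whether to add `9.0` to the arch list or fail early with a clear message.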
bertmaher pushed a commit to bertmaher/xformers that referenced this issue on Dec 20, 2024:
* Re-enable block-sparse tests on CI
* Bump tolerance for newer triton
* Fix test
This test had been broken for a while, but since it wasn't active the breakage went unnoticed
🐛 Bug
Command
To Reproduce
Steps to reproduce the behavior:
xformers/packaging/pkg_helpers.bash, line 24 (at b31f4a1)