
Tags: Dao-AILab/flash-attention

v2.7.4

Bump to v2.7.4

v2.7.4.post1

Drop Pytorch 2.1

v2.7.3

Change version to 2.7.3 (#1437)

Signed-off-by: Kirthi Shankar Sivamani <ksivamani@nvidia.com>

v2.7.2

Bump to v2.7.2

v2.7.2.post1

[CI] Use MAX_JOBS=1 with nvcc 12.3, don't need OLD_GENERATOR_PATH

v2.7.1.post4

[CI] Don't include <ATen/cuda/CUDAGraphsUtils.cuh>

v2.7.1.post3

[CI] Change torch #include to make it work with torch 2.1 Philox

v2.7.1

Bump to v2.7.1

v2.7.1.post2

[CI] Use torch 2.6.0.dev20241001, reduce torch #include

v2.7.1.post1

[CI] Fix CUDA version for torch 2.6