Pinned
-
flash-attention-v2-RDNA3-minimal (Public, forked from Repeerc/flash-attention-v2-RDNA3-minimal)
A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, rocWMMA), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA environments.
Python
-
composable_kernel (Public, forked from FeepingCreature/composable_kernel)
Composable Kernel: a performance-portable programming model for machine learning tensor operators.
C++
-
flash-attention (Public, forked from leeliu103/flash-attention)
Fast and memory-efficient exact attention.
Python
-
pytorch (Public, forked from ROCm/pytorch)
Tensors and dynamic neural networks in Python with strong GPU acceleration.
Python
-

