Commit a2ae88c
[Example] Update GEMM FP8 Example (#123)
* Add DeepSeek MLA decode example with Flash Attention implementation
* Add GEMM SplitK and StreamK example implementations
This commit introduces two new example scripts demonstrating advanced GEMM (matrix multiplication) techniques:
- `example_tilelang_gemm_splitk.py`: Implements a Split-K GEMM kernel using TileLang
- `example_tilelang_gemm_streamk.py`: Implements a Stream-K GEMM kernel using TileLang
Both examples showcase different parallel computation strategies for matrix multiplication, with comprehensive testing using PyTorch reference implementations.
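For readers unfamiliar with the Split-K strategy, the following is a minimal PyTorch sketch of the accumulation scheme the kernel parallelizes. It is an illustration only, not the TileLang kernel from `example_tilelang_gemm_splitk.py`; the function name, shapes, and `split_k` value are made up for this note.

```python
import torch

def splitk_gemm_reference(A, B, split_k=4):
    """Split-K sketch: partition the K dimension into `split_k` chunks,
    compute one partial product per chunk, and accumulate the partials.
    A real Split-K kernel assigns each chunk to a different thread block
    and accumulates with atomic adds; this reference is sequential."""
    M, K = A.shape
    K2, N = B.shape
    assert K == K2 and K % split_k == 0
    chunk = K // split_k
    C = torch.zeros(M, N, dtype=torch.float32, device=A.device)
    for s in range(split_k):
        a = A[:, s * chunk:(s + 1) * chunk].float()
        b = B[s * chunk:(s + 1) * chunk, :].float()
        C += a @ b  # accumulation step; the kernel does this with atomic adds
    return C

# Compared against a plain matmul, mirroring the PyTorch reference checks
# used by the example scripts.
A = torch.randn(256, 1024, dtype=torch.float16)
B = torch.randn(1024, 256, dtype=torch.float16)
torch.testing.assert_close(
    splitk_gemm_reference(A, B), A.float() @ B.float(), rtol=1e-3, atol=1e-3)
```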
* Refactor GEMM SplitK and StreamK example implementations
Clean up and improve code formatting for the SplitK and StreamK GEMM example scripts:
- Remove unused import (Profiler) in splitk example
- Simplify line breaks and improve code readability
- Standardize indentation and remove unnecessary whitespace
- Optimize atomic add and copy operations for better clarity
* Add block sparse attention benchmarks for multiple libraries
This commit introduces comprehensive block sparse attention benchmarks for different libraries:
- TileLang block sparse FMHA implementation
- Triton block sparse FMHA implementation
- PyTorch reference block sparse FMHA implementation
- FlashAttention dense FMHA reference implementation
The benchmarks include:
- Configurable benchmark parameters (batch size, heads, sequence length, etc.)
- Sparse mask generation using top-k and threshold methods
- Performance measurement for different sparse attention configurations
- Utility functions for mask generation and benchmarking
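As an illustration of the top-k mask-generation step, here is a hedged PyTorch sketch. The function name, the mean-pooling choice, and the shapes are illustrative assumptions, not the benchmark's exact utilities.

```python
import torch

def topk_block_mask(q, k, block_size=64, topk=8):
    """Illustrative top-k block mask: pool per-block attention scores and
    keep the `topk` highest-scoring key blocks for each query block.
    q, k: [batch, heads, seq_len, dim]; returns a boolean mask of shape
    [batch, heads, num_blocks, num_blocks]."""
    b, h, s, d = q.shape
    nb = s // block_size
    # Mean-pool queries and keys within each block, then score block pairs.
    qb = q.view(b, h, nb, block_size, d).mean(dim=3)
    kb = k.view(b, h, nb, block_size, d).mean(dim=3)
    block_scores = qb @ kb.transpose(-1, -2) / d ** 0.5  # [b, h, nb, nb]
    idx = block_scores.topk(topk, dim=-1).indices
    mask = torch.zeros_like(block_scores, dtype=torch.bool)
    mask.scatter_(-1, idx, True)
    return mask

mask = topk_block_mask(torch.randn(1, 8, 1024, 64), torch.randn(1, 8, 1024, 64))
print(mask.float().mean())  # fraction of key blocks kept per query block
```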
* Refactor block sparse attention benchmarks with code style improvements
- Add Ruff linter ignore comments to benchmark files
- Improve code formatting and line breaks
- Remove unused imports
- Standardize print statement formatting
- Enhance code readability across multiple library benchmarks
* lint fix
* Add CUDA atomic operations for BFLOAT16 and update function naming
- Implement AtomicAdd functions for BFLOAT16 and BFLOAT16x2 in CUDA common header
- Rename existing atomic add functions to use PascalCase (atomicAdd -> AtomicAdd)
- Add a new __pack_nv_bfloat162 function for packing BFLOAT16 values
- Update kernel and language customization to use new function names
- Add return type annotations in profiler module
* lint fix
* Add example for Group Query Attention (GQA) forward pass using Flash Attention in TileLang
This commit introduces a new example script `example_gqa_fwd_bshd.py` that demonstrates:
- Group Query Attention (GQA) implementation
- Flash Attention forward pass
- Performance benchmarking
- Configurable parameters for batch, heads, sequence length, and dimension
- Autotuning support
- Reference implementation comparison
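A compact PyTorch reference of the GQA forward pass, roughly what the example compares against. The layout, function name, and head counts here are illustrative; the actual example uses a fused Flash Attention kernel rather than SDPA.

```python
import torch
import torch.nn.functional as F

def gqa_forward_reference(q, k, v, is_causal=True):
    """Group Query Attention reference: each group of query heads shares one
    KV head. Layout is BSHD ([batch, seq, heads, dim]), matching the `bshd`
    naming in the example. KV heads are repeated up to the query head count,
    then standard scaled-dot-product attention is applied."""
    b, s, hq, d = q.shape
    hkv = k.shape[2]
    groups = hq // hkv
    # Repeat each KV head `groups` times so shapes match the query heads.
    k = k.repeat_interleave(groups, dim=2)
    v = v.repeat_interleave(groups, dim=2)
    # SDPA expects [batch, heads, seq, dim].
    q, k, v = (t.transpose(1, 2) for t in (q, k, v))
    out = F.scaled_dot_product_attention(q, k, v, is_causal=is_causal)
    return out.transpose(1, 2)  # back to BSHD

out = gqa_forward_reference(
    torch.randn(1, 128, 32, 128), torch.randn(1, 128, 8, 128),
    torch.randn(1, 128, 8, 128))
print(out.shape)  # torch.Size([1, 128, 32, 128])
```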
* Refactor IR lowering pipeline into modular phases
This commit introduces a new module `phase.py` to modularize the IR lowering process by splitting the complex lowering pipeline into two distinct phases:
- `LowerAndLegalize`: Handles initial IR legalization and transformation
- `OptimizeForTarget`: Applies target-specific optimizations
The changes simplify the lowering logic in multiple files by extracting the transformation steps into reusable functions, improving code readability and maintainability.
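A rough sketch of how the two phases are meant to compose. The import path and exact signatures are assumptions and may differ from the actual `phase.py`.

```python
# Assumed import path; the functions are defined in the new phase.py module.
from tilelang.engine.phase import LowerAndLegalize, OptimizeForTarget

def lower(mod, target):
    # Phase 1: initial IR legalization and transformation.
    mod = LowerAndLegalize(mod, target)
    # Phase 2: target-specific optimizations.
    mod = OptimizeForTarget(mod, target)
    return mod
```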
* lint fix
* NSA kernel
* Enhance Native Sparse Attention Examples with Code Improvements and Parameter Updates
- Updated example_tilelang_nsa.py and example_triton_nsa.py with code formatting and style improvements
- Increased default number of heads and selected blocks in TileLang NSA example
- Added Ruff linter ignore comments to reference.py
- Standardized function signatures and improved code readability across NSA implementations
* Add utility math functions for integer operations
- Implement `next_power_of_2()` to calculate the next power of 2 for an integer
- Add `cdiv()` function for ceiling division of integers
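A minimal sketch of what these helpers typically look like; the actual implementations in the utility module may differ.

```python
def next_power_of_2(n: int) -> int:
    """Smallest power of 2 greater than or equal to n (returns 1 for n <= 1)."""
    n = max(int(n), 1)
    return 1 << (n - 1).bit_length()

def cdiv(a: int, b: int) -> int:
    """Ceiling division: number of size-b tiles needed to cover a elements."""
    return (a + b - 1) // b

assert next_power_of_2(100) == 128
assert cdiv(100, 32) == 4
```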
* Refactor DeepSeek MLA Decode Example with Enhanced Flash Attention Implementation
- Update flash attention kernel to support positional embeddings (PE)
- Modify reference implementation to handle PE and group query attention
- Increase default batch size and adjust benchmarking parameters
- Improve kernel performance and readability
- Add einops and torch operations for more flexible tensor manipulation
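For context, a schematic torch reference of how the PE term enters the MLA decode score computation: the "no-PE" query part attends against the compressed latent cache (which also serves as the values), and the rotary "PE" part attends against the shared positional key cache. Names, shapes, and the scaling choice are illustrative assumptions, not the example's exact reference code.

```python
import torch

def mla_decode_reference(q_nope, q_pe, ckv_cache, k_pe_cache, scale):
    """Schematic MLA decode step (single query token per batch).
    Shapes (illustrative): q_nope [b, h, dl], q_pe [b, h, dr],
    ckv_cache [b, s, dl], k_pe_cache [b, s, dr]."""
    scores = torch.einsum("bhd,bsd->bhs", q_nope, ckv_cache)
    scores = scores + torch.einsum("bhr,bsr->bhs", q_pe, k_pe_cache)
    probs = torch.softmax(scores * scale, dim=-1)
    # The compressed latent cache doubles as the value tensor.
    return torch.einsum("bhs,bsd->bhd", probs, ckv_cache)

b, h, s, dl, dr = 2, 16, 512, 512, 64
out = mla_decode_reference(
    torch.randn(b, h, dl), torch.randn(b, h, dr),
    torch.randn(b, s, dl), torch.randn(b, s, dr), scale=(dl + dr) ** -0.5)
print(out.shape)  # torch.Size([2, 16, 512])
```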
* Update README.md with corrected Flash MLA Decoding example path
- Modify the example link for Flash MLA Decoding to point to the correct directory
- Ensure accurate navigation to the DeepSeek MLA decoding example

1 parent: 375423c
File tree
4 files changed, +324 −354 lines
- examples
  - deepseek_mla
  - flash_decoding
  - gemm_fp8