`plugins/fused-ops-and-kernels/README.md` (+14 −1)
### Running Liger Kernel Benchmarks
Using [scenarios-liger.yaml](../../scripts/benchmarks/scenarios-liger.yaml), this will run full fine tuning, LoRA PEFT, AutoGPTQ LoRA PEFT, and bits-and-bytes LoRA PEFT with the Triton kernels (Fast RMS, RoPE, CrossEnt) as a baseline, and then run again with the Liger kernel's LigerFusedLinearCrossEntropy as well as Fast RMS and RoPE, to compare results. It only runs against Mistral and Llama models.
The benchmarks were run separately for each `num_gpu` entry; they can also be run together in a single command, but running them separately is more efficient.
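Running one invocation per `num_gpu` entry can be sketched as a simple loop. This is a minimal, hypothetical sketch: the `benchmark.py` entry point and the `--scenarios-config-path` / `--num-gpus` flag names are assumptions for illustration, not the repository's documented CLI; it prints the commands it would run rather than executing them.

```shell
#!/bin/sh
# Hypothetical dry-run sketch: emit one benchmark command per
# `num_gpu` entry against the same Liger scenarios file.
# (Script path and flag names are assumptions, not the actual CLI.)
for NUM_GPU in 1 2; do
  echo "python scripts/benchmarks/benchmark.py" \
       "--scenarios-config-path scripts/benchmarks/scenarios-liger.yaml" \
       "--num-gpus ${NUM_GPU}"
done
```

Each printed line corresponds to one separate benchmark run, which keeps individual runs smaller than a single combined invocation across all `num_gpu` values.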