Thank you for your interest in contributing to Liger-Kernel! This guide will help you set up your development environment, add a new kernel, run tests, and submit a pull request (PR).
Maintainers: @ByronHsu (admin), @qingquansong, @yundai424, @kvignesh1420, @lancerts, @JasonZhu1313, @shimizust

Interested in an issue? Leave `#take` in a comment and tag a maintainer.
To set up your development environment:

- Clone the Repository

  ```bash
  git clone https://github.com/linkedin/Liger-Kernel.git
  cd Liger-Kernel
  ```
- Install Dependencies and Editable Package

  ```bash
  pip install -e .[dev]
  ```

  If you encounter the error `no matches found: .[dev]`, please use:

  ```bash
  pip install -e .'[dev]'
  ```
To get familiar with the folder structure, please refer to https://github.com/linkedin/Liger-Kernel?tab=readme-ov-file#structure.
To add a new kernel:

- Create Your Kernel: Add your kernel implementation in `src/liger_kernel/`. (A minimal sketch of what a new kernel can look like follows this list.)
- Add Unit Tests: Create unit tests and convergence tests for your kernel in the tests directory. Ensure that your tests cover all kernel functionalities. (An example correctness test appears further below, after the test commands.)
- Add Benchmark Script: Add a benchmarking script under `benchmark/scripts` using the naming convention `benchmark_{kernel_name}.py`, showing the performance difference between the Liger kernel and HuggingFace. (An illustrative timing sketch appears at the end of the benchmarking notes below.)
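For orientation, here is a minimal, self-contained sketch of the shape a new kernel usually takes: a Triton forward/backward kernel pair wrapped in a `torch.autograd.Function` so it plugs into autograd (real kernels in `src/liger_kernel/` typically also expose an `nn.Module` or model-patching API on top). The op (SiLU), file name, and class name below are illustrative placeholders, not existing Liger-Kernel code.

```python
# illustrative_silu.py -- a hypothetical example; the op (SiLU), file name, and
# class name are placeholders and do not exist in Liger-Kernel.
import torch
import triton
import triton.language as tl

BLOCK_SIZE = 1024


@triton.jit
def _silu_forward_kernel(x_ptr, y_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask).to(tl.float32)
    y = x * tl.sigmoid(x)                    # SiLU: x * sigmoid(x)
    tl.store(y_ptr + offsets, y, mask=mask)  # implicit cast back to the output dtype


@triton.jit
def _silu_backward_kernel(x_ptr, dy_ptr, dx_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask).to(tl.float32)
    dy = tl.load(dy_ptr + offsets, mask=mask).to(tl.float32)
    sig = tl.sigmoid(x)
    dx = dy * sig * (1.0 + x * (1.0 - sig))  # d/dx [x * sigmoid(x)]
    tl.store(dx_ptr + offsets, dx, mask=mask)


class LigerStyleSiLUFunction(torch.autograd.Function):
    """Wraps the Triton kernels so they participate in autograd."""

    @staticmethod
    def forward(ctx, x):
        x = x.contiguous()
        y = torch.empty_like(x)
        n = x.numel()
        grid = (triton.cdiv(n, BLOCK_SIZE),)
        _silu_forward_kernel[grid](x, y, n, BLOCK_SIZE=BLOCK_SIZE)
        ctx.save_for_backward(x)
        return y

    @staticmethod
    def backward(ctx, dy):
        (x,) = ctx.saved_tensors
        dx = torch.empty_like(x)
        n = x.numel()
        grid = (triton.cdiv(n, BLOCK_SIZE),)
        _silu_backward_kernel[grid](x, dy.contiguous(), dx, n, BLOCK_SIZE=BLOCK_SIZE)
        return dx
```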
- Run `make test` to ensure correctness.
- Run `make checkstyle` to ensure code style.
- Run `make test-convergence` to ensure convergence.
To run a specific test file or test function:

```bash
python -m pytest test_sample.py::test_function_name
```
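For reference, the correctness tests run by `make test` typically execute a kernel's forward and backward passes and compare the results against a reference PyTorch or HuggingFace implementation across shapes and dtypes, while `make test-convergence` exercises end-to-end training. Below is a minimal sketch of such a correctness test; it assumes the illustrative SiLU kernel from earlier, and the file name, shapes, and tolerances are placeholders rather than the repo's actual tests.

```python
# test_illustrative_silu.py -- a hypothetical test file; the import below refers
# to the illustrative sketch above, not to real Liger-Kernel code.
import pytest
import torch

from illustrative_silu import LigerStyleSiLUFunction


@pytest.mark.parametrize("shape", [(4, 128), (3, 7, 256)])
@pytest.mark.parametrize(
    "dtype, atol, rtol",
    [(torch.float32, 1e-5, 1e-5), (torch.bfloat16, 1e-2, 1e-2)],
)
def test_silu_matches_reference(shape, dtype, atol, rtol):
    if not torch.cuda.is_available():
        pytest.skip("Triton kernels require a GPU")

    x = torch.randn(*shape, dtype=dtype, device="cuda", requires_grad=True)
    x_ref = x.detach().clone().requires_grad_(True)

    # Forward pass should match the PyTorch reference implementation.
    y = LigerStyleSiLUFunction.apply(x)
    y_ref = torch.nn.functional.silu(x_ref)
    torch.testing.assert_close(y, y_ref, atol=atol, rtol=rtol)

    # Backward pass (gradients w.r.t. the input) should match as well.
    grad_output = torch.randn_like(y)
    y.backward(grad_output)
    y_ref.backward(grad_output)
    torch.testing.assert_close(x.grad, x_ref.grad, atol=atol, rtol=rtol)
```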
The `/benchmark` directory contains benchmarking scripts for the individual kernels, demonstrating differences in speed and memory usage between using Liger and HuggingFace module implementations.
- Run `make run-benchmarks` to run all benchmarking scripts and append data to `benchmark/data/all_benchmark_data.csv`.
  - Existing entries that are the same (based on `kernel_name`, `kernel_provider`, `kernel_operation_mode`, `metric_name`, `x_name`, `x_value`, `extra_benchmark_config_str`, and `gpu_name`) will not be overwritten.
- Run `make run-benchmarks OVERWRITE=1` to overwrite any existing entries that have the same configuration.
- Run `python benchmark/scripts/benchmark_{kernel_name}.py` to run an individual benchmark.
- You can use the `benchmark/benchmarks_visualizer.py` script to generate visualizations from the CSV; these are then saved to the `benchmark/visualizations` directory (note: this directory is not tracked by git).
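The real benchmark scripts build on the shared utilities under `benchmark/` and append their results to the CSV described above. As a rough illustration of the kind of comparison they make, speed and peak memory of a Liger-style kernel versus a PyTorch/HuggingFace baseline, here is a stand-alone sketch that reuses the illustrative SiLU kernel from earlier; none of the names are real Liger-Kernel code.

```python
# benchmark_illustrative_silu.py -- a rough, stand-alone comparison; the real
# scripts use the shared utilities in benchmark/ and write to the CSV above.
import torch
import triton

from illustrative_silu import LigerStyleSiLUFunction


def bench_ms(fn, *args):
    # triton.testing.do_bench runs the callable repeatedly (with warmup)
    # and returns a runtime estimate in milliseconds.
    return triton.testing.do_bench(lambda: fn(*args))


if __name__ == "__main__":
    x = torch.randn(8, 4096, 4096, device="cuda", dtype=torch.bfloat16)

    liger_ms = bench_ms(LigerStyleSiLUFunction.apply, x)
    torch_ms = bench_ms(torch.nn.functional.silu, x)
    print(f"liger-style kernel: {liger_ms:.3f} ms")
    print(f"torch baseline:     {torch_ms:.3f} ms")

    # A crude peak-memory measurement for a single forward pass.
    torch.cuda.reset_peak_memory_stats()
    LigerStyleSiLUFunction.apply(x)
    print(f"peak memory: {torch.cuda.max_memory_allocated() / 2**20:.1f} MiB")
```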
Fork the repo, copy and paste the successful test logs into the PR description, and submit the PR following the PR template (example PR).
As a contributor, you represent that the code you submit is your original work or that of your employer (in which case you represent you have the right to bind your employer). By submitting code, you (and, if applicable, your employer) are licensing the submitted code to LinkedIn and the open source community subject to the BSD 2-Clause license.
- Bump the version in pyproject.toml to the desired version (for example, `0.2.0`).
- Submit a PR and merge it.
- Create a new release based on the current HEAD, with the tag name `v<version number>`, for example `v0.2.0`. Alternatively, if you want to create a release based on a different commit hash, run `git tag v0.2.0 <commit hash> && git push origin v0.2.0` and create the release based on this tag.
- Add release notes: the minimum requirement is to click the `Generate Release Notes` button, which automatically generates 1) the changes included and 2) new contributors. It's good to add sections on top to highlight the important changes.
- A new pip upload will be triggered upon a new release. NOTE: both pre-releases and official releases will trigger the workflow to build the wheel and publish to PyPI, so please be sure that steps 1-3 are followed correctly!
Here we follow semantic versioning. Denoting the version as `major.minor.patch`, we increment:
- Major version when there is a backward-incompatible change
- Minor version when there is new backward-compatible functionality
- Patch version for bug fixes

For example, a bug-fix release after `0.2.0` would be `0.2.1`, while a release that adds new backward-compatible functionality would be `0.3.0`.