
Conversation

@Ther-LF (Contributor) commented Apr 17, 2025

Fixed a bug where cutlass dispatch for fp8 and int8 couldn't invoke the M<=16 config.

Changes

  • Fixed the condition check in the fp8/int8 dispatch so that M<=16 cases are handled properly
  • Ensured the correct kernel config is selected for small matrix sizes (a minimal sketch of the dispatch pattern follows)
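
To make the change concrete, here is a minimal sketch of the bug class being fixed, assuming a dispatch scheme that rounds M up to a power of two before selecting a tile config. All names (next_pow_2, W8A8Config, select_config) are illustrative, not the actual vLLM symbols.

```cpp
// Minimal sketch (illustrative names, not the actual vLLM code).
// If the power-of-two rounding of M is clamped to a lower bound of 32,
// the M <= 16 branch is dead code; lowering the clamp to 16 makes the
// small-batch config reachable.
#include <algorithm>
#include <cstdint>
#include <cstdio>

static uint32_t next_pow_2(uint32_t n) {
  // Smallest power of two >= n (assumes n >= 1).
  uint32_t p = 1;
  while (p < n) p <<= 1;
  return p;
}

enum class W8A8Config { M16, M32, M64, Default };

W8A8Config select_config(uint32_t m) {
  // Buggy version: std::max(next_pow_2(m), uint32_t{32}) can never be
  // <= 16, so the M16 branch below was unreachable.
  uint32_t const mp2 = std::max(next_pow_2(m), uint32_t{16});  // fixed clamp
  if (mp2 <= 16) return W8A8Config::M16;  // now reachable for m <= 16
  if (mp2 <= 32) return W8A8Config::M32;
  if (mp2 <= 64) return W8A8Config::M64;
  return W8A8Config::Default;
}

int main() {
  // With the fixed clamp, a decode-sized batch (m = 8) picks the tuned
  // M16 config instead of falling through to the M32 config.
  std::printf("m=8 -> %s\n",
              select_config(8) == W8A8Config::M16 ? "M16" : "other");
}
```

The payoff is exactly the small-M (decode-time) case benchmarked below: once M<=16 can actually reach its tuned tile configuration, the GEMM no longer pays for a config sized for larger batches.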

@github-actions (bot) commented

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small, essential subset of tests to catch errors quickly. You can run the other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@Ther-LF (Contributor, Author) commented Apr 18, 2025

I tested the performance of meta-llama/Llama-2-7b-hf (TP=1) with a token length of 16, comparing FP8 and INT8 precision in the Cutlass W8A8 GEMM.

Original performance:

[benchmark screenshots: FP8 and INT8 latency before the fix]

After the fix:

[benchmark screenshots: FP8 and INT8 latency after the fix]

The results show a consistent 10-20% speedup.

@Ther-LF (Contributor, Author) commented Apr 21, 2025

Hi @mgoin,

Would you mind taking a look at my PR and merging it if possible? You previously reviewed the related PR "[Kernel] Tuned int8 kernels for Ada Lovelace".

Thanks for your time!

@mgoin (Member) left a comment

Seems reasonable to me, thanks for the results. cc @varun-sundar-rabindranath

@mgoin added the ready label (ONLY add when PR is ready to merge/full CI is needed) on Apr 27, 2025
@vllm-bot merged commit c12df53 into vllm-project:main on Apr 28, 2025 (84 of 87 checks passed)
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request on Apr 29, 2025
lk-chen pushed a commit to lk-chen/vllm that referenced this pull request on Apr 29, 2025
adobrzyn pushed a commit to HabanaAI/vllm-fork that referenced this pull request on Apr 30, 2025
RichardoMrMu pushed a commit to RichardoMrMu/vllm that referenced this pull request on May 12, 2025
zzzyq pushed a commit to zzzyq/vllm that referenced this pull request on May 24, 2025