
Conversation

@PerkzZheng
Contributor

@PerkzZheng PerkzZheng commented Nov 24, 2025

📌 Description

This PR updates the trtllm-gen cubins, which fix several bugs in the headDim 256 FMHA kernels.

🔍 Related Issues

#1993

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

Summary by CodeRabbit

  • Chores
    • Updated artifact references and checksums for TRT-LLM FMHA components.
  • Tests
    • Parameterized attention tests to run with head dimensions 128 and 256; removed the expected failure for the head_dim=256 decode path so it now runs normally.
    • Modified a communication test to skip when requested world size exceeds available GPUs instead of erroring.


@coderabbitai
Contributor

coderabbitai bot commented Nov 24, 2025

Walkthrough

Updated the TRTLLM_GEN_FMHA artifact path and checksum in flashinfer/artifacts.py, parameterized attention tests in tests/attention/test_trtllm_gen_attention.py to run with head_dim values 128 and 256, and changed one comm test to skip when requested world size exceeds available GPUs.

Changes

  • Artifact constants update — flashinfer/artifacts.py: Updated ArtifactPath.TRTLLM_GEN_FMHA from 1e49deb33ec20018ae0acf1d956a579578069da1/fmha/trtllm-gen/ to 9f1b6ddaa1592a8339a82fcab7d27a57eff445fd/fmha/trtllm-gen/, and updated CheckSumHash.TRTLLM_GEN_FMHA from 66757498f573430583d63b04c02bf9e38306eefe2ce31df9b5d923d99bd15d84 to a5a60600a80076317703695f56bbef2f0a44075ef4e24d7b06ba67ff68bc9da2.
  • Head-dim parameterization (tests) — tests/attention/test_trtllm_gen_attention.py: Added @pytest.mark.parametrize("head_dim", [128, 256]) to several tests, threaded head_dim through _test_trtllm_batch_prefill and the prefill/decode flows, removed the hardcoded head_dim, and removed the explicit xfail for head_dim=256.
  • Test skip behavior — tests/comm/test_trtllm_mnnvl_allreduce_custom_comm.py: When the requested world size exceeds the available GPUs, call pytest.skip(...) with an explanatory message instead of raising ValueError (see the sketch below).
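A minimal sketch of that skip pattern, for reference (the helper name and wiring here are illustrative, not the actual test code):

```python
import pytest
import torch


def _maybe_skip_world_size(requested_world_size: int) -> None:
    # Skip, rather than raise ValueError, when the host has fewer GPUs than requested.
    available = torch.cuda.device_count()
    if requested_world_size > available:
        pytest.skip(
            f"requested world size {requested_world_size} exceeds available GPUs ({available})"
        )
```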

Sequence Diagram(s)

sequenceDiagram
    participant TR as Test Runner
    participant T as test_trtllm_gen_attention
    participant P as Prefill/Decode Flow
    participant C as Comm Test

    TR->>T: run param set head_dim=128,256
    alt each head_dim
        T->>P: call prefill/decode with head_dim
        P-->>T: return result (pass/fail)
    end

    TR->>C: request world_size > available
    alt requested > available
        C-->>TR: pytest.skip("requested world size ...")
    else
        C-->>TR: raise ValueError (old behavior)
    end

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Verify artifact path and checksum values in flashinfer/artifacts.py match release metadata.
  • Run the parameterized attention tests for both head_dim values and inspect prefill/decode behavior (especially previously-xfailed path).
  • Confirm the comm test now skips correctly and messaging is clear.

Possibly related PRs

Suggested reviewers

  • aleozlx
  • cyx-6
  • wenscarl
  • nvmbreughe

Poem

🐇 I hopped across the diff at play,
Checksums changed and head dims sway,
Tests now run in neat sets of two,
Too-large worlds politely skip — who knew?
A rabbit claps for builds anew.

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, which is insufficient; the required threshold is 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title check ✅ Passed — The title clearly identifies the main change: updating the trtllm-gen FMHA kernels to fix bugs for headDim 256. It is specific and directly related to the core modifications in artifacts.py and the test updates.
  • Description check ✅ Passed — The description follows the template with all required sections completed: Description explains the kernel updates and bug fixes, Related Issues links to #1993, Pre-commit Checks are marked complete, and the Tests section confirms updated and passing tests.

📜 Recent review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 499e543 and a013310.

📒 Files selected for processing (1)
  • flashinfer/artifacts.py (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • flashinfer/artifacts.py
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Deploy Docs


@gemini-code-assist
Contributor

Summary of Changes

Hello @PerkzZheng, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request focuses on improving the stability and correctness of the trtllm-gen Fused Multi-Head Attention (FMHA) kernels, particularly for configurations involving a headDim of 256. It achieves this by integrating updated kernel binaries that contain crucial bug fixes and by expanding the existing test coverage to thoroughly validate these specific headDim settings.

Highlights

  • Kernel Updates: New trtllm-gen Fused Multi-Head Attention (FMHA) kernels have been integrated by updating their artifact paths and checksums.
  • Bug Fixes: These updated kernels specifically address and fix several bugs related to headDim 256 configurations within the trtllm-gen FMHA operations.
  • Expanded Test Coverage: The test suite for trtllm-gen attention mechanisms has been enhanced to include head_dim=256 in batch prefill and decode tests, ensuring the stability and correctness of the bug fixes.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review — /gemini review: Performs a code review for the current pull request in its current state.
  • Pull Request Summary — /gemini summary: Provides a summary of the current pull request in its current state.
  • Comment — @gemini-code-assist: Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help — /gemini help: Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request updates the trtllm-gen cubins to fix bugs for head_dim=256 FMHA kernels. The changes correctly update artifact paths and checksums, and expand the test suite to cover head_dim=256. The changes are correct and align with the PR's goal.

For future improvement, consider removing the test_trtllm_batch_decode_head_dim_256 test. Since test_trtllm_batch_decode is updated in this PR to also cover head_dim=256, the specialized test seems redundant. Removing it and its pytest.xfail marker would improve test suite clarity.

@PerkzZheng
Contributor Author

let me rebase this.

@PerkzZheng PerkzZheng force-pushed the user/perkzz/trtllm-gen-fmha-headim-256 branch from 54eb341 to b8e6c83 on November 24, 2025 06:19
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
tests/attention/test_trtllm_gen_attention.py (1)

402-452: Head-dim parameterization for prefill tests is correctly plumbed end-to-end

Passing head_dim into _test_trtllm_batch_prefill and driving it via pytest.mark.parametrize("head_dim", [128, 256]) in both test_trtllm_batch_prefill and test_trtllm_batch_prefill_bs1 cleanly removes hardcoded dimensions while ensuring q/kv shapes and sm_scale stay consistent with the chosen head size. This doubles the prefill test matrix, so just keep an eye on CI runtime, but structurally the change looks solid.

Also applies to: 642-675, 696-729
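For reference, the pattern being described boils down to something like the following (a simplified stand-in for the real helper and tests, which take many more parameters):

```python
import math

import pytest


def _test_trtllm_batch_prefill(head_dim: int) -> None:
    # Stand-in for the shared helper: derive sm_scale from the parametrized head_dim
    # instead of hardcoding 128, so head_dim 128 and 256 exercise the same code path.
    sm_scale = 1.0 / math.sqrt(head_dim)
    assert sm_scale > 0


@pytest.mark.parametrize("head_dim", [128, 256])
def test_trtllm_batch_prefill(head_dim):
    _test_trtllm_batch_prefill(head_dim=head_dim)
```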

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 54eb341 and b8e6c83.

📒 Files selected for processing (2)
  • flashinfer/artifacts.py (2 hunks)
  • tests/attention/test_trtllm_gen_attention.py (7 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: Deploy Docs
🔇 Additional comments (1)
flashinfer/artifacts.py (1)

90-124: TRTLLM_GEN_FMHA artifact path/hash update is wired correctly into checksum map

ArtifactPath.TRTLLM_GEN_FMHA and CheckSumHash.TRTLLM_GEN_FMHA are updated in sync, and CheckSumHash.map_checksums keys/values are derived from these constants, so downstream download/verification will automatically target the new FMHA cubins without further code changes.
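Structurally, the change amounts to updating two constants that feed the checksum map; a simplified sketch follows (the values are the ones quoted in this PR, but the class layout is an approximation of flashinfer/artifacts.py, not a copy of it):

```python
class ArtifactPath:
    # New cubin directory quoted in this PR; other artifact entries omitted.
    TRTLLM_GEN_FMHA = "9f1b6ddaa1592a8339a82fcab7d27a57eff445fd/fmha/trtllm-gen/"


class CheckSumHash:
    # New checksum quoted in this PR for the FMHA cubins.
    TRTLLM_GEN_FMHA = "a5a60600a80076317703695f56bbef2f0a44075ef4e24d7b06ba67ff68bc9da2"

    # Keyed off the path constant, so updating the two values above is enough for
    # download/verification to target the new artifacts without further changes.
    map_checksums = {ArtifactPath.TRTLLM_GEN_FMHA: TRTLLM_GEN_FMHA}
```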

@PerkzZheng
Contributor Author

/bot run

@flashinfer-bot
Collaborator

@PerkzZheng is not authorized to trigger this CI job. cc: @yzh119, @sricketts, @yongwww

@yzh119
Collaborator

yzh119 commented Nov 24, 2025

/bot run

@flashinfer-bot
Collaborator

GitLab MR !162 has been created, and the CI pipeline #39062449 is currently running. I'll report back once the pipeline job completes.

@flashinfer-bot
Collaborator

[FAILED] Pipeline #39062449: 5/18 passed

@PerkzZheng
Contributor Author

[FAILED] Pipeline #39062449: 5/18 passed

@yzh119 it seems the cubins/headers were not accessible in the last run. Can you help re-run the CI, or I might have pasted the wrong hash. Thanks!

 AssertionError: Failed to get checksums.txt from 9f1b6ddaa1592a8339a82fcab7d27a57eff445fd/fmha/trtllm-gen//checksums.txt
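One quick local sanity check before re-triggering CI is to confirm the file is reachable at the new path, roughly like this (the base URL below is a placeholder, not the actual artifact registry):

```python
import urllib.request

ARTIFACT_BASE = "https://example.com/artifacts"  # placeholder; substitute the real registry URL
PATH = "9f1b6ddaa1592a8339a82fcab7d27a57eff445fd/fmha/trtllm-gen/"

# The failing assertion points at checksums.txt, so first confirm it can be fetched at all.
url = f"{ARTIFACT_BASE}/{PATH}checksums.txt"
with urllib.request.urlopen(url) as resp:
    print(resp.status, len(resp.read()), "bytes")
```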

@yzh119
Collaborator

yzh119 commented Nov 25, 2025

/bot run

@flashinfer-bot
Collaborator

GitLab MR !162 has been updated with latest changes, and the CI pipeline #39124966 is currently running. I'll report back once the pipeline job completes.
