fix: some bugs of headDim 256 trtllm-gen fmha kernels. #2137
Conversation
Walkthrough

Updated the TRTLLM_GEN_FMHA artifact path and checksum in `flashinfer/artifacts.py`.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant TR as Test Runner
    participant T as test_trtllm_gen_attention
    participant P as Prefill/Decode Flow
    participant C as Comm Test
    TR->>T: run param set head_dim=128,256
    alt each head_dim
        T->>P: call prefill/decode with head_dim
        P-->>T: return result (pass/fail)
    end
    TR->>C: request world_size > available
    alt requested > available
        C-->>TR: pytest.skip("requested world size ...")
    else
        C-->>TR: raise ValueError (old behavior)
    end
```
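The comm-test branch above reflects a behavior change: an over-subscribed world size now skips the test instead of raising. A minimal sketch of that check follows; the helper name and the CUDA device-count query are assumptions for illustration, not the repository's exact code.

```python
import pytest
import torch


def require_world_size(requested: int) -> None:
    """Skip (new behavior) rather than raise ValueError (old behavior)
    when the requested world size exceeds the available devices."""
    available = torch.cuda.device_count()
    if requested > available:
        pytest.skip(
            f"requested world size {requested} exceeds available devices ({available})"
        )
```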
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Summary of Changes

Hello @PerkzZheng, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request focuses on improving the stability and correctness of the headDim 256 trtllm-gen FMHA kernels.
Code Review
This pull request updates the trtllm-gen cubins to fix bugs in the head_dim=256 FMHA kernels. The changes correctly update the artifact paths and checksums and expand the test suite to cover head_dim=256, in line with the PR's goal.
For future improvement, consider removing the test_trtllm_batch_decode_head_dim_256 test. Since test_trtllm_batch_decode is updated in this PR to also cover head_dim=256, the specialized test seems redundant. Removing it and its pytest.xfail marker would improve test suite clarity.
let me rebase this.
Force-pushed from 54eb341 to b8e6c83.
Actionable comments posted: 0
🧹 Nitpick comments (1)
tests/attention/test_trtllm_gen_attention.py (1)
402-452: Head-dim parameterization for prefill tests is correctly plumbed end-to-end

Passing `head_dim` into `_test_trtllm_batch_prefill` and driving it via `pytest.mark.parametrize("head_dim", [128, 256])` in both `test_trtllm_batch_prefill` and `test_trtllm_batch_prefill_bs1` cleanly removes hardcoded dimensions while ensuring q/kv shapes and `sm_scale` stay consistent with the chosen head size. This doubles the prefill test matrix, so just keep an eye on CI runtime, but structurally the change looks solid.

Also applies to: 642-675, 696-729
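For illustration, here is a minimal sketch of the parameterization pattern described above; the helper body, tensor shapes, and arguments other than `head_dim` are placeholders rather than the actual test code.

```python
import math

import pytest
import torch


def _test_trtllm_batch_prefill(head_dim: int, sm_scale: float) -> None:
    # Placeholder body: only checks that q/kv shapes and the softmax scale
    # stay consistent with the parameterized head size.
    batch, seq, num_heads = 2, 16, 4
    q = torch.randn(batch, seq, num_heads, head_dim)
    kv = torch.randn(batch, seq, 2, num_heads, head_dim)
    assert q.shape[-1] == kv.shape[-1] == head_dim
    assert math.isclose(sm_scale, 1.0 / math.sqrt(head_dim))


@pytest.mark.parametrize("head_dim", [128, 256])
def test_trtllm_batch_prefill(head_dim):
    # head_dim drives the shapes and sm_scale instead of a hardcoded 128.
    _test_trtllm_batch_prefill(head_dim=head_dim, sm_scale=1.0 / math.sqrt(head_dim))
```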
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- flashinfer/artifacts.py (2 hunks)
- tests/attention/test_trtllm_gen_attention.py (7 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Deploy Docs
🔇 Additional comments (1)
flashinfer/artifacts.py (1)
90-124: TRTLLM_GEN_FMHA artifact path/hash update is wired correctly into checksum map
`ArtifactPath.TRTLLM_GEN_FMHA` and `CheckSumHash.TRTLLM_GEN_FMHA` are updated in sync, and `CheckSumHash.map_checksums` keys/values are derived from these constants, so downstream download/verification will automatically target the new FMHA cubins without further code changes.
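A rough sketch of that pattern, assuming the class and attribute names from the comment above; the path and hash strings are placeholders, not the real artifact values.

```python
class ArtifactPath:
    # Placeholder path; the real constant points at the new FMHA cubin directory.
    TRTLLM_GEN_FMHA = "trtllm-gen/fmha/<new-cubin-directory>"


class CheckSumHash:
    # Placeholder hash; the real constant is the checksum of the new cubins.
    TRTLLM_GEN_FMHA = "<sha256-of-new-fmha-cubins>"

    # Keys/values are derived from the constants above, so updating the two
    # constants is enough to retarget download and verification.
    map_checksums = {
        ArtifactPath.TRTLLM_GEN_FMHA: TRTLLM_GEN_FMHA,
    }
```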
/bot run
@PerkzZheng is not authorized to trigger this CI job. cc: @yzh119, @sricketts, @yongwww

/bot run
[FAILED] Pipeline #39062449: 5/18 passed

@yzh119 it seems that the cubins/headers were not accessible in the last run. Can you help re-run the CI, or I might have pasted the wrong hash? Thanks!
/bot run |
📌 Description
This MR updates the trtllm-gen cubins, which fix several bugs in the headDim 256 FMHA kernels.
🔍 Related Issues
#1993
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks
- I have installed pre-commit by running `pip install pre-commit` (or used your preferred method).
- I have installed the hooks with `pre-commit install`.
- I have run the hooks manually with `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests
- All tests are passing (`unittest`, etc.).

Reviewer Notes