fix the dequantize_block in the trtllm_cutlass fuse moe test #1721
Conversation
Summary of Changes

Hello @rainj-me, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request addresses a critical bug in the test's `dequantize_block` helper.
Without this PR, the failure log for `test_trtllm_cutlass_fused_moe.py` when `HIDDEN_SIZES` is changed to 256 is:
Code Review

This pull request correctly fixes a bug in the `dequantize_block` function for activation tensors and improves the test coverage by increasing `HIDDEN_SIZES`. The generalization of the block size by replacing hardcoded values is also a good improvement for maintainability. I've identified a couple of areas for further improvement: one is a consistency issue in how ceiling division is calculated, and the other is a more critical bug in the padding logic for weight tensors within the same function. My review includes suggestions to address these points.
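As an aside, here is a minimal illustration of the usual integer ceiling-division idiom referred to above; this is hypothetical and not the test's actual code:

```python
import math

# (dim + block_size_n - 1) // block_size_n == math.ceil(dim / block_size_n) for positive ints
def num_blocks(dim: int, block_size_n: int) -> int:
    return (dim + block_size_n - 1) // block_size_n

assert num_blocks(256, 128) == math.ceil(256 / 128) == 2
assert num_blocks(300, 128) == math.ceil(300 / 128) == 3
```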
```python
if in_dim % block_size_n != 0:
    scales = scales[..., : in_dim % block_size_n, :]
if out_dim % block_size_n != 0:
    scales = scales[..., :, : out_dim % block_size_n]
```
The logic for handling padding for weight tensors appears to be incorrect. After `transform_dim` is applied, the dimensions of `scales` are padded up to be multiples of `block_size_n`. To truncate them back to the original dimensions, you should slice to `in_dim` and `out_dim` respectively, not `in_dim % block_size_n` and `out_dim % block_size_n`. The use of the modulo operator here is a bug.

This can be simplified to a single line that handles both dimensions, which is more concise and correct for all cases, including when dimensions are already multiples of `block_size_n`.
```python
scales = scales[..., :in_dim, :out_dim]
```
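A minimal sketch of the behavior this comment describes; the function name, scale layout, and shapes are assumptions for illustration, not the test's actual code:

```python
import torch

def dequantize_weight_block(w_quant: torch.Tensor,
                            scales: torch.Tensor,
                            block_size_n: int = 128) -> torch.Tensor:
    """Hypothetical sketch: expand per-block scales for an (in_dim, out_dim)
    weight, then truncate back to the original (possibly unpadded) shape."""
    in_dim, out_dim = w_quant.shape[-2], w_quant.shape[-1]
    # Each scale entry covers a block_size_n x block_size_n tile, so repeating
    # it pads the scales up to multiples of block_size_n in both dimensions.
    scales = scales.repeat_interleave(block_size_n, dim=-2)
    scales = scales.repeat_interleave(block_size_n, dim=-1)
    # Slice back to the original dims (not `dim % block_size_n`, which is the bug).
    scales = scales[..., :in_dim, :out_dim]
    return w_quant.float() * scales.float()
```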
The `in_dim` and `out_dim` in the scales are per block.
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
```python
]
HIDDEN_SIZES = [
    128,
    256,
```
Thanks for the catch. Could you please add another case where hidden_size is not divisible by 128, to trigger the padding logic?
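For illustration only, a hypothetical value (any size that is not a multiple of 128 would do) could be added to the parametrization:

```python
HIDDEN_SIZES = [
    128,
    192,  # hypothetical: not divisible by 128, exercises the padding path
    256,
]
```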
Sure, will fix it.
@wenscarl, I believe the padding logic and the per-block quantization logic are also not correct.
📌 Description
The `dequantize_block` logic in the `test_trtllm_cutlass_fused_moe.py` file is not correct. The test does not fail only because `HIDDEN_SIZES` is intentionally set to 128, which lets the tensor multiply succeed.

However, if we change `HIDDEN_SIZES` to 256 and keep the block size at 128, the tensor multiply fails, since the scales no longer match the shape of `x_quant`.

With the logic in this PR, the scales are reshaped to (1, 256), which is the same shape as `x_quant`, so the tensor multiply now succeeds. The logic also works with batch size > 1.
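For reference, a minimal sketch of the activation path described above, assuming `x_quant` has shape (batch, hidden_size) and `scales` holds one value per block of `block_size` columns; names and shapes are illustrative, not the test's exact code:

```python
import torch

def dequantize_activation_block(x_quant: torch.Tensor,
                                scales: torch.Tensor,
                                block_size: int = 128) -> torch.Tensor:
    """Hypothetical sketch: expand per-block activation scales so they line up
    elementwise with x_quant."""
    hidden = x_quant.shape[-1]                               # e.g. 256
    # scales: (batch, ceil(hidden / block_size)), e.g. (1, 2) for hidden=256
    scales = scales.repeat_interleave(block_size, dim=-1)[..., :hidden]
    # scales now has the same shape as x_quant, e.g. (1, 256), so the
    # elementwise multiply succeeds for any hidden size and batch size.
    return x_quant.float() * scales.float()
```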
🔍 Related Issues
N/A
🚀 Pull Request Checklist
Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.
✅ Pre-commit Checks

- I have installed `pre-commit` by running `pip install pre-commit` (or used your preferred method).
- I have installed the hooks with `pre-commit install`.
- I have run `pre-commit run --all-files` and fixed any reported issues.

🧪 Tests

- Tests have been added or updated as needed.
- All tests are passing (`unittest`, etc.).

Reviewer Notes