
Conversation

rainj-me (Contributor)

📌 Description

The dequantize_block logic in the test_trtllm_cutlass_fused_moe.py file is not correct. The test case does not fail only because HIDDEN_SIZES is set to 128, which happens to make the tensor multiplication succeed:

x_quant.shape = torch.Size([1, 128])
scales.shape = torch.Size([1, 1, 128])

However, if we change HIDDEN_SIZES to 256 and keep the block size at 128, the tensor multiplication fails, since now

x_quant.shape = torch.Size([1, 256])
scales.shape = torch.Size([1, 2, 128])

With the logic in this PR, the scales are reshaped to (1, 256), the same shape as x_quant, so the tensor multiplication succeeds:

x_quant.shape = torch.Size([1, 256])
scales.shape = torch.Size([1, 256])

The logic also works with batch size > 1
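
For reference, here is a minimal sketch of the corrected 2D (activation) path, assuming the names x_quant, scales, and dequantize_activation used below are illustrative; it shows the reshape described above, not the PR's exact diff:

    import torch

    def dequantize_activation(x_quant, scales, dtype=torch.bfloat16):
        # x_quant: (batch, hidden), e.g. (1, 256)
        # scales:  (batch, num_blocks, block_size_n), e.g. (1, 2, 128),
        #          i.e. each block's scale already repeated per element
        batch, hidden = x_quant.shape
        # Flatten the block dims and trim any padding so scales matches x_quant.
        scales = scales.reshape(batch, -1)[:, :hidden]
        return x_quant.to(dtype) * scales.to(dtype)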

🔍 Related Issues

N/A

🚀 Pull Request Checklist

Thank you for contributing to FlashInfer! Before we review your pull request, please make sure the following items are complete.

✅ Pre-commit Checks

  • I have installed pre-commit by running pip install pre-commit (or used your preferred method).
  • I have installed the hooks with pre-commit install.
  • I have run the hooks manually with pre-commit run --all-files and fixed any reported issues.

If you are unsure about how to set up pre-commit, see the pre-commit documentation.

🧪 Tests

  • Tests have been added or updated as needed.
  • All tests are passing (unittest, etc.).

Reviewer Notes

gemini-code-assist bot (Contributor)

Summary of Changes

Hello @rainj-me, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a critical bug in the dequantize_block function within the trtllm_cutlass_fused_moe test. The previous implementation incorrectly handled the reshaping of scaling factors (scales) when HIDDEN_SIZES varied, leading to tensor dimension mismatches during dequantization. The fix introduces a dynamic block_size_n parameter and refines the scale reshaping logic, ensuring correct dequantization across different hidden dimensions and block sizes. This change enhances the robustness and accuracy of the block quantization tests.

Highlights

  • Corrected Dequantization Logic: The dequantize_block function in test_trtllm_cutlass_fused_moe.py has been updated to correctly handle the reshaping of scaling factors (scales) for block-quantized tensors, resolving dimension mismatch issues.
  • Dynamic Block Size Parameter: The dequantize_block function now accepts block_size_n as a parameter, making the dequantization logic more flexible and robust for various block sizes.
  • Enhanced Test Coverage: The HIDDEN_SIZES constant in the test file was increased from 128 to 256 to properly expose and validate the fix for varying hidden dimensions, ensuring the dequantize_block function works as expected under different conditions.
  • Improved Scale Reshaping: The logic for reshaping scales for 2D activation tensors has been revised to ensure scales always matches the x_quant shape, preventing tensor multiplication errors.
  • Flexible Padding Handling: The padding logic for both in_dim and out_dim in weight tensors now correctly utilizes the block_size_n parameter instead of a hardcoded value, improving adaptability.
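
To make these highlights concrete, here is a hedged sketch of the weight path they describe. The names w_quant and dequantize_weight are illustrative, and the scales layout (trailing dims in_blocks, out_blocks, matching the slicing quoted in the review below) is an assumption, not the PR's exact code:

    import torch

    def dequantize_weight(w_quant, scales, block_size_n=128, dtype=torch.bfloat16):
        # Assumption: w_quant has trailing dims (in_dim, out_dim) and scales
        # carries one value per (block_size_n x block_size_n) tile.
        in_dim, out_dim = w_quant.shape[-2:]
        # Expand each per-tile scale across its block in both dimensions.
        scales = scales.repeat_interleave(block_size_n, dim=-2)
        scales = scales.repeat_interleave(block_size_n, dim=-1)
        # Trim the padding that rounded both dims up to multiples of block_size_n.
        scales = scales[..., :in_dim, :out_dim]
        return w_quant.to(dtype) * scales.to(dtype)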

rainj-me (Contributor, Author)

Without this PR, if we change HIDDEN_SIZES to 256, the failure log for test_trtllm_cutlass_fused_moe.py is

>       x_dequant = x_quant.to(dtype) * scales.to(dtype)
                    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
E       RuntimeError: The size of tensor a (256) must match the size of tensor b (128) at non-singleton dimension 2

tests/test_trtllm_cutlass_fused_moe.py:932: RuntimeError

gemini-code-assist bot (Contributor) left a comment

Code Review

This pull request correctly fixes a bug in the dequantize_block function for activation tensors and improves the test coverage by increasing HIDDEN_SIZES. The generalization of the block size by replacing hardcoded values is also a good improvement for maintainability. I've identified a couple of areas for further improvement: one is a consistency issue in how ceiling division is calculated, and the other is a more critical bug in the padding logic for weight tensors within the same function. My review includes suggestions to address these points.

Comment on lines +933 to +936
if in_dim % block_size_n != 0:
scales = scales[..., : in_dim % block_size_n, :]
if out_dim % block_size_n != 0:
scales = scales[..., :, : out_dim % block_size_n]
gemini-code-assist bot (Contributor)

Severity: high

The logic for handling padding for weight tensors appears to be incorrect. After transform_dim is applied, the dimensions of scales are padded up to be multiples of block_size_n. To truncate them back to the original dimensions, you should slice to in_dim and out_dim respectively, not in_dim % block_size_n and out_dim % block_size_n. The use of the modulo operator here is a bug.

This can be simplified to a single line that handles both dimensions, which is more concise and correct for all cases, including when dimensions are already multiples of block_size_n.

        scales = scales[..., :in_dim, :out_dim]
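
A small hypothetical example (values chosen for illustration only) makes the difference concrete:

    # in_dim = 200, block_size_n = 128: scales are padded up to 256 entries.
    in_dim, block_size_n = 200, 128
    padded = -(-in_dim // block_size_n) * block_size_n  # ceiling division -> 256
    # Buggy:   in_dim % block_size_n == 72, keeping only 72 of the 200 valid entries.
    # Correct: slicing to in_dim keeps all 200 and drops only the 56 pad entries.
    assert in_dim % block_size_n == 72 and padded - in_dim == 56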

rainj-me (Contributor, Author) commented on Sep 18, 2025

The in_dim and out_dim here are the per-block dimensions in the scales.

yzh119 requested a review from wenscarl on September 19, 2025
]
HIDDEN_SIZES = [
128,
256,
wenscarl (Collaborator)

Thanks for the catch. Could you please add another case where hidden_size is not divisible by 128, to trigger the padding logic?
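
For instance (192 is a hypothetical choice; any hidden size that is not a multiple of 128 would exercise the padding path):

    HIDDEN_SIZES = [
        128,
        256,
        192,  # hypothetical: not a multiple of 128, triggers the padding logic
    ]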

rainj-me (Contributor, Author)

Sure, will fix it.

rainj-me (Contributor, Author)
@wenscarl, I believe the padding logic and the per-block quantization logic are also not correct.
