Conversation

@minosfuture (Owner) commented on Jun 25, 2025

Purpose

vllm-project#19667 changed the workspace creation from torch.zeros to torch.empty. This ends up causing correctness issues for models using cutlass_moe, e.g. Maverick in our test case. This PR fixes the correctness issue by explicitly filling the workspace with zeros in cutlass_moe.
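For illustration only, here is a minimal sketch (not the vLLM code itself; the shapes and names are made up) of why the torch.zeros to torch.empty change matters when a kernel writes only part of the workspace:

```python
import torch

M, N = 4, 8          # hypothetical workspace shape
rows_written = 2     # the kernel only fills the first rows

# Before vllm-project#19667 the workspace came from torch.zeros, so the
# unwritten tail was deterministically zero.
ws_old = torch.zeros(M, N)

# After the change it comes from torch.empty, so the unwritten tail holds
# whatever stale data happened to be in memory.
ws_new = torch.empty(M, N)

# The idea behind the fix in this PR: clear the buffer before it is consumed,
# so reductions over the whole tensor (e.g. a quantization amax) are not
# polluted by garbage in the unused region.
ws_new.fill_(0)
```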

Test Plan

lm_eval and unit tests

Test Result

lm_eval results:

local-chat-completions (model=meta-llama/Llama-4-Maverick-17B-128E-Instruct-FP8,base_url=http://127.0.0.1:8081/v1/chat/completions,num_concurrent=32), gen_kwargs: (None), limit: 200.0, num_fewshot: 5, batch_size: 1

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|-------|---------|--------|--------|--------|-------|--------|
| gsm8k | 3 | flexible-extract | 5 | exact_match | 0.935 | ± 0.0175 |
| gsm8k | 3 | strict-match | 5 | exact_match | 0.920 | ± 0.0192 |

Unit test stability verified:

- Without `c1.fill_(0)`, the following one-liner verifies stable failure:

  for i in {1..10}; do echo $i; pytest -s tests/kernels/moe/test_cutlass_moe.py -k "test_run_cutlass_moe_fp8 or test_cutlass_moe_8_bit_EP_large" -v 2>&1 > /dev/null && { echo "shouldn't succeed"; exit 1; } done

- With `c1.fill_(0)`, the following verifies stable success:

  for i in {1..10}; do echo $i; pytest -s tests/kernels/moe/test_cutlass_moe.py -k "test_run_cutlass_moe_fp8 or test_cutlass_moe_8_bit_EP_large" -v 2>&1 > /dev/null || { echo "should succeed"; exit 1; } done



Great! Any way to capture this in test_cutlass_moe?

@minosfuture (Owner, Author) replied:

Yep, added a couple of unit tests.
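As a rough sketch only, with entirely hypothetical helper and test names (the real tests live in tests/kernels/moe/test_cutlass_moe.py), a regression test for an uninitialized-workspace bug like this one could look roughly like:

```python
import torch


def _fake_moe(ws: torch.Tensor, acts: torch.Tensor) -> torch.Tensor:
    # Stand-in for the cutlass_moe path: the kernel writes only the first
    # rows of the workspace, then a per-tensor scale is taken over all of it.
    ws[: acts.shape[0]] = acts
    scale = ws.abs().amax() / 448.0  # assuming a float8_e4m3-style maximum
    return torch.round(ws[: acts.shape[0]] / scale) * scale


def test_workspace_must_be_zeroed():
    torch.manual_seed(0)
    acts = torch.randn(2, 8)
    ref = _fake_moe(torch.zeros(4, 8), acts)   # torch.zeros workspace
    dirty = torch.full((4, 8), 1e4)            # simulates torch.empty garbage
    bad = _fake_moe(dirty, acts)
    dirty.fill_(0)                             # the fix under test
    good = _fake_moe(dirty, acts)
    assert not torch.allclose(ref, bad)        # garbage corrupts the output
    assert torch.allclose(ref, good)           # zeroing restores it
```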

Signed-off-by: Ming Yang <yming@meta.com>
minosfuture force-pushed the fix_maverick_correctness branch from 52be3eb to 66c457b on June 27, 2025 05:07
Signed-off-by: Ming Yang <yming@meta.com>
minosfuture force-pushed the fix_maverick_correctness branch from 66c457b to 25d3af8 on June 27, 2025 06:07
Signed-off-by: Ming Yang <yming@meta.com>
Signed-off-by: Ming Yang <yming@meta.com>
Comment on lines +180 to +181:

    if expert_map is not None:
        c1.fill_(0)
@ElizaWszola commented on Jul 1, 2025:

One more tiny thing: can you check if we need to do this if per_act_token is true?

@minosfuture (Owner, Author) replied:

No, we don't. I figured out that the root cause is the random data in the unused space of c1: it made the scale (computed over the whole of c1) larger, resulting in precision loss for the actual data. So with per_act_token==True, the scales aren't affected. Let me update the PR in vllm-project.
I'll close this PR to avoid confusion -- it was an experimental PR for early review.
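To make the mechanism concrete, a small numeric illustration (my own example, not code from this PR) of how garbage in the unused rows of c1 inflates a per-tensor scale while leaving per-token scales untouched:

```python
import torch

torch.manual_seed(0)
c1 = torch.empty(4, 8)
c1[:2] = torch.randn(2, 8)   # rows actually produced by the kernel
c1[2:] = 1e4                 # stale garbage in the unused rows

fp8_max = 448.0              # assuming a float8_e4m3-style maximum

# Per-tensor scale: the garbage dominates amax, so the real activations land
# in only a tiny fraction of the quantization range and lose precision.
per_tensor_scale = c1.abs().amax() / fp8_max

# Per-token (per-row) scales: each row is scaled independently, so garbage
# rows cannot affect the rows holding real data -- the per_act_token case.
per_token_scale = c1.abs().amax(dim=1, keepdim=True) / fp8_max

print(per_tensor_scale.item())        # ~22.3, dominated by the garbage
print(per_token_scale[:2].squeeze())  # small, set by the real activations
```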

@minosfuture (Owner, Author) commented:

Moved to vllm-project#20167. Closing.

minosfuture closed this on Jul 1, 2025
minosfuture pushed a commit that referenced this pull request on Oct 9, 2025