
Conversation

@jiahanc
Contributor

@jiahanc jiahanc commented Aug 1, 2025

Essential Elements of an Effective PR Description Checklist

  • The purpose of the PR, such as "Fix some issue (link existing issues this PR will resolve)".
  • The test plan, such as providing the test command.
  • The test results, such as pasting a before/after comparison or e2e results.
  • (Optional) Any necessary documentation updates, such as updating supported_models.md and examples for a new model.

Purpose

The modelopt Llama4-Maverick FP8 checkpoint was updated, and vLLM raises an error when loading the latest checkpoint:

(VllmWorker rank=5 pid=45037) ERROR 07-31 23:12:46 [multiproc_executor.py:557]   File "/scratch/vllm/vllm/model_executor/models/llama4.py", line 413, in load_moe_expert_weights
(VllmWorker rank=5 pid=45037) ERROR 07-31 23:12:46 [multiproc_executor.py:557]     param = params_dict[full_param_name]
(VllmWorker rank=5 pid=45037) ERROR 07-31 23:12:46 [multiproc_executor.py:557]             ~~~~~~~~~~~^^^^^^^^^^^^^^^^^
(VllmWorker rank=5 pid=45037) ERROR 07-31 23:12:46 [multiproc_executor.py:557] KeyError: 'layers.1.feed_forward.experts.w2_weight_input_scale'

This PR fixes the weight loading and processing logic to work with the new checkpoint.
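
For context, here is a minimal sketch of the failure mode. The names and layout below are simplified illustrations, not the actual vLLM loader code; the real lookup is the `params_dict[full_param_name]` line shown in the traceback above.

```python
# Simplified illustration of the failure mode; not the actual vLLM loader.
# load_moe_expert_weights resolves each checkpoint tensor name against
# params_dict; with the updated checkpoint the name it builds no longer
# matches the registered key, so the lookup raises KeyError.
params_dict = {
    "language_model.model.layers.1.feed_forward.experts.w2_weight_input_scale": "param",
}
full_param_name = "layers.1.feed_forward.experts.w2_weight_input_scale"

try:
    param = params_dict[full_param_name]
except KeyError as err:
    print(f"KeyError: {err}")  # mirrors the error in the traceback above
```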

Test Plan

lm_eval --model vllm --model_args pretrained=nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8,tensor_parallel_size=8,max_model_len=2048,gpu_memory_utilization=0.9,max_num_seqs=32,quantization=modelopt --trust_remote_code --tasks gsm8k --num_fewshot 5 --batch_size auto

Test Result

vllm (pretrained=nvidia/Llama-4-Maverick-17B-128E-Instruct-FP8,tensor_parallel_size=8,max_model_len=2048,gpu_memory_utilization=0.9,max_num_seqs=32,quantization=modelopt,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.9257|±  |0.0072|
|     |       |strict-match    |     5|exact_match|↑  |0.9272|±  |0.0072|

(Optional) Documentation Update

Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
@github-actions

github-actions bot commented Aug 1, 2025

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@mergify mergify bot added the llama Related to Llama models label Aug 1, 2025
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request addresses a KeyError that occurs when loading weights from the updated modelopt Llama4-Maverick FP8 checkpoint. The fix involves updating the weight renaming logic in _rename_weight_for_modelopt_checkpoint to correctly handle weight names that may or may not already have the language_model.model. prefix.

The changes are well-targeted and effectively resolve the issue. The logic for creating the renamed variable is sound, and the subsequent refactoring to use this variable throughout the function simplifies the code and removes redundancy. The fix appears correct and robust for its intended purpose. I have no further comments.
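
Based on that description, here is a minimal sketch of the renaming idea (a hypothetical, simplified helper; the real `_rename_weight_for_modelopt_checkpoint` handles additional remapping cases):

```python
LANGUAGE_MODEL_PREFIX = "language_model.model."

def rename_weight(name: str) -> str:
    """Hypothetical simplification: ensure a weight name carries the
    language_model.model. prefix exactly once, whether or not the
    checkpoint already includes it."""
    if name.startswith(LANGUAGE_MODEL_PREFIX):
        return name
    return LANGUAGE_MODEL_PREFIX + name

# Both spellings of the same tensor now resolve to one parameter key:
short = "layers.1.feed_forward.experts.w2_weight_input_scale"
assert rename_weight(short) == rename_weight(LANGUAGE_MODEL_PREFIX + short)
```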

@jiahanc
Contributor Author

jiahanc commented Aug 1, 2025

Hi @mgoin, could you help review this fix PR?

@Edwardf0t1
Contributor

@jiahanc Could you run a test on https://huggingface.co/nvidia/Llama-4-Scout-17B-16E-Instruct-FP8 as well?

@jiahanc
Contributor Author

jiahanc commented Aug 1, 2025

> @jiahanc Could you run a test on https://huggingface.co/nvidia/Llama-4-Scout-17B-16E-Instruct-FP8 as well?

Sure.

2025-08-01:16:02:31,793 INFO     [lm_eval.loggers.evaluation_tracker:272] Output path not provided, skipping saving results aggregated
vllm (pretrained=nvidia/Llama-4-Scout-17B-16E-Instruct-FP8,tensor_parallel_size=8,max_model_len=2048,gpu_memory_utilization=0.9,max_num_seqs=32,quantization=modelopt,trust_remote_code=True), gen_kwargs: (None), limit: None, num_fewshot: 5, batch_size: auto
|Tasks|Version|     Filter     |n-shot|  Metric   |   |Value |   |Stderr|
|-----|------:|----------------|-----:|-----------|---|-----:|---|-----:|
|gsm8k|      3|flexible-extract|     5|exact_match|↑  |0.9075|±  |0.0080|
|     |       |strict-match    |     5|exact_match|↑  |0.8863|±  |0.0087|

@mgoin mgoin added bug Something isn't working ready ONLY add when PR is ready to merge/full CI is needed labels Aug 2, 2025
@mgoin mgoin enabled auto-merge (squash) August 2, 2025 16:35
@vllm-bot vllm-bot merged commit 337eb23 into vllm-project:main Aug 3, 2025
47 of 53 checks passed
@Edwardf0t1
Contributor

Thanks. The scores are a bit lower than here:
#20419 (comment)
This might be due to different parameters.

npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Co-authored-by: mgoin <mgoin64@gmail.com>
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Co-authored-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
noamgat pushed a commit to noamgat/vllm that referenced this pull request Aug 9, 2025
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Co-authored-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Noam Gat <noamgat@gmail.com>
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Co-authored-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Paul Pak <paulpak58@gmail.com>
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Co-authored-by: mgoin <mgoin64@gmail.com>
Signed-off-by: Diego-Castan <diego.castan@ibm.com>
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Co-authored-by: mgoin <mgoin64@gmail.com>
zhewenl pushed a commit to zhewenl/vllm that referenced this pull request Aug 28, 2025
Signed-off-by: jiahanc <173873397+jiahanc@users.noreply.github.com>
Co-authored-by: mgoin <mgoin64@gmail.com>