
Conversation

@kylesayrs (Contributor) commented Jun 24, 2025

Purpose

Background

When the quantization config is produced, its ignore list matches the hf model structure. However, the hf model structure is not guaranteed to match the vllm model structure, which can leave module names in the config that do not match the vllm modules.

This PR provides an interface that allows the hf_to_vllm_mapper to update the mappings in the quantization config.

Changes

  • Implement apply_vllm_mapper method on quantization configs
    • This method is called by either configure_quant_config or the SupportsQuant mixin
    • This method uses the hf_to_vllm_mapper to update quantization config attributes, such as the ignore list, so that they correctly match against vllm module prefixes (see the sketch after this list)
  • Remove warning about models which do not define packed_modules_mapping
    • This warning is likely unnecessary and overly verbose
  • Add SupportsQuant to qwen_2_5_vl
  • Implement apply_vllm_mapper for compressed tensors as well as fp8 formats
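
To make the intent concrete, here is a minimal, self-contained sketch of the remapping idea. SimpleMapper and ExampleQuantConfig are illustrative stand-ins (not the actual vLLM WeightsMapper or config classes), and the prefixes are made up:

# Illustrative sketch only: a quantization config whose ignore list uses
# hf-style module names gets rewritten to vllm-style prefixes.

class SimpleMapper:
    """Stand-in for vLLM's WeightsMapper; maps hf prefixes to vllm prefixes."""

    def __init__(self, orig_to_new_prefix: dict[str, str]):
        self.orig_to_new_prefix = orig_to_new_prefix

    def _map_name(self, name: str) -> str:
        for old, new in self.orig_to_new_prefix.items():
            if name.startswith(old):
                return new + name[len(old):]
        return name

    def apply_list(self, names: list[str]) -> list[str]:
        return [self._map_name(name) for name in names]


class ExampleQuantConfig:
    """Stand-in quantization config with only an ignore list."""

    def __init__(self, ignore: list[str]):
        self.ignore = ignore

    def apply_vllm_mapper(self, hf_to_vllm_mapper: SimpleMapper) -> None:
        # Rewrite hf-style prefixes so they match vllm module prefixes.
        self.ignore = hf_to_vllm_mapper.apply_list(self.ignore)


mapper = SimpleMapper({"model.visual.": "visual."})  # hypothetical prefix mapping
config = ExampleQuantConfig(ignore=["model.visual.blocks.0.attn.qkv"])
config.apply_vllm_mapper(mapper)
assert config.ignore == ["visual.blocks.0.attn.qkv"]

In vLLM itself the mapper also carries substring and suffix rules, and each config remaps its own fields (ignore lists, scheme maps) inside apply_vllm_mapper.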

Testing

Run the examples/offline_inference/vision_language.py example with a truncated tokenizer model, swapping in the quantized checkpoint as shown below:

# Qwen2.5-VL
def run_qwen2_5_vl(questions: list[str], modality: str) -> ModelRequestData:
    # model_name = "Qwen/Qwen2.5-VL-3B-Instruct"
    model_name = "nm-testing/Qwen2.5-VL-7B-Instruct-W4A16-G128"

Then run:

python3 examples/offline_inference/vision_language.py --model-type qwen2_5_vl

The above script fails on main because quantization is applied to the vision tower, but it succeeds with these changes.

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>

@kylesayrs kylesayrs changed the title [Quant] Support → [Quant] Fix quantization config matching with hf_to_vllm_mapper Jun 24, 2025
@gemini-code-assist bot left a comment

Summary of Changes

Hello @kylesayrs, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the resolution of quantization schemes for models whose Hugging Face structure differs from vLLM's internal module structure. It achieves this by introducing a new mechanism to adapt quantization configurations using a WeightsMapper, integrating this capability directly into the SupportsQuant interface, and providing concrete implementations for various quantization methods. This ensures more accurate and robust quantization across a broader range of models.

Highlights

  • Quantization Config Adaptation: Introduced an apply_vllm_mapper method to the QuantizationConfig base class and implemented it in CompressedTensorsConfig and Fp8Config. This method allows quantization configurations to dynamically adjust module names (e.g., for ignore lists or scheme maps) to align with vLLM's internal model structure, using a WeightsMapper.
  • Standardized Quantization Interface: The SupportsQuant mixin was enhanced to include an optional hf_to_vllm_mapper class variable. During model initialization, if a quant_config is present, the SupportsQuant mixin now automatically calls apply_vllm_mapper on the quant_config using the model's hf_to_vllm_mapper, standardizing how models inform their quantization configurations about module name differences. A rough model-side sketch follows this list.
  • Transition and Backward Compatibility: The configure_quant_config utility function was updated to apply the hf_to_vllm_mapper to quantization configs for models that do not yet inherit from the SupportsQuant interface, ensuring a smooth transition and continued support for existing models.
  • Enhanced WeightsMapper Utility: The WeightsMapper utility now includes apply_list and apply_dict methods, making it more versatile for transforming lists of module names (like ignore lists) and dictionaries (like scheme maps) within quantization configurations.
  • Qwen2.5-VL Model Integration: The Qwen2_5_VLForConditionalGeneration model was updated to inherit from the SupportsQuant mixin, demonstrating the practical application of these changes for multi-modal models and ensuring its quantization configuration correctly aligns with vLLM's structure.
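
As a rough illustration of the model-side wiring described above, a model would opt in roughly like this. The import paths reflect vLLM's layout around the time of this PR and the prefix mapping is a made-up example, so treat this as a sketch rather than the exact source:

# Rough illustration of the model-side hookup; the prefix mapping below is a
# made-up example, not the actual Qwen2.5-VL mapping.
from vllm.model_executor.models.interfaces import SupportsQuant
from vllm.model_executor.models.utils import WeightsMapper


class ExampleVLModel(SupportsQuant):
    # Hypothetical case: the hf checkpoint nests the vision tower under
    # "model.visual.*" while the vllm module tree exposes it as "visual.*".
    # The SupportsQuant mixin uses this mapper to rewrite module names in the
    # quant config (ignore lists, scheme maps) to the vllm-side prefixes.
    hf_to_vllm_mapper = WeightsMapper(
        orig_to_new_prefix={"model.visual.": "visual."})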

@mergify mergify bot added the qwen Related to Qwen models label Jun 24, 2025
@gemini-code-assist bot left a comment

Code Review

This pull request introduces a robust mechanism to handle quantization configuration mismatches between Hugging Face and vLLM model structures by adding an apply_vllm_mapper method to quantization configs. The changes are well-structured, introducing a SupportsQuant mixin for new models and providing backward compatibility for older models. The implementation is clean and consistent across the modified files.

I have one suggestion regarding the new apply_vllm_mapper interface to ensure its adoption across all relevant quantization configurations in the future.

@kylesayrs kylesayrs marked this pull request as ready for review June 24, 2025 22:50
@kylesayrs kylesayrs changed the title [Quant] Fix quantization config matching with hf_to_vllm_mapper → [Quant] [Bugfix] Fix quantization config matching with hf_to_vllm_mapper Jun 25, 2025
@mgoin (Member) left a comment

I think the apply_vllm_mapper method provides a good abstraction. A unit test to lock in some expected behavior from this mapper would be nice to have.
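
A minimal sketch of what such a test might look like, exercising the apply_list/apply_dict helpers this PR adds to WeightsMapper; the prefixes and scheme names are made up for illustration, and the exact helper semantics are inferred from the PR description:

# Sketch of a possible unit test; prefixes and scheme names are illustrative.
from vllm.model_executor.models.utils import WeightsMapper


def test_weights_mapper_remaps_quant_config_names():
    mapper = WeightsMapper(orig_to_new_prefix={"model.visual.": "visual."})

    # Ignore lists are plain lists of module names.
    ignore = ["model.visual.blocks.0.attn.qkv", "lm_head"]
    assert mapper.apply_list(ignore) == ["visual.blocks.0.attn.qkv", "lm_head"]

    # Scheme maps are keyed by module name; the values should pass through.
    scheme_map = {"model.visual.blocks.0.attn.qkv": "W4A16"}
    assert mapper.apply_dict(scheme_map) == {"visual.blocks.0.attn.qkv": "W4A16"}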

@mgoin mgoin enabled auto-merge (squash) June 25, 2025 19:39
@github-actions github-actions bot added the ready ONLY add when PR is ready to merge/full CI is needed label Jun 25, 2025
@mgoin (Member) commented Jun 26, 2025

@kylesayrs it looks like there is a related failure in the quantization test

[2025-06-25T21:23:31Z] ERROR 06-25 14:23:31 [core.py:519]   File "/usr/local/lib/python3.12/dist-packages/vllm/model_executor/models/interfaces.py", line 515, in __new__
[2025-06-25T21:23:31Z] ERROR 06-25 14:23:31 [core.py:519]     instance.quant_config.packed_modules_mapping.update(
[2025-06-25T21:23:31Z] ERROR 06-25 14:23:31 [core.py:519]     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[2025-06-25T21:23:31Z] ERROR 06-25 14:23:31 [core.py:519] AttributeError: 'TorchAOConfig' object has no attribute 'packed_modules_mapping'

… as nullable, workaround TransformersForCausalLM

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
auto-merge was automatically disabled June 27, 2025 18:18

Head branch was pushed to by a user without write access

Signed-off-by: Kyle Sayers <kylesayrs@gmail.com>
@kylesayrs (Contributor, Author) commented

@mgoin This is good to go; I needed to fix some edge cases with QuantConfigs not calling super().__init__() and with TransformersForCausalLM
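
For context, the edge case looks roughly like this (an illustrative reconstruction, assuming the base config's __init__ is what sets packed_modules_mapping; it is not the actual vLLM source):

# Illustrative reconstruction of the failure mode seen in CI, not vLLM source.
class BaseQuantConfig:
    def __init__(self):
        # Assume the base class is what creates this attribute...
        self.packed_modules_mapping: dict[str, list[str]] = {}


class EdgeCaseConfig(BaseQuantConfig):
    def __init__(self, scheme: str):
        # ...so a subclass that skips super().__init__() never gets it.
        self.scheme = scheme


config = EdgeCaseConfig("w4a16")
try:
    config.packed_modules_mapping.update({"qkv_proj": ["q_proj", "k_proj", "v_proj"]})
except AttributeError as err:
    print(err)  # 'EdgeCaseConfig' object has no attribute 'packed_modules_mapping'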

@mergify bot commented Jun 30, 2025

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @kylesayrs.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Jun 30, 2025
@mergify mergify bot removed the needs-rebase label Jun 30, 2025
@mgoin (Member) left a comment

LGTM! FYI @jeejeelee @Isotr0py

@mgoin mgoin merged commit 9025a9a into vllm-project:main Jul 1, 2025
90 checks passed
@kylesayrs kylesayrs deleted the kylesayrs/update-qconfig-with-mappings branch July 1, 2025 13:04
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
…pper` (vllm-project#20046)

Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
@Edwardf0t1 Edwardf0t1 mentioned this pull request Aug 12, 2025

Labels

  • qwen (Related to Qwen models)
  • ready (ONLY add when PR is ready to merge/full CI is needed)

Development

Successfully merging this pull request may close these issues:

  • Issue with updated import and kernel compatibility for Qwen2_5_VL model