
Conversation

@zRzRzRzRzRzRzR
Contributor

need to remove this line


@gemini-code-assist bot left a comment


Code Review

This pull request aims to fix an issue with quantization for the GLM-4.5 MoE model by removing the quant_config parameter from the VocabParallelEmbedding layer. While this may resolve issues for certain quantization methods, I've identified a critical issue where this change will break support for GGUF-quantized models. My review includes a detailed explanation of the problem and suggests a more robust, conditional approach to ensure all supported quantization formats continue to work correctly.

Comment on lines 390 to 393
self.embed_tokens = VocabParallelEmbedding(
    config.vocab_size,
    config.hidden_size,
    quant_config=quant_config,
    prefix=f"{prefix}.embed_tokens")

critical

This change unconditionally removes quant_config from the VocabParallelEmbedding layer, which will disable quantization for token embeddings. While this might be the intended fix for certain quantization methods (e.g., GPTQ, AWQ), it will break support for others that rely on quantizing the embedding layer, such as GGUF.

When quant_config is not provided, VocabParallelEmbedding defaults to UnquantizedEmbeddingMethod. This method does not create the necessary parameters (like qweight) for GGUF, which will lead to failures during weight loading for GGUF-quantized GLM-4.5 models.

This is a critical issue as it silently disables a supported quantization format for this model.

A more robust solution would be to conditionally pass quant_config based on the quantization method. For instance, you could check if quant_config.get_name() == 'gguf' and only pass the config in that case, preserving the fix for other methods while maintaining GGUF compatibility.
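A minimal sketch of that conditional approach, assuming it sits in the same GLM-4.5 model constructor shown above and that quant_config follows vLLM's QuantizationConfig interface with a get_name() method (as the comment suggests); the exact guard may need adjusting to the surrounding code:

# Sketch only: keep quant_config for GGUF, where the quantized embedding
# parameters (e.g. qweight) must exist for weight loading to succeed, and
# fall back to the unquantized embedding path for all other methods.
embedding_quant_config = (
    quant_config
    if quant_config is not None and quant_config.get_name() == "gguf"
    else None
)
self.embed_tokens = VocabParallelEmbedding(
    config.vocab_size,
    config.hidden_size,
    quant_config=embedding_quant_config,
    prefix=f"{prefix}.embed_tokens")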

@DarkLight1337 enabled auto-merge (squash) July 23, 2025 07:03
@DarkLight1337
Member

Next time please sign off your commits using -s flag in git commit
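
For reference (standard git usage, not specific to this PR), the sign-off trailer can be added like this:

# Add a Signed-off-by trailer when committing
git commit -s -m "<commit message>"

# Or add it to the most recent commit after the fact
git commit --amend -s --no-edit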

@github-actions

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default; they only run the fastcheck CI, which covers a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@DarkLight1337 added the ready label (ONLY add when PR is ready to merge/full CI is needed) Jul 23, 2025
auto-merge was automatically disabled July 23, 2025 08:50

Head branch was pushed to by a user without write access

@zRzRzRzRzRzRzR requested a review from aarnphm as a code owner July 23, 2025 08:50
@zRzRzRzRzRzRzR
Contributor Author

This PR should not be merged yet, as we have found some potential issues in the quantized model.
@Isotr0py we may need to do some further checks; some errors may occur with FlashInfer.
Additionally, since the model name has been changed, the tool call and reasoning parsers in the registry need to be renamed.

@DarkLight1337 removed the ready label (ONLY add when PR is ready to merge/full CI is needed) Jul 23, 2025
@zRzRzRzRzRzRzR
Contributor Author

This PR can be merged first; we haven't found any FC issues yet. If we find and fix any, we will submit a new PR.
Regarding quantization, there are no issues for now.

@vllm-bot merged commit 85bda9e into vllm-project:main Jul 24, 2025
69 of 72 checks passed
@zRzRzRzRzRzRzR changed the title from "remove GLM-4.5 quantization wrong Code" to "remove GLM-4 quantization wrong Code" Jul 24, 2025
x22x22 pushed a commit to x22x22/vllm that referenced this pull request Aug 5, 2025
Pradyun92 pushed a commit to Pradyun92/vllm that referenced this pull request Aug 6, 2025
npanpaliya pushed a commit to odh-on-pz/vllm-upstream that referenced this pull request Aug 6, 2025
wenbinc-Bin pushed a commit to wenbinc-Bin/vllm-fork that referenced this pull request Aug 7, 2025
jinzhen-lin pushed a commit to jinzhen-lin/vllm that referenced this pull request Aug 9, 2025
Signed-off-by: Jinzhen Lin <linjinzhen@hotmail.com>
paulpak58 pushed a commit to paulpak58/vllm that referenced this pull request Aug 13, 2025
Signed-off-by: Paul Pak <paulpak58@gmail.com>
wenbinc-Bin pushed a commit to wenbinc-Bin/vllm-fork that referenced this pull request Aug 14, 2025
diegocastanibm pushed a commit to diegocastanibm/vllm that referenced this pull request Aug 15, 2025
Signed-off-by: Diego-Castan <diego.castan@ibm.com>
epwalsh pushed a commit to epwalsh/vllm that referenced this pull request Aug 28, 2025
