[Bugfix][Model] Skip loading lm_head weights if using tie_word_embeddings #6758
Conversation
Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, please make sure to run full CI as it is required to merge (or just use auto-merge). To run full CI, you can do one of these:
Thanks @tjohnson31415!
@tjohnson31415 could you merge in the latest main branch? I think that should fix the failing tests.
* upstream/main: (66 commits)
  [Bugfix] Fix PaliGemma MMP (vllm-project#6930)
  [TPU] Fix greedy decoding (vllm-project#6933)
  [Kernel] Tuned int8 kernels for Ada Lovelace (vllm-project#6848)
  [Kernel] Fix marlin divide-by-zero warnings (vllm-project#6904)
  [ci] GHA workflow to remove ready label upon "/notready" comment (vllm-project#6921)
  [Kernel] Remove unused variables in awq/gemm_kernels.cu (vllm-project#6908)
  [Frontend] New `allowed_token_ids` decoding request parameter (vllm-project#6753)
  [Bugfix] Allow vllm to still work if triton is not installed. (vllm-project#6786)
  [TPU] Support tensor parallelism in async llm engine (vllm-project#6891)
  [Kernel] Fix deprecation function warnings squeezellm quant_cuda_kernel (vllm-project#6901)
  [Core] Reduce unnecessary compute when logprobs=None (vllm-project#6532)
  [Kernel] Tuned FP8 Kernels for Ada Lovelace (vllm-project#6677)
  [Model] Initialize support for InternVL2 series models (vllm-project#6514)
  [Misc] Pass cutlass_fp8_supported correctly in fbgemm_fp8 (vllm-project#6871)
  Add Nemotron to PP_SUPPORTED_MODELS (vllm-project#6863)
  [Kernel] Increase precision of GPTQ/AWQ Marlin kernel (vllm-project#6795)
  [TPU] Reduce compilation time & Upgrade PyTorch XLA version (vllm-project#6856)
  [Docs] Add RunLLM chat widget (vllm-project#6857)
  [Model] Initial support for BLIP-2 (vllm-project#5920)
  [CI/Build][Doc] Update CI and Doc for VLM example changes (vllm-project#6860)
  ...
@tjohnson31415 could you do it one more time? :) Another fix went in for the tensorizer tests.
* upstream/main:
  [Build] Temporarily Disable Kernels and LoRA tests (vllm-project#6961)
  [core][misc] improve free_finished_seq_groups (vllm-project#6865)
  [Kernel] Remove scaled_fp8_quant kernel padding footgun (vllm-project#6842)
  [Bugfix] Fix tensorizer memory profiling bug during testing (vllm-project#6881)
  [OpenVINO] Updated OpenVINO requirements and build docs (vllm-project#6948)
  [Kernel] Squash a few more warnings (vllm-project#6914)
  [BugFix] Fix use of per-request seed with pipeline parallel (vllm-project#6698)
  [Doc] Super tiny fix doc typo (vllm-project#6949)
A workaround for this issue: manually initialize the `lm_head` weights from the tied embedding weights (see the sketch below).
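Since the comment above is truncated, the following is only a rough sketch of one reading of that workaround: explicitly re-initializing `lm_head` from the embedding table with Hugging Face transformers and re-saving the checkpoint. The model paths and attribute names are assumptions for a llama-style model, not steps taken from this thread.

```python
import torch
from transformers import AutoModelForCausalLM

# Hypothetical workaround sketch, not the exact steps from the comment above.
model = AutoModelForCausalLM.from_pretrained("path/to/finetuned-model")  # assumed path

with torch.no_grad():
    # With tie_word_embeddings=True these should hold the same values anyway;
    # copying just makes the initialization explicit.
    model.lm_head.weight.copy_(model.model.embed_tokens.weight)

model.tie_weights()  # re-assert the tie so only one tensor gets serialized
model.save_pretrained("path/to/finetuned-model-retied")  # assumed output path
```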
[Bugfix][Model] Skip loading lm_head weights if using tie_word_embeddings (vllm-project#6758) Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com>
Cherry-pick vllm-project#6758 fix: skip loading lm_head if tie_word_embeddings
[Bugfix][Model] Skip loading lm_head weights if using tie_word_embeddings (vllm-project#6758) Signed-off-by: Travis Johnson <tsjohnso@us.ibm.com> Signed-off-by: Alvant <alvasian@yandex.ru>
In llama and other models with `tie_word_embeddings`, there are cases where the weight files include both the `lm_head.weight` and `embed_tokens.weight` tensors, particularly after tuning procedures. Attempting to load such weights results in an error: the model does not have `lm_head.weight` in `named_parameters()`, but the weight files include `lm_head.weight`. With `tie_word_embeddings` set to true, `lm_head.weight` should be a duplicate of `embed_tokens.weight`, so the extra tensor appears to be included in the `.safetensors` files unnecessarily. The change in this PR is to ignore `lm_head.weight` when `tie_word_embeddings` is true, as sketched below.

#3553 is a related issue that reported the same problem for Gemma, which always uses tied weights.
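A minimal, illustrative sketch of the kind of check this describes, written as a standalone function rather than the actual vLLM `load_weights` method (the function name, signature, and copy logic here are simplifications, not the exact diff):

```python
from typing import Iterable, Tuple

import torch


def load_weights(model: torch.nn.Module,
                 weights: Iterable[Tuple[str, torch.Tensor]],
                 tie_word_embeddings: bool) -> None:
    # Simplified stand-in for a model's load_weights method.
    params_dict = dict(model.named_parameters())
    for name, loaded_weight in weights:
        # With tied word embeddings, lm_head.weight duplicates
        # embed_tokens.weight and there is no lm_head parameter in
        # named_parameters(), so the stray checkpoint tensor is skipped
        # instead of triggering a KeyError in the lookup below.
        if tie_word_embeddings and name == "lm_head.weight":
            continue
        param = params_dict[name]
        with torch.no_grad():
            param.copy_(loaded_weight)
```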
I was made aware of this error in regard to a fine-tune of ibm-granite/granite-3b-code-instruct, which is a llama architecture with `tie_word_embeddings` set to `True`. After understanding the cause, I looked for other model implementations that may have the same issue. I found that some model implementations with `tie_word_embeddings` already include a check to skip loading `lm_head.weight` (qwen2, starcoder2, falcon). I added a check to the other models that did not include that check in `load_weights`.

I only tested the fix with llama, but included the checks for the models that were missing it to head off future issues. Let me know if it would be preferable to only change this for llama and leave the other models for later PRs.
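As a side note, the claim that `lm_head.weight` is just a duplicate of the embedding table can be checked directly on a checkpoint. A small sketch, assuming a single-shard `model.safetensors` file and llama-style tensor names (both are assumptions, not details from this PR):

```python
import torch
from safetensors.torch import load_file

# Assumed file name and tensor keys for a llama-style, single-shard checkpoint.
tensors = load_file("model.safetensors")
lm_head = tensors.get("lm_head.weight")
embed_tokens = tensors.get("model.embed_tokens.weight")

if lm_head is None or embed_tokens is None:
    print("checkpoint does not contain both tensors")
else:
    print("lm_head.weight duplicates embed_tokens.weight:",
          torch.equal(lm_head, embed_tokens))
```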