[Feature] Move vllm/lora from SEPARATE_GROUPS to FILES in mypy config and fix type errors #26543
## Summary

This PR moves `vllm/lora` from `SEPARATE_GROUPS` to `FILES` in the mypy pre-commit configuration (`tools/pre_commit/mypy.py`) and resolves all resulting type errors. This is part of the effort to gradually improve mypy coverage across the vLLM codebase, as outlined in #26448.

## Motivation
Files in `SEPARATE_GROUPS` are checked with `--follow-imports skip`, which prevents mypy from following imports to check type consistency, so type errors that cross module boundaries can go undetected. By moving directories to `FILES`, they are checked with proper import following in CI, catching type errors earlier and improving overall type safety.
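To illustrate, here is a minimal, hypothetical two-module example (not vLLM code) of the kind of mismatch that `--follow-imports skip` can hide:

```python
# Hypothetical example of an error hidden by `--follow-imports skip`.

# --- helper.py ---
def make_name() -> int:  # actually returns an int
    return 42

# --- checked.py ---
from helper import make_name

def greet() -> str:
    # With `--follow-imports skip`, mypy replaces `helper` (and anything
    # imported from it) with Any, so this line passes. With imports
    # followed, mypy reports:
    #   error: Incompatible return value type (got "int", expected "str")
    return make_name()
```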
## Changes
### Type Safety Fixes in `vllm/lora/`

- Asserted that `tensorizer_config.tensorizer_dir` is not `None` when the tensorizer config is used
- Handled the `None` return value from `weights_mapper._map_name()` by falling back to the original name
- Checked that `tensorizer_config.tensorizer_dir` is not `None` before use
- Used `setattr()` for the dynamic `lora_manager` attribute assignment
- Added `type: ignore[attr-defined]` comments for `SupportsLoRA` protocol methods that exist at runtime but aren't defined in the Protocol (e.g., `named_modules`, `config`, `get_submodule`, `get_mm_mapping`)
- Fixed the `max_cpu_loras` property return type
- Checked that `base_layer.quant_method` is not `None` before calling `apply()`
- Fixed the `lora_config` type annotation with explicit type narrowing to ensure it is non-optional after initialization (representative patterns are sketched below)
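As referenced above, here is a hedged sketch of those narrowing patterns; the function and parameter names are illustrative stand-ins, not the actual vLLM signatures:

```python
from typing import Optional

# Minimal, hypothetical sketches of the patterns listed above; the
# real vLLM classes have different signatures and many more members.

def resolve_name(weights_mapper, name: str) -> str:
    # `_map_name()` may return None; fall back to the original name.
    mapped: Optional[str] = weights_mapper._map_name(name)
    return mapped if mapped is not None else name

def tensorizer_dir(tensorizer_config) -> str:
    # mypy cannot prove the attribute is set here, so narrow explicitly.
    assert tensorizer_config.tensorizer_dir is not None
    return tensorizer_config.tensorizer_dir

def attach_manager(model, manager) -> None:
    # `lora_manager` is assigned dynamically; `setattr()` sidesteps an
    # attr-defined error on the statically known model type.
    setattr(model, "lora_manager", manager)

def apply_quant(base_layer, x):
    # Narrow `quant_method` to non-None before calling `apply()`.
    if base_layer.quant_method is None:
        raise RuntimeError("quant_method is not initialized")
    return base_layer.quant_method.apply(base_layer, x)

def list_modules(model) -> list[str]:
    # `named_modules` exists at runtime but is not declared on the
    # SupportsLoRA Protocol, hence the targeted ignore.
    return [n for n, _ in model.named_modules()]  # type: ignore[attr-defined]
```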
"vllm/lora"fromSEPARATE_GROUPStoFILESintools/pre_commit/mypy.pyTesting
## Testing

- All files in `vllm/lora` pass mypy with `--follow-imports silent`

## Related Issues
- Follows the pattern of #26482, which moved `vllm/attention` and `vllm/compilation`, and other similar PRs

## Original prompt
Fixes #26533