
Commit c060d73

ashwin-phadke authored and jeejeelee committed

Lora bias (enable_lora_bias) deprecate warning (vllm-project#24339)

Signed-off-by: Jee Jee Li <pandaleefree@gmail.com>
Co-authored-by: Jee Jee Li <pandaleefree@gmail.com>
Signed-off-by: xuebwang-amd <xuebwang@amd.com>

1 parent fcbc916 commit c060d73

File tree

1 file changed: +7 −2 lines changed


vllm/config/__init__.py

Lines changed: 7 additions & 2 deletions
@@ -2458,7 +2458,6 @@ class LoRAConfig:
     LoRA adapter. Will be removed in v0.12.0."""
     lora_vocab_padding_size: ClassVar[int] = current_platform\
         .get_lora_vocab_padding_size()
-
     default_mm_loras: Optional[dict[str, str]] = None
     """Dictionary mapping specific modalities to LoRA model paths; this field
     is only applicable to multimodal models and should be leveraged when a
@@ -2470,7 +2469,8 @@ class LoRAConfig:
     will be automatically assigned to 1-n with the names of the modalities
     in alphabetic order."""
     bias_enabled: bool = False
-    """Enable bias for LoRA adapters."""
+    """[DEPRECATED] Enable bias for LoRA adapters. This option will be
+    removed in v0.12.0."""
 
     def compute_hash(self) -> str:
         """
@@ -2503,6 +2503,11 @@ def __post_init__(self):
             "in v0.12.0. Additional vocabulary support for "
             "LoRA adapters is being phased out.")
 
+        # Deprecation warning for enable_lora_bias
+        if self.bias_enabled:
+            logger.warning("`enable_lora_bias` is deprecated "
+                           "and will be removed in v0.12.0.")
+
         # Setting the maximum rank to 512 should be able to satisfy the vast
         # majority of applications.
         possible_max_ranks = (8, 16, 32, 64, 128, 256, 320, 512)
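The pattern this commit adds — warning at construction time when a deprecated config flag is set — can be sketched outside vLLM as a minimal standalone dataclass. The class below is an illustrative stand-in, not vLLM's actual `LoRAConfig`; only the `bias_enabled` field and the `__post_init__` warning mirror the diff.

```python
import logging
from dataclasses import dataclass

logger = logging.getLogger(__name__)


@dataclass
class LoRAConfig:
    """Minimal stand-in for vLLM's LoRAConfig (illustration only)."""

    bias_enabled: bool = False
    """[DEPRECATED] Enable bias for LoRA adapters. To be removed in v0.12.0."""

    def __post_init__(self) -> None:
        # Emit the deprecation warning once, when the config is built,
        # matching the check added in this commit.
        if self.bias_enabled:
            logger.warning("`enable_lora_bias` is deprecated "
                           "and will be removed in v0.12.0.")


# Constructing with the deprecated flag set triggers the warning;
# the default path stays silent.
cfg = LoRAConfig(bias_enabled=True)
```

Doing the check in `__post_init__` keeps the warning tied to config creation rather than to every use of the flag, so users see it exactly once per constructed config.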
