Bug: Using 2 LoRA configs with target_modules='all-linear' leads to nested LoRA layers #2390

Open
BenjaminBossan opened this issue Feb 20, 2025 · 0 comments · May be fixed by #2391

Labels: bug (Something isn't working), good first issue (Good for newcomers)

@BenjaminBossan (Member) commented:

System Info

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder
  • My own task or dataset (give details below)

Reproduction

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
model = AutoModelForCausalLM.from_pretrained(model_id)

# two LoRA configs that both resolve their targets via 'all-linear'
config0 = LoraConfig(target_modules="all-linear")
config1 = LoraConfig(target_modules="all-linear")

# the first adapter gets the default name "default"
model = get_peft_model(model, config0)
# the second adapter resolves 'all-linear' against the already-wrapped model
model.add_adapter("adapter1", config1)
print(model.base_model.model.model.decoder.layers[0].self_attn.k_proj)

prints:

lora.Linear(
  (base_layer): lora.Linear(
    (base_layer): Linear(in_features=16, out_features=16, bias=True)
    (lora_dropout): ModuleDict(
      (adapter1): Identity()
    )
    (lora_A): ModuleDict(
      (adapter1): Linear(in_features=16, out_features=8, bias=False)
    )
    (lora_B): ModuleDict(
      (adapter1): Linear(in_features=8, out_features=16, bias=False)
    )
    (lora_embedding_A): ParameterDict()
    (lora_embedding_B): ParameterDict()
    (lora_magnitude_vector): ModuleDict()
  )
  (lora_dropout): ModuleDict(
    (default): Identity()
  )
  (lora_A): ModuleDict(
    (default): lora.Linear(
      (base_layer): Linear(in_features=16, out_features=8, bias=False)
      (lora_dropout): ModuleDict(
        (adapter1): Identity()
      )
      (lora_A): ModuleDict(
        (adapter1): Linear(in_features=16, out_features=8, bias=False)
      )
      (lora_B): ModuleDict(
        (adapter1): Linear(in_features=8, out_features=8, bias=False)
      )
      (lora_embedding_A): ParameterDict()
      (lora_embedding_B): ParameterDict()
      (lora_magnitude_vector): ModuleDict()
    )
  )
  (lora_B): ModuleDict(
    (default): lora.Linear(
      (base_layer): Linear(in_features=8, out_features=16, bias=False)
      (lora_dropout): ModuleDict(
        (adapter1): Identity()
      )
      (lora_A): ModuleDict(
        (adapter1): Linear(in_features=8, out_features=8, bias=False)
      )
      (lora_B): ModuleDict(
        (adapter1): Linear(in_features=8, out_features=16, bias=False)
      )
      (lora_embedding_A): ParameterDict()
      (lora_embedding_B): ParameterDict()
      (lora_magnitude_vector): ModuleDict()
    )
  )
  (lora_embedding_A): ParameterDict()
  (lora_embedding_B): ParameterDict()
  (lora_magnitude_vector): ModuleDict()
)
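
A quick programmatic check (not part of the original report, and assuming LoraLayer is importable from peft.tuners.lora) confirms the nesting: the layer wrapped by the outer LoRA layer is itself a LoRA layer rather than the original nn.Linear.

from peft.tuners.lora import LoraLayer

layer = model.base_model.model.model.decoder.layers[0].self_attn.k_proj
# with the bug present, the directly wrapped base layer is itself a LoRA layer
# instead of the original nn.Linear
print(isinstance(layer.base_layer, LoraLayer))  # True here, but should be False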

Expected behavior

Instead of producing nested LoRA layers, 'all-linear' should not match the linear sub-layers that belong to an existing LoRA layer (base_layer, lora_A, lora_B); the second adapter should be added to the same modules that the first 'all-linear' config targeted.
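
Until a fix lands, a possible workaround (not from the issue itself) is to spell out the target modules for the second config instead of relying on 'all-linear'. The module names below are assumptions for this OPT test model; with explicit names, the new adapter should be added to the existing LoRA layers in place rather than wrapping their internals.

config1 = LoraConfig(
    target_modules=["q_proj", "k_proj", "v_proj", "out_proj", "fc1", "fc2"]
)
model.add_adapter("adapter1", config1)
# no nested lora.Linear modules are created this way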

BenjaminBossan added the bug (Something isn't working) and good first issue (Good for newcomers) labels on Feb 20, 2025
BenjaminBossan added a commit to BenjaminBossan/peft that referenced this issue Feb 20, 2025
Resolves huggingface#2390

There was a bug in PEFT when adding a LoRA adapter with
target_modules='all-linear' (e.g. via add_adapter) to a model that
already had LoRA adapters applied. The resolution of 'all-linear' would
result in, for instance, lora_A and lora_B being targeted, leading to
nested LoRA adapters. With this fix, this is prevented and the correct
layers will be targeted.
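
The patch itself is not shown here (see #2391), but a rough sketch of the kind of check that avoids the nesting could look as follows. This is an illustration, not the actual PEFT code: the helper name is made up, BaseTunerLayer is assumed to be importable from peft.tuners.tuners_utils, and the real 'all-linear' resolution additionally excludes the output embedding layer.

import torch.nn as nn
from peft.tuners.tuners_utils import BaseTunerLayer

def resolve_all_linear_names(model):
    """Collect module-name suffixes for 'all-linear', skipping linears that
    are nested inside already-injected adapter layers (sketch only)."""
    tuner_prefixes = tuple(
        name + "."
        for name, module in model.named_modules()
        if isinstance(module, BaseTunerLayer)
    )
    names = set()
    for name, module in model.named_modules():
        if isinstance(module, BaseTunerLayer):
            if isinstance(module.get_base_layer(), nn.Linear):
                # target the wrapper by its own name, e.g. 'k_proj', so the
                # new adapter is added to the existing LoRA layer in place
                names.add(name.rsplit(".", 1)[-1])
        elif isinstance(module, nn.Linear) and not name.startswith(tuner_prefixes):
            # plain linears that do not live inside any tuner layer
            names.add(name.rsplit(".", 1)[-1])
    return names

With a resolution along these lines, the second 'all-linear' config should match the same suffixes (q_proj, k_proj, and so on) as the first one instead of base_layer, lora_A, and lora_B.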