
Fixed LoraConfig alpha modification on add_weighted_adapter #654

Merged
3 commits merged into huggingface:main on Jul 1, 2023

Conversation

kovalexal (Contributor)

Hi!

Currently, adding a weighted adapter rewrites the base LoRA alpha value, as shown here:

import torch
from diffusers import StableDiffusionPipeline
from peft import get_peft_model
from peft.tuners.lora import LoraConfig

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16
).to("cuda")

config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["to_q"]
)

pipe.unet = get_peft_model(pipe.unet, config)

print(pipe.unet.peft_config)
# {'default': LoraConfig(
# peft_type=<PeftType.LORA: 'LORA'>, base_model_name_or_path=None,
# revision=None, task_type=None, inference_mode=False, r=8, target_modules=['to_q'],
# lora_alpha=32, lora_dropout=0.0, fan_in_fan_out=False, bias='none', modules_to_save=None,
# init_lora_weights=True, layers_to_transform=None, layers_pattern=None)}

pipe.unet.add_weighted_adapter(["default"], [0.5], "default_05")
print(pipe.unet.peft_config)
# {'default': LoraConfig(
# peft_type=<PeftType.LORA: 'LORA'>, base_model_name_or_path=None,
# revision=None, task_type=None, inference_mode=False, r=8, target_modules=['to_q'],
# lora_alpha=8, lora_dropout=0.0, fan_in_fan_out=False, bias='none', modules_to_save=None,
# init_lora_weights=True, layers_to_transform=None, layers_pattern=None),
# 'default_05': LoraConfig(
# peft_type=<PeftType.LORA: 'LORA'>, base_model_name_or_path=None,
# revision=None, task_type=None, inference_mode=False, r=8, target_modules=['to_q'],
# lora_alpha=8, lora_dropout=0.0, fan_in_fan_out=False, bias='none', modules_to_save=None,
# init_lora_weights=True, layers_to_transform=None, layers_pattern=None)}

This PR just fixes this small issue.
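
For context, the root cause appears to be that the merged adapter's config is derived from the base adapter's config without making an independent copy, so setting its lora_alpha (to the rank, 8 here) also mutates the base config. A minimal sketch of the idea behind the fix, using the standard dataclasses.replace helper (an illustration of the pattern, not the exact PEFT source):

from dataclasses import replace

from peft.tuners.lora import LoraConfig

base_config = LoraConfig(r=8, lora_alpha=32, target_modules=["to_q"])

# Buggy pattern: reusing the same object means that overriding lora_alpha
# for the merged adapter also changes the base adapter's config.
#   merged_config = base_config
#   merged_config.lora_alpha = merged_config.r  # base_config.lora_alpha becomes 8 too

# Safer pattern: build an independent copy before overriding any field,
# so the base adapter keeps lora_alpha=32.
merged_config = replace(base_config, lora_alpha=base_config.r)

print(base_config.lora_alpha)    # 32 (unchanged)
print(merged_config.lora_alpha)  # 8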

@pacman100, your review is kindly appreciated.

HuggingFaceDocBuilderDev commented on Jun 29, 2023

The documentation is not available anymore as the PR was closed or merged.

BenjaminBossan (Member)

Thanks for the PR. Would you mind adding a small test that fails with the current code and passes with your fix? It could be based on the example you posted. That would be great.

pacman100 (Contributor) left a comment

Thank you @kovalexal for resolving this issue, LGTM! 🤗

kovalexal (Contributor, Author)

@BenjaminBossan I've added a small test that will fail if this issue shows up again.
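
For illustration, the test is roughly along these lines (a simplified sketch of the idea, not necessarily the exact code added in the PR; the tiny nn.Sequential model, the target module name "0", and the test name are just placeholders):

from torch import nn

from peft import LoraConfig, get_peft_model


def test_add_weighted_adapter_keeps_base_lora_alpha():
    # A tiny model is enough: only the config bookkeeping is under test.
    base_model = nn.Sequential(nn.Linear(16, 16))
    config = LoraConfig(r=8, lora_alpha=32, target_modules=["0"])
    model = get_peft_model(base_model, config)

    model.add_weighted_adapter(["default"], [0.5], "default_05")

    # The base adapter's config must keep its original lora_alpha.
    assert model.peft_config["default"].lora_alpha == 32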

@pacman100, thanks for the review, ready to be merged 🤗

pacman100 (Contributor) left a comment
Thank you @kovalexal for iterating!

pacman100 merged commit 032fff9 into huggingface:main on Jul 1, 2023
kovalexal deleted the lora_add_weighted_fix branch on Jul 1, 2023