Learning the combination weights of pre-trained LoRA Modules #1655

@mahdibeit

Description

Feature request

PEFT can already combine pre-trained LoRA modules by averaging them or by weighted averaging with user-supplied weights. The WLoRA paper showed that learning these combination weights outperforms naive averaging in few-shot adaptation settings.
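For concreteness, here is a minimal PyTorch sketch of the idea (this is not the PEFT API and not necessarily the exact WLoRA parameterization; the softmax over logits and the class/argument names are my assumptions): the pre-trained LoRA factors stay frozen and only one scalar weight per upstream module is learned.

import torch
import torch.nn as nn
import torch.nn.functional as F

class WeightedLoraCombination(nn.Module):
    """Frozen base linear layer plus a learned convex combination of K frozen LoRA deltas."""

    def __init__(self, base_linear: nn.Linear, lora_As: list, lora_Bs: list, scaling: float = 1.0):
        super().__init__()
        self.base = base_linear  # assumed frozen, as in standard LoRA fine-tuning
        # Pre-trained LoRA factors: A_i is (r_i, in_features), B_i is (out_features, r_i); kept frozen.
        self.lora_As = nn.ParameterList([nn.Parameter(A, requires_grad=False) for A in lora_As])
        self.lora_Bs = nn.ParameterList([nn.Parameter(B, requires_grad=False) for B in lora_Bs])
        self.scaling = scaling
        # The only trainable parameters: one logit per upstream LoRA module.
        self.combination_logits = nn.Parameter(torch.zeros(len(lora_As)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = F.softmax(self.combination_logits, dim=0)  # learned combination weights
        out = self.base(x)
        for w, A, B in zip(weights, self.lora_As, self.lora_Bs):
            # Each pre-trained delta B @ A is applied to x and scaled by its learned weight.
            out = out + w * self.scaling * (x @ A.transpose(0, 1) @ B.transpose(0, 1))
        return out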

Motivation

Learning the combination weights lets users reuse pre-trained LoRA modules that are already available on the Hugging Face Hub. It is also very parameter efficient, since only the combination weights are learned. More importantly, it can outperform training a LoRA from scratch in settings where the number of training samples is limited.

Your contribution

I can submit a PR. PEFT could then combine any set of pre-trained LoRA modules through an interface like the following:

wlora_config = WLoraConfig(skilled_loras=[PATH_TO_UPSTREAM_1, PATH_TO_UPSTREAM_2])

model = get_peft_model(llama2, wlora_config)
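
Continuing the hypothetical snippet above: since only the combination weights would be trainable, the reported trainable-parameter count should be one weight per upstream LoRA, and a standard optimizer over those parameters is enough for few-shot tuning (print_trainable_parameters is an existing PeftModel method; the optimizer setup and learning rate are illustrative).

import torch

model.print_trainable_parameters()  # expected: a handful of parameters, one per upstream LoRA
optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)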
