Feature request
PEFT can combine pre-trained LoRA modules by averaging them or by taking a weighted average with user-provided weights. This paper showed that learning these combination weights outperforms naive averaging in few-shot adaptation settings.
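For context, the existing combination path in PEFT is `add_weighted_adapter`, where the weights are fixed by the user rather than learned. A rough sketch, assuming two placeholder adapter repos on the Hub:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Load two pre-trained LoRA adapters onto the same base model
# (the adapter IDs below are placeholders).
model = PeftModel.from_pretrained(base, "user/lora-task-a", adapter_name="task_a")
model.load_adapter("user/lora-task-b", adapter_name="task_b")

# Combine them with fixed, user-chosen weights; nothing is learned here.
model.add_weighted_adapter(
    adapters=["task_a", "task_b"],
    weights=[0.5, 0.5],
    adapter_name="merged",
    combination_type="linear",
)
model.set_adapter("merged")
```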

Motivation
Learning the combination weights lets users make use of pre-trained LoRA modules that are already available on the Hugging Face Hub. It is also very parameter-efficient, since only the combination weights are learned. More importantly, it can outperform training a LoRA from scratch when the number of training samples is limited.
Your contribution
I can submit a PR. PEFT could then combine any set of pre-trained LoRAs using the following format:
wlora_config = WLoraConfig(skilled_loras=[PATH_TO_UPSTREAM_1, PATH_TO_UPSTREAM_2])
model = get_peft_model(llama2, wlora_config)
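To make the shape of the change concrete, here is a minimal, self-contained sketch of the core mechanism (not the actual implementation I would submit): the pre-trained LoRA factors stay frozen and only one softmax-normalized weight per skilled LoRA is trained.

```python
import torch
import torch.nn as nn

class LearnedLoraCombination(nn.Module):
    """Wraps a frozen linear layer plus several frozen LoRA factor pairs;
    only the per-LoRA combination logits are trainable."""

    def __init__(self, base_linear, lora_As, lora_Bs, scaling=1.0):
        super().__init__()
        self.base = base_linear
        for p in self.base.parameters():
            p.requires_grad_(False)
        # Pre-trained LoRA factors, kept frozen (A: r x in_features, B: out_features x r).
        self.lora_As = nn.ParameterList([nn.Parameter(A, requires_grad=False) for A in lora_As])
        self.lora_Bs = nn.ParameterList([nn.Parameter(B, requires_grad=False) for B in lora_Bs])
        self.scaling = scaling
        # The only trainable parameters: one logit per skilled LoRA.
        self.logits = nn.Parameter(torch.zeros(len(lora_As)))

    def forward(self, x):
        weights = torch.softmax(self.logits, dim=0)
        out = self.base(x)
        for w, A, B in zip(weights, self.lora_As, self.lora_Bs):
            out = out + w * self.scaling * (x @ A.T @ B.T)
        return out

# Example: combine two rank-8 skilled LoRAs on a 4096 -> 4096 projection.
layer = LearnedLoraCombination(
    nn.Linear(4096, 4096),
    lora_As=[torch.randn(8, 4096), torch.randn(8, 4096)],
    lora_Bs=[torch.randn(4096, 8), torch.randn(4096, 8)],
)
```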