Fine-tuning only Whisper decoder #1707
bardenthenry started this conversation in General
Replies: 2 comments · 1 reply
-
I want to use `layers_to_transform` in `LoraConfig` to add LoRA layers only in the decoder:

```python
decoder_id_ls = []
for id, (name, param) in enumerate(model.named_parameters()):
    if 'model.decoder' in name:
        decoder_id_ls.append(id)

target_modules = ["q_proj", "v_proj"]
config = LoraConfig(r=32, lora_alpha=64, target_modules=target_modules, lora_dropout=0.05, bias="none", layers_to_transform=decoder_id_ls)
model = get_peft_model(model, config)
model.print_trainable_parameters()
```

But I got an error.
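For context, PEFT's `layers_to_transform` expects indices into a transformer block list (e.g. `model.decoder.layers`), not positions returned by `named_parameters()`, which is likely why the config above fails to match anything. A minimal sketch of the intended usage, assuming the Hugging Face `WhisperForConditionalGeneration` layout and using `openai/whisper-small` only as an example checkpoint:

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    layers_to_transform=list(range(4)),  # block indices (0..num_layers-1), not parameter indices
    layers_pattern="layers",             # name of the ModuleList holding the transformer blocks
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```

Note that both the encoder and the decoder expose a `layers` list, so `layers_to_transform` appears to select blocks by depth in both stacks; by itself it does not seem able to restrict LoRA to the decoder, which the name-based approaches below address.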
1 reply
-
Just try printing the model architecture and mapping the names to the target modules; the layer names in the original whisper repo may differ from the names used in the Hugging Face implementation.
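A rough sketch of that suggestion, assuming the Hugging Face `WhisperForConditionalGeneration` naming (`openai/whisper-small` is just an example checkpoint): print the module names, keep the decoder projections, and pass their full names as `target_modules`.

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Print the attention projection names to confirm how they appear in this model.
for name, _ in model.named_modules():
    if name.endswith(("q_proj", "v_proj")):
        print(name)

# Keep only the decoder projections (this includes both self-attention and
# cross-attention q_proj/v_proj inside the decoder blocks).
decoder_targets = [
    name
    for name, _ in model.named_modules()
    if "model.decoder" in name and name.endswith(("q_proj", "v_proj"))
]

config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=decoder_targets,  # full module names, so encoder modules are not matched
    lora_dropout=0.05,
    bias="none",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```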
-
I found that when I fine-tune Whisper with PEFT LoRA, its distinctive style becomes completely different from the original model's. Is it possible to use LoRA to fine-tune only the decoder of Whisper?
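One approach that should do this, sketched under the assumption that PEFT treats a string `target_modules` as a regular expression matched against the full module name (module names here follow the Hugging Face Whisper implementation, and `openai/whisper-small` is only an example checkpoint):

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# A regex over the full module name: only q_proj/v_proj inside the decoder match,
# so the encoder gets no LoRA adapters.
config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=r".*\.decoder\..*(q_proj|v_proj)$",
    lora_dropout=0.05,
    bias="none",
)

model = get_peft_model(model, config)
model.print_trainable_parameters()  # the trainable count should now cover only decoder projections
```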