Could you please share what you have tried so far and how it fails? Since prompt tuning is a "prompt learning" technique, I think it is not necessary to support it via PeftMixedModel.
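For context, here is a minimal sketch of how PeftMixedModel is typically used to stack two non-prompt-learning adapters (the model name and config values are illustrative, not from this thread; prompt learning methods are not among the tuner types it accepts):

```python
from transformers import AutoModelForCausalLM
from peft import LoHaConfig, LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")  # placeholder model

# Two different adapter types targeting the same attention projection.
lora_config = LoraConfig(r=8, target_modules=["c_attn"])
loha_config = LoHaConfig(r=8, target_modules=["c_attn"])

# mixed=True returns a PeftMixedModel, which can host different adapter types.
peft_model = get_peft_model(model, lora_config, adapter_name="lora", mixed=True)
peft_model.add_adapter("loha", loha_config)
peft_model.set_adapter(["lora", "loha"])  # both adapters active at once
```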
Hi @BenjaminBossan, so basically I am fine-tuning an LLM for retrieval.
Each query is prefixed with a task description: TASK_PROMPT + QUERY_PROMPT.
TASK_PROMPT is, for example, something like: "Extract the data related to the following query. The data format is ...".
I can fine-tune with a fixed TASK_PROMPT, but I want to optimize it at the same time; otherwise it is no more than a constant. TASK_PROMPT would be the virtual tokens in prompt tuning.
I could also optimize the prompt on the base model first, and then fine-tune the model with the optimized prompt, but I think I'll get better results if I optimize both at the same time. I am also not sure whether I could integrate the learned prompt afterwards. What do you think? Does it make sense? A sketch of the setup I have in mind is below.
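To make the request concrete, here is a minimal sketch of the desired setup, assuming a causal LM and PEFT's public API. The base model name, hyperparameters, and adapter name are placeholders; the final `add_adapter` step is the part this issue asks for and is not currently supported, so it is commented out:

```python
from transformers import AutoModelForCausalLM
from peft import (
    LoraConfig,
    PromptTuningConfig,
    PromptTuningInit,
    TaskType,
    get_peft_model,
)

base_model_name = "gpt2"  # placeholder model for illustration
model = AutoModelForCausalLM.from_pretrained(base_model_name)

# LoRA adapter: trains low-rank updates on the attention projections.
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # module names depend on the architecture
)

# Prompt tuning adapter: TASK_PROMPT becomes trainable virtual tokens,
# initialized from its text so training starts near the hand-written prompt.
prompt_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Extract the data related to the following query.",
    num_virtual_tokens=16,
    tokenizer_name_or_path=base_model_name,
)

# Either adapter works on its own:
model = get_peft_model(model, lora_config)

# Desired (currently unsupported) step: also attach the prompt tuning
# adapter so both sets of parameters are optimized jointly.
# model.add_adapter("task_prompt", prompt_config)  # raises an error today,
#                                                  # since prompt learning
#                                                  # cannot be combined with
#                                                  # other adapter types
```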
Feature request
PROMPT_TUNING is a useful adapter and it would be great if we could combine it with LoRA.
Motivation
Lots of fine-tunes on consumer-grade hardware leverage LoRA. It would be great if we could mix prompt tuning with LoRA in a plug-and-play fashion.
Your contribution
I would like to submit a PR if there is interest.