The provided peft_type 'PROMPT_TUNING' is not compatible with the PeftMixedModel. #2307

Open
Radu1999 opened this issue Jan 7, 2025 · 2 comments

Comments

Radu1999 commented Jan 7, 2025

Feature request

PROMPT_TUNING is a useful adapter type, and it would be great if it could be combined with LoRA.

Motivation

Many fine-tunes on consumer-grade hardware rely on LoRA. It would be great if prompt tuning could be mixed with LoRA in a plug-and-play fashion.

Your contribution

I would like to submit a PR if there is interest.
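For context, here is a minimal sketch of the combination that fails today (the model name, adapter names, and hyperparameters are placeholders, not a specific setup):

```python
# Minimal sketch of the incompatibility; "gpt2" and the adapter names are
# illustrative only. Assumes a recent peft version with mixed-adapter support.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PromptTuningConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

lora_config = LoraConfig(task_type="CAUSAL_LM", r=8)
prompt_config = PromptTuningConfig(task_type="CAUSAL_LM", num_virtual_tokens=16)

# mixed=True returns a PeftMixedModel, which only accepts certain tuner types
model = get_peft_model(base, lora_config, adapter_name="lora", mixed=True)

# Raises: ValueError: The provided `peft_type` 'PROMPT_TUNING' is not
# compatible with the `PeftMixedModel`.
model.add_adapter("prompt", prompt_config)
```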

BenjaminBossan (Member) commented

Could you please share what you have tried so far and how it fails? Since prompt tuning is a "prompt learning" technique, I think it is not necessary to support it via PeftMixedModel.

Radu1999 (Author) commented Jan 8, 2025

Hi @BenjaminBossan, so basically I am fine-tuning an LLM for retrieval.
Each query is marked by a task description: TASK_PROMPT + QUERY_PROMPT.
TASK_PROMPT is, for example, something like: "Extract the data related to the following query. The data format is ...".

I can fine-tune with a fixed TASK_PROMPT, but I want to optimize it at the same time; otherwise it becomes nothing more than a constant. TASK_PROMPT would be the virtual tokens in prompt tuning.

I could also optimize the prompt on the base model first and then fine-tune the model with the optimized prompt (sketched below), but I think I would get better results by optimizing both at the same time. I am also not sure whether I can integrate the learned prompt afterwards. What do you think? Does it make sense?
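For illustration, a minimal sketch of the first stage of that two-stage fallback, learning TASK_PROMPT as virtual tokens on the frozen base model (the model name, token count, init text, and save path are placeholders):

```python
# Stage 1 of the fallback: optimize TASK_PROMPT as virtual tokens on the
# frozen base model. "gpt2" and all hyperparameters here are illustrative.
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, PromptTuningInit, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")

prompt_config = PromptTuningConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=16,
    # initialize the virtual tokens from the current hand-written TASK_PROMPT
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Extract the data related to the following query.",
    tokenizer_name_or_path="gpt2",
)

model = get_peft_model(base, prompt_config)
# ... train model on the retrieval data, then persist the learned prompt ...
model.save_pretrained("task-prompt")
```

The open question above is exactly stage 2: there is currently no supported way to keep these learned virtual tokens active, let alone trainable, while a LoRA adapter is trained on the same model.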
