Add Prompt Tuning #595
Conversation
Did a first review pass; looks good overall, though I haven't checked all the details yet.
- move prefix_attention_mask_length in ForwardContext
force-pushed from 5ab4364 to 5ec546c
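As a rough illustration of why the prompt length belongs in a per-forward context: any layer that later needs to extend an attention mask can read the value from the active context instead of threading it through every forward signature. The class and attribute names below are a simplified sketch, not the library's actual `ForwardContext` API.

```python
import threading

# Simplified sketch of a per-forward context carrying the prompt length;
# names here are illustrative, not the real adapters ForwardContext API.
class SimpleForwardContext:
    _local = threading.local()

    def __init__(self, prompt_tokens_length: int = 0):
        self.prompt_tokens_length = prompt_tokens_length

    def __enter__(self):
        SimpleForwardContext._local.current = self
        return self

    def __exit__(self, *exc_info):
        SimpleForwardContext._local.current = None

    @classmethod
    def get_context(cls):
        return getattr(cls._local, "current", None)

# Usage inside a model forward pass (sketch):
#   with SimpleForwardContext(prompt_tokens_length=10):
#       outputs = encoder(input_ids, attention_mask=attention_mask)
# A layer building the attention mask can then read
# SimpleForwardContext.get_context().prompt_tokens_length
# and prepend that many ones to the mask.
```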
Please also add a row to the table in the main README.
Hey, love the integration of prompt tuning! I would love to use it once it is stable; do you know when this will be merged? 🙂 @lenglaender @calpt
@raphaelreimann We're currently fixing the last few tests that aren't passing yet. The implementation already works for every model except CLIP, EncoderDecoder, GPT-2, GPT-J, LLaMA, and T5. As soon as this PR and the sync PR (#602) are merged, we will release the new Adapters library (#584).
@lenglaender Thank you for the update on this, great work!
Looks good to me, just a few minor comments.
For the subsequent PR, which will extend Prompt Tuning with composition support and add support for the models not yet covered, we should think about implementing the attention mask change for prefix tuning within the function.
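For context, the change being discussed boils down to prepending ones to the attention mask for the virtual prefix/prompt tokens. The helper below is a minimal sketch of that operation; its name and signature are illustrative, not the function referenced in the comment.

```python
import torch

def extend_attention_mask(attention_mask: torch.Tensor, prefix_length: int) -> torch.Tensor:
    """Prepend `prefix_length` ones to a (batch_size, seq_len) attention mask
    so the virtual prefix/prompt tokens are always attended to."""
    prefix_mask = torch.ones(
        attention_mask.shape[0],
        prefix_length,
        dtype=attention_mask.dtype,
        device=attention_mask.device,
    )
    return torch.cat([prefix_mask, attention_mask], dim=1)
```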
Depends on: #591
This PR adds Prompt Tuning (https://aclanthology.org/2021.emnlp-main.243/)
Currently, the Prompt Tuning layer has been created and the BERT model has been adapted.
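To illustrate how this would be used once merged, here is a minimal training-setup sketch on BERT. The config class and argument names (`PromptTuningConfig`, `prompt_length`) are assumptions about the new Adapters library API and may differ in the final release.

```python
from transformers import AutoModelForSequenceClassification
import adapters  # the new Adapters library (see #584); API names below are assumed

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")
adapters.init(model)  # add adapter support to the plain Transformers model

# Prepend 10 trainable virtual tokens to every input (assumed config name/args)
config = adapters.PromptTuningConfig(prompt_length=10)
model.add_adapter("my_prompt", config=config)

# Freeze the base model weights and train only the prompt embeddings
model.train_adapter("my_prompt")
model.set_active_adapters("my_prompt")
```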