Allow the entire model to be targeted for LoRA and DoRA fine tuning: LoRA and DoRA embeddings with small DoRALinear bug fix #914
Merged
Conversation
7c81ec0 to 0f11d20
awni reviewed Aug 1, 2024
awni reviewed Aug 6, 2024
zaithottakath commented Aug 6, 2024
zaithottakath commented Aug 6, 2024
…d non model.layers Linear layers to be targeted for fine tuning
…d in DoRALinear.from_linear
… fuse over to_linear or to_embedding
48cc0f2 to 6c495ea
…essary parens in dora embedding dropout
awni reviewed Aug 16, 2024
@@ -25,10 +25,10 @@ def from_linear(
            dropout=dropout,
            scale=scale,
        )
-       dora_lin.linear = linear
+       dora_lin.set_linear(linear)
Good catch with that bug!
awni approved these changes Aug 16, 2024
Very nice addition, thanks!
This PR:

- Adds LoRAEmbedding and DoRAEmbedding with tests so that embeddings can be targeted for fine tuning.
- Allows Linear and Embedding modules to be targeted regardless of whether they are in model.layers, so both the embeddings and the lm_head can be fine tuned, enabling a nearly full LoRA or DoRA fine tune of the model.
- Fixes a small bug in DoRALinear that sets the wrong self.m value because it is not recalculated when the Linear layer is changed in DoRALinear.from_linear (a sketch of the issue follows this list).
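For context on that last point, here is a minimal sketch of the issue, assuming a DoRA layer that keeps a magnitude vector m derived from the wrapped weight. The class, field, and method names below are illustrative, not the repository's exact code:

```python
import mlx.core as mx
import mlx.nn as nn


class DoRALinearSketch(nn.Module):
    # DoRA keeps a trainable magnitude vector `m` alongside the wrapped base
    # weight; `m` is derived from the weight's per-row norms, so it must be
    # refreshed whenever the wrapped Linear layer is replaced.
    def __init__(self, input_dims: int, output_dims: int, r: int = 8):
        super().__init__()
        self.linear = nn.Linear(input_dims, output_dims, bias=False)
        self.lora_a = mx.random.uniform(shape=(input_dims, r)) / input_dims**0.5
        self.lora_b = mx.zeros((r, output_dims))
        # Norm of each output row of the (placeholder) weight.
        self.m = mx.linalg.norm(self.linear.weight, axis=1)

    def set_linear(self, linear: nn.Linear):
        # Swap in the pretrained layer AND recompute `m` from its weight.
        # A bare `self.linear = linear`, as in the original from_linear,
        # would leave `m` computed from the random placeholder weight.
        self.linear = linear
        self.m = mx.linalg.norm(linear.weight, axis=1)
```

Assigning the pretrained layer to the attribute directly leaves m computed from the randomly initialized placeholder weight, which is why from_linear should go through something like set_linear.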
I checked huggingface's PEFT library for how they handle DoRA for embeddings and there is still an open ticket for it. I wasn't able to find any reference implementations, so this could be the first example of that.
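For illustration only, the basic idea of a LoRA-adapted embedding can be sketched as below; the class name, parameter names, and initialization are assumptions rather than this PR's actual API:

```python
import mlx.core as mx
import mlx.nn as nn


class LoRAEmbeddingSketch(nn.Module):
    # Frozen embedding table plus a low-rank, trainable correction that is
    # looked up with the same token ids and projected up to the model dims.
    def __init__(self, num_embeddings: int, dims: int, r: int = 8, scale: float = 20.0):
        super().__init__()
        self.embedding = nn.Embedding(num_embeddings, dims)
        self.scale = scale
        self.lora_a = mx.random.uniform(shape=(num_embeddings, r)) / num_embeddings**0.5
        self.lora_b = mx.zeros((r, dims))

    def __call__(self, x):
        y = self.embedding(x)             # (..., dims) base lookup
        z = self.lora_a[x] @ self.lora_b  # (..., r) @ (r, dims) low-rank update
        return y + self.scale * z
```

A DoRA variant would additionally carry a magnitude term for the embedding rows, analogous to m in the linear case.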