Possible fix of wrong scale in weight decomposition #16151
Conversation
Hi, this patch is wrong. This variable is used by model authors to normalize the trained LoRA/DoRA from -1 to 1. With this patch, the DoRA breaks when alpha changes instead of being normalized.
It breaks because that is not how alpha works.
What is this useful for?
Wrong formula: W + scale * alpha/dim * (wd(W + BA) - W). I think it is very intuitive that alpha/dim should not affect the "- W" part.
That is also how DoRA training works in my repo.
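For anyone trying to follow the dispute, here is a minimal PyTorch sketch of the two scalings being argued about. It is an illustration only, not ComfyUI's actual weight_decompose code: the function names (`decompose`, `merge_outer_scale`, `merge_inner_scale`) are hypothetical, and the real implementation additionally handles dtype casting, device placement, and conv-weight reshaping.

```python
import torch

def decompose(weight_calc: torch.Tensor, dora_scale: torch.Tensor) -> torch.Tensor:
    # Renormalize the updated weight by its row norms, then rescale by the
    # trained DoRA magnitude vector -- a stand-in for the "wd(...)" step
    # mentioned in the discussion above.
    norm = weight_calc.norm(dim=1, keepdim=True)
    return weight_calc * (dora_scale / norm)

def merge_outer_scale(W, B, A, dora_scale, alpha, rank, strength):
    # Scaling criticized as wrong above: alpha/rank multiplies the whole
    # (wd(W + BA) - W) difference, including the "- W" part.
    diff = decompose(W + B @ A, dora_scale) - W
    return W + strength * (alpha / rank) * diff

def merge_inner_scale(W, B, A, dora_scale, alpha, rank, strength):
    # Scaling argued for above: alpha/rank only scales the low-rank update
    # BA before decomposition; strength scales the outer difference.
    diff = decompose(W + (alpha / rank) * (B @ A), dora_scale) - W
    return W + strength * diff

# Toy usage: with alpha == rank the two variants produce identical weights;
# they only diverge when a LoRA/DoRA ships alpha != rank.
out_dim, in_dim, rank = 8, 4, 2
W = torch.randn(out_dim, in_dim)
B, A = torch.randn(out_dim, rank), torch.randn(rank, in_dim)
dora_scale = torch.rand(out_dim, 1) + 0.5
print(torch.allclose(
    merge_outer_scale(W, B, A, dora_scale, alpha=2.0, rank=rank, strength=0.7),
    merge_inner_scale(W, B, A, dora_scale, alpha=2.0, rank=rank, strength=0.7),
))  # True when alpha == rank
```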
Hi, this PR should be reverted. Here are the reasons why:
I hope you can understand the impact of these changes on users. If you have the "special weight scaling" code, feel free to share it. I don't think it's possible in this formulation.
Description
Should resolve this: comfyanonymous/ComfyUI#3922
Checklist: