PEFT LoRA with gradient checkpointing #91
Comments
You're right on that being the fix - would love a PR!

*Edit:* I'm not sure if I'm doing it right, either from the clone or when using it, so it could just be how it is cached. I'm probably not using it properly.

I'm getting this too, investigating.

Will have a fix for the
I'm getting the following error when using gradient checkpointing with PEFT LoRA training. I'm basically using the same script as the fine-tuning notebook in this repo, but adding the LoRA PEFT setup to it.
With
model.enable_input_require_grads()
I get a similar NotImplementedError. I think the fix would be adding a
get_input_embeddings()
method that returns self.text_model.get_input_embeddings()
I can give it a shot and make a PR here soon.
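A minimal sketch of the proposed fix, assuming the model wraps its language model in a `self.text_model` attribute; the class and attribute names below are illustrative, and `enable_input_require_grads` is simplified to the forward-hook pattern transformers uses rather than the library's exact implementation:

```python
import torch
import torch.nn as nn

class TextModel(nn.Module):
    """Stand-in for the wrapped language model (illustrative)."""
    def __init__(self):
        super().__init__()
        self.embed_tokens = nn.Embedding(32, 8)
        # Frozen base weights, as when training only LoRA adapters.
        self.embed_tokens.weight.requires_grad_(False)

    def get_input_embeddings(self):
        return self.embed_tokens

class WrapperModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.text_model = TextModel()

    # The proposed fix: delegate to the wrapped text model so utilities
    # that look up the embeddings no longer raise NotImplementedError.
    def get_input_embeddings(self):
        return self.text_model.get_input_embeddings()

    # Simplified version of what enable_input_require_grads does:
    # a forward hook marks the embedding output as requiring grad, so
    # gradient checkpointing can backprop through frozen base weights.
    def enable_input_require_grads(self):
        def make_inputs_require_grads(module, inputs, output):
            output.requires_grad_(True)
        self.get_input_embeddings().register_forward_hook(make_inputs_require_grads)

model = WrapperModel()
model.enable_input_require_grads()  # works once get_input_embeddings exists
emb = model.get_input_embeddings()(torch.tensor([1, 2, 3]))
```

Without the hook, the embedding output of a frozen table would not require grad, and checkpointed segments would have nothing to attach gradients to; with it, `emb.requires_grad` is `True` even though the embedding weights stay frozen.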
Thank you!