
[docs] LoRA conceptual guide #331

Merged
MKhalusova merged 5 commits into huggingface:main on Apr 20, 2023
Conversation

MKhalusova
Contributor

New doc for PEFT: a brief introduction to the LoRA technique (a TL;DR) with links to examples.

HuggingFaceDocBuilderDev commented Apr 18, 2023

The documentation is not available anymore as the PR was closed or merged.

stevhliu (Member) left a comment

Nice summary of LoRA, and I like the links to the examples 👍

docs/source/conceptual_guides/lora.mdx
This approach has a number of advantages:

* LoRA makes fine-tuning more efficient by drastically reducing the number of trainable parameters.
* The original pre-trained weights are kept frozen, which means you can have multiple lightweight and portable LoRA models for various downstream tasks built on top of them.
Member
💯
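To make the parameter savings described above concrete, here is a minimal sketch using the PEFT API; the base model, target module names, and hyperparameter values are illustrative assumptions rather than anything specified in this PR:

```python
# A minimal, illustrative sketch (not from the PR) of how LoRA cuts the
# number of trainable parameters. The base model, target module names, and
# hyperparameter values below are arbitrary assumptions.
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-small")

lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=32,              # scaling factor applied to the LoRA update
    lora_dropout=0.05,
    target_modules=["q", "v"],  # attention projections; names depend on the architecture
    task_type="SEQ_2_SEQ_LM",
)

# The base weights stay frozen; only the small LoRA matrices are trainable,
# so one frozen backbone can serve several task-specific LoRA adapters.
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # reports trainable vs. total parameter counts
```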

docs/source/conceptual_guides/lora.mdx
pacman100 (Contributor) left a comment

Thank you @MKhalusova for adding this much-needed concept guide! 😄

Left a few comments.

docs/source/conceptual_guides/lora.mdx
- `bias`: Specifies if the `bias` parameters should be trained. Can be `'none'`, `'all'`, or `'lora_only'`.
- `modules_to_save`: List of modules apart from LoRA layers to be set as trainable and saved in the final checkpoint. These typically include the model's custom head that is randomly initialized for the fine-tuning task.
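For illustration, a minimal sketch of how these two options could be set; the head name `classifier`, the task type, and the values are assumptions, not taken from the guide:

```python
from peft import LoraConfig

# Illustrative values only; "classifier" is an assumed name for the model's
# randomly initialized classification head.
lora_config = LoraConfig(
    r=16,
    lora_alpha=16,
    bias="lora_only",                # train only the bias parameters of LoRA layers
    modules_to_save=["classifier"],  # also train and save the task head
    task_type="SEQ_CLS",
)
```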

## LoRA examples
pacman100 (Contributor)

Adding NLP examples such as RLHF using PEFT+TRL will benefit a lot of users

MKhalusova (Contributor, Author) commented Apr 19, 2023

@pacman100 I absolutely agree! At the moment, I only added the examples that we already have in the docs; once we have more, we can add them here too. Likely in a separate PR, though.

MKhalusova and others added 3 commits April 19, 2023 08:35
Co-authored-by: Steven Liu <59462357+stevhliu@users.noreply.github.com>
@MKhalusova MKhalusova requested a review from pacman100 April 19, 2023 12:42
pacman100 (Contributor) left a comment

Thank you @MKhalusova, LGTM! 😄

@MKhalusova MKhalusova merged commit 1ef4b61 into huggingface:main Apr 20, 2023