
Clean up README and remove openrlbenchmark dependency #2085

Merged · 11 commits into main on Sep 23, 2024

Conversation

@lewtun (Member) commented on Sep 19, 2024:

What does this PR do?

This PR updates the README to display working examples and removes a number of other outdated bits. It also removes the openrlbenchmark dependency, which was not used in the core code and caused a conflict with `make dev`.

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.


**SFT:**

```diff
-trl sft --model_name_or_path facebook/opt-125m --dataset_name stanfordnlp/imdb --output_dir opt-sft-imdb
+trl sft --model_name_or_path Qwen/Qwen2.5-0.5B --dataset_name trl-lib/Capybara --output_dir Qwen2.5-0.5B-SFT
```
@lewtun (author):

The OPT models are really outdated and I think we should use the small Qwen models where possible in our examples since they train fast and are pretty good.

```diff
-# get dataset
-dataset = load_dataset("stanfordnlp/imdb", split="train")
+# load dataset
+dataset = load_dataset("trl-lib/Capybara", split="train")
```
@lewtun (author):

I think we should use small but high-quality datasets for our SFT / RM / preference optimisation examples. The Capybara dataset is nice since it's just 16k samples of high quality.
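
For context, a quick way to sanity-check the dataset's size and format (the `messages` column name is an assumption based on the conversational format trl's SFT examples consume):

```python
from datasets import load_dataset

dataset = load_dataset("trl-lib/Capybara", split="train")
print(dataset.num_rows)        # roughly 16k samples, per the comment above
print(dataset[0]["messages"])  # assumed column: list of {"role", "content"} turns
```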

```python
# load dataset and preprocess
dataset = load_dataset("trl-lib/Capybara-Preferences", split="train")

def preprocess_function(examples):
```
@lewtun (author):

Note to self: we should align the reward trainer with the other trainers, where this preprocessing step is internalised (a sketch of the step in question follows the snippet below).

```python
)

# train
trainer.train()
```
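
For reference, a minimal sketch of what the elided `preprocess_function` plausibly looks like, based on the paired chosen/rejected columns the `RewardTrainer` consumed at the time; the tokenizer choice and exact column handling here are assumptions, not the PR's actual code:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-0.5B")  # assumed model choice
dataset = load_dataset("trl-lib/Capybara-Preferences", split="train")

def preprocess_function(examples):
    # Build the four paired columns the RewardTrainer expects.
    new_examples = {
        "input_ids_chosen": [],
        "attention_mask_chosen": [],
        "input_ids_rejected": [],
        "attention_mask_rejected": [],
    }
    for chosen, rejected in zip(examples["chosen"], examples["rejected"]):
        # Render each conversation to a string, then tokenize it.
        chosen_tok = tokenizer(tokenizer.apply_chat_template(chosen, tokenize=False))
        rejected_tok = tokenizer(tokenizer.apply_chat_template(rejected, tokenize=False))
        new_examples["input_ids_chosen"].append(chosen_tok["input_ids"])
        new_examples["attention_mask_chosen"].append(chosen_tok["attention_mask"])
        new_examples["input_ids_rejected"].append(rejected_tok["input_ids"])
        new_examples["attention_mask_rejected"].append(rejected_tok["attention_mask"])
    return new_examples

dataset = dataset.map(preprocess_function, batched=True)
```

Internalising exactly this kind of boilerplate is what the note above proposes.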

```diff
-### `PPOTrainer`
+### `RLOOTrainer`
```
@lewtun (author):

I switched to RLOO because (a) it performs better than PPO and (b) we're deprecating `PPOTrainer`, and I didn't want to use `PPOv2Trainer`, whose name looks a bit experimental.
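
For readers unfamiliar with RLOO: it drops PPO's learned value network and instead baselines each sampled completion against the mean reward of the other k-1 completions for the same prompt. A minimal, illustrative sketch of that leave-one-out advantage (not trl's actual implementation):

```python
import torch

def rloo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Compute leave-one-out advantages.

    rewards: tensor of shape (k, num_prompts) holding the scalar reward
    of each of the k completions sampled per prompt.
    """
    k = rewards.shape[0]
    # For each completion, the baseline is the mean reward of the
    # other k - 1 completions for the same prompt.
    baseline = (rewards.sum(dim=0, keepdim=True) - rewards) / (k - 1)
    return rewards - baseline
```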

```diff
@@ -209,20 +265,11 @@ cd trl/
 make dev
```

```diff
-## References
```
@lewtun (author):

These are now in the docs, so they can be removed IMO.

@HuggingFaceDocBuilderDev commented:

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.


```diff
-# get trainer
+# configure trainer
 training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT")
```
@lewtun (author):

I'm using the same convention adopted in #2082
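
Putting the visible fragments together, a sketch of how the README's SFT example plausibly reads after this PR (reconstructed from the diff snippets above; the model-as-string shortcut and argument order are assumptions):

```python
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# load dataset
dataset = load_dataset("trl-lib/Capybara", split="train")

# configure trainer
training_args = SFTConfig(output_dir="Qwen/Qwen2.5-0.5B-SFT")
trainer = SFTTrainer(
    model="Qwen/Qwen2.5-0.5B",
    args=training_args,
    train_dataset=dataset,
)

# train
trainer.train()
```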

lewtun and others added 6 commits on September 19, 2024 (co-authored by Quentin Gallouédec).
@lewtun changed the title from "Clean up README" to "Clean up README and remove openrlbenchmark dependency" on Sep 19, 2024.
@lewtun (author) commented on Sep 20, 2024:

OK to merge this @qgallouedec ?

@qgallouedec (Member) commented:

A few simplifications and we're good to go

@lewtun merged commit 92eea1f into main on Sep 23, 2024 (2 checks passed).
@lewtun deleted the clean-up-readme branch on September 23, 2024.
@qgallouedec mentioned this pull request on Sep 23, 2024.