
Fix the computation of KL divergence loss in Nash MD #2277

Merged
merged 1 commit into huggingface:main on Oct 25, 2024

Conversation

d-tiapkin
Contributor

What does this PR do?

The initial version of the KL divergence loss in the NashMD trainer had a bug: the gradient of the KL estimate did not correspond to the actual stochastic gradient of the KL divergence between the current policy and the reference policy.

This PR fixes the issue by computing a REINFORCE-like estimate that takes into account that the completions were generated by the current model.
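To make the distinction concrete, here is a minimal PyTorch sketch of a REINFORCE-style (score-function) surrogate for the KL term. It is illustrative rather than the trainer's actual code; the function name `kl_loss_reinforce` and the argument names `model_logprobs` / `ref_logprobs` are assumed for the example.

```python
import torch


def kl_loss_reinforce(model_logprobs: torch.Tensor, ref_logprobs: torch.Tensor) -> torch.Tensor:
    """Single-sample surrogate whose gradient matches the gradient of
    KL(pi_theta || pi_ref) when the completions were sampled from pi_theta.

    Both inputs are summed log-probabilities of the sampled completions,
    shape (batch_size,); `model_logprobs` keeps its autograd graph.
    """
    # Plug-in KL value for each sampled completion, with gradients stopped:
    # it only acts as a weight, not as a path for backpropagation.
    kl_value = (model_logprobs - ref_logprobs).detach()

    # Score-function (REINFORCE) term: its gradient is
    # E_{y ~ pi_theta}[(log pi_theta(y) - log pi_ref(y)) * grad log pi_theta(y)],
    # i.e. the stochastic gradient of the KL divergence itself.
    return (model_logprobs * kl_value).mean()
```

By contrast, differentiating the naive estimate `(model_logprobs - ref_logprobs).mean()` on samples drawn from the current policy only yields the score function `grad log pi_theta(y)`, which is zero in expectation, so it does not provide the intended KL gradient.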

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline, Pull Request section?
  • Was this discussed/approved via a GitHub issue? Please add a link to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the documentation guidelines.
  • Did you write any new necessary tests?

Who can review?

@kashif

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

@kashif kashif added the 🐛 bug Something isn't working label Oct 25, 2024
@qgallouedec qgallouedec merged commit ea7a1be into huggingface:main Oct 25, 2024
8 of 9 checks passed