GRPO with tool calling #2712

Open
3 tasks
accupham opened this issue Jan 31, 2025 · 0 comments
Labels
🏋 GRPO (Related to GRPO) · 🏋 Reward (Related to Reward modelling)

Comments

@accupham

Method description

Would it be possible to implement an RL environment that does multi-turn tool calling inside the GRPO training loop? Right now, generation seems to be a single one-shot inference whose output is passed straight to the custom reward function. I'd like to have a multi-turn interaction, via tool-calling steps, before the final result is passed to the reward function.

Online tool calling would enable RL over a simulation, with feedback from the environment folded into the rollout. vLLM seems to support all manner of tool calling. Could this be added as part of GRPO?
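
To make the request concrete, here is a minimal, self-contained sketch of the kind of rollout loop I have in mind. All of the names (`fake_generate`, `run_tool`, `parse_tool_call`, `multi_turn_rollout`, `reward_fn`) are made up for illustration; they are not TRL or vLLM APIs.

```python
import json
import re

# Toy sketch only: the point is the control flow
# generate -> tool call -> tool result -> generate -> ... -> reward on the final transcript.

def run_tool(name, args):
    """Hypothetical environment step: execute a tool and return its output."""
    if name == "add":
        return str(args["a"] + args["b"])
    return "unknown tool"

def fake_generate(messages):
    """Stand-in for the policy model: emits a tool call on the first turn,
    then answers using the tool result."""
    last = messages[-1]
    if last["role"] == "user":
        return '<tool>{"name": "add", "args": {"a": 2, "b": 3}}</tool>'
    return f"The answer is {last['content']}."

def parse_tool_call(text):
    """Extract a JSON tool call from the model output, if any."""
    m = re.search(r"<tool>(.*?)</tool>", text, re.S)
    return json.loads(m.group(1)) if m else None

def multi_turn_rollout(prompt, max_turns=4):
    """Interleave model turns with tool/environment feedback and return the
    full transcript; only this final transcript would be scored."""
    messages = [{"role": "user", "content": prompt}]
    for _ in range(max_turns):
        completion = fake_generate(messages)
        messages.append({"role": "assistant", "content": completion})
        call = parse_tool_call(completion)
        if call is None:  # no tool requested -> treat as the final answer
            break
        result = run_tool(call["name"], call["args"])
        messages.append({"role": "tool", "content": result})
    return messages

def reward_fn(transcripts):
    """Reward the final answer of each rollout (here: did it say 5?)."""
    return [1.0 if "5" in t[-1]["content"] else 0.0 for t in transcripts]

if __name__ == "__main__":
    rollout = multi_turn_rollout("What is 2 + 3? Use the add tool.")
    print(reward_fn([rollout]))  # -> [1.0]
```

In a GRPO setup, each of the G samples per prompt would run a rollout like this against the environment, and the reward function would see the completed multi-turn transcript rather than a single one-shot completion.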

@qgallouedec

Open source status

  • The method implementation is available
  • The model weights are available
  • The training datasets are available

Provide useful links for the implementation

No response

github-actions bot added the 🏋 GRPO and 🏋 Reward labels on Jan 31, 2025