Commit 2236109

Merge branch 'main' into Issue_4381

2 parents 7b3180f + 2a64080 commit 2236109

File tree

4 files changed: +906 -0 lines changed

docs/source/index.md

Lines changed: 12 additions & 0 deletions
@@ -136,3 +136,15 @@ The documentation is organized into the following sections:
</a>
</div>
</div>

## Talks

<div class="mt-10">
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/Fine%20tuning%20with%20TRL%20(Oct%2025).pdf">
<img src="https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/Fine%20tuning%20with%20TRL%20(Oct%2025).png" alt="thumbnail" class="mt-0">
<p class="text-gray-500 text-sm">Talk given on October 30, 2025</p>
<p class="text-gray-700">Fine tuning with TRL</p>
</a>
</div>
</div>

docs/source/openenv.md

Lines changed: 215 additions & 0 deletions
@@ -156,3 +156,218 @@ Below is the reward curve from training:
<iframe src="https://trl-lib-trackio.hf.space?project=openenv&metrics=train/rewards/reward_from_env/mean&runs=qgallouedec-1761202871&sidebar=hidden&navbar=hidden" style="width:600px; height:500px; border:0;"></iframe>

To learn more about how to create custom environments, see the [OpenEnv documentation](https://github.com/meta-pytorch/OpenEnv/blob/main/src/envs/README.md).

## Advanced Example

Let's level this up a bit by training a model to interact with a more complex environment. We'll use the word-guessing game [Wordle](https://www.nytimes.com/games/wordle/index.html) from the `textarena` environment.

### The TextArena Environment

[TextArena](https://huggingface.co/papers/2504.11442) is an open-source collection of competitive text-based games designed to evaluate reasoning skills in LLMs using games like Wordle, Snake, Tic-Tac-Toe, and more. Research has shown that such games improve model performance on reasoning tasks.

![image of textarena](https://huggingface.co/datasets/trl-lib/documentation-images/resolve/main/text_arena_evals.png)

We will use the `textarena` environment to train a model to play Wordle. The environment is a simple text-based environment that lets the model interact with the game by making guesses and receiving feedback on them.

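
To make the interaction pattern concrete, here is a minimal sketch of the environment loop, assuming an already-connected `TextArenaEnv` Wordle instance and the `TextArenaAction` class used later in the full example; the guess format and generation step are placeholders, not the script's actual code:

```python
# Illustrative sketch of the TextArena interaction loop (assumptions noted below).
# `env` is assumed to be an already-connected TextArenaEnv Wordle instance, and
# `TextArenaAction` is the same action class used in the full example script.
result = env.reset()
observation = result.observation

while not result.done:
    # `observation.prompt` and `observation.messages` carry the game state that
    # the full example turns into a chat prompt for the model (omitted here).
    guess = "crane"  # placeholder: in practice the guess is generated by the model

    # Send the guess to the environment and read back reward and feedback.
    result = env.step(TextArenaAction(message=guess))
    observation = result.observation
    print("reward:", result.reward)
```
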
### Wordle

Wordle is a useful game to train a model on because it requires the model to reason about the word and the feedback provided by the environment. It is also a purely language-based game that requires no external tools or knowledge. Furthermore, we found that models from 1 billion parameters and up are able to improve at Wordle while needing only 8 tokens to generate a guess, which makes the game a good benchmark for experimenting with Reinforcement Learning environments without significant compute requirements.

> [!NOTE] How does Wordle work?
> Wordle is a word-guessing game where the player has to guess a 5-letter word. The player can make 6 guesses, and for each guess the environment provides feedback on its correctness. The player wins if they guess the word in 6 guesses or fewer. The game challenges the model to generate words that are likely to be correct and to learn from the feedback provided by the environment.
>
> For example, if the Wordle environment returns the following feedback:
>
> ```
> G U E S S
> X G Y X X
> ```
> The model has guessed the word "GUESS", and the environment has provided feedback as the letters X, G, and Y, which correspond to the colors blank, green, and yellow in the original game. From this feedback, the model should learn that the guess "GUESS" is incorrect, that the letter "E" is in the word but in the wrong position, and that the letter "U" is correct and in the correct position.

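
For illustration, here is a minimal sketch of how such a feedback row could be turned into green and yellow counts. This is an assumption about what a helper like `extract_feedback_counts` (used later in the rollout) might do, not the exact implementation from the example script:

```python
def count_feedback_letters(feedback_row: str) -> tuple[int, int]:
    """Count green (G) and yellow (Y) markers in a Wordle feedback row.

    Illustrative sketch only; the example script's `extract_feedback_counts`
    helper may parse the environment's feedback differently.
    """
    letters = feedback_row.split()
    green_count = sum(1 for letter in letters if letter == "G")
    yellow_count = sum(1 for letter in letters if letter == "Y")
    return green_count, yellow_count


# "X G Y X X" -> one green (U in position 2), one yellow (E in the word)
print(count_feedback_letters("X G Y X X"))  # (1, 1)
```

These per-guess counts are exactly the kind of dense signal the custom rewards below are built from.
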
187+
In the TextArena environment, reward is only given when the model wins the game: the reward is 1.0 if the model wins and 0.0 otherwise. On its own this is a sparse and inefficient training signal, so we have added a number of custom reward functions to the script to help the model learn to play the game. The extensible nature of `reward_funcs` and `rollout_func` allows you to add any custom reward function you want to the script.

### Rollout Function

The rollout function runs one full Wordle episode, prompting the model for a guess each turn and capturing both environment rewards and auxiliary signals such as letter coverage and repetition penalties.

```python
def rollout_once(
    env: TextArenaEnv,
    tokenizer: AutoTokenizer,
    args: GRPOConfig,
    dataset_prompt: str,
    cli_args: argparse.Namespace,
    system_prompt: str,
) -> dict[str, list]:
    # Helpers such as make_user_prompt, request_vllm_completion, extract_guess,
    # extract_wordle_feedback, extract_feedback_counts, and scale_repetition_score
    # are defined elsewhere in the example script.
    result = env.reset()
    observation = result.observation

    prompt_ids: list[int] = []
    completion_ids: list[int] = []
    logprobs: list[float] = []
    raw_rewards: list[float] = []
    green_scores: list[float] = []
    yellow_scores: list[float] = []
    repetition_scores: list[float] = []
    correct_scores: list[float] = []
    guess_counts: dict[str, int] = {}

    for _turn in range(cli_args.max_turns):
        # when the game is over, the environment returns done=True
        if result.done:
            break

        # set up the prompt for the model
        base_prompt = observation.prompt or dataset_prompt
        user_prompt = make_user_prompt(base_prompt, observation.messages)
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ]
        prompt_text = tokenizer.apply_chat_template(
            messages,
            add_generation_prompt=True,
            tokenize=False,
            enable_thinking=False,
        )

        # generate the completion from the model using vLLM
        vllm_result = request_vllm_completion(
            prompt_text,
            args,
            endpoint=cli_args.vllm_endpoint,
            timeout=cli_args.request_timeout,
            fallback=cli_args,
        )
        prompt_ids.extend(vllm_result["prompt_ids"])
        completion_ids.extend(vllm_result["completion_ids"])
        logprobs.extend(vllm_result["logprobs"])
        completion_text = vllm_result.get("text") or tokenizer.decode(
            vllm_result["completion_ids"], skip_special_tokens=True
        )
        # extract the guess from the completion
        guess = extract_guess(completion_text)

        # step the environment with the guess
        result = env.step(TextArenaAction(message=guess))
        raw_rewards.append(float(result.reward or 0.0))
        observation = result.observation
        correct_score = float(result.reward or 0.0)
        feedback = extract_wordle_feedback(observation)

        # update guess counts (use .get so a first-time guess does not raise a KeyError)
        previous_occurrences = guess_counts.get(guess, 0)
        repetition_score = scale_repetition_score(previous_occurrences, len(guess_counts))
        guess_counts[guess] = previous_occurrences + 1

        # calculate custom reward signals from the feedback
        if not feedback:
            green_score = 0.0
            yellow_score = 0.0
        else:
            green_count, yellow_count = extract_feedback_counts(feedback)
            green_score = green_count / 5.0
            yellow_score = yellow_count / 5.0

        repetition_scores.append(repetition_score)
        green_scores.append(green_score)
        yellow_scores.append(yellow_score)
        correct_scores.append(correct_score)

    correct_reward_value = correct_scores[-1] if correct_scores else (raw_rewards[-1] if raw_rewards else 0.0)

    return {
        "prompt_ids": prompt_ids,
        "completion_ids": completion_ids,
        "logprobs": logprobs,
        "raw_rewards": raw_rewards,
        "correct_reward": correct_reward_value,
        "green_reward": green_scores[-1] if green_scores else 0.0,
        "yellow_reward": yellow_scores[-1] if yellow_scores else 0.0,
        "repetition_reward": repetition_scores[-1] if repetition_scores else 0.0,
    }
```
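
The rollout relies on helper functions from the example script, such as `extract_guess`, `extract_wordle_feedback`, and `scale_repetition_score`. To illustrate the repetition-penalty idea (an assumption, not the script's exact implementation), such a helper could look like this:

```python
def scale_repetition_score_sketch(previous_occurrences: int, distinct_guesses: int) -> float:
    """Illustrative sketch of a repetition penalty (assumption, not the script's exact code).

    Returns 0.0 for a first-time guess and an increasingly negative value the
    more often the same guess has already been made, scaled by how many
    distinct guesses exist so far.
    """
    if previous_occurrences == 0:
        return 0.0
    return -previous_occurrences / max(distinct_guesses, 1)


print(scale_repetition_score_sketch(0, 1))  # 0.0   - new guess, no penalty
print(scale_repetition_score_sketch(2, 3))  # -0.67 - repeated guess, penalized
```
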
The environment's own reward signal is based on completing the game. We found that most models struggle to ever win, so we added custom reward functions to the script that let the model improve incrementally: at first it learns to cover new letters and avoid repeating guesses, and as it improves it learns to win the game.

### Reward Functions

We log four reward streams that encourage the model to solve the puzzle, cover new letters, and avoid repeating guesses:

- `reward_correct`: final win/loss signal from the environment.
- `reward_greens`: density of green letters in the last feedback.
- `reward_yellows`: density of yellow letters in the last feedback.
- `reward_repetition`: penalty for guessing the same token multiple times.

```python
from typing import Dict, List, Optional


def reward_correct(completions: List[str], **kwargs: Optional[Dict]) -> List[float]:
    rewards = kwargs.get("correct_reward") if kwargs else None
    return [float(r) for r in rewards] if rewards is not None else [0.0] * len(completions)


def reward_greens(completions: List[str], **kwargs: Optional[Dict]) -> List[float]:
    rewards = kwargs.get("green_reward") if kwargs else None
    return [float(r) for r in rewards] if rewards is not None else [0.0] * len(completions)


def reward_yellows(completions: List[str], **kwargs: Optional[Dict]) -> List[float]:
    rewards = kwargs.get("yellow_reward") if kwargs else None
    return [float(r) for r in rewards] if rewards is not None else [0.0] * len(completions)


def reward_repetition(completions: List[str], **kwargs: Optional[Dict]) -> List[float]:
    rewards = kwargs.get("repetition_reward") if kwargs else None
    return [float(r) for r in rewards] if rewards is not None else [0.0] * len(completions)
```
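
Because `reward_funcs` is just a list of callables, you can extend this set without touching the trainer. As an illustrative example (not part of the original script), a reward that checks whether the completion contains a 5-letter guess could look like the following, and would simply be appended to the `reward_funcs` list passed to `GRPOTrainer` in the next section:

```python
import re
from typing import Dict, List, Optional


def reward_valid_guess(completions: List[str], **kwargs: Optional[Dict]) -> List[float]:
    """Hypothetical extra reward: 1.0 if the completion contains a 5-letter word, else 0.0."""
    return [1.0 if re.search(r"\b[A-Za-z]{5}\b", completion) else 0.0 for completion in completions]
```
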
### Training the Model

The training script wires the custom rollout and rewards into `GRPOTrainer`. The CLI exposes the configuration used during development as defaults, so you can override endpoints or hyperparameters at launch time.

```python
parser = argparse.ArgumentParser()
# ... add CLI arguments with sensible defaults ...
cli_args = parser.parse_args()

trainer = GRPOTrainer(
    model=cli_args.model_id,
    processing_class=tokenizer,
    reward_funcs=[
        reward_correct,
        reward_greens,
        reward_yellows,
        reward_repetition,
    ],
    train_dataset=dataset,
    args=grpo_config,
    # `rollout_func` (defined in the example script) runs the Wordle rollout for each prompt
    rollout_func=lambda prompts, args, processing_class: rollout_func(
        env=env,
        tokenizer=tokenizer,
        prompts=prompts,
        args=args,
        cli_args=cli_args,
        system_prompt=system_prompt,
    ),
)
trainer.train()
```
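
The snippet above assumes that `tokenizer`, `dataset`, `grpo_config`, `env`, and `system_prompt` were created earlier in the script. A hedged sketch of roughly what that setup could look like is shown below; the exact `GRPOConfig` fields, dataset, and environment construction in the real script may differ:

```python
from datasets import Dataset
from transformers import AutoTokenizer
from trl import GRPOConfig

# Illustrative setup only (assumption: the real example script may differ).
tokenizer = AutoTokenizer.from_pretrained(cli_args.model_id)

# A minimal prompt dataset; the script builds its own Wordle prompts.
dataset = Dataset.from_dict({"prompt": ["Play Wordle and output your 5-letter guess."] * 64})

grpo_config = GRPOConfig(
    output_dir="wordle-grpo",
    use_vllm=True,            # generation is served by the external vLLM server
    vllm_mode="server",
    num_generations=8,
    max_completion_length=8,  # a guess only needs a handful of tokens
)

# `env` (a TextArenaEnv Wordle instance) and `system_prompt` are also created here;
# see the OpenEnv documentation and the full example script for the details.
```
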
### Running the Example

The example requires two GPUs: one for the vLLM inference server and one for training.

```bash
# Terminal 1: Start vLLM inference server
CUDA_VISIBLE_DEVICES=0 trl vllm-serve --model Qwen/Qwen2.5-0.5B-Instruct --host 0.0.0.0 --port 8000

# Terminal 2: Run GRPO training with OpenEnv
CUDA_VISIBLE_DEVICES=1 python examples/scripts/openenv/wordle.py
```
### Results

The resulting model improves its performance on the game, both by reducing the number of repeated guesses and by increasing the number of correct guesses. However, the Qwen3-1.7B model we trained is not able to consistently win the game. The following reward curve shows the coverage reward, i.e. how well the model's guesses cover the correct yellow (Y) and green (G) letters.

<iframe src="https://burtenshaw-wordle-grpo.hf.space/?project=group-Qwen-Qwen3-17B&metrics=train/rewards/reward_coverage/mean&runs=run-2025-10-26_09-39-49&sidebar=hidden&navbar=hidden" style="width:600px; height:500px; border:0;"></iframe>

We also experimented with larger models like `gpt-oss-20b` and found that they were able to consistently win the game. However, training them requires significantly more compute. Why not try this out yourself?
