Commit 8380869

Community Tutorials design adaptation for videos (#4095)
1 parent 5139af3 commit 8380869

File tree

1 file changed: +13 −4 lines changed

docs/source/community_tutorials.md

Lines changed: 13 additions & 4 deletions
@@ -4,6 +4,8 @@ Community tutorials are made by active members of the Hugging Face community who
 
 ## Language Models
 
+### Tutorials
+
 | Task | Class | Description | Author | Tutorial | Colab |
 | --- | --- | --- | --- | --- | --- |
 | Reinforcement Learning | [`GRPOTrainer`] | Post training an LLM for reasoning with GRPO in TRL | [Sergio Paniego](https://huggingface.co/sergiopaniego) | [Link](https://huggingface.co/learn/cookbook/fine_tuning_llm_grpo_trl) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/cookbook/blob/main/notebooks/en/fine_tuning_llm_grpo_trl.ipynb) |
@@ -15,16 +17,21 @@ Community tutorials are made by active members of the Hugging Face community who
 | Preference Optimization | [`ORPOTrainer`] | Fine-tuning Llama 3 with ORPO combining instruction tuning and preference alignment | [Maxime Labonne](https://huggingface.co/mlabonne) | [Link](https://mlabonne.github.io/blog/posts/2024-04-19_Fine_tune_Llama_3_with_ORPO.html) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1eHNWg9gnaXErdAa8_mcvjMupbSS6rDvi) |
 | Instruction tuning | [`SFTTrainer`] | How to fine-tune open LLMs in 2025 with Hugging Face | [Philipp Schmid](https://huggingface.co/philschmid) | [Link](https://www.philschmid.de/fine-tune-llms-in-2025) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/philschmid/deep-learning-pytorch-huggingface/blob/main/training/fine-tune-llms-in-2025.ipynb) |
 
-<Youtube id="cnGyyM0vOes" />
 
-<Youtube id="jKdXv3BiLu0" />
+### Videos
+
+| Task | Title | Author | Video |
+| --- | --- | --- | --- |
+| Instruction tuning | Fine-tuning open AI models using Hugging Face TRL | [Wietse Venema](https://huggingface.co/wietsevenema) | [<img src="https://img.youtube.com/vi/cnGyyM0vOes/0.jpg">](https://youtu.be/cnGyyM0vOes) |
+| Instruction tuning | How to fine-tune a smol-LM with Hugging Face, TRL, and the smoltalk Dataset | [Mayurji](https://huggingface.co/iammayur) | [<img src="https://img.youtube.com/vi/jKdXv3BiLu0/0.jpg">](https://youtu.be/jKdXv3BiLu0) |
+
 
 <details>
-<summary>⚠️ Deprecated features notice (click to expand)</summary>
+<summary>⚠️ Deprecated features notice for "How to fine-tune a smol-LM with Hugging Face, TRL, and the smoltalk Dataset" (click to expand)</summary>
 
 <Tip warning={true}>
 
-The tutorial above uses two deprecated features:
+The tutorial uses two deprecated features:
 - `SFTTrainer(..., tokenizer=tokenizer)`: Use `SFTTrainer(..., processing_class=tokenizer)` instead, or simply omit it (it will be inferred from the model).
 - `setup_chat_format(model, tokenizer)`: Use `SFTConfig(..., chat_template_path="Qwen/Qwen3-0.6B")`, where `chat_template_path` specifies the model whose chat template you want to copy.
 
@@ -34,6 +41,8 @@ The tutorial above uses two deprecated features:
 
 ## Vision Language Models
 
+### Tutorials
+
 | Task | Class | Description | Author | Tutorial | Colab |
 | --- | --- | --- | --- | --- | --- |
 | Visual QA | [`SFTTrainer`] | Fine-tuning Qwen2-VL-7B for visual question answering on ChartQA dataset | [Sergio Paniego](https://huggingface.co/sergiopaniego) | [Link](https://huggingface.co/learn/cookbook/fine_tuning_vlm_trl) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/cookbook/blob/main/notebooks/en/fine_tuning_vlm_trl.ipynb) |
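The deprecation notice in the diff maps two old TRL call patterns to their replacements. As a minimal sketch of the first rename, the hypothetical helper below (not part of TRL) rewrites deprecated `SFTTrainer` keyword arguments; all argument values are placeholders:

```python
# Hypothetical helper illustrating the rename described in the notice:
# SFTTrainer(..., tokenizer=...) is deprecated in favor of
# SFTTrainer(..., processing_class=...). This is not a TRL API.
def migrate_sft_kwargs(kwargs):
    """Return a copy of the kwargs with `tokenizer` renamed to `processing_class`."""
    migrated = dict(kwargs)
    if "tokenizer" in migrated:
        migrated["processing_class"] = migrated.pop("tokenizer")
    return migrated

# The second deprecation, setup_chat_format(model, tokenizer), is replaced by
# configuration rather than a call, per the notice:
#   SFTConfig(..., chat_template_path="Qwen/Qwen3-0.6B")

old_kwargs = {"tokenizer": "my_tokenizer", "args": "my_config"}
print(migrate_sft_kwargs(old_kwargs))
# {'args': 'my_config', 'processing_class': 'my_tokenizer'}
```

Note that per the notice the `processing_class` argument can also simply be omitted, since TRL infers it from the model.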
