More doc nits (#1611)
SalmanMohammadi authored Sep 17, 2024
1 parent 60a7e3d commit eb2ba94
Showing 10 changed files with 27 additions and 29 deletions.
2 changes: 1 addition & 1 deletion docs/source/api_ref_rlhf.rst
@@ -4,7 +4,7 @@ torchtune.rlhf

.. currentmodule:: torchtune.rlhf

-Components and losses for RLHF algorithms like PPO and DPO:
+Components and losses for RLHF algorithms like PPO and DPO.

.. autosummary::
:toctree: generated/
3 changes: 0 additions & 3 deletions docs/source/deep_dives/recipe_deepdive.rst
@@ -14,9 +14,6 @@ This deep-dive will walk you through the design of training-recipes in torchtune
* What are the core components that make up a recipe?
* How should I structure a new recipe?
-
-
-

What are Recipes?
-----------------
Recipes are the primary entry points for torchtune users. These can be thought of
as "targeted" end-to-end pipelines for training and optionally evaluating LLMs.
Each recipe implements a training method (eg: full fine-tuning) with a set of meaningful
4 changes: 2 additions & 2 deletions docs/source/index.rst
@@ -123,12 +123,12 @@ torchtune tutorials.
:hidden:

tutorials/llama3
+tutorials/chat
tutorials/lora_finetune
tutorials/qlora_finetune
tutorials/qat_finetune
tutorials/e2e_flow
tutorials/datasets
-tutorials/chat
tutorials/memory_optimizations

.. toctree::
@@ -138,9 +138,9 @@ torchtune tutorials.
:hidden:

deep_dives/checkpointer
+deep_dives/comet_logging
deep_dives/configs
deep_dives/recipe_deepdive
-deep_dives/comet_logging
deep_dives/wandb_logging

.. toctree::
9 changes: 6 additions & 3 deletions docs/source/install.rst
@@ -9,7 +9,7 @@ Pre-requisites
--------------

torchtune requires PyTorch, so please install for your proper host and environment
-using the `Start Locally <https://pytorch.org/get-started/locally/>`_ page. You should also install
+using the `"Start Locally" <https://pytorch.org/get-started/locally/>`_ page. You should also install
torchvision (for multimodal LLMs) and torchao (for quantization APIs). You can install either stable or
nightly versions with the following commands:

@@ -18,8 +18,8 @@ nightly versions with the following commands:
# Install stable version of PyTorch libraries using pip
pip install torch torchvision torchao
-# Nightly install for latest features
-pip install --pre torch torchvision torchao --index-url https://download.pytorch.org/whl/nightly/cu121
+# Or nightly install for latest features
+pip install --pre torch torchvision torchao --index-url https://download.pytorch.org/whl/nightly/cu121 # full options are cpu/cu118/cu121/cu124
Install via PyPI
@@ -65,6 +65,9 @@ you can also install the package locally with the following command.
cd torchtune
pip install -e .
+
+# or for a developer installation
+pip install -e .["dev"]
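
A quick sanity check after installing (a minimal sketch, assuming the install placed the ``tune`` console script on your PATH):

.. code-block:: bash

    # Confirm the CLI is available
    tune --help
    # List all built-in recipes and their configs
    tune ls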
|
Install nightly build
15 changes: 7 additions & 8 deletions docs/source/overview.rst
@@ -23,16 +23,16 @@ torchtune provides:
- Interoperability with popular model zoos through checkpoint-conversion utilities
- Training recipes for a variety of fine-tuning techniques
- Integration with `Hugging Face Datasets <https://huggingface.co/docs/datasets/en/index>`_ for training and `EleutherAI's Eval Harness <https://github.com/EleutherAI/lm-evaluation-harness>`_ for evaluation
-- Support for distributed training using `FSDP <https://pytorch.org/docs/stable/fsdp.html>`_
+- Support for distributed training using `FSDP2 <https://github.com/pytorch/torchtitan/blob/main/docs/fsdp.md>`_
- YAML configs for easily configuring training runs

Excited? To get started, check out some of our tutorials, including:

-- our :ref:`quickstart guide <finetune_llama_label>` to finetune your first LLM using torchtune.
-- our :ref:`LoRA tutorial <lora_finetune_label>` to learn about parameter-efficient finetuning with torchtune.
-- our :ref:`QLoRA tutorial <qlora_finetune_label>` to attain maximal memory efficiency with torchtune.
+- Our :ref:`quickstart guide <finetune_llama_label>` to finetune your first LLM using torchtune.
+- Our :ref:`LoRA tutorial <lora_finetune_label>` to learn about parameter-efficient finetuning with torchtune.
+- Our :ref:`QLoRA tutorial <qlora_finetune_label>` to attain maximal memory efficiency with torchtune.

-Eager for more? Check out our :ref:`recipes index<recipes_overview_label>` to see all the fine-tuning techniques we support.
+You can check out our :ref:`recipes overview<recipes_overview_label>` to see all the fine-tuning techniques we support.

Key Concepts
------------
@@ -46,10 +46,9 @@ See the ":ref:`All About Configs<config_tutorial_label>`" deep-dive for more inf
**Recipes.** Recipes can be thought of
as targeted end-to-end pipelines for training and optionally evaluating LLMs.
Each recipe implements a training method (eg: full fine-tuning) with a set of meaningful
-features (eg: FSDP + Activation Checkpointing + Gradient Accumulation + Reduced Precision training)
-applied to a given model family (eg: Llama2). See the ":ref:`What Are Recipes?<recipe_deepdive>`" deep-dive for more information.
+features (eg: FSDP2 + Activation Checkpointing + Gradient Accumulation + Reduced Precision training)
+applied to a given model family (eg: Llama3.1). See the ":ref:`What Are Recipes?<recipe_deepdive>`" deep-dive for more information.
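
To make the configs-plus-recipes idea concrete, here is a sketch of how the two combine on the command line. The recipe and config names below are illustrative built-ins, and the trailing key=value override assumes torchtune's standard config-override syntax:

.. code-block:: bash

    # Run a built-in recipe with a built-in config...
    tune run lora_finetune_single_device --config llama2/7B_lora_single_device
    # ...overriding an individual config field from the CLI
    tune run lora_finetune_single_device --config llama2/7B_lora_single_device batch_size=8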

|
-
.. _design_principles_label:

2 changes: 1 addition & 1 deletion docs/source/tutorials/chat.rst
@@ -1,7 +1,7 @@
.. _chat_tutorial_label:

=================================
-Fine-tuning Llama3 with Chat Data
+Fine-Tuning Llama3 with Chat Data
=================================

Llama3 Instruct introduced a new prompt template for fine-tuning with chat data. In this tutorial,
11 changes: 5 additions & 6 deletions docs/source/tutorials/first_finetune_tutorial.rst
@@ -66,11 +66,10 @@ Each recipe consists of three components:

.. note::

-Check out our :ref:`recipes index<recipes_overview_label>` to see all the fine-tuning techniques we support.
To learn more about the concept of "recipes", check out our technical deep-dive: :ref:`recipe_deepdive`.

torchtune provides built-in recipes for finetuning on single device, on multiple devices with `FSDP <https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/>`_,
-using memory efficient techniques like `LoRA <https://arxiv.org/abs/2106.09685>`_, and more! Check out all our built-in recipes in our :ref:`recipe index<recipes_overview_label>`. You can also utilize the
+using memory efficient techniques like `LoRA <https://arxiv.org/abs/2106.09685>`_, and more! Check out all our built-in recipes in our :ref:`recipes overview<recipes_overview_label>`. You can also utilize the
:code:`tune ls` command to print out all recipes and corresponding configs.

.. code-block:: bash
@@ -88,7 +87,7 @@
...
For the purposes of this tutorial, you'll be using the recipe for finetuning a Llama2 model using `LoRA <https://arxiv.org/abs/2106.09685>`_ on
-a single device. For a more in-depth discussion on LoRA in torchtune, you can see the complete :ref:`lora_finetune_label` tutorial.
+a single device. For a more in-depth discussion on LoRA in torchtune, you can see the complete ":ref:`lora_finetune_label`" tutorial.

.. note::

@@ -132,15 +131,15 @@ changing the LoRA rank, update batch size, etc.

.. note::

-Check out :ref:`config_tutorial_label` for a deeper dive on configs in torchtune.
+Check out ":ref:`config_tutorial_label`" for a deeper dive on configs in torchtune.

|
Training a model
----------------
Now that you have a model in the proper format and a config that suits your needs, let's get training!

-Just like all the other steps, you will be using the :ref:`tune <cli_label>` CLI tool to launch your finetuning run.
+Just like all the other steps, you will be using the tune CLI tool to launch your finetuning run.
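
A typical launch, sketched here under the assumption that you are using the LoRA single-device recipe and config referenced earlier in this tutorial:

.. code-block:: bash

    # Launch the finetuning run with the tune CLI
    tune run lora_finetune_single_device --config llama2/7B_lora_single_device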

.. code-block:: bash
@@ -165,4 +164,4 @@ Next steps
----------

Now that you have trained your model and set up your environment, let's take a look at what we can do with our
-new model by checking out the :ref:`E2E Workflow Tutorial<e2e_flow>`.
+new model by checking out the ":ref:`E2E Workflow Tutorial<e2e_flow>`".
6 changes: 3 additions & 3 deletions docs/source/tutorials/lora_finetune.rst
@@ -1,8 +1,8 @@
.. _lora_finetune_label:

-===========================
-Finetuning Llama2 with LoRA
-===========================
+============================
+Fine-Tuning Llama2 with LoRA
+============================

This guide will teach you about `LoRA <https://arxiv.org/abs/2106.09685>`_, a parameter-efficient finetuning technique,
and show you how you can use torchtune to finetune a Llama2 model with LoRA.
2 changes: 1 addition & 1 deletion docs/source/tutorials/qat_finetune.rst
@@ -1,7 +1,7 @@
.. _qat_finetune_label:

===========================
-Finetuning Llama3 with QAT
+Fine-Tuning Llama3 with QAT
===========================

Quantization-Aware Training (QAT) is a common technique for users to quantize their
2 changes: 1 addition & 1 deletion docs/source/tutorials/qlora_finetune.rst
@@ -1,7 +1,7 @@
.. _qlora_finetune_label:

=============================
-Finetuning Llama2 with QLoRA
+Fine-Tuning Llama2 with QLoRA
=============================

In this tutorial, we'll learn about `QLoRA <https://arxiv.org/abs/2305.14314>`_, an enhancement on top of
