Put back open in colab markers (#14684)
sgugger authored Dec 9, 2021
1 parent 3bc7d70 · commit bab1556
Showing 9 changed files with 18 additions and 0 deletions.
2 changes: 2 additions & 0 deletions docs/source/benchmarks.mdx
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.

# Benchmarks

+[[open-in-colab]]
+
Let's take a look at how 🤗 Transformer models can be benchmarked, best practices, and already available benchmarks.

A notebook explaining in more detail how to benchmark 🤗 Transformer models can be found [here](https://github.com/huggingface/transformers/tree/master/notebooks/05-benchmark.ipynb).
2 changes: 2 additions & 0 deletions docs/source/custom_datasets.mdx
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.

# How to fine-tune a model for common downstream tasks

+[[open-in-colab]]
+
This guide will show you how to fine-tune 🤗 Transformers models for common downstream tasks. You will use the 🤗
Datasets library to quickly load and preprocess the datasets, getting them ready for training with PyTorch and
TensorFlow.
2 changes: 2 additions & 0 deletions docs/source/multilingual.mdx
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.

# Multi-lingual models

+[[open-in-colab]]
+
Most of the models available in this library are mono-lingual models (English, Chinese and German). A few multi-lingual
models are available and have a different mechanism than mono-lingual models. This page details the usage of these
models.
2 changes: 2 additions & 0 deletions docs/source/perplexity.mdx
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.

# Perplexity of fixed-length models

+[[open-in-colab]]
+
Perplexity (PPL) is one of the most common metrics for evaluating language models. Before diving in, we should note
that the metric applies specifically to classical language models (sometimes called autoregressive or causal language
models) and is not well defined for masked language models like BERT (see [summary of the models](model_summary)).
2 changes: 2 additions & 0 deletions docs/source/preprocessing.mdx
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.

# Preprocessing data

+[[open-in-colab]]
+
In this tutorial, we'll explore how to preprocess your data using 🤗 Transformers. The main tool for this is what we
call a [tokenizer](main_classes/tokenizer). You can build one using the tokenizer class associated to the model
you would like to use, or directly with the [`AutoTokenizer`] class.
2 changes: 2 additions & 0 deletions docs/source/quicktour.mdx
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.

# Quick tour

+[[open-in-colab]]
+
Let's have a quick look at the 🤗 Transformers library features. The library downloads pretrained models for Natural
Language Understanding (NLU) tasks, such as analyzing the sentiment of a text, and Natural Language Generation (NLG),
such as completing a prompt with new text or translating in another language.
2 changes: 2 additions & 0 deletions docs/source/task_summary.mdx
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.

# Summary of the tasks

+[[open-in-colab]]
+
This page shows the most frequent use-cases when using the library. The models available allow for many different
configurations and a great versatility in use-cases. The most simple ones are presented here, showcasing usage for
tasks such as question answering, sequence classification, named entity recognition and others.
2 changes: 2 additions & 0 deletions docs/source/tokenizer_summary.mdx
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.

# Summary of the tokenizers

+[[open-in-colab]]
+
On this page, we will have a closer look at tokenization.

<Youtube id="VFp38yj8h3A"/>
2 changes: 2 additions & 0 deletions docs/source/training.mdx
@@ -12,6 +12,8 @@ specific language governing permissions and limitations under the License.

# Fine-tuning a pretrained model

+[[open-in-colab]]
+
In this tutorial, we will show you how to fine-tune a pretrained model from the Transformers library. In TensorFlow,
models can be directly trained using Keras and the `fit` method. In PyTorch, there is no generic training loop so
the 🤗 Transformers library provides an API with the class [`Trainer`] to let you fine-tune or train
