Update philosophy to include other preprocessing classes #18550

Merged (2 commits) on Aug 10, 2022
35 changes: 17 additions & 18 deletions docs/source/en/philosophy.mdx
@@ -14,27 +14,27 @@ specific language governing permissions and limitations under the License.

🤗 Transformers is an opinionated library built for:

- NLP researchers and educators seeking to use/study/extend large-scale transformers models
- hands-on practitioners who want to fine-tune those models and/or serve them in production
- engineers who just want to download a pretrained model and use it to solve a given NLP task.
- machine learning researchers and educators seeking to use, study or extend large-scale transformers models.
- hands-on practitioners who want to fine-tune those models and/or serve them in production.
- engineers who just want to download a pretrained model and use it to solve a given machine learning task.

The library was designed with two strong goals in mind:

- Be as easy and fast to use as possible:

- We strongly limited the number of user-facing abstractions to learn; in fact, there are almost no abstractions,
just three standard classes required to use each model: [configuration](main_classes/configuration),
[models](main_classes/model) and [tokenizer](main_classes/tokenizer).
[models](main_classes/model) and a preprocessing class ([tokenizer](main_classes/tokenizer) for NLP, [feature extractor](main_classes/feature_extractor) for vision and audio, and [processor](main_classes/processors) for multimodal inputs).
- All of these classes can be initialized in a simple and unified way from pretrained instances by using a common
`from_pretrained()` instantiation method which will take care of downloading (if needed), caching and
loading the related class instance and associated data (configurations' hyper-parameters, tokenizers' vocabulary,
and models' weights) from a pretrained checkpoint provided on [Hugging Face Hub](https://huggingface.co/models) or your own saved checkpoint.
- On top of those three base classes, the library provides two APIs: [`pipeline`] for quickly
using a model (plus its associated tokenizer and configuration) on a given task and
[`Trainer`]/`Keras.fit` to quickly train or fine-tune a given model.
using a model (plus its associated preprocessing class and configuration) on a given task and
[`Trainer`] to quickly train or fine-tune a given model.
- As a consequence, this library is NOT a modular toolbox of building blocks for neural nets. If you want to
extend/build-upon the library, just use regular Python/PyTorch/TensorFlow/Keras modules and inherit from the base
classes of the library to reuse functionalities like model loading/saving.
extend or build upon the library, just use regular Python, PyTorch, TensorFlow, or Keras modules and inherit from the base
classes of the library to reuse functionalities like model loading and saving. If you'd like to learn more about our coding philosophy, check out our [Repeat Yourself](https://huggingface.co/blog/transformers-design-philosophy) blog post.
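
As a rough sketch of the workflow described above (the checkpoint names are only examples, and the snippet assumes `transformers` and PyTorch are installed), loading a model with its preprocessing class and using `pipeline` looks roughly like this:

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# The same `from_pretrained()` call downloads (if needed), caches, and loads each class.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased")

# `pipeline` bundles a model with its configuration and preprocessing class for a given task.
classifier = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")
print(classifier("Transformers keeps its abstractions to a minimum."))
```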

- Provide state-of-the-art models with performances as close as possible to the original models:

@@ -48,11 +48,11 @@ A few other goals:
- Expose the models' internals as consistently as possible:

- We give access, using a single API, to the full hidden-states and attention weights.
- Tokenizer and base model's API are standardized to easily switch between models.
- The preprocessing classes and base model APIs are standardized to easily switch between models.
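
As an illustrative sketch of that single API (the checkpoint name is only an example), hidden states and attention weights can be requested directly on the forward pass:

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Hello world", return_tensors="pt")
# Ask the model to return all hidden states and attention weights.
outputs = model(**inputs, output_hidden_states=True, output_attentions=True)
print(len(outputs.hidden_states))   # one tensor per layer, plus the embedding output
print(outputs.attentions[0].shape)  # (batch, heads, seq_len, seq_len)
```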

- Incorporate a subjective selection of promising tools for fine-tuning/investigating these models:
- Incorporate a subjective selection of promising tools for fine-tuning and investigating these models:

- A simple/consistent way to add new tokens to the vocabulary and embeddings for fine-tuning.
- A simple and consistent way to add new tokens to the vocabulary and embeddings for fine-tuning.
- Simple ways to mask and prune transformer heads.
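
A minimal sketch of those tools, assuming a BERT-style checkpoint (the names are illustrative only):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Add new tokens to the vocabulary and resize the embedding matrix to match.
tokenizer.add_tokens(["<domain_term>"])
model.resize_token_embeddings(len(tokenizer))

# Prune attention heads: here, heads 1 and 2 of layer 0 are removed.
model.prune_heads({0: [1, 2]})
```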

- Switch easily between PyTorch and TensorFlow 2.0, allowing training using one framework and inference using another.
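
For instance (a sketch that assumes both PyTorch and TensorFlow are installed, with the checkpoint name only an example), weights saved in one framework can be loaded into the other:

```python
from transformers import AutoModel, TFAutoModel

# Load a checkpoint into TensorFlow, converting PyTorch weights if needed.
tf_model = TFAutoModel.from_pretrained("bert-base-uncased", from_pt=True)

# The reverse direction works the same way with `from_tf=True`.
pt_model = AutoModel.from_pretrained("bert-base-uncased", from_tf=True)
```
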
@@ -61,20 +61,19 @@ A few other goals:

The library is built around three types of classes for each model:

- **Model classes** such as [`BertModel`], which are 30+ PyTorch models ([torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) or Keras models ([tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)) that work with the pretrained weights provided in the
- **Model classes** can be PyTorch models ([torch.nn.Module](https://pytorch.org/docs/stable/nn.html#torch.nn.Module)) or Keras models ([tf.keras.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model)) that work with the pretrained weights provided in the
library.
- **Configuration classes** such as [`BertConfig`], which store all the parameters required to build
- **Configuration classes** store all the parameters required to build
a model. You don't always need to instantiate these yourself. In particular, if you are using a pretrained model
without any modification, creating the model will automatically take care of instantiating the configuration (which
is part of the model).
- **Tokenizer classes** such as [`BertTokenizer`], which store the vocabulary for each model and
provide methods for encoding/decoding strings in a list of token embeddings indices to be fed to a model.
- **Preprocessing classes** convert the raw data into a format accepted by the model. A [tokenizer](main_classes/tokenizer) stores the vocabulary for each model and provides methods for encoding and decoding strings into a list of token embedding indices to be fed to a model. [Feature extractors](main_classes/feature_extractor) preprocess audio or vision inputs, and a [processor](main_classes/processors) handles multimodal inputs.
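
As a small sketch of what a preprocessing class does (the checkpoint name is only an example), a tokenizer round-trips between strings and token indices:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

ids = tokenizer.encode("Preprocessing turns raw text into token indices.")
print(ids)                    # list of integer token ids the model consumes
print(tokenizer.decode(ids))  # back to text, including added special tokens
```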

All these classes can be instantiated from pretrained instances and saved locally using two methods:

- `from_pretrained()` lets you instantiate a model/configuration/tokenizer from a pretrained version either
- `from_pretrained()` lets you instantiate a model, configuration, and preprocessing class from a pretrained version either
provided by the library itself (the supported models can be found on the [Model Hub](https://huggingface.co/models)) or
stored locally (or on a server) by the user,
- `save_pretrained()` lets you save a model/configuration/tokenizer locally so that it can be reloaded using
stored locally (or on a server) by the user.
- `save_pretrained()` lets you save a model, configuration, and preprocessing class locally so that it can be reloaded using
`from_pretrained()`.
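
A minimal sketch of this round trip, assuming a local directory of your choice (the path and checkpoint name are only examples):

```python
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

# Save the model (with its configuration) and the preprocessing class locally...
model.save_pretrained("./my-checkpoint")
tokenizer.save_pretrained("./my-checkpoint")

# ...and reload both later with `from_pretrained()`.
model = AutoModel.from_pretrained("./my-checkpoint")
tokenizer = AutoTokenizer.from_pretrained("./my-checkpoint")
```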