Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks. It provides reference implementations of various sequence-to-sequence models, including:
- Convolutional Neural Networks (CNN)
  - Dauphin et al. (2017): Language Modeling with Gated Convolutional Networks
  - Gehring et al. (2017): Convolutional Sequence to Sequence Learning
  - Edunov et al. (2018): Classical Structured Prediction Losses for Sequence to Sequence Learning
  - Fan et al. (2018): Hierarchical Neural Story Generation
  - New Schneider et al. (2019): wav2vec: Unsupervised Pre-training for Speech Recognition
- LightConv and DynamicConv models
  - Wu et al. (2019): Pay Less Attention with Lightweight and Dynamic Convolutions
- Long Short-Term Memory (LSTM) networks
- Transformer (self-attention) networks
  - Vaswani et al. (2017): Attention Is All You Need
  - Ott et al. (2018): Scaling Neural Machine Translation
  - Edunov et al. (2018): Understanding Back-Translation at Scale
  - New Baevski and Auli (2018): Adaptive Input Representations for Neural Language Modeling
  - New Shen et al. (2019): Mixture Models for Diverse Machine Translation: Tricks of the Trade
Fairseq features:
- multi-GPU (distributed) training on one machine or across multiple machines
- fast generation on both CPU and GPU with multiple search algorithms implemented:
  - beam search
  - Diverse Beam Search (Vijayakumar et al., 2016)
  - sampling (unconstrained and top-k; see the sketch after this list)
- large mini-batch training even on a single GPU via delayed updates (illustrated after this list)
- mixed precision training (trains faster with less GPU memory on NVIDIA tensor cores)
- extensible: easily register new models, criterions, tasks, optimizers and learning rate schedulers
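The top-k sampling option above can be illustrated with a short, self-contained PyTorch sketch. This is a generic illustration of the technique, not fairseq's internal implementation; the `top_k_sample` helper and the tensor shapes are made up for the example.

```python
import torch
import torch.nn.functional as F

def top_k_sample(logits, k=10, temperature=1.0):
    """Sample one token id per sequence from the k most likely tokens.

    logits: tensor of shape (batch, vocab_size) with unnormalized scores.
    Returns a tensor of shape (batch, 1) with sampled token ids.
    """
    logits = logits / temperature
    # Keep only the k highest-scoring tokens per sequence.
    topk_logits, topk_ids = logits.topk(k, dim=-1)     # (batch, k)
    probs = F.softmax(topk_logits, dim=-1)              # renormalize over the top-k
    choice = torch.multinomial(probs, num_samples=1)    # (batch, 1), index into the top-k
    return topk_ids.gather(-1, choice)                  # map back to vocabulary ids

# Example: sample from random "model" scores over a vocabulary of 1000 tokens.
fake_logits = torch.randn(2, 1000)
print(top_k_sample(fake_logits, k=10))
```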
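Similarly, the delayed-updates feature refers to gradient accumulation: gradients from several small mini-batches are summed before a single optimizer step, emulating a larger effective batch on one GPU. Below is a minimal sketch of the idea in plain PyTorch, not fairseq's actual training loop; the model, data, and `update_freq` value are placeholders.

```python
import torch

# Hypothetical setup: a tiny model and synthetic data to keep the sketch runnable.
model = torch.nn.Linear(16, 4)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
data_loader = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(32)]

update_freq = 4  # accumulate gradients over 4 mini-batches before each optimizer step

optimizer.zero_grad()
for i, (x, y) in enumerate(data_loader):
    loss = criterion(model(x), y)
    # Scale the loss so the accumulated gradient matches the average over the larger batch.
    (loss / update_freq).backward()
    if (i + 1) % update_freq == 0:
        optimizer.step()          # one "delayed" update for every update_freq mini-batches
        optimizer.zero_grad()
```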
We also provide pre-trained models for several benchmark translation and language modeling datasets.
Requirements and installation:
- PyTorch version >= 1.0.0
- Python version >= 3.5
- For training new models, you'll also need an NVIDIA GPU and NCCL
Please follow the instructions here to install PyTorch: https://github.com/pytorch/pytorch#installation.
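Once PyTorch is installed, a quick way to confirm that a GPU is visible to PyTorch is the check below. It is a minimal sketch; `torch.cuda.nccl.version()` is assumed to report the NCCL version bundled with the CUDA build of PyTorch.

```python
import torch

# Quick check of the GPU/NCCL requirements above (run after installing PyTorch).
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    # The CUDA builds of PyTorch bundle NCCL; this prints the bundled version.
    print("NCCL version:", torch.cuda.nccl.version())
```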
If you use Docker, make sure to increase the shared memory size, either with `--ipc=host` or `--shm-size`, as command line options to `nvidia-docker run`.
After PyTorch is installed, you can install fairseq with pip:

```
pip install fairseq
```
Installing from source
To install fairseq from source and develop locally:
```
git clone https://github.com/pytorch/fairseq
cd fairseq
pip install --editable .
```
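To sanity-check either installation, you can import the package from Python. This is a minimal check and assumes the `fairseq` package exposes a `__version__` attribute.

```python
# Minimal sanity check that the fairseq installation is importable.
import fairseq
print(fairseq.__version__)
```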
Improved training speed
Training speed can be further improved by installing NVIDIA's apex library (https://github.com/NVIDIA/apex) with the `--cuda_ext` option. fairseq will automatically switch to the faster modules provided by apex.
The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.
We provide pre-trained models and pre-processed, binarized test sets for several tasks listed below, as well as example training and evaluation commands.
- Translation: convolutional and transformer models are available
- Language Modeling: convolutional models are available
We also have more detailed READMEs to reproduce results from specific papers:
- Schneider et al. (2019): wav2vec: Unsupervised Pre-training for Speech Recognition
- Shen et al. (2019): Mixture Models for Diverse Machine Translation: Tricks of the Trade
- Wu et al. (2019): Pay Less Attention with Lightweight and Dynamic Convolutions
- Edunov et al. (2018): Understanding Back-Translation at Scale
- Edunov et al. (2018): Classical Structured Prediction Losses for Sequence to Sequence Learning
- Fan et al. (2018): Hierarchical Neural Story Generation
- Ott et al. (2018): Scaling Neural Machine Translation
- Gehring et al. (2017): Convolutional Sequence to Sequence Learning
- Dauphin et al. (2017): Language Modeling with Gated Convolutional Networks
Join the fairseq community:
- Facebook page: https://www.facebook.com/groups/fairseq.users
- Google group: https://groups.google.com/forum/#!forum/fairseq-users
fairseq(-py) is BSD-licensed. The license applies to the pre-trained models as well. We also provide an additional patent grant.
Please cite as:
```bibtex
@inproceedings{ott2019fairseq,
  title = {fairseq: A Fast, Extensible Toolkit for Sequence Modeling},
  author = {Myle Ott and Sergey Edunov and Alexei Baevski and Angela Fan and Sam Gross and Nathan Ng and David Grangier and Michael Auli},
  booktitle = {Proceedings of NAACL-HLT 2019: Demonstrations},
  year = {2019},
}
```