Code to reproduce the experiments reported in the paper:
Mido Assran, Nicolas Loizou, Nicolas Ballas, and Michael Rabbat, "Stochastic Gradient Push for Distributed Deep Learning," ICML 2019 (available in the official ICML proceedings and on arXiv).
If you use this code for your research, please cite the paper.
It implements the following algorithms:
- Synchronous Stochastic Gradient Push (SGP), described in the paper (a toy sketch of the push-sum update follows this list)
- Overlap Stochastic Gradient Push (OSGP), described in the paper
- AllReduce SGD (AR), standard baseline, also known as Parallel SGD, implemented using PyTorch's torch.nn.parallel.DistributedDataParallel
- Distributed Parallel SGD (D-PSGD), described in Lian et al., NeurIPS 2017
- Asynchronous Distributed Parallel SGD (AD-PSGD), described in Lian et al., ICML 2018
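For intuition about how these methods combine local optimization with gossip, here is a toy, single-process simulation of the push-sum SGD update at the core of SGP. It is only a sketch under simplifying assumptions (exact gradients, a fixed directed ring, arbitrary problem sizes and step size) and does not use this repository's code.

```python
# Toy, single-process simulation of the push-sum SGD (SGP) update: each of 8
# simulated nodes holds a private least-squares objective, takes a local
# gradient step, then mixes its parameters and push-sum weight over a directed
# ring. This only sketches the structure of the algorithm; it does not use the
# code in this repository, and all sizes/step sizes are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_samples, dim, lr, steps = 8, 20, 5, 0.005, 500

z_true = rng.normal(size=dim)                     # shared ground truth
A = rng.normal(size=(n_nodes, n_samples, dim))    # node i's data matrix
b = np.einsum('nij,j->ni', A, z_true) + 0.01 * rng.normal(size=(n_nodes, n_samples))

# Column-stochastic mixing matrix for a directed ring: each node keeps half of
# its mass and pushes the other half to its successor.
P = 0.5 * np.eye(n_nodes)
for i in range(n_nodes):
    P[(i + 1) % n_nodes, i] += 0.5

x = np.zeros((n_nodes, dim))   # push-sum numerators (model parameters)
w = np.ones(n_nodes)           # push-sum weights
for _ in range(steps):
    z = x / w[:, None]                              # de-biased estimates
    residual = np.einsum('nij,nj->ni', A, z) - b    # A_i z_i - b_i
    grads = np.einsum('nji,nj->ni', A, residual)    # A_i^T (A_i z_i - b_i)
    x = x - lr * grads                              # local gradient step
    x, w = P @ x, P @ w                             # push-sum mixing

z = x / w[:, None]
z_star = np.linalg.lstsq(A.reshape(-1, dim), b.reshape(-1), rcond=None)[0]
print('max disagreement across nodes:', np.abs(z - z.mean(axis=0)).max())
print('max error vs. least-squares  :', np.abs(z.mean(axis=0) - z_star).max())
```

The mixing step uses a column-stochastic matrix, so the network-wide sums of the numerators and weights are preserved and each node's ratio x_i / w_i remains a de-biased estimate of the average model; with a constant step size the nodes reach a small neighborhood of the minimizer whose size shrinks with the step size.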
An example is provided for training a ResNet-50 (He et al., 2015) image classifier on the ImageNet dataset.
All code runs on Python 3.6.7 using PyTorch version 1.0.0.
Our implementations build on the torch.distributed package in PyTorch, which provides an interface for exchanging tensors between multiple machines. The torch.distributed package in PyTorch v1.0.0 can use several different backends. We recommend using NCCL for all algorithms (this is the default).
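For reference, selecting the backend amounts to passing it to torch.distributed.init_process_group. The snippet below is a minimal, standalone sketch: the rendezvous address, world size, and rank are placeholders, and the training scripts in this repo handle this initialization based on their own command-line arguments.

```python
# Minimal sketch: initialize the torch.distributed process group with the
# NCCL backend (recommended above). The address, world size, and rank below
# are placeholders; in practice they come from your launcher / job script.
import torch.distributed as dist

dist.init_process_group(
    backend='nccl',                       # NCCL requires GPUs; 'gloo' also works on CPU
    init_method='tcp://127.0.0.1:23456',  # placeholder rendezvous address
    world_size=4,                         # total number of processes in the job
    rank=0,                               # this process's rank in [0, world_size)
)
```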
To install the Stochastic Gradient Push library via pip:
git clone https://github.com/facebookresearch/stochastic_gradient_push.git
cd stochastic_gradient_push
pip install .
If you want to use the parsing scripts to parse results, you can instead do:
git clone https://github.com/facebookresearch/stochastic_gradient_push.git
cd stochastic_gradient_push
pip install -e .[parse]
There are two main scripts:
- gossip_sgd.py for training using AR, SGP, OSGP, or D-PSGD
- gossip_sgd_adpsgd.py for training using AD-PSGD
To facilitate launching experiments, we also provide example scripts for submitting jobs using the SLURM workload manager. Note that these are only directly usable if your cluster also uses SLURM, but they should still be useful as examples of how to launch distributed jobs.
The job_scripts/ directory contains the following files:
- submit_ADPSGD_ETH.sh runs the AD-PSGD algorithm over Ethernet
- submit_AR_ETH.sh runs the AR algorithm over Ethernet
- submit_AR_IB.sh runs the AR algorithm over InfiniBand
- submit_DPSGD_ETH.sh runs the D-PSGD algorithm over Ethernet
- submit_DPSGD_IB.sh runs the D-PSGD algorithm over InfiniBand
- submit_SGP_ETH.sh runs the SGP algorithm over Ethernet
- submit_SGP_IB.sh runs the SGP algorithm over InfiniBand
In all cases, the scripts will need to be edited in order to run on your cluster/setup. They also contain instructions on how to modify them, e.g., to vary the number of nodes or other parameters.
The SGP scripts currently implement Synchronous SGP. To run experiments for Overlap SGP (overlapping communication and computation), change the --overlap flag to True.
Note that the current version in the master branch of this repo uses features introduced in PyTorch 1.0. The version of the code used to produce the results in the paper was based on PyTorch 0.5. That version of our code is available under the sgp_pytorch0.5 tag of this repo.
Figures similar to those in the paper can be reproduced, after running the experiments to generate log files, using the script visualization/plotting.py. This script will also need to be modified to point to the paths of the log files produced by your experiment runs.
The algorithms SGP, D-PSGD, and AD-PSGD are all implemented as instances of PyTorch's nn.Module class to facilitate training neural network models. SGP and D-PSGD are implemented in the GossipDataParallel class in gossip/distributed.py. The push_sum argument determines whether to use SGP (if push_sum=True) or D-PSGD (if push_sum=False). Overlap SGP is obtained by using the GossipDataParallel class with push_sum=True and overlap=True. AD-PSGD is implemented in the BilatGossipDataParallel class in gossip/ad_psgd.py.
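As a rough usage sketch, wrapping a model might look like the following. Only the arguments named in this README (push_sum and overlap) are shown, and the import path is an assumption; the actual constructor takes additional arguments (e.g., for the communication topology), so consult gossip/distributed.py for the real signature.

```python
# Hedged sketch of wrapping a model for SGP / Overlap SGP / D-PSGD training.
# The import path and the omitted constructor arguments are assumptions; see
# gossip/distributed.py for the actual API.
import torch
import torch.distributed as dist
from gossip import GossipDataParallel  # class defined in gossip/distributed.py

dist.init_process_group(backend='nccl', init_method='env://')  # expects MASTER_ADDR, RANK, etc.

model = torch.nn.Linear(128, 10).cuda()  # NCCL operates on CUDA tensors
model = GossipDataParallel(
    model,
    push_sum=True,   # True -> SGP, False -> D-PSGD
    overlap=True,    # True -> Overlap SGP (overlap communication and computation)
)
# After wrapping, train as usual: forward pass, backward pass, optimizer step.
```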
The neural network modules use implementations of PushSum and gossip algorithms for distributed averaging under the hood. These are available in gossip/gossiper.py and could be used independently of neural network training for approximate distributed averaging (see the standalone sketch after the list below). In addition:
- gossip/graph_manager.py contains code to generate different communication topologies, and
- gossip/mixing_manager.py contains code to produce the weights of the mixing matrices, given a topology.
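To illustrate the averaging primitive on its own, here is a self-contained sketch of push-sum averaging over a time-varying directed graph. It does not call the gossiper, graph_manager, or mixing_manager APIs; the topology (a one-peer exponential graph) and the 1/2-1/2 mixing weights are arbitrary choices for illustration.

```python
# Standalone push-sum averaging sketch (no training, no torch.distributed):
# nodes hold scalars and gossip over a time-varying directed graph until the
# ratio x/w at every node approximates the global average. This only
# illustrates the idea; it does not call this repo's gossiper, graph_manager,
# or mixing_manager code.
import numpy as np

n = 8
values = np.arange(1.0, n + 1)        # initial value held by each node
x, w = values.copy(), np.ones(n)

for t in range(30):
    # One-peer exponential graph: at step t, node i pushes to node i + 2^(t mod log2(n)).
    hop = 2 ** (t % int(np.log2(n)))
    P = 0.5 * np.eye(n)
    for i in range(n):
        P[(i + hop) % n, i] += 0.5    # column-stochastic: column i sums to 1
    x, w = P @ x, P @ w

print('true average  :', values.mean())
print('node estimates:', np.round(x / w, 6))
```

Because every column of each mixing matrix sums to one, the totals of x and w are conserved, so each node's ratio x/w converges to the true average even though no node ever sees all of the values.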
See the LICENSE file for details about the license under which this code is made available.