Commit 95df504
Merge branch 'master' into SethHWeidman-patch-1
SethHWeidman authored Sep 12, 2019
2 parents 6bf6bb0 + d214b12 commit 95df504
Showing 2 changed files with 5 additions and 4 deletions.
2 changes: 1 addition & 1 deletion intermediate_source/dist_tuto.rst
@@ -1,4 +1,4 @@
-Writing Distributed Applications with PyTorch
+3. Writing Distributed Applications with PyTorch
 =============================================
 **Author**: `Séb Arnold <https://seba1511.com>`_
 
7 changes: 4 additions & 3 deletions intermediate_source/model_parallel_tutorial.py
@@ -28,7 +28,7 @@
 applications.
 Basic Usage
-================================
+-----------
 """
 
 ######################################################################
@@ -75,7 +75,7 @@ def forward(self, x):
 
 ######################################################################
 # Apply Model Parallel to Existing Modules
-# =======================
+# ----------------------------------------
 #
 # It is also possible to run an existing single-GPU module on multiple GPUs
 # with just a few lines of changes. The code below shows how to decompose
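The hunk above retitles the section on splitting an existing single-GPU module across devices. As a minimal sketch of the idea (illustrative names, not the tutorial's actual model; falls back to CPU when fewer than two GPUs are available):

```python
import torch
import torch.nn as nn

class ToyModelParallel(nn.Module):
    """Illustrative two-stage split of a small network across two devices."""
    def __init__(self):
        super().__init__()
        two_gpus = torch.cuda.device_count() >= 2
        self.dev0 = torch.device('cuda:0' if two_gpus else 'cpu')
        self.dev1 = torch.device('cuda:1' if two_gpus else 'cpu')
        self.seq1 = nn.Linear(10, 10).to(self.dev0)  # first stage on device 0
        self.seq2 = nn.Linear(10, 5).to(self.dev1)   # second stage on device 1

    def forward(self, x):
        x = torch.relu(self.seq1(x.to(self.dev0)))
        # move intermediate activations to the second stage's device
        return self.seq2(x.to(self.dev1))

out = ToyModelParallel()(torch.randn(4, 10))
print(out.shape)  # torch.Size([4, 5])
```

The only changes relative to a single-GPU module are the `.to(device)` calls on the submodules and on the tensors flowing between them.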
@@ -235,7 +235,7 @@ def plot(means, stds, labels, fig_name):
 
 ######################################################################
 # Speed Up by Pipelining Inputs
-# =======================
+# -----------------------------
 #
 # In the following experiments, we further divide each 120-image batch into
 # 20-image splits. As PyTorch launches CUDA operations asynchronously, the
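The context above describes pipelining 20-image splits through the two model stages. A CPU-runnable sketch of that pipelining pattern (hypothetical stage modules, not the tutorial's ResNet; on two GPUs the two stages can overlap because CUDA kernels launch asynchronously):

```python
import torch
import torch.nn as nn

def pipelined_forward(seq1, seq2, batch, split_size=20):
    """Stream `split_size`-row chunks through seq1 then seq2."""
    splits = iter(batch.split(split_size))
    s_prev = seq1(next(splits))
    outputs = []
    for s_next in splits:
        outputs.append(seq2(s_prev))  # stage 2 consumes the previous chunk
        s_prev = seq1(s_next)         # stage 1 starts the next chunk
    outputs.append(seq2(s_prev))
    return torch.cat(outputs)

seq1, seq2 = nn.Linear(8, 8), nn.Linear(8, 3)
out = pipelined_forward(seq1, seq2, torch.randn(120, 8), split_size=20)
print(out.shape)  # torch.Size([120, 3])
```

With `split_size=20` a 120-row batch becomes six chunks, matching the 120-image/20-image split described in the diff context.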
@@ -350,3 +350,4 @@ def forward(self, x):
 # for your environment, a proper approach is to first generate the curve to
 # figure out the best split size, and then use that split size to pipeline
 # inputs.
+#
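The closing context recommends generating a timing curve over candidate split sizes and picking the best one. A rough sketch of that measurement loop (illustrative helper, not the tutorial's benchmark, which averages many runs and plots means and standard deviations):

```python
import timeit
import torch
import torch.nn as nn

def best_split_size(stage1, stage2, batch, candidates=(10, 20, 30, 60)):
    """Time one pipelined pass per candidate split size; return the fastest."""
    timings = {}
    for split in candidates:
        def run():
            for chunk in batch.split(split):
                stage2(stage1(chunk))  # run both stages on each chunk
        timings[split] = timeit.timeit(run, number=3)
    return min(timings, key=timings.get)

stage1, stage2 = nn.Linear(8, 8), nn.Linear(8, 3)
best = best_split_size(stage1, stage2, torch.randn(120, 8))
```

On real GPUs one would synchronize (`torch.cuda.synchronize()`) before reading each timer, since kernel launches return before the work finishes.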
