######################################################################
# Steps
# -----
#
# Steps 1 through 4 set up our data and neural network for training. The
# process of zeroing out the gradients happens in step 5. If you already
# have your data and neural network built, skip to step 5.
#
# 1. Import all necessary libraries for loading our data
# 2. Load and normalize the dataset
# 3. Build the neural network
# 4. Define the loss function
# 5. Zero the gradients while training the network
#
# 1. Import necessary libraries for loading our data
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# For this recipe, we will just be using ``torch`` and ``torchvision`` to
# access the dataset.
#

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim

import torchvision
import torchvision.transforms as transforms

######################################################################
# 2. Load and normalize the dataset
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# PyTorch features various built-in datasets (see the Loading Data recipe
# for more information).
#

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])

trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
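
######################################################################
# As a quick sanity check, we can pull a single batch from ``trainloader``
# and confirm its shape and normalized value range (a sketch, assuming the
# CIFAR-10 download above succeeded):
#

images, labels = next(iter(trainloader))
print(images.shape)  # torch.Size([4, 3, 32, 32]): batch, channels, height, width
print(images.min().item(), images.max().item())  # roughly within [-1.0, 1.0]
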
######################################################################
# 3. Build the neural network
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# We will use a convolutional neural network. To learn more, see the
# Defining a Neural Network recipe.
#

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(3, 6, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(6, 16, 5)
        self.fc1 = nn.Linear(16 * 5 * 5, 120)
        self.fc2 = nn.Linear(120, 84)
        self.fc3 = nn.Linear(84, 10)

    def forward(self, x):
        x = self.pool(F.relu(self.conv1(x)))
        x = self.pool(F.relu(self.conv2(x)))
        x = x.view(-1, 16 * 5 * 5)
        x = F.relu(self.fc1(x))
        x = F.relu(self.fc2(x))
        x = self.fc3(x)
        return x
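
######################################################################
# Before training, we can run a dummy forward pass (a quick sketch using a
# random fake image) to confirm that the network produces one logit per
# CIFAR-10 class:
#

dummy = torch.randn(1, 3, 32, 32)  # a fake batch holding one 32x32 RGB image
print(Net()(dummy).shape)          # torch.Size([1, 10])
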
######################################################################
# 4. Define a Loss function and optimizer
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# Let’s use a Classification Cross-Entropy loss and SGD with momentum.
#

net = Net()
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(net.parameters(), lr=0.001, momentum=0.9)
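
######################################################################
# ``nn.CrossEntropyLoss`` expects raw logits of shape ``[batch, classes]``
# and integer class labels of shape ``[batch]``. A tiny sketch with made-up
# values shows the call pattern used in the training loop below:
#

fake_logits = torch.randn(4, 10)            # stand-in network outputs for 4 samples
fake_labels = torch.tensor([3, 8, 1, 0])    # stand-in ground-truth class indices
print(criterion(fake_logits, fake_labels))  # a scalar loss tensor
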
######################################################################
# 5. Zero the gradients while training the network
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#
# This is when things start to get interesting. We simply have to loop
# over our data iterator, feed the inputs to the network, and optimize.
#
# Notice that we zero out the gradients for each batch of data. This
# ensures that gradients left over from previous batches do not
# accumulate into the current update when we train our neural network.
#

for epoch in range(2):  # loop over the dataset multiple times

    running_loss = 0.0
    for i, data in enumerate(trainloader, 0):
        # get the inputs; data is a list of [inputs, labels]
        inputs, labels = data

        # zero the parameter gradients
        optimizer.zero_grad()

        # forward + backward + optimize
        outputs = net(inputs)
        loss = criterion(outputs, labels)
        loss.backward()
        optimizer.step()

        # print statistics
        running_loss += loss.item()
        if i % 2000 == 1999:    # print every 2000 mini-batches
            print('[%d, %5d] loss: %.3f' %
                  (epoch + 1, i + 1, running_loss / 2000))
            running_loss = 0.0

print('Finished Training')
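
######################################################################
# To see why the ``optimizer.zero_grad()`` call above matters, consider a
# minimal sketch with a made-up one-parameter tensor: calling ``backward()``
# twice without zeroing accumulates gradients instead of replacing them.
#

w = torch.ones(1, requires_grad=True)  # hypothetical standalone parameter
(w * 2).backward()
print(w.grad)    # tensor([2.])
(w * 2).backward()
print(w.grad)    # tensor([4.]): the second gradient was added, not assigned
w.grad.zero_()   # essentially what zero_grad(set_to_none=False) does per parameter
print(w.grad)    # tensor([0.])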

######################################################################
# You can also use ``model.zero_grad()``. This is the same as using
# ``optimizer.zero_grad()`` as long as all your model parameters are in
# that optimizer. Use your best judgment to decide which one to use.
#
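
######################################################################
# A quick way to convince yourself of that equivalence (a sketch reusing the
# ``net``, ``criterion``, and ``optimizer`` defined above): after a backward
# pass, ``net.zero_grad()`` clears the same ``.grad`` entries that
# ``optimizer.zero_grad()`` would, because the optimizer was built from
# ``net.parameters()``.
#

loss = criterion(net(torch.randn(1, 3, 32, 32)), torch.tensor([0]))
loss.backward()
net.zero_grad()  # interchangeable with optimizer.zero_grad() in this setup
print(all(p.grad is None or not p.grad.any() for p in net.parameters()))  # True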

######################################################################
# Congratulations! You have successfully zeroed out gradients in PyTorch.
#
# Learn More
# ----------
#
# Take a look at these other recipes to continue your learning:
#
# - `Loading data in PyTorch <https://pytorch.org/tutorials/beginner/basics/data_tutorial.html>`__
# - `Saving and loading models across devices in PyTorch <https://pytorch.org/tutorials/recipes/recipes/save_load_across_devices.html>`__