101 changes: 31 additions & 70 deletions beginner_source/quickstart/build_model_tutorial.py
@@ -10,22 +10,16 @@
# The data has been loaded and transformed; we can now build the model.
# We will leverage predefined layers from `torch.nn <https://pytorch.org/docs/stable/nn.html>`_ that simplify our code.
#
# In the example below, for our FashionMNIST image dataset, we use a ``Sequential``
# container from the class `torch.nn.Sequential <https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html>`_
# that allows us to define the model layers inline. In the ``Sequential`` inline model-building format, the ``forward()``
# method is created for you, and the modules you add are passed in as a list or dictionary in the order they are defined.
#
# Another way to build this model is with a class
# that inherits from `nn.Module <https://pytorch.org/docs/stable/generated/torch.nn.Module.html>`_.
# A big plus of using an ``nn.Module`` subclass is better parameter management across all nested submodules.
# This gives us more flexibility, because we can construct layers of any complexity, including ones with shared weights.
#
# Let's break down the steps to build this model below.
#

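The weight-sharing flexibility mentioned above can be sketched with a small, hypothetical ``TiedMLP`` module (the name and architecture are illustrative, not part of this tutorial) that applies one ``nn.Linear`` twice, so its parameters are counted only once:

```python
import torch
from torch import nn

class TiedMLP(nn.Module):
    """A sketch of weight sharing: one Linear module is applied twice."""
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.input = nn.Linear(28*28, 512)
        self.shared = nn.Linear(512, 512)   # reused below
        self.output = nn.Linear(512, 10)

    def forward(self, x):
        x = torch.relu(self.input(self.flatten(x)))
        x = torch.relu(self.shared(x))
        x = torch.relu(self.shared(x))      # same weights applied a second time
        return self.output(x)

model = TiedMLP()
# The shared layer's parameters appear only once in the parameter list
n_params = sum(p.numel() for p in model.parameters())
print(n_params)
```

Even though ``self.shared`` runs twice in ``forward``, the model holds a single copy of its weights, and gradients from both applications accumulate into it.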
#############################################
# Import the Packages
# -------------------
#

import os
import torch
from torch import nn
import torch.nn.functional as F
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Using {} device'.format(device))

# Define the model inline with nn.Sequential
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28*28, 512),
    nn.ReLU(),
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),  # FashionMNIST has 10 classes
    nn.Softmax(dim=1)
).to(device)

print(model)
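As a variant on the inline format, the modules can also be passed to ``Sequential`` as an ``OrderedDict``, which gives each layer a name (the layer names here are illustrative). A minimal sketch:

```python
from collections import OrderedDict

import torch
from torch import nn

# Same architecture as above, but each layer gets a name
model = nn.Sequential(OrderedDict([
    ('flatten', nn.Flatten()),
    ('layer1', nn.Linear(28*28, 512)),
    ('relu1', nn.ReLU()),
    ('layer2', nn.Linear(512, 512)),
    ('relu2', nn.ReLU()),
    ('output', nn.Linear(512, 10)),
    ('softmax', nn.Softmax(dim=1)),
]))

# A dummy batch of four 28x28 "images" flows through the layers in order
x = torch.rand(4, 1, 28, 28)
probs = model(x)
print(probs.shape)       # one row of 10 class probabilities per image
print(probs.sum(dim=1))  # each row sums to 1 because of the softmax
```

Named layers make ``print(model)`` output and state dict keys easier to read than the default numeric indices.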

#############################################
# Get Device for Training
# -----------------------
#
# Here we check to see if `torch.cuda <https://pytorch.org/docs/stable/notes/cuda.html>`_
# is available to use the GPU, else we will use the CPU.
#

device = 'cuda' if torch.cuda.is_available() else 'cpu'
print('Using {} device'.format(device))
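The device string is then used to place tensors and modules: both the model's parameters and the input data must live on the same device before a forward pass. A small sketch:

```python
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# .to(device) moves a tensor (or a whole model) onto the chosen device
x = torch.ones(2, 3).to(device)
print(x.device)  # 'cuda:0' when a GPU is available, otherwise 'cpu'
```

Calling a CUDA model on a CPU tensor (or vice versa) raises a runtime error, which is why tutorials consistently pair ``model.to(device)`` with ``x.to(device)``.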

##############################################
# Define the Class
# ----------------
#
# Here we define the ``NeuralNetwork`` class, which inherits from ``nn.Module``, the base class for
# building neural network modules. The ``__init__`` function defines the layers of the neural network
# and initializes the modules to be called in the ``forward`` function.
# We then instantiate the ``NeuralNetwork`` class and move it to the device. When training
# the model, we will call ``model`` and pass the data (``x``) into the ``forward`` function and
# through each layer of our network.
#

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.layer1 = nn.Linear(28*28, 512)
        self.layer2 = nn.Linear(512, 512)
        self.output = nn.Linear(512, 10)

    def forward(self, x):
        x = self.flatten(x)
        x = F.relu(self.layer1(x))
        x = F.relu(self.layer2(x))
        x = self.output(x)
        return F.softmax(x, dim=1)

model = NeuralNetwork().to(device)

print(model)
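Once defined, the model is used like a function: calling ``model(x)`` runs ``forward()`` and returns per-class probabilities. A self-contained sketch of a single inference pass on a dummy batch (the class is repeated here so the snippet runs on its own):

```python
import torch
from torch import nn
import torch.nn.functional as F

class NeuralNetwork(nn.Module):
    def __init__(self):
        super(NeuralNetwork, self).__init__()
        self.flatten = nn.Flatten()
        self.layer1 = nn.Linear(28*28, 512)
        self.layer2 = nn.Linear(512, 512)
        self.output = nn.Linear(512, 10)

    def forward(self, x):
        x = self.flatten(x)
        x = F.relu(self.layer1(x))
        x = F.relu(self.layer2(x))
        x = self.output(x)
        return F.softmax(x, dim=1)

model = NeuralNetwork()

# A dummy batch containing one 28x28 "image"
x = torch.rand(1, 28, 28)
probs = model(x)            # calls forward() under the hood
pred = probs.argmax(dim=1)  # index of the most probable class
print(probs.shape)
print(pred)
```

Note that we call ``model(x)`` rather than ``model.forward(x)`` directly, so that hooks registered on the module still run.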



##############################################
# The Model Module Layers
# -------------------------
#
# Let's break down each model layer in the FashionMNIST model. In the example below we use a ``Sequential``
# container from the class `torch.nn.Sequential <https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html>`_
# that allows us to define the model layers inline. In the ``Sequential`` inline model-building format, the ``forward()``
# method is created for you, and the modules you add are passed in as a list or dictionary in the order they are defined.
#

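For example, the first module in this model, ``nn.Flatten``, keeps the batch dimension and collapses each 28x28 image into a contiguous vector of 784 values, which is the shape the following linear layer expects:

```python
import torch
from torch import nn

# A mini-batch of three 1x28x28 images
x = torch.rand(3, 1, 28, 28)

# By default nn.Flatten flattens everything after the batch dimension
flat = nn.Flatten()(x)
print(flat.shape)  # 3 rows of 1*28*28 = 784 values
```

``nn.Flatten`` defaults to ``start_dim=1`` precisely so that the batch dimension survives; flattening dimension 0 as well would merge the images of a batch into one long vector.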
##################################################
@@ -127,8 +106,8 @@ def forward(self, x):
# `nn.Linear <https://pytorch.org/docs/stable/generated/torch.nn.Linear.html>`_ to add a linear layer to the model.
# -------------------------------
#
# Now that we have flattened our tensor dimension, we will apply a linear layer. The linear layer is
# a module that applies a linear transformation to the input using its stored weights and biases.
#
# From the docs:
#
@@ -166,24 +145,6 @@
print(model)
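What ``nn.Linear`` stores and computes can be seen directly: the weight matrix has shape ``(out_features, in_features)``, the bias has shape ``(out_features,)``, and the layer applies ``y = x @ W.T + b``. A small sketch:

```python
import torch
from torch import nn

layer = nn.Linear(28*28, 512)
print(layer.weight.shape)  # (out_features, in_features) = (512, 784)
print(layer.bias.shape)    # (out_features,) = (512,)

x = torch.rand(3, 28*28)
y = layer(x)

# Reproduce the layer's output by hand from its stored parameters
manual = x @ layer.weight.T + layer.bias
print(torch.allclose(y, manual, atol=1e-6))
print(y.shape)
```

Both ``weight`` and ``bias`` are ``nn.Parameter`` tensors, so they are registered with the module and updated by the optimizer during training.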


###################################################
# Forward Function
# ----------------
#
# In the class implementation of the neural network we define a ``forward`` function.
# We then instantiate the ``NeuralNetwork`` class and move it to the device. When training the model
# we will call ``model`` and pass the data (``x``) into the ``forward`` function and through each
# layer of our network.
#
#

def forward(self, x):
    x = self.flatten(x)
    x = F.relu(self.layer1(x))
    x = F.relu(self.layer2(x))
    x = self.output(x)
    return F.softmax(x, dim=1)

model = NeuralNetwork().to(device)


################################################
# In the next section you will learn how to train the model and about the optimization loop for this example.
#
9 changes: 9 additions & 0 deletions beginner_source/quickstart/data_quickstart_tutorial.py
@@ -1,4 +1,13 @@
"""
.. raw:: html

<div>
<a href="https://torchtutorialstaging.z5.web.core.windows.net/beginner/quickstart/tensor_tutorial.html">Tensors &gt; </a>
<a href="https://torchtutorialstaging.z5.web.core.windows.net/beginner/quickstart/data_quickstart_tutorial.html">Datasets & Dataloaders</a>
</div>

.. raw:: html

Datasets & Dataloaders
======================
"""
Expand Down
9 changes: 9 additions & 0 deletions beginner_source/quickstart/tensor_tutorial.py
@@ -1,4 +1,13 @@
"""
.. raw:: html

<div>
<a href="https://torchtutorialstaging.z5.web.core.windows.net/beginner/quickstart/tensor_tutorial.html">Tensors</a>
</div>

.. raw:: html


Tensors and Operations
----------------------
**Tensor** is the basic computational unit in PyTorch. It is very