
Commit 5485db3

Merge pull request #20 from cassieview/seth-blitz
img fix, opp fix, formatting updates
2 parents 398a0ed + 7e2881d

File tree

8 files changed: +35 -47 lines changed
File renamed without changes.

beginner_source/quickstart/build_model_tutorial.py

Lines changed: 1 addition & 9 deletions
@@ -114,15 +114,7 @@ def forward(self, x):
 #
 # From the docs:
 #
-# torch.nn.Linear(in_features: int, out_features: int, bias: bool = True)
-#
-# in_features – size of each input sample
-#
-# out_features – size of each output sample
-#
-# bias – If set to False, the layer will not learn an additive bias. Default: True
-#
-# Lets take a look at the resulting data example with the flatten layer and linear layer added:
+# `torch.nn.Linear(in_features: int, out_features: int, bias: bool = True)`
 #

 input = training_data[0][0]
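For readers skimming the diff, here is a minimal sketch of what the surrounding tutorial does with `Flatten` and `Linear` (not part of this commit; the 28x28 input size and 512 output features are assumptions matching the quickstart's FashionMNIST setup):

import torch
from torch import nn

sample = torch.rand(1, 28, 28)    # stand-in for training_data[0][0]
flat = nn.Flatten()(sample)       # flattens to shape (1, 784)
linear = nn.Linear(in_features=28 * 28, out_features=512)
print(linear(flat).shape)         # torch.Size([1, 512])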

beginner_source/quickstart/data_quickstart_tutorial.py

Lines changed: 2 additions & 2 deletions
@@ -11,8 +11,8 @@
 #

 ###############################################################
-# .. figure:: /_static/img/quickstart/typesofdata.png
-# :alt: typesofdata
+# .. figure:: /_static/img/quickstart/typesdata.png
+# :alt: typesdata
 #

 ############################################################

beginner_source/quickstart/optimization_tutorial.py

Lines changed: 28 additions & 33 deletions
@@ -38,25 +38,29 @@
 #
 # The optimziation loop is comprized of three main subloops in PyTorch.
 #
-# .. figure:: /_static/img/quickstart/optimization_loops.png
+
+############################################################
+# .. figure:: /_static/img/quickstart/optimizationloops.png
 # :alt:
 #
-#
+
+#############################################################
 # 1. The Train Loop - Core loop iterates over all the epochs
 # 2. The Validation Loop - Validate loss after each weight parameter update and can be used to gauge hyper parameter performance and update them for the next batch.
 # 3. The Test Loop - is used to evaluate our models performance after each epoch on traditional metrics to show how much our model is generalizing from the train and validation dataset to the test dataset it's never seen before.
 #

-for epoch in range(num_epochs): # Optimization Loop
+for epoch in range(num_epochs):
+    # Optimization Loop
     # Train loop over batches
-    model.train() # set model to train
-    # Model Update Code
-    model.eval() # After exiting batch loop set model to eval to speed up evaluation and not track gradients (this is explained below)
-    # Validation Loop
-    # - Put sample validation metric logging and hyperparameter update code here
+    model.train() # set model to train
+    # Model Update Code
+    model.eval() # After exiting batch loop set model to eval to speed up evaluation and not track gradients (this is explained below)
+    # Validation Loop
+    # - Put sample validation metric logging and hyperparameter update code here
     # After exiting train loop set model to eval to speed up evaluation and not track gradients (this is explained below)
     # Test Loop
-    # - Put sample test metric logging and hyperparameter update code here
+    # - Put sample test metric logging and hyperparameter update code here

 ######################################################
 # Loss
@@ -67,40 +71,31 @@

 preds = model(inputs)
 loss = cost_function(preds, labels)
-
-######################################################
-# AutoGrad and Optimizer (We might want to split this when we go more in depth on autograd )
-# -----------------
-#
-# By default each tensor maintains a graph of every operation applied on it unless otherwise specified using the torch.no_grad() command.
-#
-# `Autograd graph <https://discuss.pytorch.org/uploads/default/original/1X/c7e0a44b7bcebfb41315b56f8418ce37f0adbfeb.png>`_
-#
-# PyTorch uses this graph to automatically update parameters with respect to our models loss during training. This is done with one line loss.backwards(). Once we have our gradients the optimizer is used to propgate the gradients from the backwards command to update all the parameters in our model.
-
-optimizer.zero_grad() # make sure previous gradients are cleared
-loss.backward() # calculates gradients with respect to loss
+# Make sure previous gradients are cleared
+optimizer.zero_grad()
+# Calculates gradients with respect to loss
+loss.backward()
 optimizer.step()

 ######################################################
-# The standard method for optimization is called Stochastic Gradient Descent, to learn more check out this awesome video by `3blue1brown <https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi>`_. There are many different optimizers and variations of this method in PyTorch such as ADAM and RMSProp that work better for different kinds of models, they are out side the scope of this Blitz, but can check out the full list of optimizers[here](https://pytorch.org/docs/stable/optim.html)
+# The standard method for optimization is called Stochastic Gradient Descent, to learn more check out this awesome video by `3blue1brown <https://www.youtube.com/playlist?list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi>`_. There are many different optimizers and variations of this method in PyTorch such as ADAM and RMSProp that work better for different kinds of models, they are out side the scope of this Blitz, but can check out the full list of optimizers `here <https://pytorch.org/docs/stable/optim.html>`_

 ######################################################
 # Putting it all together lets look at a basic optimization loop
 # -----------------
 #
 # Initilize optimizer and example cost function
 #
-# # For loop to iterate over epoch
-# - Train loop over batches
-# - Set model to train mode
-# - Calculate loss using
-# - clear optimizer gradient
-# - loss.backword
-# - optimizer step
-# - Set model to evaluate mode and start validation loop
-# - calculate validation loss and update optimizer hyper parameters
-# - Set model to evaluate test loop
+# For loop to iterate over epoch
+# - Train loop over batches
+# - Set model to train mode
+# - Calculate loss using
+# - clear optimizer gradient
+# - loss.backword
+# - optimizer step
+# - Set model to evaluate mode and start validation loop
+# - calculate validation loss and update optimizer hyper parameters
+# - Set model to evaluate test loop


 ##################################################################
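As an aside for readers of this diff, here is a self-contained sketch of the train/validate pattern the tutorial describes, combining the epoch loop, `zero_grad`/`backward`/`step`, and the `no_grad` evaluation mentioned in the removed autograd text. The toy model, loss, learning rate, and fake data loader are stand-ins, not tutorial code:

import torch
from torch import nn

# Toy stand-ins so the sketch runs on its own; the tutorial's real model,
# dataloaders, and num_epochs are assumed here.
model = nn.Linear(4, 2)
cost_function = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
num_epochs = 2
train_loader = [(torch.rand(8, 4), torch.randint(0, 2, (8,))) for _ in range(5)]

for epoch in range(num_epochs):
    model.train()                 # train mode: gradients tracked, dropout etc. active
    for inputs, labels in train_loader:
        preds = model(inputs)
        loss = cost_function(preds, labels)
        optimizer.zero_grad()     # clear gradients left over from the previous step
        loss.backward()           # compute gradients with respect to the loss
        optimizer.step()          # update the parameters
    model.eval()                  # eval mode for validation/test
    with torch.no_grad():         # skip gradient tracking during evaluation
        pass                      # validation/test metric logging would go here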

beginner_source/quickstart/save_load_run_tutorial.py

Lines changed: 1 addition & 0 deletions
@@ -93,6 +93,7 @@
 ##################################################################
 # More help with the PyTorch Quickstart
 # ----------------------------------------
+#
 # | `Tensors <tensor_tutorial.html>`_
 # | `DataSets and DataLoaders <data_quickstart_tutorial.html>`_
 # | `Transformations <transforms_tutorial.html>`_

beginner_source/quickstart/tensor_tutorial.py

Lines changed: 2 additions & 2 deletions
@@ -25,7 +25,7 @@


 ######################################################################
-# ..note: When using CPU for computations, tensors converted from arrays
+# .. note:: When using CPU for computations, tensors converted from arrays
 # share the same memory for data. Thus, changing the underlying array will
 # also affect the tensor.
 #
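A minimal sketch of the memory sharing this note describes (illustrative, not part of the commit):

import numpy as np
import torch

a = np.ones(3)
t = torch.from_numpy(a)   # on CPU, `t` shares memory with `a`
a[0] = 5.0
print(t)                  # tensor([5., 1., 1.], dtype=torch.float64) - the change shows through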
@@ -187,7 +187,7 @@


 ######################################################################
-# ..note: ``view`` is similar to ``reshape`` operation in NumPy. There
+# .. note:: ``view`` is similar to ``reshape`` operation in NumPy. There
 # is also a ``reshape`` method available in PyTorch, and it is more
 # powerful than ``view``, because it can also reshape non-contiguous
 # arrays by copying them to the new shape. However, in vast majority of
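A quick illustration of the ``view``/``reshape`` difference this note covers (illustrative only):

import torch

x = torch.arange(6)
print(x.view(2, 3))       # fine: `x` is contiguous

y = x.view(2, 3).t()      # transposing produces a non-contiguous tensor
# y.view(6) would raise a RuntimeError here; `reshape` copies when it must
print(y.reshape(6))       # tensor([0, 3, 1, 4, 2, 5])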

beginner_source/quickstart/transforms_tutorial.py

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@
 # For the feature transforms we have an array of transforms to process our image data for training. The first transform in the array is `transforms.ToTensor()` this is from class [torchvision.transforms.ToTensor](https://pytorch.org/docs/stable/torchvision/transforms.html#torchvision.transforms.ToTensor). We need to take our images and turn them into a tensor. (To learn more about Tensors check out [this]() resource.) The ToTensor() transformation is doing more than converting our image into a tensor. Its also normalizing our data for us by scaling the images to be between 0 and 1.
 #
 #
-# ..note: ToTensor only normalized image data that is in PIL mode of (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1) or if the numpy.ndarray has dtype = np.uint8. In the other cases, tensors are returned without scaling.
+# .. note:: ToTensor only normalized image data that is in PIL mode of (L, LA, P, I, F, RGB, YCbCr, RGBA, CMYK, 1) or if the numpy.ndarray has dtype = np.uint8. In the other cases, tensors are returned without scaling.
 #
 #
 # Check out the other `TorchVision Transforms <https://pytorch.org/docs/stable/torchvision/transforms.html>`_
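To see the scaling behaviour that note describes, a standalone sketch (the fake 28x28 uint8 image is an assumption, not tutorial data):

import numpy as np
from torchvision import transforms

img = (np.random.rand(28, 28, 1) * 255).astype(np.uint8)  # fake HWC uint8 image
t = transforms.ToTensor()(img)
print(t.shape)                 # torch.Size([1, 28, 28]) - channels-first
print(t.min(), t.max())        # values now scaled into [0.0, 1.0]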
