Want to overcome what feels like writer's block for coders? Want to start remembering the library calls? Want to know how to code your next neural net without having to think too hard about the syntax and library calls?
This repository contains mini programming challenges designed to gradually stretch your working skills and understanding of TensorFlow basics.
Each challenge has a corresponding solution in the solutions folder. If you need a hint or some starter code to give you structure, there are templates in the template folder that contain only the outline of the solution files.
These are skill-building exercises. It is suggested that you go through all the challenges in Levels, where each Level becomes increasingly more difficult. If a Level takes you more than three hours to complete, consider repeating that Level another day before going on to the next one.
Level 1: Do the challenges using as much reference as you need, such as the templates, searching the web, and looking at the solutions as needed.
Level 2: Same as Level 1, but do not reference the solutions folder.
Level 3: Same as Level 2, but also do not refer to the template folder.
Level 4: Do all the challenges without any reference, including searching the web. Try substituting relu for sigmoid and compare the difference in learning rate and loss.
If you've completed Level 4 within three hours, consider yourself graduated. It's time to move on to more complex, real-life applications.
Consider visiting https://www.tensorflow.org and working through each tutorial in a similar manner, graduating yourself from Level 1 up to Level 5. There is a Unix bash script in the solutions folder, solutions/convert-solution-to-template.sh, that you can use on any code to convert it to a template, like so:
./solutions/convert-solution-to-template.sh < solution-file-name.py > template-file-name.py
(Python >3.6 is not supported by TensorFlow at the time of writing.)
Install Python 3.6 if you don't have it already. On a Mac use
brew install https://raw.githubusercontent.com/Homebrew/homebrew-core/f2a764ef944b1080be64bd88dca9a1d80130c558/Formula/python.rb
Create a virtual env and install the dependencies
virtualenv --python python3.6 ~/env/learn-to-tensorflow
source ~/env/learn-to-tensorflow/bin/activate
pip install -r requirements.txt
On future sessions just use
source ~/env/learn-to-tensorflow/bin/activate
These first challenges focus on the fundamentals by writing our graphs and models from scratch.
Create a very basic TensorFlow program that adds one to a variable and uses a loop to run the addition ten times.
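A minimal sketch of what a solution could look like (TF 1.x session style; the variable name is just illustrative):

```python
import tensorflow as tf

counter = tf.Variable(0.0)                    # the value we keep adding to
add_one = tf.assign(counter, counter + 1.0)   # op that adds one to the variable

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(10):
        print(sess.run(add_one))              # prints 1.0 through 10.0
```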
Multiply two matrices together.
input1 = [[3. , 5.]]
input2 = [[7.],[1.]]
The result should be 26.
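For reference, a rough sketch of the graph this challenge is after (TF 1.x style):

```python
import tensorflow as tf

input1 = tf.constant([[3., 5.]])     # shape 1x2
input2 = tf.constant([[7.], [1.]])   # shape 2x1

product = tf.matmul(input1, input2)  # 3*7 + 5*1 = 26

with tf.Session() as sess:
    print(sess.run(product))         # [[26.]]
```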
Solve the XOR problem using
input = [[0,0],[1,1],[1,0],[0,1]]
output = [[0], [0], [1], [1]]
And break the problem into the following layers:
- the 1x2 input layer
- 2x3 + bias hidden sigmoid layer
- 3x1 + bias sigmoid output layer
- calculate loss as the sum of the squares of y - y_
- use gradient descent (with the learning rate set to 1.0) to minimize loss
Run the training iteratively 500 times and print all the variable data.
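One possible shape for a solution, shown as a hedged sketch (TF 1.x; the names m1, b1, m2, b2 are illustrative but match the ones referenced in the regularization challenge below):

```python
import tensorflow as tf

x  = tf.constant([[0., 0.], [1., 1.], [1., 0.], [0., 1.]])
y_ = tf.constant([[0.], [0.], [1.], [1.]])

m1 = tf.Variable(tf.random_uniform([2, 3], -1., 1.))  # 2x3 hidden weights
b1 = tf.Variable(tf.zeros([3]))                       # hidden bias
m2 = tf.Variable(tf.random_uniform([3, 1], -1., 1.))  # 3x1 output weights
b2 = tf.Variable(tf.zeros([1]))                       # output bias

hidden = tf.sigmoid(tf.matmul(x, m1) + b1)
y      = tf.sigmoid(tf.matmul(hidden, m2) + b2)

loss  = tf.reduce_sum(tf.square(y - y_))              # sum of squares of y - y_
train = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(500):
        sess.run(train)
    print(sess.run([m1, b1, m2, b2, loss]))
```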
Add regularization to 02-xor-1d.py
One type of regularization is to minimize the values of the transformation matrices, for example the average or sum of the squares of m1 and m2. The regularization term will need to be scaled to work with the loss term; a scaling factor can be found experimentally. Try multiplying your regularization term by 0.01 to begin with, then experiment with different values.
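Assuming the m1/m2 weight matrices and loss tensor from the sketch above, the change could look something like this:

```python
# L2-style regularization on the weight matrices; 0.01 is only a starting point.
regularization = 0.01 * (tf.reduce_mean(tf.square(m1)) + tf.reduce_mean(tf.square(m2)))
total_loss = loss + regularization
train = tf.train.GradientDescentOptimizer(1.0).minimize(total_loss)
```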
Solve the XOR problem using
input = [[0.,0.],[1.,1.],[1.,0.],[0.,1.]]
output = [[1.,0.],[1.,0.],[0.,1.],[0.,1.]]
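Relative to the 1-D version, only the targets and the output layer shape change; a sketch of the difference against the earlier XOR snippet:

```python
# Two-column, one-hot style targets and a 3x2 (+ bias) output layer.
y_ = tf.constant([[1., 0.], [1., 0.], [0., 1.], [0., 1.]])
m2 = tf.Variable(tf.random_uniform([3, 2], -1., 1.))
b2 = tf.Variable(tf.zeros([2]))
```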
Replicate 04-xor-2d, but instead of using constants for input and output, use feeds.
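The feed mechanism itself boils down to tf.placeholder plus a feed_dict; a tiny self-contained sketch (the real challenge wraps the 04-xor-2d graph around these placeholders):

```python
import tensorflow as tf

x  = tf.placeholder(tf.float32, shape=[None, 2])   # fed at run time
y_ = tf.placeholder(tf.float32, shape=[None, 2])

diff = tf.reduce_sum(tf.abs(x - y_))               # stand-in for the real graph

with tf.Session() as sess:
    print(sess.run(diff, feed_dict={
        x:  [[0., 0.], [1., 1.], [1., 0.], [0., 1.]],
        y_: [[1., 0.], [1., 0.], [0., 1.], [0., 1.]],
    }))
```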
Improve 05-feed.py to save the session information at the end of training and to use the saved session information if it exists instead of training.
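One common pattern uses tf.train.Saver; a sketch under the assumption that the checkpoint lives next to the script (the single variable here stands in for the 05-feed.py weights):

```python
import os
import tensorflow as tf

w = tf.Variable(tf.zeros([2, 3]))          # stand-in for the real model variables
saver = tf.train.Saver()
checkpoint = "./06-save.ckpt"              # path is an assumption

with tf.Session() as sess:
    if os.path.exists(checkpoint + ".index"):
        saver.restore(sess, checkpoint)    # reuse the trained weights, skip training
    else:
        sess.run(tf.global_variables_initializer())
        # ... run the training loop from 05-feed.py here ...
        saver.save(sess, checkpoint)
```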
Improve 05-feed.py by adding batch normalization after the hidden layer. Batch normalization adds stability and decreases training time. Compare the output of 05-feed.py and 07-batch-norm.py.
The accuracy gain, even for such a simple network, is substantial.
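One way to wire this in, assuming the x/m1/b1/m2/b2 names from the earlier XOR sketch (this uses tf.layers.batch_normalization; training=True is fine here because this tiny network always sees the full batch):

```python
hidden = tf.matmul(x, m1) + b1
hidden = tf.layers.batch_normalization(hidden, training=True)  # normalize the hidden pre-activations
hidden = tf.sigmoid(hidden)
y = tf.sigmoid(tf.matmul(hidden, m2) + b2)
```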
In TensorFlow 1.14, Keras layers have been added to simplify much of the model construction.
Use tf.keras.Sequential and tf.keras.layers.Dense to redo 02-xor-1d.py
And break the problem into the following layers:
- 2x3 + bias hidden sigmoid layer (using tf.keras.layers.Dense)
- 3x1 + bias sigmoid output layer (using tf.keras.layers.Dense)
- calculate loss as the binary crossentropy
- use gradient descent (with the learning rate set to 1.0) to minimize loss
Run the training for 1000 iterations
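A sketch of how the Keras version might look (layer arguments follow the list above; details such as verbose=0 are just for illustration):

```python
import numpy as np
import tensorflow as tf

x = np.array([[0., 0.], [1., 1.], [1., 0.], [0., 1.]])
y = np.array([[0.], [0.], [1.], [1.]])

model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="sigmoid", input_shape=(2,)),  # 2x3 + bias hidden layer
    tf.keras.layers.Dense(1, activation="sigmoid"),                    # 3x1 + bias output layer
])
model.compile(optimizer=tf.keras.optimizers.SGD(1.0),                  # gradient descent, rate 1.0
              loss="binary_crossentropy")

model.fit(x, y, epochs=1000, verbose=0)
print(model.predict(x))
```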
Same as 10-basic-layers.py but with batch normalization; run it for 500 iterations instead of 1000 and compare the results.
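The batch-normalized variant only changes the layer stack (and the epoch count) relative to the sketch above; something along these lines:

```python
model = tf.keras.Sequential([
    tf.keras.layers.Dense(3, activation="sigmoid", input_shape=(2,)),
    tf.keras.layers.BatchNormalization(),           # stabilizes the hidden activations
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
```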
Retrieve info from the model and see that the internal representation is akin to 02-xor-1d.py
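Keras exposes the learned parameters through get_weights(); a quick way to inspect them, assuming a trained model like the 10-basic-layers.py sketch above:

```python
for layer in model.layers:
    # each Dense layer returns [kernel, bias]; compare with m1/b1 and m2/b2 from 02-xor-1d.py
    print(layer.name, layer.get_weights())
```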
Use tf.keras to save and restore the trained model.
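A sketch using the standard tf.keras save/load calls (the filename is an assumption, and model/x refer to the earlier Keras sketch):

```python
model.save("xor-model.h5")                             # architecture + weights + optimizer state
restored = tf.keras.models.load_model("xor-model.h5")
print(restored.predict(x))                             # should match the original model's output
```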