This repository is a collection of my useful resources for Deep Learning in TensorFlow, organized in 5 sections:
- Machine Learning and Deep Learning Basics in Math and Numpy
- Deep Learning Basics in Math, Numpy and Scikit-Learn
- Deep Learning Basics in TensorFlow
- Deep Learning Advanced in TensorFlow - CNN and TensorBoard
- Deep Learning Advanced in TensorFlow - RNN, LSTM and RBM
I would be really grateful if you contribute to or clone this repository; commercial use is not welcome. Thanks to Prof. Brian Kulis, Prof. Kate Saenko, and the TFs of CS591-S2 (Deep Learning) at Boston University. Of course, thanks also to Google's open-source TensorFlow!
All results in the Jupyter notebooks were trained on a GTX 1070; training on a CPU may take much more time.
This README file is rendered with readme2tex.
Section 1 Content - Machine Learning and Deep Learning Basics in Math and Numpy (click to view full notebook)
- Coding requirements:
# Python 3.5+
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial.distance import cosine
import matplotlib.cm as cm
- Closed-Form Maximum Likelihood mathematical derivation
- Gradient for Maximum Likelihood Estimation mathematical derivation
- Matrix Derivatives mathematical derivation
- Logistic Regression mathematical derivation
- Logistic Regression implementation
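As a quick illustration of the logistic regression implementation, here is a minimal numpy sketch of gradient ascent on the log-likelihood (the variable names, learning rate, and iteration count are illustrative, not taken from the notebook):
import numpy as np
def sigmoid(z):
    return np.exp(-np.logaddexp(0, -z))  # numerically stable sigmoid
def fit_logistic_regression(X, y, lr=0.1, n_iters=1000):
    # X: [n_samples, n_features], y: binary labels in {0, 1} of shape [n_samples]
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        p = sigmoid(X.dot(w))                 # predicted probabilities
        w += lr * X.T.dot(y - p) / len(y)     # gradient ascent on the log-likelihood
    return w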
Section 2 Content - Deep Learning Basics in Math, Numpy and Scikit-Learn (click to view full notebook)
- Coding requirements:
# Python 3.5+
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import imread
from sklearn.datasets import fetch_mldata
- Cross-Entropy and Softmax mathematical derivation
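As a small illustration of the numerics behind that derivation (not code from the notebook), a stable softmax subtracts the row-wise maximum before exponentiating:
import numpy as np
def softmax(logits):
    # Shift by the row-wise max so exp() never overflows.
    shifted = logits - np.max(logits, axis=1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=1, keepdims=True)
def cross_entropy(probs, one_hot_labels):
    # Mean negative log-likelihood over the batch; one_hot_labels is [batch_size, n_classes].
    return -np.mean(np.sum(one_hot_labels * np.log(probs + 1e-12), axis=1))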
- Simple Regularization Methods:
  - L2 regularization
  - L1 regularization
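As a quick illustration (not the notebook's code), the two penalties add the following terms to the weight gradient; lam is an illustrative regularization strength:
import numpy as np
def l2_penalty_grad(w, lam):
    # gradient of (lam / 2) * ||w||_2^2
    return lam * w
def l1_penalty_grad(w, lam):
    # (sub)gradient of lam * ||w||_1
    return lam * np.sign(w)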
- Backprop in a simple MLP - the multi-layer perceptron's mathematical derivation
- XOR problem - a neural network to solve the XOR problem (this is a really good example to help us understand the essence of neural networks)
- Implementing a simple MLP - implement an MLP by hand in numpy and scipy (see the sketch below)
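A minimal sketch of such a hand-written MLP, trained on the XOR problem from the bullet above. The architecture (4 hidden units), learning rate, and iteration count are illustrative choices, not the notebook's:
import numpy as np
def sigmoid(z):
    return np.exp(-np.logaddexp(0, -z))
# XOR data: 4 points, 2 features, binary targets
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
rng = np.random.RandomState(0)
W1, b1 = rng.randn(2, 4), np.zeros((1, 4))   # input -> 4 hidden units
W2, b2 = rng.randn(4, 1), np.zeros((1, 1))   # hidden -> 1 output unit
lr = 1.0
for _ in range(10000):
    # forward pass
    h = sigmoid(X.dot(W1) + b1)
    out = sigmoid(h.dot(W2) + b2)
    # backward pass (binary cross-entropy loss with a sigmoid output)
    d_out = out - y
    d_h = d_out.dot(W2.T) * h * (1 - h)
    # gradient descent updates
    W2 -= lr * h.T.dot(d_out) / len(X)
    b2 -= lr * d_out.mean(axis=0, keepdims=True)
    W1 -= lr * X.T.dot(d_h) / len(X)
    b1 -= lr * d_h.mean(axis=0, keepdims=True)
print(np.round(out).ravel())  # typically prints [0. 1. 1. 0.] once training has converged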
- Common useful activation functions, implemented so as to avoid numerical accuracy problems:
  - softplus function:
import numpy as np
def softplus(x):
    return np.logaddexp(0, x)
def derivative_softplus(x):
    return np.exp(-np.logaddexp(0, -x))
  - sigmoid function:
import numpy as np
def sigmoid(x):
    return np.exp(-np.logaddexp(0, -x))
def derivative_sigmoid(x):
    return np.multiply(np.exp(-np.logaddexp(0, -x)), 1. - np.exp(-np.logaddexp(0, -x)))
  - relu function:
import numpy as np
def relu(x):
    return np.maximum(0, x)
def derivative_relu(x):
    # note: writes the derivative into x in place, elementwise over a 2-D array
    for i in range(0, len(x)):
        for k in range(len(x[i])):
            if x[i][k] > 0:
                x[i][k] = 1
            else:
                x[i][k] = 0
    return x
- Forward pass implementation
- Backward pass implementation
- Test the MLP on the MNIST dataset and visualize the results
Section 3 Content - Deep Learning Basics in TensorFlow (click to view full notebook)
- Coding requirements:
# Python 3.5+
import numpy as np
# tensorflow-gpu==1.0.1 or tensorflow==1.0.1
import tensorflow as tf
from matplotlib import pyplot as plt
# Scikit-learn's TSNE is relatively slow, use BHTSNE as a faster alternative:
# https://github.com/dominiek/python-bhtsne
from sklearn.manifold import TSNE
- MNIST Softmax Classifier Demo in TensorFlow
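The core of such a demo, sketched with the TF 1.x API pinned above; the learning rate is illustrative and the input pipeline is omitted:
import tensorflow as tf
x = tf.placeholder(tf.float32, [None, 784])   # flattened 28x28 images
y = tf.placeholder(tf.float32, [None, 10])    # one-hot labels
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
logits = tf.matmul(x, W) + b
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
train_op = tf.train.GradientDescentOptimizer(0.5).minimize(loss)
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)), tf.float32))
# training loop sketch: feed batches of images/labels via feed_dict
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # for each batch: sess.run(train_op, feed_dict={x: batch_x, y: batch_y})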
- Building Neural Networks with the power of Variable Scope
- MLP in TensorFlow: with the power of variable scope, we can implement a very flexible MLP in TensorFlow without hard-coding the layers and weights:
def mlp(x, hidden_sizes, activation_fn=tf.nn.relu):
    '''
    Inputs:
        x: an input tensor of the images in the current batch [batch_size, 28x28]
        hidden_sizes: a list of the number of hidden units per layer. For example: [5,2] means 5 hidden units in the first layer, and 2 hidden units in the second (output) layer. (Note: for MNIST, we need hidden_sizes[-1]==10 since it has 10 classes.)
        activation_fn: the activation function to be applied
    Output:
        a tensor of shape [batch_size, hidden_sizes[-1]].
    '''
    if not isinstance(hidden_sizes, (list, tuple)):
        raise ValueError("hidden_sizes must be a list or a tuple")
    # Number of layers
    L = len(hidden_sizes)
    for l in range(L):
        with tf.variable_scope("layer" + str(l)):
            # Create variable named "weights".
            if l == 0:
                weights = tf.get_variable("weights", shape=[x.shape[1], hidden_sizes[l]], dtype=tf.float32, initializer=None)
            else:
                weights = tf.get_variable("weights", shape=[hidden_sizes[l-1], hidden_sizes[l]], dtype=tf.float32, initializer=None)
            # Create variable named "biases".
            biases = tf.get_variable("biases", shape=[hidden_sizes[l]], dtype=tf.float32, initializer=None)
            # Pre-activation layer
            if l == 0:
                pre_activation = tf.add(tf.matmul(x, weights), biases)
            else:
                pre_activation = tf.add(tf.matmul(activated_layer, weights), biases)
            # Activated layer (the last layer returns raw logits)
            if l == L - 1:
                activated_layer = pre_activation
            else:
                activated_layer = activation_fn(pre_activation)
    return activated_layer
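For example, the function above can be called like this (the layer sizes are just an illustration, not the notebook's choice):
x = tf.placeholder(tf.float32, [None, 784])
logits = mlp(x, [128, 64, 10])   # two hidden layers and a 10-way output for MNIST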
- Siamese Network in TensorFlow
- Visualize learned features of Siamese Network with T-SNE
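A common choice for the Siamese network's training objective is the contrastive loss; the sketch below follows the standard Hadsell-Chopra-LeCun formulation, and the margin value and variant used in the notebook may differ:
import tensorflow as tf
def contrastive_loss(feat1, feat2, y_similar, margin=1.0):
    # y_similar is 1.0 for pairs from the same class, 0.0 otherwise
    d2 = tf.reduce_sum(tf.square(feat1 - feat2), axis=1)   # squared euclidean distance
    d = tf.sqrt(d2 + 1e-10)
    loss_pos = y_similar * d2                               # pull similar pairs together
    loss_neg = (1.0 - y_similar) * tf.square(tf.maximum(margin - d, 0.0))  # push dissimilar pairs apart
    return tf.reduce_mean(loss_pos + loss_neg)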
Section 4 Content - Deep Learning Advanced in TensorFlow - CNN and TensorBoard (click to view full notebook)
- Coding requirements:
# Python 3.5+
import numpy as np
import scipy
import scipy.io
# tensorflow-gpu==1.0.1 or tensorflow==1.0.1
import tensorflow as tf
from matplotlib import pyplot as plt
# Scikit-learn's TSNE is relatively slow, use BHTSNE as a faster alternative:
# https://github.com/dominiek/python-bhtsne
from sklearn.manifold import TSNE
- Building and training a convolutional network in TensorFlow with tf.layers/tf.contrib
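A small convolutional network of this kind, sketched with tf.layers; the filter counts and kernel sizes are illustrative, not the notebook's:
import tensorflow as tf
def conv_net_layers(x):
    # x: [batch_size, 28, 28, 1] image tensor
    conv1 = tf.layers.conv2d(x, filters=32, kernel_size=5, padding='same', activation=tf.nn.relu)
    pool1 = tf.layers.max_pooling2d(conv1, pool_size=2, strides=2)
    conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=5, padding='same', activation=tf.nn.relu)
    pool2 = tf.layers.max_pooling2d(conv2, pool_size=2, strides=2)
    flat = tf.reshape(pool2, [-1, 7 * 7 * 64])
    dense = tf.layers.dense(flat, units=512, activation=tf.nn.relu)
    return tf.layers.dense(dense, units=10)   # unscaled logits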
- Building and training a convolutional network by hand in TensorFlow with tf.nn
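The same kind of convolutional layer written by hand with tf.nn, where the filter and bias variables are created explicitly (the shapes and initializers are illustrative):
import tensorflow as tf
def conv_layer_nn(x, n_filters=32, name="conv1"):
    # x: [batch_size, height, width, channels]
    with tf.variable_scope(name):
        W = tf.get_variable("weights", shape=[5, 5, int(x.shape[-1]), n_filters],
                            initializer=tf.truncated_normal_initializer(stddev=0.1))
        b = tf.get_variable("biases", shape=[n_filters],
                            initializer=tf.constant_initializer(0.0))
        conv = tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')
        act = tf.nn.relu(tf.nn.bias_add(conv, b))
        return tf.nn.max_pool(act, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')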
- Saving and Reloading Model Weights in TensorFlow
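Saving and restoring typically goes through tf.train.Saver; a minimal sketch (the variable and checkpoint path are just examples):
import tensorflow as tf
w = tf.Variable(tf.zeros([784, 10]), name="weights")   # any variables to checkpoint
saver = tf.train.Saver()   # by default covers all saveable variables in the graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... training ...
    save_path = saver.save(sess, "/tmp/model.ckpt")     # write weights to disk
with tf.Session() as sess:
    saver.restore(sess, "/tmp/model.ckpt")              # reload them later
    # ... evaluation or further training ...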
- Fine-tuning a pre-trained network
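One common fine-tuning pattern is to restore only the pre-trained variables and update only the new output layer. The sketch below uses a tiny hypothetical graph and scope names ("pretrained", "new_logits") that are not from the notebook:
import tensorflow as tf
x = tf.placeholder(tf.float32, [None, 784])
y = tf.placeholder(tf.float32, [None, 10])
with tf.variable_scope("pretrained"):
    feats = tf.layers.dense(x, 128, activation=tf.nn.relu)      # layers to reuse
with tf.variable_scope("new_logits"):
    logits = tf.layers.dense(feats, 10)                         # new task-specific head
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(labels=y, logits=logits))
# Restore only the pre-trained variables from an existing checkpoint...
pretrained_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="pretrained")
restorer = tf.train.Saver(var_list=pretrained_vars)
# ...and update only the new layer's variables during fine-tuning.
new_vars = tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES, scope="new_logits")
train_op = tf.train.AdamOptimizer(1e-4).minimize(loss, var_list=new_vars)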
- Visualizations using TensorBoard (see the sketch after this list):
  - Visualize Filters/Kernels
  - Visualize Loss
  - Visualize Accuracy
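These visualizations are driven by tf.summary ops; a minimal sketch of wiring them up, with stand-in variables and an example log directory (in practice the summaries would be attached to the real loss, accuracy, and kernel tensors):
import tensorflow as tf
loss = tf.Variable(0.0, name="loss_value")          # stand-ins for the real tensors
accuracy = tf.Variable(0.0, name="accuracy_value")
kernel = tf.Variable(tf.zeros([5, 5, 1, 32]), name="conv1_kernel")
tf.summary.scalar("loss", loss)                      # curves in the Scalars tab
tf.summary.scalar("accuracy", accuracy)
tf.summary.histogram("conv1_kernel", kernel)         # filter distributions over time
merged = tf.summary.merge_all()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    writer = tf.summary.FileWriter("/tmp/tf_logs", sess.graph)
    summary = sess.run(merged)
    writer.add_summary(summary, global_step=0)       # then run: tensorboard --logdir /tmp/tf_logs
    writer.close()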