A bare-bones implementation of deep learning neural networks using numpy. toynn is not comparable to powerful deep learning libraries such as TensorFlow, Keras, PyTorch, and many others. Its goal is not to train production-quality deep learning neural networks. It is a personal project born out of a desire to understand deep learning neural networks by coding them up and to gain insight into the various tricks that make them so powerful. I also used it to get some practice in prototyping and shipping machine learning algorithms using the Python/git/CI ecosystem.
- Sigmoid and ReLU layers
- Batch gradient descent
- Constant, He and Xavier weight initializations
- Dropout
- L1 and L2 regularization
- Mini-batch gradient descent
- MIT license
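To give a flavor of the techniques listed above, here is a standalone numpy sketch of He/Xavier weight initialization and (inverted) dropout. It is illustrative only: the function names and signatures are hypothetical and do not reflect toynn's actual interface.

```python
import numpy as np

rng = np.random.default_rng(0)

def he_init(fan_in, fan_out):
    # He initialization: variance 2/fan_in, suited to ReLU layers.
    return rng.standard_normal((fan_out, fan_in)) * np.sqrt(2.0 / fan_in)

def xavier_init(fan_in, fan_out):
    # Xavier initialization: variance 1/fan_in, suited to sigmoid/tanh layers.
    return rng.standard_normal((fan_out, fan_in)) * np.sqrt(1.0 / fan_in)

def dropout_forward(a, keep_prob=0.8):
    # Inverted dropout: zero out activations at random, then rescale
    # by 1/keep_prob so the expected activation is unchanged.
    mask = (rng.random(a.shape) < keep_prob).astype(a.dtype)
    return a * mask / keep_prob
```

The rescaling in `dropout_forward` is what lets the same forward pass be used at test time with dropout simply turned off.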
The implementation of deep learning neural networks in this package is inspired by the lectures and code in Andrew Ng's Deep Learning Specialization on Coursera.