Simple neural network implementation in numpy with a PyTorch-like API
Originally made for an assignment for the course "Machine Intelligence" at PES University. Although the commits are recent, most of the code was written during the course (Oct/Nov 2020) and moved from a different repo.
This is not meant for serious workloads, but it works well as a learning tool. For example, here are some things you could learn from the code:
- Modular implementation of neural networks - each layer is a module with its own trainable parameters. Refer to nn.py (a minimal usage sketch follows this list).
- The implementation is also very extensible - you can write your own modules with different behaviours, such as Dense (a fully connected layer) or even something meta like Sequential (a chain of layers).
- Similarly, activation functions, loss functions and optimisers are also modular and extensible.
- Usage of Einstein summation (einsum) operations in numpy (and in general). Here's a nice reference for Einstein summation; a short example also follows this list.
- Type annotations in python - the codebase is almost completely type-annotated. This makes the code a little easier to maintain and improves the editing experience significantly for users of the library. Although mypy does report a few errors, most of the type annotations are correct (PRs to fix this are welcome).
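To give a flavour of what a modular, PyTorch-like API looks like, here is a minimal sketch written from scratch in plain numpy. The `Module`, `Dense` and `Sequential` names mirror the ideas above but are illustrative only; they are not necessarily the exact classes or method signatures that numpytorch exports.

```python
import numpy as np

class Module:
    """Base class: every layer implements forward() and exposes its parameters."""
    def forward(self, x: np.ndarray) -> np.ndarray:
        raise NotImplementedError

    def parameters(self) -> list:
        return []

class Dense(Module):
    """A fully connected layer: y = x @ W + b."""
    def __init__(self, in_features: int, out_features: int) -> None:
        self.W = np.random.randn(in_features, out_features) * 0.01
        self.b = np.zeros(out_features)

    def forward(self, x: np.ndarray) -> np.ndarray:
        return x @ self.W + self.b

    def parameters(self) -> list:
        return [self.W, self.b]

class Sequential(Module):
    """A 'meta' module that chains other modules together."""
    def __init__(self, *layers: Module) -> None:
        self.layers = layers

    def forward(self, x: np.ndarray) -> np.ndarray:
        for layer in self.layers:
            x = layer.forward(x)
        return x

    def parameters(self) -> list:
        return [p for layer in self.layers for p in layer.parameters()]

# Usage: a tiny two-layer network on a batch of 4 samples with 3 features.
model = Sequential(Dense(3, 8), Dense(8, 1))
out = model.forward(np.random.randn(4, 3))
print(out.shape)  # (4, 1)
```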
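And here is a quick look at np.einsum, which can express the tensor contractions that show up in a fully connected layer's forward and backward passes:

```python
import numpy as np

x = np.random.randn(4, 3)   # batch of 4 inputs, 3 features each
W = np.random.randn(3, 8)   # weights of a Dense(3, 8) layer

# Forward pass: batched matrix multiplication, equivalent to x @ W.
y = np.einsum("bi,ij->bj", x, W)

# Weight gradient given an upstream gradient dy (same shape as y),
# equivalent to x.T @ dy.
dy = np.random.randn(4, 8)
dW = np.einsum("bi,bj->ij", x, dy)

print(y.shape, dW.shape)  # (4, 8) (3, 8)
```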
I don't plan to develop this further, but if you want to learn, you can try implementing the following (in your own fork, or send a PR!):
- More activation functions. numpytorch/activations.py has a limited set of activation functions; there are many more you can add (a rough sketch of one follows this list).
- More loss functions. numpytorch/losses.py has only one loss function (binary cross-entropy).
- More optimisers. numpytorch/optim.py has only one optimiser (Stochastic Gradient Descent, SGD) with support for L2 regularization and momentum. The Adam optimiser would be a nice addition (see the sketch after this list).
- Automatic differentiation. Currently, backward passes (derivatives) have to be hand-coded into all the activation functions, layers, etc. Integrating some kind of automatic differentiation library (like autograd or autodidact) would make this a lot less painful to customize. You could also try writing your own automatic differentiation library; that would be a fun project! (ref)
- Other fancy layers like convolution, recurrent cells, etc.
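To see what "hand-coded backward passes" means in practice, here is a rough sketch of how a new activation such as ReLU might look. The forward/backward method names and the input-caching convention are assumptions for illustration; match whatever interface numpytorch/activations.py actually uses.

```python
import numpy as np

class ReLU:
    """Sketch of an activation with a hand-coded derivative (not the library's actual class)."""
    def forward(self, x: np.ndarray) -> np.ndarray:
        self.x = x                       # cache the input for the backward pass
        return np.maximum(x, 0.0)

    def backward(self, grad_out: np.ndarray) -> np.ndarray:
        # dReLU/dx is 1 where x > 0 and 0 elsewhere.
        return grad_out * (self.x > 0)
```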
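Similarly, the Adam update rule is short enough to sketch in numpy. The function below is a standalone illustration of the maths; the parameter/gradient handling is hypothetical, and a real implementation would plug into whatever interface numpytorch/optim.py defines.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update. m and v are running moment estimates, t is the 1-based step count."""
    m = beta1 * m + (1 - beta1) * grad           # first moment (mean of gradients)
    v = beta2 * v + (1 - beta2) * grad ** 2      # second moment (uncentred variance)
    m_hat = m / (1 - beta1 ** t)                 # bias correction for the warm-up phase
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v
```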
Thanks to team members Aayush and Bhargav for helping.
numpytorch is MIT licensed.