Link to the DoReFa-Net paper: https://arxiv.org/abs/1606.06160.
Partial implementation supporting 1-bit weights and k-bit activations only.
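As a rough sketch of what this setting means, the following hypothetical NumPy functions illustrate the two quantizers from the DoReFa-Net paper in their forward pass (the function names are illustrative, not taken from this repository): activations are clipped to [0, 1] and rounded to one of 2^k uniform levels, while weights are binarized to sign(w) scaled by the mean absolute value.

```python
import numpy as np

def quantize_activation(x, k):
    """k-bit uniform quantization of activations clipped to [0, 1],
    as in DoReFa-Net: round to one of 2**k levels."""
    n = 2 ** k - 1
    return np.round(np.clip(x, 0.0, 1.0) * n) / n

def binarize_weights(w):
    """1-bit weights: sign(w) scaled by the mean absolute value,
    following the 1-bit case of DoReFa-Net (forward pass only)."""
    alpha = np.mean(np.abs(w))
    return alpha * np.sign(w)
```

During training, both quantizers are normally paired with a straight-through estimator for the backward pass; the sketch above shows only the forward computation.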
Contains a single example on MNIST.
The script trains a LeNet-style network; run python qnn_mnist.py --ab k, where k is the number of activation bits. For use with the Finnthesizer in my QNN-MO-PYNQ fork, run python qnn_mnist.py --export to create a compatible NPZ archive.
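As a quick sanity check after exporting, the archive can be inspected with NumPy. The snippet below is a hypothetical example: it builds a small stand-in archive itself, so the array names (conv0_W, fc0_W) and shapes are placeholders, not the keys the exporter actually writes.

```python
import numpy as np

# Build a stand-in export with placeholder names and 1-bit
# weights in {-1, +1} (signs of random values).
weights = {
    "conv0_W": np.sign(np.random.randn(16, 1, 3, 3)).astype(np.float32),
    "fc0_W": np.sign(np.random.randn(10, 64)).astype(np.float32),
}
np.savez("qnn_mnist_export.npz", **weights)

# Load it back and list the stored arrays, as a downstream
# consumer of the NPZ archive would.
with np.load("qnn_mnist_export.npz") as archive:
    for name in archive.files:
        print(name, archive[name].shape, archive[name].dtype)
```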