Stork is a library for training spiking neural networks (SNNs). In contrast to conventional deep learning models, SNNs communicate through discrete spikes rather than continuous activation functions. Stork therefore extends PyTorch's auto-differentiation capabilities with surrogate gradients (Zenke & Ganguli, 2018) to enable training SNNs with backpropagation through time (BPTT).
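For intuition, the surrogate-gradient trick can be written in a few lines of plain PyTorch: the forward pass emits binary spikes, while the backward pass substitutes a smooth pseudo-derivative. The sketch below uses the SuperSpike fast-sigmoid surrogate from Zenke & Ganguli (2018); the steepness `beta` is an illustrative choice, and this is a sketch of the general technique, not stork's internal implementation:

```python
import torch

class SuperSpike(torch.autograd.Function):
    """Heaviside spike with a fast-sigmoid surrogate gradient."""

    beta = 10.0  # surrogate steepness (illustrative value, not a stork default)

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u > 0).float()  # binary spikes in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Replace the Heaviside's ill-defined derivative with 1/(beta*|u|+1)^2
        return grad_output / (SuperSpike.beta * u.abs() + 1.0) ** 2

spike_fn = SuperSpike.apply  # differentiable spike nonlinearity
```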
Stork supports leaky integrate-and-fire (LIF) neurons, including adaptive LIF neurons, and different kinds of synaptic connections, allowing the use of, e.g., Dalian and convolutional layers, as well as the construction of network architectures with recurrent or skip connections. For each neuron group, customizable activity regularizers are available, e.g., to apply homeostatic plasticity.
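To make the neuron model concrete, here is a rough discrete-time sketch of a current-based LIF layer, independent of stork's actual classes; the decay factors play the role of exp(-dt/tau) and their values are illustrative, not stork defaults:

```python
import torch

# `spike_fn` is the surrogate-gradient nonlinearity from the sketch above.

def lif_step(v, syn, spikes_in, w, beta_mem=0.9, beta_syn=0.8, threshold=1.0):
    """One discrete-time step of a current-based LIF layer.

    Shapes: v, syn: (batch, n_out); spikes_in: (batch, n_in); w: (n_in, n_out).
    """
    syn = beta_syn * syn + spikes_in @ w  # synaptic current integrates input spikes
    v = beta_mem * v + syn                # leaky membrane integration
    out = spike_fn(v - threshold)         # spikes where the membrane crosses threshold
    v = v - out * threshold               # soft reset: subtract threshold after a spike
    return v, syn, out
```

Unrolling this update over the input duration and backpropagating through the resulting graph is what BPTT with surrogate gradients amounts to.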
Furthermore, stork initializes networks in the fluctuation-driven regime by default, which improves SNN training, especially in deep networks.
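The core idea can be sketched in a few lines: draw zero-mean weights whose variance is scaled such that the resulting membrane-potential fluctuations reach a target standard deviation. The sketch below follows the scaling relation from Rossbroich et al. (2022) in simplified form; `nu`, `sigma_u`, and `epsilon_hat` are illustrative placeholders, not stork defaults:

```python
import math
import torch

def fluctuation_driven_init(n_in, n_out, nu=10.0, sigma_u=1.0, epsilon_hat=0.01):
    """Zero-mean weight init targeting membrane fluctuations of std `sigma_u`.

    Assuming roughly Poisson input at rate `nu` (Hz), the stationary
    membrane-potential variance scales as n_in * nu * epsilon_hat * sigma_w**2,
    where `epsilon_hat` stands in for the integral of the squared PSP kernel
    (the value here is a placeholder). Solving for sigma_w gives the scale.
    """
    sigma_w = sigma_u / math.sqrt(n_in * nu * epsilon_hat)
    return torch.randn(n_in, n_out) * sigma_w
```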
If you find this library useful and use it for your research projects, please cite
Bibtex Citation:

```bibtex
@article{rossbroich_fluctuation-driven_2022,
  title   = {Fluctuation-driven initialization for spiking neural network training},
  author  = {Rossbroich, Julian and Gygax, Julia and Zenke, Friedemann},
  journal = {Neuromorphic Computing and Engineering},
  year    = {2022},
  doi     = {10.1088/2634-4386/ac97bb},
}
```
- Create and activate a virtual environment.
- Download stork or clone the repository with
  ```
  git clone git@github.com:fmi-basel/stork.git
  ```
- Change into the stork directory:
  ```
  cd stork
  ```
- Install the requirements with
  ```
  pip install -r requirements.txt
  ```
- Install stork with
  ```
  pip install -e .
  ```
The `examples` directory contains notebooks and Python scripts of varying complexity.
- 00_FluctuationDrivenInitialization: This example provides some intuition about the idea behind fluctuation-driven initialization and reproduces the panels from Fig. 1 of Rossbroich et al. (2022).
- 01_Shallow_SNN_Randman: This demonstrates how to train an SNN with a single hidden layer on the RANDom MANifold (Randman) dataset.
- 02_Deep_SNN_SHD: This example demonstrates how to train a deep feedforward SNN with multiple hidden layers on the Spiking Heidelberg Digits (SHD) dataset.
- 03_Deep_ConvSNN_SHD: Here we provide an example of a deep recurrent convolutional SNN on the SHD dataset. This example introduces the use of the `layer` module to create convolutional layers.
- 04_DalesLaw_SNN_SHD: This notebook demonstrates how to implement a Dalian network, i.e. a network with separate populations of excitatory and inhibitory neurons (so that the synaptic connections are sign-constrained), using the `DalianLayer` class from the `layer` module. A generic sketch of sign-constrained synapses follows after this list.
- 05_Deep_ConvSNN_DVS-Gestures: Similar to 03_Deep_ConvSNN_SHD, but for the DVS128 Gesture dataset.
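Picking up the sign-constraint idea from 04_DalesLaw_SNN_SHD above, a minimal sketch of Dale's-law connectivity might look like the following (a generic illustration, not stork's `DalianLayer`; all names here are hypothetical):

```python
import torch

def dalian_forward(x_exc, x_inh, w_exc, w_inh):
    """Sign-constrained synapses in the spirit of Dale's law.

    Spikes from the excitatory population enter with non-negative weights,
    spikes from the inhibitory population with non-positive weights.
    Shapes: x_exc: (batch, n_exc), x_inh: (batch, n_inh).
    """
    # Clamping with relu in the forward pass keeps the sign constraints
    # satisfied throughout training while the raw parameters stay trainable.
    return x_exc @ torch.relu(w_exc) - x_inh @ torch.relu(w_inh)
```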
The development of Stork was supported by the Swiss National Science Foundation [grant number PCEFP3_202981].