It would be nice to support spiking activations within RNNs such as the keras-lmu or an LSTM. Ideally this would just be a matter of supplying a different `activation` argument to the RNN. Currently this can be done with a stochastic (i.e., stateless) spiking model (e.g., `activation = StochasticSpiking("tanh")`):
```python
# The following code is made available under the KerasSpiking license:
# https://www.nengo.ai/keras-spiking/license.html

import tensorflow as tf


def pseudo_gradient(forward, backward):
    """Multiplexes between one of the forward or backward tensors."""
    # The following trick can be used to supply a pseudo gradient. It works as follows:
    # on the forwards pass, the backward terms cancel out; on the backwards pass, only
    # the backward gradient is let through. This is similar to using tf.custom_gradient,
    # but that is prone to silent errors since certain parts of the graph might not
    # exist in the context that any tf.gradients are evaluated, which can lead to None
    # gradients silently getting through.
    return backward + tf.stop_gradient(forward - backward)


class StochasticSpiking:
    """Turn a static activation into a spiking one using stochastic rounding.

    Similar to ``nengo.neurons.StochasticSpiking``. The effective spike rate is
    controlled by ``dt``. Specifically, if ``a = f(x)`` is the static activity, then
    at most ``ceil(abs(a * dt))`` spikes are generated per time-step.

    Uses the gradient of the underlying activation function on the backward pass.
    """

    def __init__(self, wrapped_activation, dt=1):
        self._activation = tf.keras.activations.get(wrapped_activation)
        self._dt = dt

    def __call__(self, x):
        a = self._activation(x)
        y = a * self._dt

        # Apply stochastic rounding: y -> y_rounded.
        n = tf.math.floor(y)
        r = y - n
        y_rounded = n + tf.cast(tf.random.uniform(shape=tf.shape(x)) < r, dtype=x.dtype)

        a_rounded = y_rounded / self._dt
        return pseudo_gradient(forward=a_rounded, backward=a)
```
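For reference, a minimal usage sketch (not part of the snippet above; the layer sizes and `dt` value are made up for illustration), passing an instance as the activation of a built-in Keras RNN layer:

```python
# Illustrative only: pass a StochasticSpiking instance as the activation
# of a standard Keras LSTM layer (assumed shapes and hyperparameters).
import tensorflow as tf

model = tf.keras.Sequential(
    [
        tf.keras.layers.Input(shape=(None, 32)),  # (timesteps, features)
        tf.keras.layers.LSTM(64, activation=StochasticSpiking("tanh", dt=0.01)),
        tf.keras.layers.Dense(10),
    ]
)
model.compile(optimizer="adam", loss="mse")
```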
Otherwise I've been unable to get the `SpikingActivation` layer or `SpikingActivationCell` to work within RNNs.
*arvoelke changed the title from "Support spiking activations within RNNs" to "Support RNNs (i.e., a way to use spiking activations within RNNs)" on Feb 19, 2021*
Overloading this issue a bit, but another way the above is useful: you can change `activation._dt` on the fly without changing any of the weights, which lets you scale the firing rates up and down to adaptively balance energy and precision. Details are in https://arxiv.org/abs/2002.03553 and are patent pending together with the above code.
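For concreteness, a sketch of what that could look like (the `dt` values here are arbitrary, and this assumes eager execution so the new value is picked up on the next call):

```python
# Illustrative only: rescale firing rates at run time via dt, leaving weights untouched.
spiking_tanh = StochasticSpiking("tanh", dt=0.001)  # low spike rate: cheap but noisy
# ... build and train a model that uses spiking_tanh as its activation ...

# Later, trade energy for precision without retraining or changing any weights:
spiking_tanh._dt = 0.01  # higher spike rate: more spikes per step, better precision
```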