In the Keras-to-Loihi example, we currently have biases turned off for all the conv layers. Loihi can handle biases, so it would be nice to not have to do this.
I think the best way to do this would be to have a converter option that moves the biases onto the neurons, rather than keeping them as a separate node. With this option on, training the network in NengoDL would be slightly different from training in Keras, because we'd have more biases (i.e. each neuron would have its own bias). I don't think that's a problem, though: any network converted from Keras would still be identical if it's not trained at all in NengoDL.
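Roughly, the proposed interface might look something like the sketch below (the keyword name is made up purely for illustration; no such option exists in `nengo_dl.Converter` today):

```python
import tensorflow as tf
import nengo_dl

# A small Keras model with biases left on (use_bias defaults to True).
inp = tf.keras.Input(shape=(28, 28, 1))
x = tf.keras.layers.Conv2D(4, 3, activation=tf.nn.relu)(inp)
x = tf.keras.layers.Flatten()(x)
out = tf.keras.layers.Dense(10)(x)
model = tf.keras.Model(inp, out)

# Proposed option (hypothetical name): fold each layer's bias into the
# per-neuron biases of the converted ensembles, rather than keeping the
# biases in a separate Node feeding into those ensembles.
# converter = nengo_dl.Converter(model, biases_on_neurons=True)
```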
Another option would be to do this automatically if we're in `inference_only` mode. We talked about this briefly in #137. However, I prefer the additional control of having the explicit converter option (i.e. you can still train the network in NengoDL with individual neuron biases, if you want).
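For reference, `inference_only` is an existing `Converter` argument that tells the converter the network won't be trained further; the idea from #137 would be for the bias folding to happen automatically in this case:

```python
# Using the model defined above; with inference_only=True the converter is
# free to make simplifications that are only valid if we never train or
# otherwise modify the converted network.
converter = nengo_dl.Converter(model, inference_only=True)
```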
A workaround for this would be to use the converter as-is with `use_bias=False` on all Keras layers, and then go through the network and turn `trainable` back on for all ensembles (since this would make the biases trainable again). For the purposes of an example, though, that's a little confusing. It also doesn't allow training the network in Keras and then converting to Nengo while keeping the existing biases on the neurons.
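A minimal sketch of that workaround, assuming the model is built with `use_bias=False` on its Keras layers and that the converted ensembles start out non-trainable as described above (using the standard NengoDL pattern of exposing per-object `trainable` flags via `configure_settings`):

```python
converter = nengo_dl.Converter(model)  # model built with use_bias=False layers

with converter.net:
    # Expose the per-object `trainable` attribute on the network config.
    nengo_dl.configure_settings(trainable=None)
    # Turn training back on for every ensemble, which makes the neuron
    # biases (currently all zero) trainable again in NengoDL.
    for ensemble in converter.net.all_ensembles:
        converter.net.config[ensemble].trainable = True
```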