Regression Tutorial #1: random thresholds? #199
-
Hi,
I get why you would want to randomly initialize the decay rate, since it is then learned by the network, but I'm not quite sure why you would randomly initialize the thresholds for each neuron when that value is not learned. I checked the referenced material (your paper on Spiking Neural Networks for Nonlinear Regression) but I didn't find anything there. Is it just an empirical choice that leads to better results, or is there some insight that I am missing? Thank you.
Replies: 1 comment
-
Great question! It hasn't actually been empirically tested with much rigour. This paper by Perez-Nieves et al. demonstrated that randomly initialized decay rates outperform a global decay rate. Our guess is that having a diverse set of dynamics amongst neurons can help model different types of data: a slow decay is good for long-range time dependencies, a fast decay for short-term ones. Applying similar logic to random thresholds, perhaps the weights attached to neurons with small thresholds will become stronger if particular features need high firing rates. Just theorizing - but hopefully this gives you some intuition on why it might be useful!
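For concreteness, here is a minimal sketch of what that setup can look like in snnTorch (this is an illustration, not the tutorial's exact code, and it assumes the Leaky neuron's `beta`, `threshold`, `learn_beta` and `learn_threshold` arguments accept per-neuron tensors):

```python
import torch
import snntorch as snn

num_hidden = 128  # hypothetical layer size

# Per-neuron random decay rates (beta) and per-neuron random thresholds.
lif = snn.Leaky(
    beta=torch.rand(num_hidden),       # random decay rate per neuron, used as a starting point
    threshold=torch.rand(num_hidden),  # random threshold per neuron
    learn_beta=True,                   # decay rates are trained
    learn_threshold=False,             # thresholds stay at their random initial values
)

mem = lif.init_leaky()           # initialise membrane potential
cur = torch.rand(1, num_hidden)  # dummy input current for a single timestep
spk, mem = lif(cur, mem)         # one step of the leaky integrate-and-fire dynamics
```

Setting `learn_threshold=True` instead would let the network adapt the thresholds as well, which is one way to test whether the random (fixed) initialization actually matters.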