Hi there! First off, this is a fantastic project, thanks for making it available! I played around with it some after your first series of blog posts (way back in like 2015 or something? Gosh time flies) and have been watching it from the sidelines for a while. Super cool, and happy it's still alive...and powering a database now!
In any case, I was hacking on a small toy "spiking neural net" project and the further I got into it, the more it felt like a good use-case for DD. Been re-familiarizing myself with DD, the docs/book and blog articles... but before I get too deep in the weeds I had a few high-level questions.
A "spiking" neural net is, essentially, trying to mimic more biologically-plausible neurons:
A densely connected network of nodes, and some connections can be recurrent or cyclical
Binary "activation". Unlike modern artificial nets which process continuous values, spiking nets only "activate" after a threshold is reached and their output is a simple binary event. Information is encoded in time rather than in a continuous value
Node activations are pretty sparse. So a network might be say 80% connected, but only 5% is "active" at any particular time. This feels very DD to me :)
Some more advanced networks incorporate learning/plasticity rules. E.g. if a node "activates", all the nodes that contributed to the activation during the last time step (or last n time steps) are boosted. Conversely, if this node did not activate, all those that contributed to the "non-event" will be weakened
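To make that last rule concrete, here's a rough plain-Rust toy of what I mean (no DD at all yet; the names, the `HashMap` representation, and the +1/-1 updates are just placeholders for illustration):

```rust
// Toy, non-DD version of the spiking + plasticity rule described above.
use std::collections::HashMap;

struct Neuron {
    potential: i64,
    threshold: i64,
}

/// One time step: fire neurons over threshold, boost the edges that
/// contributed to a firing, and weaken edges that contributed to a non-event.
fn step(
    neurons: &mut HashMap<u64, Neuron>,
    weights: &mut HashMap<(u64, u64), i64>, // (src, dst) -> weight
    contributors: &HashMap<u64, Vec<u64>>,  // dst -> srcs that spiked at t-1
) -> Vec<u64> {
    let mut fired = Vec::new();
    for (&id, neuron) in neurons.iter_mut() {
        if neuron.potential >= neuron.threshold {
            fired.push(id);
            neuron.potential = 0;
            if let Some(srcs) = contributors.get(&id) {
                for &src in srcs {
                    *weights.entry((src, id)).or_insert(0) += 1; // boost
                }
            }
        } else if let Some(srcs) = contributors.get(&id) {
            for &src in srcs {
                *weights.entry((src, id)).or_insert(0) -= 1; // weaken
            }
        }
    }
    fired
}
```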
It feels like DD could be used for the network: build a graph of connections, associate a value and threshold with each node and a weight on each edge, introduce "inputs" to the graph, and let activations burble through it. Most of the examples use simple tuples, but I'm assuming it is relatively trivial to use a struct instead, to hold more complicated information for each value? I see the `monoid-bfs` example uses a `MinSum` struct that implements various traits, so I'm assuming I can follow that model.
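Something like this is what I have in mind (a minimal sketch of my own, not from the repo; `Synapse` and its fields are made up, and the derive list is my guess at what the `Data` bound wants for a single-worker setup — exchanged data would presumably also need serialization derives):

```rust
// Minimal sketch: a custom struct as the payload of a differential collection.
use differential_dataflow::input::InputSession;

#[derive(Clone, Debug, PartialEq, Eq, PartialOrd, Ord, Hash)]
struct Synapse {
    src: u64,
    dst: u64,
    weight: i64, // signed, so plasticity can weaken as well as strengthen
}

fn main() {
    timely::execute_directly(|worker| {
        let mut edges: InputSession<u64, Synapse, isize> = InputSession::new();

        worker.dataflow::<u64, _, _>(|scope| {
            edges
                .to_collection(scope)
                // e.g. project out the downstream neuron of each synapse
                .map(|s| s.dst)
                .inspect(|x| println!("update: {:?}", x));
        });

        edges.insert(Synapse { src: 0, dst: 1, weight: 3 });
        edges.advance_to(1);
        edges.flush();
    });
}
```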
The last bullet is my main question, though: is there a way to inspect the history of a node to see what happened during prior epochs?
I know DD tracks this data, but I'm not sure if it's possible to retrieve... or whether it's too dangerous/difficult to actually use? I could store that information as a local history on the node itself, but that seems redundant since DD internally tracks that data in the form of diffs anyway, right?
Alternatively, I was thinking that maybe two events could be propagated: the activation at `t`, and a second event with `update_at(t+1)` which is essentially an "I activated you last time step" event. So instead of looking backwards, a post-it note shows up in the future with the reminder to check.
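Roughly this kind of driving loop is what I'm picturing (a sketch under my assumptions, not tested DD code: the outer loop captures the activations observed at epoch t and re-inserts them as "fired last step" reminders at epoch t + 1; all the names are illustrative):

```rust
// Sketch of the "post-it note" idea: replay epoch t's activations as
// reminder events at epoch t + 1, via the host program driving the inputs.
use std::cell::RefCell;
use std::rc::Rc;

use differential_dataflow::input::InputSession;
use timely::dataflow::ProbeHandle;

fn main() {
    timely::execute_directly(|worker| {
        let mut spikes: InputSession<u64, u64, isize> = InputSession::new();    // neuron ids firing now
        let mut reminders: InputSession<u64, u64, isize> = InputSession::new(); // "you fired last step"
        let mut probe = ProbeHandle::new();

        // activations seen this epoch, to be replayed as reminders next epoch
        let fired = Rc::new(RefCell::new(Vec::new()));
        let fired_handle = fired.clone();

        worker.dataflow::<u64, _, _>(|scope| {
            let spikes = spikes.to_collection(scope);
            let _reminders = reminders.to_collection(scope);
            // ... a plasticity rule would combine `_reminders` with `spikes` here ...
            spikes
                .inspect(move |x| fired_handle.borrow_mut().push(x.0))
                .probe_with(&mut probe);
        });

        for t in 0..3u64 {
            spikes.insert(t); // stand-in for external input at epoch t

            // the "post-it notes": last epoch's activations show up now
            for id in fired.borrow_mut().drain(..) {
                reminders.insert(id);
            }

            spikes.advance_to(t + 1);
            spikes.flush();
            reminders.advance_to(t + 1);
            reminders.flush();
            worker.step_while(|| probe.less_than(spikes.time()));
        }
    });
}
```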
Thoughts?