Could Sparse Evolutionary Training (SET) be implemented as a training method? From my reading, it seems somewhat similar to Adam, though I may be grossly oversimplifying things.
That's an interesting method. Before saying anything definitive I'll need to read the paper properly. From a quick look through it, I think there might be a few problems with integration. For example:
With SET, the bipartite ANN layers start from a random sparse topology (i.e. an Erdős–Rényi random graph [24]), evolving through a random process during the training phase towards a scale-free topology.
NeuPy doesn't support efficient sparse connections, so the whole implementation could be quite problematic and very inefficient.
It might work as a standalone network with a fixed architecture, as in the case of the SET-RBM (where it's much easier to make it efficient), but again, I'll have to read through the paper to be sure.
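For reference, here is a rough sketch of how the SET rewiring step could be emulated without sparse-tensor support, by keeping a dense weight matrix and a binary mask that is multiplied into the weights after every gradient update. The function names, the `epsilon`/`zeta` values, and the use of plain NumPy are my own illustration, not anything from NeuPy or the paper's code:

```python
import numpy as np

def erdos_renyi_mask(n_in, n_out, epsilon=11.0, rng=None):
    # Connection probability scales as epsilon * (n_in + n_out) / (n_in * n_out),
    # following the Erdős–Rényi sparse initialization described in the SET paper.
    # epsilon=11.0 is illustrative, not a recommended value.
    rng = rng or np.random.default_rng()
    prob = min(1.0, epsilon * (n_in + n_out) / (n_in * n_out))
    return (rng.random((n_in, n_out)) < prob).astype(float)

def evolve_mask(weights, mask, zeta=0.3, rng=None):
    # One rewiring step, run once per epoch: drop a fraction `zeta` of the
    # active connections with the smallest-magnitude weights (a simplification
    # of the paper's rule, which removes the largest negative and smallest
    # positive weights), then regrow the same number of connections at
    # random currently-empty positions.
    rng = rng or np.random.default_rng()
    active = np.flatnonzero(mask)
    n_drop = int(zeta * active.size)
    if n_drop == 0:
        return mask
    weakest = active[np.argsort(np.abs(weights.ravel()[active]))[:n_drop]]
    new_mask = mask.copy()
    new_mask.ravel()[weakest] = 0.0
    empty = np.flatnonzero(new_mask == 0.0)
    regrow = rng.choice(empty, size=n_drop, replace=False)
    new_mask.ravel()[regrow] = 1.0
    return new_mask
```

In a training loop this would mean applying `weights *= mask` after each update and calling `mask = evolve_mask(weights, mask)` once per epoch. Even then, every masked-out connection still pays the cost of a dense multiplication, which is exactly the inefficiency mentioned above.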
I have included some reading material if there is interest:
https://phys.org/news/2018-06-ai-method-power-artificial-neural.html
I am up for trying to tackle this in a pull request if you would like assistance.