# Roadmap

## v2: Simplicity and Performance First

It must be easy enough to teach a child and fast enough to drive innovation.
| Feature | Status | Notes |
| --- | --- | --- |
| FeedForward composition | 95% | |
| Recurrent composition | 85% | Still needs a DataFormatter component |
| Activation layers | 100% | |
| Standard layers | 100% | |
| Convolution layers | 90% | Already built, but the GPU algorithms are being tested against the CPU algorithms from ConvNetJS here: https://github.com/BrainJS/brain.js-cnn-integrity |
| JSON out | 95% | Done, but needs testing across the board once all layers and networks are finished (serialization sketch below) |
| JSON in | 95% | Done, but needs testing across the board once all layers and networks are finished |
| GPU browser | 100% | Done for WebGL2, with WebGL1 and CPU fallbacks. http://gpu.rocks |
| GPU node | 99.99% | |
| Layer tests | 90% | Essentially done, but we're aiming for 100% code coverage |
| Documentation | 90% | |
| End-to-end test coverage | 50% | |
| Demos & site upgrade | 100% | |
| Upgrade recurrent API to use new API | 0% | |
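For context on the JSON rows above, here is a minimal round-trip sketch using the existing v1 `NeuralNetwork` API; the expectation (an assumption until the remaining layers land) is that `toJSON`/`fromJSON` behave the same way across all of the new layer and network types:

```js
import { NeuralNetwork } from 'brain.js';

const net = new NeuralNetwork();
net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
]);

const json = net.toJSON();             // plain object, safe to JSON.stringify
const restored = new NeuralNetwork();
restored.fromJSON(json);               // restores weights without retraining
restored.run([0, 1]);                  // ~[1]
```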
Currently we want to focus our efforts on creating a "layer playground": a large collection of layers that can work with any network type, be it feedforward or recurrent.

We've put a considerable amount of work into GPU acceleration, and eventually all networks will run fully on the GPU. The library where much of that work has been done is http://gpu.rocks, illustrated in the sketch below.
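To give a feel for what that acceleration looks like, here is a minimal GPU.js sketch (the underlying library itself, not brain.js internals); the 512x512 matrix size is an arbitrary assumption:

```js
import { GPU } from 'gpu.js';

// Compiles kernels to WebGL where available, and falls back to the CPU otherwise.
const gpu = new GPU();

// Multiply two 512x512 matrices; each GPU thread computes one output cell.
const multiplyMatrix = gpu
  .createKernel(function (a, b) {
    let sum = 0;
    for (let i = 0; i < 512; i++) {
      sum += a[this.thread.y][i] * b[i][this.thread.x];
    }
    return sum;
  })
  .setOutput([512, 512]);

// Build a 512x512 matrix of random numbers.
const randomMatrix = () =>
  Array.from({ length: 512 }, () => Array.from({ length: 512 }, Math.random));

const c = multiplyMatrix(randomMatrix(), randomMatrix());
```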
Recurrent and feedforward networks have always seemed like completely different architectures, when really only a few very simple things make them different. We want to make both so easy that anyone can use them:
```js
new FeedForward({
  inputLayer: () => { /* return an instantiated layer here */ },
  hiddenLayers: [
    (input) => { /* return an instantiated layer here */ },
    /* more layers? by all means... */
    /* `input` here is the output from the previous layer */
  ],
  outputLayer: (input) => { /* return an instantiated layer here */ }
});
```
For example, a feedforward network that learns XOR:

```js
import { FeedForward, layer } from 'brain.js';

const { input, feedForward, output } = layer;

const net = new FeedForward({
  inputLayer: () => input({ width: 2 }),
  hiddenLayers: [
    input => feedForward({ width: 3 }, input)
  ],
  outputLayer: input => output({ width: 1 }, input)
});

net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
]);

net.run([0, 0]); // [0]
net.run([0, 1]); // [1]
net.run([1, 0]); // [1]
net.run([1, 1]); // [0]
```
The recurrent composition differs only in that each hidden layer also receives its own output from the previous pass:

```js
new Recurrent({
  inputLayer: () => { /* return an instantiated layer here */ },
  hiddenLayers: [
    (input, recurrentInput) => { /* return an instantiated layer here */ },
    /* more layers? by all means... */
    /* `input` here is the output from the previous layer */
    /* `recurrentInput` here is the output from the previous recursion, or zeros on the first pass */
  ],
  outputLayer: (input) => { /* return an instantiated layer here */ }
});
```
For example, a recurrent network that learns XOR:

```js
import { Recurrent, layer } from 'brain.js';

const { input, lstm, output } = layer;

const net = new Recurrent({
  inputLayer: () => input({ width: 2 }),
  hiddenLayers: [
    (input, recurrentInput) => lstm({ height: 3 }, recurrentInput, input)
  ],
  outputLayer: input => output({ width: 1 }, input)
});

net.train([
  { input: [0, 0], output: [0] },
  { input: [0, 1], output: [1] },
  { input: [1, 0], output: [1] },
  { input: [1, 1], output: [0] }
]);

net.run([0, 0]); // [0]
net.run([0, 1]); // [1]
net.run([1, 0]); // [1]
net.run([1, 1]); // [0]
```
`brain.recurrent` provided a nice means of learning how to expose the concept of recurrence to the public. In v2 it will essentially become the `Recurrent` class, so we can remove `brain.recurrent` and continue development there.
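For contrast, here is roughly what the v1 `brain.recurrent` API looks like today (a sketch trained on toy data):

```js
import { recurrent } from 'brain.js';

const net = new recurrent.LSTM();
net.train([
  { input: 'I feel great about the world!', output: 'happy' },
  { input: 'The world is a terrible place!', output: 'sad' }
]);

net.run('I feel great about the world!'); // 'happy'
```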
More to come.
## v4: Unsupervised learning, spiking neural networks, distributed learning, GANs