Layers
Normally you won't work with single neurons, but with Layers instead. A layer is basically an array of neurons; it can do pretty much the same things a single neuron does, but working at this level makes programming faster.
To create a layer you just have to specify its size (the number of neurons in that layer).
var myLayer = new Layer(5);
A layer can project a connection to another neuron, layer or network. You have to provide the layer that you want to connect to and the connection type (optional):
var A = new Layer(5);
var B = new Layer(3);
A.project(B, Methods.Connection.ALL_TO_ALL); // All the neurons in layer A now project a connection to all the neurons in layer B
Layers can also self-connect:
A.project(A, Methods.Connection.ONE_TO_ONE);
There are three connection types:
- Methods.Connection.ALL_TO_ALL: connects every neuron from layer A to every neuron in layer B.
- Methods.Connection.ONE_TO_ONE: connects each neuron from layer A to one neuron in layer B. Both layers must be the same size for this to work.
- Methods.Connection.ALL_TO_ELSE: useful only in self-connections. It connects every neuron in a layer to all the other neurons in that same layer, except itself. If this connection type is used between two different layers, it produces the same result as ALL_TO_ALL.
NOTE: If not specified, the connection type is always Methods.Connection.ALL_TO_ALL when connecting two different layers, and Methods.Connection.ONE_TO_ONE when connecting a layer to itself (i.e. myLayer.project(myLayer)).
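To make the three types concrete, here is a minimal plain-JavaScript sketch (independent of synaptic) that counts how many connections each type creates; the helper name connectionCount is hypothetical, not part of the library:

```javascript
// Hypothetical helper (not part of synaptic): number of connections each
// connection type creates between a source layer of sizeA neurons and a
// destination layer of sizeB neurons.
function connectionCount(type, sizeA, sizeB) {
  switch (type) {
    case 'ALL_TO_ALL':
      return sizeA * sizeB;            // every neuron to every neuron
    case 'ONE_TO_ONE':
      if (sizeA !== sizeB) throw new Error('layers must be the same size');
      return sizeA;                    // pairwise: one connection per neuron
    case 'ALL_TO_ELSE':
      // self-connection: every neuron to every other neuron, except itself
      return sizeA * (sizeA - 1);
    default:
      throw new Error('unknown connection type: ' + type);
  }
}

console.log(connectionCount('ALL_TO_ALL', 5, 3));  // 15
console.log(connectionCount('ONE_TO_ONE', 5, 5));  // 5
console.log(connectionCount('ALL_TO_ELSE', 5, 5)); // 20
```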
The method project returns a LayerConnection object, which can be gated by another layer. A layer can gate a connection between two other layers, or a layer's self-connection.
var A = new Layer(5);
var B = new Layer(3);
var connection = A.project(B);
var C = new Layer(4);
C.gate(connection, Layer.gateType.INPUT_GATE); // now C gates the connection between A and B (input gate)
There are three gateTypes:
- Layer.gateType.INPUT_GATE: if layer C is gating connections between layers A and B, all the neurons from C gate all the input connections to B.
- Layer.gateType.OUTPUT_GATE: if layer C is gating connections between layers A and B, all the neurons from C gate all the output connections from A.
- Layer.gateType.ONE_TO_ONE: if layer C is gating connections between layers A and B, each neuron from C gates one connection from A to B. This is useful for gating self-connected layers. To use this gateType, A, B and C must be the same size.
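Numerically, a gate scales the signal flowing through the gated connection by the gater's activation. A minimal plain-JavaScript sketch of that idea (not synaptic's internals; gatedSignal is a hypothetical name):

```javascript
// Hypothetical sketch of a gated connection: the gating neuron's activation
// multiplies the weighted signal passing from one neuron to another.
function gatedSignal(sourceActivation, weight, gateActivation) {
  return sourceActivation * weight * gateActivation;
}

console.log(gatedSignal(1.0, 0.5, 0)); // 0   -- a fully closed gate blocks the signal
console.log(gatedSignal(1.0, 0.5, 1)); // 0.5 -- a fully open gate passes it through
```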
When a layer activates, it simply activates all the neurons in that layer in order and returns an array with their outputs. It accepts an array of activations as a parameter (for input layers):
var A = new Layer(5);
var B = new Layer(3);
A.project(B);
A.activate([1,0,1,0,1]); // [1,0,1,0,1]
B.activate(); // [0.3280457, 0.83243247, 0.5320423]
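Conceptually, activating a layer means activating each neuron in order: with an input array the activations are set directly, otherwise each neuron squashes its weighted input sum plus bias. A rough plain-JavaScript sketch of that behavior (not synaptic's actual implementation; the neuron object shape used here is made up for illustration):

```javascript
// Each hypothetical neuron: { bias, activation, inputs: [{ from, weight }] }
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

function activateLayer(neurons, input) {
  if (input) {
    if (input.length !== neurons.length) throw new Error("input size doesn't match layer size");
    neurons.forEach((n, i) => { n.activation = input[i]; }); // input layer: set directly
  } else {
    neurons.forEach(n => { // hidden/output layer: squash the weighted sum plus bias
      const sum = n.inputs.reduce((s, c) => s + c.from.activation * c.weight, 0);
      n.activation = sigmoid(sum + n.bias);
    });
  }
  return neurons.map(n => n.activation); // outputs, in activation order
}

// Tiny demo: a 2-neuron input layer feeding a 1-neuron output layer.
const layerA = [{ bias: 0, activation: 0, inputs: [] },
                { bias: 0, activation: 0, inputs: [] }];
const layerB = [{ bias: 0, activation: 0,
                  inputs: [{ from: layerA[0], weight: 1 }, { from: layerA[1], weight: 1 }] }];
console.log(activateLayer(layerA, [1, 0])); // [1, 0]
console.log(activateLayer(layerB));         // [sigmoid(1)] ≈ [0.731]
```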
After an activation, you can teach the layer what the correct output should have been (a.k.a. train it). This is done by backpropagating the error. To use the propagate method you have to provide a learning rate and a target value (an array of floats between 0 and 1).
For example, if I want to train layer B to output [0,0]
when layer A activates [1,0,1,0,1]
:
var A = new Layer(5);
var B = new Layer(2);
A.project(B);
var learningRate = .3;
for (var i = 0; i < 20000; i++)
{
// when A activates [1, 0, 1, 0, 1]
A.activate([1,0,1,0,1]);
// train B to activate [0,0]
B.activate();
B.propagate(learningRate, [0,0]);
}
// test it
A.activate([1,0,1,0,1]);
B.activate(); // [0.004606949693864496, 0.004606763721459169]
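The update applied at each output neuron follows the standard delta rule for a sigmoid unit. A one-step plain-JavaScript sketch of that rule (an illustration of textbook backpropagation, not synaptic's code; trainStep is a hypothetical helper):

```javascript
function sigmoid(x) { return 1 / (1 + Math.exp(-x)); }

// One gradient step for a single sigmoid output neuron with one input.
// Standard delta rule: dw = rate * (target - output) * output * (1 - output) * input
function trainStep(weight, bias, input, target, rate) {
  const output = sigmoid(weight * input + bias);
  const delta = (target - output) * output * (1 - output);
  return { weight: weight + rate * delta * input, bias: bias + rate * delta };
}

let weight = 0.5, bias = 0;
for (let i = 0; i < 20000; i++) {
  // train the neuron to output 0 when the input is 1, as in the example above
  ({ weight, bias } = trainStep(weight, bias, 1, 0, 0.3));
}
console.log(sigmoid(weight * 1 + bias)); // close to 0 after training
```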
The method disconnect() disconnects ALL the neurons in the layer, or disconnects the entire layer from another node. If the layer is part of a network, the network might break. A connect() function that reconnects the neurons in a desired way is currently in the works.
var A = new Layer(5);
var B = new Architect.Perceptron(2,4,2);
A.project(B); // A and B now form a connection
A.disconnect(B); // A and B are disconnected
If you want to disconnect a layer completely (this is one-sided):
var A = new Layer(2);
var B = new Neuron();
var C = new Architect.Perceptron(5,2,3);
A.project(B); // A projects to B
A.project(C); // A projects to C
A.disconnect(); // all connections are removed
You can set the squashing function and bias of all the neurons in a layer by using the method set:
myLayer.set({
squash: Neuron.squash.TANH,
bias: 0
})
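The squash option sets the activation function each neuron applies to its weighted input sum. A quick plain-JavaScript comparison of the default logistic function with TANH (just the math behind the two options, not the synaptic API):

```javascript
const logistic = x => 1 / (1 + Math.exp(-x)); // the default squash: outputs in (0, 1)
const tanh = x => Math.tanh(x);               // the math behind Neuron.squash.TANH: outputs in (-1, 1)

console.log(logistic(0)); // 0.5 -- logistic is centered at 0.5
console.log(tanh(0));     // 0   -- tanh is centered at 0
```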
The method neurons() returns an array with all the neurons in the layer, in activation order.
The method connections() returns an object with all the connections of the layer.
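As a rough sketch of what such a connection map might look like, in plain JavaScript (the data shapes and the connectionMap helper here are hypothetical, not synaptic's internal representation):

```javascript
// Hypothetical: collect a layer's outgoing connections into an object
// keyed by connection ID, similar in spirit to the map described above.
function connectionMap(neurons) {
  const map = {};
  neurons.forEach(neuron => {
    neuron.projected.forEach(conn => { map[conn.id] = conn; });
  });
  return map;
}

const n1 = { projected: [{ id: 'c1', weight: 0.2 }] };
const n2 = { projected: [{ id: 'c2', weight: -0.7 }] };
console.log(Object.keys(connectionMap([n1, n2]))); // ['c1', 'c2']
```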