I stumbled upon this library and thought it'd be fun to see if I could tweak one of the examples to learn the 2 times table for numbers between 1 and 12. I have tried a Perceptron with a single hidden layer in a few different dimensions, but they all seem to give the same result: the network is only correct when calculating 2x2, 2x3, 2x4 and 2x5. I normalised the training inputs/outputs by dividing them all by 144.
I was just wondering if anyone can give me some suggestions on how to set up the network in a way that would produce better results, or some references that might help me understand why the result was so poor.
Thanks in advance
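For reference, a minimal sketch of the training set as described above (the 2 times table for 1–12, everything divided by 144, the largest product; variable names here are illustrative, not from the actual code):

```javascript
// Build the training set described above: inputs and outputs for the
// 2 times table, normalised by dividing by 144 (i.e. 12 * 12).
var MAX_PRODUCT = 144;
var trainingSet = [];
for (var i = 1; i <= 12; i++) {
  trainingSet.push({
    input: [2 / MAX_PRODUCT, i / MAX_PRODUCT],
    output: [(2 * i) / MAX_PRODUCT]
  });
}
console.log(trainingSet.length);       // 12 examples
console.log(trainingSet[0].output[0]); // 2/144, roughly 0.0139
```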
First, use an LSTM with 2 hidden layers, which allows the NN to learn anything. Then normalize the right way, into the range 0.01 to 0.99. Lazy normalization does not pay off.
If your error does not go below, let's say, 0.0001, check your training code. It should be pretty similar to the XOR example.
If you also want numbers above 12 to be treated as errors, train them to produce a sentinel result such as 1000 or some other high value, which will significantly increase your training time. Remember to normalize up to 1000 in that case.
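The 0.01–0.99 scaling mentioned above can be sketched as a simple min-max mapping (these helpers are illustrative, not part of synaptic):

```javascript
// Map a value from [min, max] into [0.01, 0.99], and back again.
// Keeping outputs away from the extremes 0 and 1 helps sigmoid-style
// activations, which can only approach those values asymptotically.
function normalize(x, min, max) {
  return 0.01 + 0.98 * (x - min) / (max - min);
}
function denormalize(y, min, max) {
  return min + (y - 0.01) * (max - min) / 0.98;
}
console.log(normalize(0, 0, 144));   // 0.01
console.log(normalize(144, 0, 144)); // 0.99
```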
```javascript
function tdp(n){
    // round to two decimal places
    return Math.round(100*n)/100;
}
var max = 30;
var s = [];
// normalization constant: slightly larger than the biggest product
// (30*30), so outputs stay below 1 even after the 0.01 offset
var norm = tdp(1.02*max*max);
var inorm = 1/norm;
for(var i = 0; i <= max; i++){
    for(var j = 0; j <= max; j++){
        var n = {};
        n.input = [tdp(i*inorm), tdp(j*inorm)];
        // offset outputs by 0.01 so they fall in the 0.01..0.99 range
        n.output = [0.01 + i*j*inorm];
        console.log(tdp(0.01 + i*j*inorm));
        s.push(n);
    }
}
var myLSTM = new synaptic.Architect.LSTM(2,2,1);
myLSTM.trainer.train(s);
// encode the inputs the same way as during training, then undo the
// output offset and scaling to recover 2*3
console.log((myLSTM.activate([tdp(2*inorm), tdp(3*inorm)])[0] - 0.01) * norm);
```