So these lines in the update method:

```rust
for i in 0..self.data.x {
    for j in 0..self.data.y {
        for k in 0..self.data.z {
            self.data.map[[i, j, k]] += (elem[[k]] - self.data.map[[i, j, k]]) * g[[i, j]];
        }
        let norm = norm(self.data.map.index_axis(Axis(0), i).index_axis(Axis(0), j));
        for k in 0..self.data.z {
            self.data.map[[i, j, k]] /= norm;
        }
    }
}
```
were causing me issues: `norm` was ending up as zero, and the resulting divide by zero made `self.data.map[[i, j, k]]` become `f64::NaN`, producing funky results later down the line (I've got NaN values in my input features).
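For context, it's the zero-over-zero case that produces NaN under IEEE 754 (a nonzero numerator over zero gives infinity instead), and any NaN then propagates through all later arithmetic. A tiny standalone illustration, independent of this crate:

```rust
fn main() {
    // Zero divided by zero is NaN; a nonzero numerator over zero is infinity.
    assert!((0.0_f64 / 0.0_f64).is_nan());
    assert!((1.0_f64 / 0.0_f64).is_infinite());

    // NaN poisons everything downstream, which is how one bad update
    // spreads through the whole map.
    let weight = f64::NAN;
    assert!((weight * 0.5 + 1.0).is_nan());
}
```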
I'm not sure what the purpose of the normalization is. I understand that it forces each neuron's weight vector to unit norm, but I can't find where this is recommended.
On my own fork I've wrapped the normalisation in a check that `norm > 0`, and that seems to have solved the issue, although I'm not sure how valid the fix is.
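Concretely, the guard looks something like this (a sketch of the change, keeping the same loop structure as the snippet above rather than the exact diff from my fork):

```rust
let norm = norm(self.data.map.index_axis(Axis(0), i).index_axis(Axis(0), j));
// Only normalize when the norm is strictly positive; a degenerate
// (all-zero) weight vector is left untouched instead of becoming NaN.
if norm > 0.0 {
    for k in 0..self.data.z {
        self.data.map[[i, j, k]] /= norm;
    }
}
```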