If you run the following code, you will get a non-zero MSE value, while in the original GRNN the training error should be zero: the exponent of the exp term evaluates to zero because the input-to-hidden weights are set to the training inputs during training. Hence the kernel output should be one, and the final network output is essentially the hidden-to-output weights, which are set to the training targets during training. Thus, the MSE should be zero...
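The snippet referenced above is not included in this extract. A minimal reproduction along these lines, assuming the report refers to neupy's GRNN (`algorithms.GRNN` with `train`/`predict`), might look like:

```python
# Hypothetical reproduction of the reported behaviour (the original snippet
# from the issue is not shown here). Assumes the neupy GRNN API.
import numpy as np
from neupy import algorithms

x_train = np.linspace(0, 1, 20).reshape(-1, 1)
y_train = np.sin(2 * np.pi * x_train).ravel()

grnn = algorithms.GRNN(std=0.1, verbose=False)
grnn.train(x_train, y_train)

# Predict on the training inputs themselves
y_pred = grnn.predict(x_train)
mse = np.mean((np.ravel(y_pred) - y_train) ** 2)
print(mse)  # non-zero, contrary to the expectation described above
```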
I haven't used GRNN in a while, so my knowledge might be a bit rusty, but as far as I remember the model creates a Gaussian around each training sample (basically using it as a center), and the standard deviation is specified as a hyperparameter. When you make a prediction for one of the training samples, a distance based on the Gaussian kernel is calculated with respect to each of the training samples. These distances are then normalized, and the target values Y associated with each training sample are averaged using them as weights. Since the distances to the other training samples are non-zero, the overall prediction is a mixture of all training targets. I believe the MSE will be zero only if you have a single sample and use it for both training and prediction (or every other sample has exactly the same target value); otherwise the MSE only approaches zero as the standard deviation approaches zero.
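To make the averaging argument concrete, here is a small NumPy sketch of the kernel-weighted prediction described above (not the library's actual implementation). For any finite standard deviation the neighbouring training samples receive non-zero weight, so predictions on the training inputs do not reproduce the targets exactly, and the training MSE only shrinks towards zero as the standard deviation does:

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, std):
    """Kernel-weighted average of training targets (Nadaraya-Watson form)."""
    # Squared distances between every query point and every training sample
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    # Gaussian kernel centred on each training sample
    w = np.exp(-d2 / (2 * std ** 2))
    # Normalize the weights and average the targets
    return (w @ y_train) / w.sum(axis=1)

x_train = np.linspace(0, 1, 20)
y_train = np.sin(2 * np.pi * x_train)

for std in (0.1, 0.01, 0.001):
    y_pred = grnn_predict(x_train, y_train, x_train, std)
    print(std, np.mean((y_pred - y_train) ** 2))
# Training-set MSE is non-zero but approaches zero as std -> 0.
```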