In the current cross entropy implementation, the loss value is divided by N, which should in principle be the batch size. However, since we work with a batch size of one and therefore call the function with a one-dimensional array, N ends up being the number of classes, which makes no sense.
Simply removing the division would fix the problem for a batch size of one (it amounts to dividing by one). But to support other batch sizes in the future, a more robust solution would be preferable.
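One possible fix, sketched below under the assumption that the function receives probability distributions as NumPy arrays (the actual function and parameter names in the repository may differ): promote a one-dimensional input to a batch of one and divide by the first axis, so the divisor is always the batch size rather than the number of classes.

```python
import numpy as np

def cross_entropy(predictions, targets, eps=1e-12):
    """Cross entropy loss averaged over the batch.

    Hypothetical sketch: `predictions` and `targets` are assumed to be
    probability distributions over classes, either 1-D (a single sample)
    or 2-D with shape (batch_size, num_classes).
    """
    predictions = np.atleast_2d(predictions)  # a single sample becomes a batch of one
    targets = np.atleast_2d(targets)
    batch_size = predictions.shape[0]         # divide by the batch size, not the class count
    return -np.sum(targets * np.log(predictions + eps)) / batch_size
```

With this shape handling, a one-dimensional call divides by one (as the issue suggests), while a two-dimensional call averages over the batch as intended.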