The current implementation assumes a single prediction per input, which is reasonable for many classification tasks. However, there is a class of problems where the output is a sequence of labels, as in speech recognition.

Assuming CTC loss is used for speech recognition, where predictions may have a different length than the ground-truth labels, how should ECE be computed properly in such cases?
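For illustration, one possible approach (not necessarily what this library intends) is to reduce the sequence problem to a flat set of per-token (confidence, correctness) pairs: greedy-decode each utterance's CTC posteriors, attach a confidence to each emitted token, align the hypothesis to the reference with edit distance, and compute the standard binned ECE over the pooled tokens. The sketch below is my own assumption-laden version: it expects log-posterior frames of shape `(T, vocab)` with blank index 0, uses the maximum frame posterior that produced a token as its confidence, counts insertions and substitutions as incorrect, and ignores deletions since they have no hypothesis confidence to score. The `dataset` iterable in the usage sketch is hypothetical.

```python
import numpy as np

def greedy_ctc_decode(log_probs, blank=0):
    """Greedy CTC decoding: argmax per frame, collapse repeats, drop blanks.
    Returns decoded token ids and one confidence per emitted token
    (here: the highest frame posterior among the frames that produced it)."""
    frame_ids = log_probs.argmax(axis=-1)        # (T,)
    frame_conf = np.exp(log_probs.max(axis=-1))  # (T,) posterior of the argmax
    tokens, confs = [], []
    prev = blank
    for tok, conf in zip(frame_ids, frame_conf):
        if tok != blank and tok != prev:
            tokens.append(int(tok))
            confs.append(float(conf))
        elif tok != blank and tok == prev:
            confs[-1] = max(confs[-1], float(conf))  # same token, later frame
        prev = tok
    return tokens, confs

def align_correctness(hyp, ref):
    """Levenshtein alignment: returns a 0/1 correctness flag for each
    hypothesis token (1 = matched to an identical reference token)."""
    n, m = len(hyp), len(ref)
    d = np.zeros((n + 1, m + 1), dtype=int)
    d[:, 0] = np.arange(n + 1)
    d[0, :] = np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if hyp[i - 1] == ref[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1,        # insertion (extra hyp token)
                          d[i, j - 1] + 1,        # deletion  (missed ref token)
                          d[i - 1, j - 1] + sub)  # match / substitution
    correct = [0] * n
    i, j = n, m
    while i > 0:
        if j > 0 and d[i, j] == d[i - 1, j - 1] + (0 if hyp[i - 1] == ref[j - 1] else 1):
            correct[i - 1] = int(hyp[i - 1] == ref[j - 1])
            i, j = i - 1, j - 1
        elif d[i, j] == d[i - 1, j] + 1:
            i -= 1                                # inserted token stays incorrect
        else:
            j -= 1                                # deleted reference token: nothing to score
    return correct

def expected_calibration_error(confidences, correct, n_bins=15):
    """Standard binned ECE over flat (confidence, correctness) pairs."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    if confidences.size == 0:
        return 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
    return float(ece)

# Usage sketch: pool token-level pairs over a dataset, then bin once.
all_conf, all_correct = [], []
for log_probs, ref in dataset:  # hypothetical iterable of ((T, V) log-posteriors, label list)
    hyp, confs = greedy_ctc_decode(log_probs)
    all_conf.extend(confs)
    all_correct.extend(align_correctness(hyp, ref))
print("token-level ECE:", expected_calibration_error(all_conf, all_correct))
```

Pooling tokens across utterances before binning treats every emitted token as one calibration sample; an alternative would be utterance-level ECE over whole-sequence confidences, which answers a different question about calibration granularity.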