Possible bug in binary classification calibration_error
#1105
Comments
Added a little example to better illustrate my point. By the way, using just a 0-vector would have been a simpler example, but it turns out the torch.clip(confidences, 1e-6, 1.0)
Hi! First, in the example provided the metric gives a score of 0.9942. Since the metric is a calibration error, the optimum is 0, not 1, so it seems correct that the metric gives a high score: the example is clearly not well calibrated. Secondly, I ran the example through a third-party package, https://github.com/fabiankueppers/calibration-framework, which gives the same result as our implementation (we are actually using it for testing now). Therefore, there does not seem to be an error in the implementation.
I also have the same problem. How could the target possibly be equal to the accuracy?
🐛 Bug
In calibration_error(), the accuracies in the binary classification setting are not correctly computed, I think. It just returns the targets. I am guessing this should rather return target == preds.round().int() or something similar? Am I missing something?
Code example
The model confidently predicts the wrong class, but is rewarded with a near perfect calibration score.
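The original code snippet from the issue is missing here. The behaviour being discussed can be sketched with a small, self-contained expected calibration error (ECE) computation; this is a hypothetical illustration in plain Python, not the actual TorchMetrics implementation, and the function name `ece` and the equal-width binning scheme are assumptions:

```python
# Sketch of binned expected calibration error for binary classification.
# Key point of the issue: the per-sample "accuracy" should be whether the
# thresholded prediction matches the target, i.e. (pred_label == target),
# which the reporter suspected was being replaced by the raw target.

def ece(confidences, targets, n_bins=10):
    """Expected calibration error with equal-width confidence bins."""
    preds = [1 if c >= 0.5 else 0 for c in confidences]
    accuracies = [int(p == t) for p, t in zip(preds, targets)]
    total = len(confidences)
    err = 0.0
    for b in range(n_bins):
        lo, hi = b / n_bins, (b + 1) / n_bins
        # Samples whose confidence falls in this bin (first bin includes 0.0).
        idx = [i for i, c in enumerate(confidences)
               if lo < c <= hi or (b == 0 and c == 0.0)]
        if not idx:
            continue
        avg_conf = sum(confidences[i] for i in idx) / len(idx)
        avg_acc = sum(accuracies[i] for i in idx) / len(idx)
        # Weighted |confidence - accuracy| gap, summed over bins.
        err += len(idx) / total * abs(avg_conf - avg_acc)
    return err

# Model is ~99% confident in class 1, but the true class is always 0:
confidences = [0.99, 0.99, 0.99, 0.99]
targets = [0, 0, 0, 0]
print(ece(confidences, targets))  # 0.99: a large error, i.e. badly calibrated
```

A large ECE here is the expected outcome for a confidently wrong model, which matches the maintainer's point below that a score near 1 indicates poor calibration, not a "near perfect" one.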
Environment
TorchMetrics version (and how you installed TM, e.g. conda, pip, build from source): mamba
Python & PyTorch Version (e.g., 1.0):
Any other relevant information such as OS (e.g., Linux):