How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods for Selective Classification with Deep Neural Networks
Repository for the replication of the paper "How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods for Selective Classification with Deep Neural Networks", published at the 40th Conference on Uncertainty in Artificial Intelligence (UAI 2024).
From a vector of logits $\mathbf{z}$, the MaxLogit-pNorm confidence estimate is defined as $\max_k z_k / \lVert\mathbf{z}\rVert_p$, where $\lVert\cdot\rVert_p$ denotes the $p$-norm. It can be computed as:
import torch
import post_hoc
g = post_hoc.MaxLogit_pNorm(logits, p)  # confidence estimate for each prediction
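For intuition, the sketch below shows one way to compute this quantity directly in PyTorch. It is only an illustration of the formula above: the function name and the (n_samples, n_classes) shape convention are assumptions, and the exact definition used by post_hoc should be taken from the library itself.

import torch

def maxlogit_pnorm(logits: torch.Tensor, p: float) -> torch.Tensor:
    # logits: (n_samples, n_classes); p: order of the p-norm (assumed p > 0)
    norms = logits.norm(p=p, dim=-1, keepdim=True)  # p-norm of each logit vector
    return (logits / norms).max(dim=-1).values      # max of the p-normalized logits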
The optimization of p can be performed with a grid search using
p = post_hoc.optimize.p(logits, risk, metric=metric)
where risk is a tensor with the risk incurred by each prediction and metric is the metric to be minimized. This procedure allows falling back to the Maximum Softmax Probability (MSP), considered the baseline for confidence estimation, whenever MaxLogit-pNorm would harm the confidence estimation. Alternatively, the optimization can be performed directly when computing the confidence:
import torch
import post_hoc
g = post_hoc.MaxLogit_pNorm(logits, p='optimal', **kwargs_optimize)  # kwargs_optimize: keyword arguments for the p optimization (e.g., risk and metric)
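The snippet below sketches the grid-search-with-fallback logic described above. select_p, the candidate grid, and metric_fn are illustrative names (metric_fn could be, for instance, a NAURC implementation such as the one sketched further below) and do not correspond to the post_hoc API.

import torch
import post_hoc

def select_p(logits, risk, metric_fn, p_grid=(2, 3, 4, 5, 6, 7, 8)):
    # Hypothetical helper: pick the p minimizing metric_fn(confidence, risk),
    # falling back to the MSP baseline when no value of p improves on it.
    best_conf = torch.softmax(logits, dim=-1).max(dim=-1).values  # MSP fallback
    best_metric, best_p = metric_fn(best_conf, risk), None
    for p in p_grid:
        conf = post_hoc.MaxLogit_pNorm(logits, p)
        m = metric_fn(conf, risk)
        if m < best_metric:
            best_metric, best_conf, best_p = m, conf, p
    return best_p, best_conf  # best_p is None when the MSP fallback wins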
All conducted experiments are available in experiments/notebooks. Functions for all confidence estimators can be found in utils/measures, while metrics, such as the Normalized Area Under the Risk-Coverage Curve (NAURC), are in utils/metrics.
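As a reference point, here is a hedged sketch of how AURC and NAURC can be computed from per-prediction confidences and risks. It assumes NAURC normalizes the AURC between the oracle AURC and the risk at full coverage; it is not the utils/metrics implementation.

import torch

def aurc(confidence: torch.Tensor, risk: torch.Tensor) -> torch.Tensor:
    # Average selective risk over all coverage levels: keep the k most confident
    # predictions for k = 1..n and average the resulting empirical risks.
    risk = risk.float()
    order = confidence.argsort(descending=True)
    coverage_risk = risk[order].cumsum(0) / torch.arange(1, len(risk) + 1)
    return coverage_risk.mean()

def naurc(confidence: torch.Tensor, risk: torch.Tensor) -> torch.Tensor:
    # Normalize between the oracle AURC (correct predictions ranked first)
    # and the risk at full coverage (assumed normalization).
    oracle = aurc(-risk.float(), risk)  # oracle confidence: low risk => high confidence
    return (aurc(confidence, risk) - oracle) / (risk.float().mean() - oracle)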
All models considered in the experiments can be found in experiments/models. CIFAR-100 and Oxford Pets models were trained using the recipe in experiments/train.py, while ImageNet pre-trained models are taken from the torchvision and timm repositories.
To cite this paper, please use:
@inproceedings{cattelan2024fix,
title={How to Fix a Broken Confidence Estimator: Evaluating Post-hoc Methods for Selective Classification with Deep Neural Networks},
author={Luís Felipe P. Cattelan and Danilo Silva},
booktitle={The 40th Conference on Uncertainty in Artificial Intelligence},
year={2024},
url={https://openreview.net/forum?id=IJBWLRCvYX}
}