Training, Architecture, and Prior for Deterministic Uncertainty Methods

This repository contains the code for the ICLR 2023 workshop paper "Training, Architecture, and Prior for Deterministic Uncertainty Methods".

Abstract: Accurate and efficient uncertainty estimation is crucial to build reliable Machine Learning (ML) models capable of providing calibrated uncertainty estimates, generalizing, and detecting Out-Of-Distribution (OOD) datasets. To this end, Deterministic Uncertainty Methods (DUMs) are a promising model family capable of performing uncertainty estimation in a single forward pass. This work investigates important design choices in DUMs: (1) we show that training schemes decoupling the core architecture and the uncertainty head can significantly improve uncertainty performance; (2) we demonstrate that the expressiveness of the core architecture is crucial for uncertainty performance, and that additional architectural constraints to avoid feature collapse can deteriorate the trade-off between OOD generalization and detection; (3) contrary to other Bayesian models, we show that the prior defined by DUMs does not have a strong effect on final performance.
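
To illustrate the decoupled training scheme from point (1), here is a minimal PyTorch sketch: the core encoder is first trained with the task loss alone, then frozen, and a simple per-class Gaussian density head is fitted on its features as the uncertainty head. This is a hypothetical example and not the code in this repository; all names, shapes, and the synthetic data are made up for illustration.

# Minimal sketch (not the repository's API) of decoupled DUM training:
# stage 1 trains the core architecture; stage 2 fits the uncertainty head
# on frozen features. All identifiers here are hypothetical.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic 2-class data standing in for a real dataset.
X = torch.randn(512, 16)
y = (X[:, 0] > 0).long()

encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
classifier = nn.Linear(8, 2)

# Stage 1: train encoder + classifier with the task loss only.
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(classifier.parameters()), lr=1e-2
)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(classifier(encoder(X)), y)
    loss.backward()
    opt.step()

# Stage 2: freeze the encoder and fit a density head on its features,
# here one Gaussian per class (one possible choice of uncertainty head).
with torch.no_grad():
    feats = encoder(X)
    heads = []
    for c in (0, 1):
        fc = feats[y == c]
        mean = fc.mean(0)
        cov = torch.cov(fc.T) + 1e-3 * torch.eye(fc.shape[1])  # jitter for stability
        heads.append(torch.distributions.MultivariateNormal(mean, cov))

# Uncertainty proxy: negative max log-density under the class Gaussians,
# computed in a single forward pass.
def uncertainty(x):
    with torch.no_grad():
        f = encoder(x)
        log_probs = torch.stack([h.log_prob(f) for h in heads], dim=-1)
        return -log_probs.max(dim=-1).values

print(uncertainty(X[:4]))                     # in-distribution: lower values
print(uncertainty(10 * torch.randn(4, 16)))   # far OOD: higher values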

Install

conda env create -n dum --file environment.yml
python setup.py develop
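
To confirm the environment is usable, a quick check such as the following can help (hypothetical; it assumes the environment is named dum as above and that PyTorch is listed in environment.yml):

conda activate dum
python -c "import torch; print(torch.__version__)"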

Run

For simple but complete examples, run one of the notebooks in notebook/run_*.ipynb. From these notebooks, you can change the dataset and the DUM hyperparameters to run more complex tasks; a launch command is sketched below.
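
Assuming Jupyter is available in the environment (it may need to be installed separately), the notebooks can be opened with, for example:

jupyter notebook notebook/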

Citation

If you use the code in this repository, consider citing our work:

@misc{dums-components,
  title = {Training, Architecture, and Prior for Deterministic Uncertainty Methods},
  author = {Charpentier, Bertrand and Zhang, Chenxiang and Günnemann, Stephan},
  publisher = {ICLR Workshop on Pitfalls of limited data and computation for Trustworthy ML},
  year = {2023},
}
