This is the code for "Probabilistic Concept Bottleneck Models."
ArXiv | OpenReview
Part of the code is borrowed from Evaluating Weakly Supervised Object Localization Methods Right, Probabilistic Cross-Modal Embedding, and Polysemous Visual-Semantic Embedding (PVSE).
Interpretable models are designed to make decisions in a human-interpretable manner. Representatively, Concept Bottleneck Models (CBM) follow a two-step process of concept prediction and class prediction based on the predicted concepts. CBM provides explanations with high-level concepts derived from concept predictions; thus, reliable concept predictions are important for trustworthiness. In this study, we address the ambiguity issue that can harm reliability. While the existence of a concept can often be ambiguous in the data, CBM predicts concepts deterministically without considering this ambiguity. To provide a reliable interpretation against this ambiguity, we propose Probabilistic Concept Bottleneck Models (ProbCBM). By leveraging probabilistic concept embeddings, ProbCBM models uncertainty in concept prediction and provides explanations based on the concept and its corresponding uncertainty. This uncertainty enhances the reliability of the explanations. Furthermore, as class uncertainty is derived from concept uncertainty in ProbCBM, we can explain class uncertainty by means of concept uncertainty. Code is publicly available at https://github.com/ejkim47/prob-cbm.
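To make the idea of probabilistic concept embeddings concrete, below is a minimal PyTorch sketch, not the paper's actual implementation: names such as `ProbabilisticConceptHead` and `concept_anchor`, the distance-to-anchor probability, and all dimensions are illustrative assumptions. The real model additionally predicts the class from the concept embeddings, so class uncertainty can be obtained by propagating the sampled concept embeddings through the class predictor.

```python
# Minimal sketch (not the authors' implementation) of the core idea behind
# ProbCBM: each concept is represented by a Gaussian embedding whose mean
# drives the prediction and whose variance expresses concept uncertainty.
import torch
import torch.nn as nn


class ProbabilisticConceptHead(nn.Module):
    """Predicts a Gaussian embedding (mu, sigma^2) per concept from image features."""

    def __init__(self, feat_dim: int, num_concepts: int, emb_dim: int = 16):
        super().__init__()
        self.num_concepts = num_concepts
        self.emb_dim = emb_dim
        # One mean vector and one log-variance vector per concept.
        self.mu_head = nn.Linear(feat_dim, num_concepts * emb_dim)
        self.logvar_head = nn.Linear(feat_dim, num_concepts * emb_dim)
        # Learnable "concept present" anchor per concept in the embedding space.
        self.concept_anchor = nn.Parameter(torch.randn(num_concepts, emb_dim))

    def forward(self, feats: torch.Tensor, n_samples: int = 8):
        B = feats.size(0)
        mu = self.mu_head(feats).view(B, self.num_concepts, self.emb_dim)
        logvar = self.logvar_head(feats).view(B, self.num_concepts, self.emb_dim)
        std = torch.exp(0.5 * logvar)

        # Monte Carlo sampling with the reparameterization trick.
        eps = torch.randn(n_samples, B, self.num_concepts, self.emb_dim,
                          device=feats.device)
        samples = mu.unsqueeze(0) + eps * std.unsqueeze(0)   # (S, B, C, D)

        # Concept probability from (negative) distance to the concept anchor,
        # averaged over samples; ambiguous inputs get larger variance and thus
        # less confident probabilities.
        dist = torch.norm(samples - self.concept_anchor, dim=-1)  # (S, B, C)
        concept_prob = torch.sigmoid(-dist).mean(dim=0)           # (B, C)

        # A simple per-concept uncertainty score: mean predicted variance.
        concept_uncertainty = logvar.exp().mean(dim=-1)           # (B, C)
        return concept_prob, concept_uncertainty


if __name__ == "__main__":
    head = ProbabilisticConceptHead(feat_dim=512, num_concepts=112)
    feats = torch.randn(4, 512)       # e.g. backbone features for 4 images
    probs, unc = head(feats)
    print(probs.shape, unc.shape)     # torch.Size([4, 112]) torch.Size([4, 112])
```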
For the dataset (CUB), please refer to this page. Please change 'data_root' and 'metadataroot' in the config file.
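For reference, the dataset-path entries to edit might look like the following. Only the key names are taken from this README; the surrounding structure of ./configs/config_exp.yaml is an assumption and may differ.

```yaml
# Hypothetical excerpt of ./configs/config_exp.yaml
data_root: /path/to/CUB_200_2011       # root directory of the CUB dataset
metadataroot: /path/to/CUB_metadata    # directory containing the metadata files
```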
An example command line for training and evaluation:
python main.py --config ./configs/config_exp.yaml --gpu {gpu_num}