For an easy-to-use implementation (nice API, examples & documentation), see BioroboticsLab/IBA.
This is the source code for the paper "Restricting the Flow: Information Bottlenecks for Attribution", accepted as an Oral at ICLR 2020.
Iterations of the Per-Sample Bottleneck
- Clone this repository:
  $ git clone https://github.com/attribution-bottleneck/attribution-bottleneck-pytorch.git && cd attribution-bottleneck-pytorch
- Create a conda environment with all packages:
  $ conda create -n attribution-bottleneck --file requirements.txt
- Using your new conda environment, install this repository with pip:
  $ pip install .
- Download the model weights from the release page and unpack them in the repository root directory:
  $ tar -xvf bottleneck_for_attribution_weights.tar.gz
Optional: if you want to retrain the Readout Bottleneck, place the ImageNet dataset under data/imagenet. You can simply create a link with ln -s [image dir] data/imagenet.

- Test the setup with:
  $ python ./scripts/eval_degradation.py resnet50 8 Saliency test
We provide some Jupyter notebooks to demonstrate the usage of both the Per-Sample and the Readout Bottleneck (a self-contained sketch of the per-sample idea is given after the list):
- example_per-sample.ipynb: Usage of the Per-Sample Bottleneck on an example image
- example_readout.ipynb: Usage of the Readout Bottleneck on an example image
- compare_methods.ipynb: Visually compare different attribution methods on an example image
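The Per-Sample Bottleneck injects noise into an intermediate feature map and optimises a per-sample mask so that only the information needed for the target prediction passes through; the attribution map is read off from how much information each spatial location still transmits. The following is a minimal, self-contained PyTorch sketch of that idea. It intentionally does not use this repository's API: the class name, the chosen layer (layer2 of a torchvision ResNet-50), the noise statistics, the simplified capacity term, and the hyperparameters (beta, learning rate, iteration count) are illustrative assumptions; see example_per-sample.ipynb for the actual usage.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class PerSampleBottleneck(nn.Module):
    """Noise bottleneck with a per-sample mask (illustrative, not this repo's API)."""

    def __init__(self, mean=0.0, std=1.0):
        super().__init__()
        self.mean, self.std = mean, std   # feature statistics, assumed estimated beforehand
        self.alpha = None                 # per-sample mask parameters, created on first pass
        self.capacity = None

    def forward(self, x):
        if self.alpha is None:
            self.alpha = nn.Parameter(torch.full_like(x, 5.0))
        lamb = torch.sigmoid(self.alpha)                   # mask in [0, 1]
        eps = self.mean + self.std * torch.randn_like(x)   # replacement noise
        # Rough per-unit "information" term; the paper derives a proper KL divergence.
        self.capacity = -0.5 * torch.log((1 - lamb) ** 2 + 1e-8)
        return lamb * x + (1 - lamb) * eps


model = models.resnet50(weights="IMAGENET1K_V1").eval()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.randn(1, 3, 224, 224)   # placeholder: use a normalised input image instead
target = torch.tensor([0])            # placeholder: the class to explain

btl = PerSampleBottleneck()
hook = model.layer2.register_forward_hook(lambda module, inp, out: btl(out))

_ = model(image)                                   # first pass creates the per-sample mask
optimizer = torch.optim.Adam([btl.alpha], lr=1.0)
beta = 10.0                                        # information vs. classification trade-off

for _ in range(10):
    optimizer.zero_grad()
    logits = model(image)
    loss = F.cross_entropy(logits, target) + beta * btl.capacity.mean()
    loss.backward()
    optimizer.step()

saliency = btl.capacity.detach().mean(dim=1)[0]    # coarse spatial attribution map
hook.remove()
print(saliency.shape)                              # 28x28 for layer2 at 224x224 input
```

In the actual implementation the noise statistics are estimated from data and the mask is trained with a proper information term, but the overall structure (insert the bottleneck, optimise the mask, read out the information map) is the same.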
The scripts to reproduce our evaluation can be found in the scripts directory. Several attribution methods are implemented; each evaluation script takes the model and the attribution method (and, where required, the tile size) as arguments.
For the bounding box task, replace [model] with either vgg16 or resnet50:
$ eval_bounding_boxes.py [model] [attribution]
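The bounding box task scores how strongly an attribution map concentrates on the annotated object. One plausible way to compute such a score, sketched below, takes the k highest-attribution pixels (with k equal to the box area) and reports the fraction that falls inside the ground-truth box; whether eval_bounding_boxes.py uses exactly this definition is an assumption made here for illustration.

```python
import torch

def bbox_hit_ratio(heatmap: torch.Tensor, box) -> float:
    """Fraction of the top-k attribution pixels inside a ground-truth box.

    heatmap: (H, W) attribution map; box: (x0, y0, x1, y1) in pixel coordinates.
    The scoring rule (k = box area) is an assumption, not necessarily the
    definition used by the evaluation script.
    """
    x0, y0, x1, y1 = box
    k = (x1 - x0) * (y1 - y0)                     # number of pixels covered by the box
    top_idx = heatmap.flatten().topk(k).indices   # k most relevant pixel positions
    rows = top_idx // heatmap.shape[1]
    cols = top_idx % heatmap.shape[1]
    inside = (rows >= y0) & (rows < y1) & (cols >= x0) & (cols < x1)
    return inside.float().mean().item()
```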
For the degradation task, you also have to specify the tile size. In the paper, we used 8 and 14.
$ eval_degradation.py [model] [tile size] [attribution]
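In the degradation task, image tiles are replaced in the order given by the attribution map while the model's score for the target class is tracked: a good attribution causes a fast drop when the most relevant tiles are removed first (MoRF) and a slow drop when the least relevant are removed first (LeRF). The function below is a rough sketch of one such curve; the zero-valued replacement baseline and the aggregation of the curves into a single score are assumptions and may differ from what eval_degradation.py does.

```python
import torch

def degradation_curve(model, image, target, heatmap, tile=8, most_relevant_first=True):
    """Replace image tiles in attribution order and track the target probability.

    image: (1, 3, H, W) normalised input; target: int class index;
    heatmap: (H, W) attribution map. Tiles are ranked by their summed
    attribution; zero is used as the replacement value (assumed baseline).
    """
    _, _, H, W = image.shape
    tile_scores = heatmap.unfold(0, tile, tile).unfold(1, tile, tile).sum(dim=(-1, -2))
    order = tile_scores.flatten().argsort(descending=most_relevant_first)
    n_cols = W // tile
    degraded = image.clone()
    curve = []
    for idx in order.tolist():
        r, c = divmod(idx, n_cols)
        degraded[..., r * tile:(r + 1) * tile, c * tile:(c + 1) * tile] = 0.0
        with torch.no_grad():
            curve.append(model(degraded).softmax(dim=-1)[0, target].item())
    return curve
```

Comparing the curves obtained with most_relevant_first=True and False (e.g. the area between them) then yields a single degradation score per attribution method.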
The results on sensitivity-n can be calculated with:
$ eval_sensitivity_n.py [model] [tile size] [attribution]
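Sensitivity-n (Ancona et al., 2018) samples random subsets of n input regions and measures how well the summed attribution of the removed regions correlates with the resulting change in the target output. Below is a tile-based sketch under the same assumed zero baseline as above; it is not necessarily what eval_sensitivity_n.py implements.

```python
import torch

def sensitivity_n(model, image, target, heatmap, n, tile=8, runs=100):
    """Correlation between summed attribution of n removed tiles and the output drop."""
    _, _, H, W = image.shape
    n_rows, n_cols = H // tile, W // tile
    tile_attr = (heatmap.unfold(0, tile, tile).unfold(1, tile, tile)
                 .sum(dim=(-1, -2)).flatten())
    with torch.no_grad():
        base = model(image)[0, target].item()
    attr_sums, drops = [], []
    for _ in range(runs):
        subset = torch.randperm(n_rows * n_cols)[:n]        # pick n random tiles
        degraded = image.clone()
        for idx in subset.tolist():
            r, c = divmod(idx, n_cols)
            degraded[..., r * tile:(r + 1) * tile, c * tile:(c + 1) * tile] = 0.0
        with torch.no_grad():
            drops.append(base - model(degraded)[0, target].item())
        attr_sums.append(tile_attr[subset].sum().item())
    samples = torch.stack([torch.tensor(attr_sums), torch.tensor(drops)])
    return torch.corrcoef(samples)[0, 1].item()             # Pearson correlation
```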
If you use this code, please consider citing our work:
@inproceedings{
schulz2020iba,
title={Restricting the Flow: Information Bottlenecks for Attribution},
author={Schulz, Karl and Sixt, Leon and Tombari, Federico and Landgraf, Tim},
booktitle={International Conference on Learning Representations},
year={2020},
url={https://openreview.net/forum?id=S1xWh1rYwB}
}