Repository of materials to reproduce the results in the article "Visual pathways from the perspective of cost functions and multi-task deep neural networks". Link to paper on bioRxiv: http://biorxiv.org/content/early/2017/06/06/146472
The purpose of this repository is two-fold.
- Reproduce the results and visualizations of the paper
- Use the source code as a base or inspiration for your own analysis of deep neural networks. The core of the proposed method is the marginalization of parameters to estimate the contribution of feature representations to a class or task (see this function). While the implementation contains some comments, I advise having a look at the appendix of the paper, which describes the method in detail.
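The marginalization idea can be illustrated with a minimal, self-contained sketch (NumPy only; the toy linear "network", the sampling scheme, and all names below are illustrative assumptions, not the repository's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy stand-in for a trained network: a linear readout from 8 feature
# units to 5 classes (purely illustrative).
W = rng.normal(size=(5, 8))

def predict(features):
    return softmax(W @ features)

def weighted_evidence(features, unit, value_pool, n_samples=100):
    """Estimate one feature unit's contribution to each class by
    marginalizing it out: its activation is replaced by samples drawn
    from an empirical value pool, and the log-odds of the prediction
    with and without the true value are compared."""
    p_with = predict(features)
    p_marg = np.zeros_like(p_with)
    for _ in range(n_samples):
        f = features.copy()
        f[unit] = rng.choice(value_pool)  # sample the unit's value
        p_marg += predict(f)
    p_marg /= n_samples
    eps = 1e-12
    odds = lambda p: np.log2((p + eps) / (1 - p + eps))
    return odds(p_with) - odds(p_marg)

features = rng.normal(size=8)
we = weighted_evidence(features, unit=3, value_pool=rng.normal(size=1000))
```

A positive entry of `we` means the unit's actual activation raised the evidence (in bits) for that class relative to the marginalized baseline; a negative entry means it lowered it. More samples tighten the estimate at the cost of runtime, which is the same trade-off mentioned in the reproduction steps below.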
While the general method applies to networks trained in Caffe, Torch7 or PyTorch alike, this specific setup is limited to Torch7 models; the limiting factor is the meta data describing the mapping between output units and classes. The meta data is used exclusively in this line.
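To make the role of that meta data concrete, it could take a shape like the following (a hypothetical illustration; the actual format consumed by the Torch7 models may differ):

```python
# Hypothetical meta data: which output units belong to which task and class.
meta = {
    "tasks": {
        "vehicles": {"classes": ["car", "bus"], "output_units": [0, 1]},
        "animals":  {"classes": ["cat", "dog"], "output_units": [2, 3]},
    }
}

def units_for_task(meta, task):
    """Look up the output-unit indices associated with a task."""
    return meta["tasks"][task]["output_units"]
```

Porting the analysis to a Caffe or PyTorch model would then amount to supplying an equivalent unit-to-class mapping for that model.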
The following Python packages are required to use this package:
Below are the representation-to-task contributions over training time. See section 3 of the article for more details.
Related Tasks
| conv1 | conv2 | conv3 | conv4 | conv5 |
|---|---|---|---|---|
Unrelated Tasks
| conv1 | conv2 | conv3 | conv4 | conv5 |
|---|---|---|---|---|
To reproduce the visualizations in section 3 of the article, follow these steps:
- Clone or download this project
git clone https://github.com/mlosch/FeatureSharing.git && cd FeatureSharing
- Download pretrained models and images via
bash scripts/download_pretrained_models.sh
- Run the marginalization of parameters. Depending on your system, this may take several days. You can lower the number of samples in the script to a small value (20 still gives good results) to speed up the process at the cost of accuracy.
Note: If your system does not have a graphics card, you must remove the `--usegpu` flag in the script.
bash scripts/generate_conditionals.sh
- Calculate the weighted evidence from the generated data
bash scripts/analysis_featurecontribution.sh
- Render the visualizations. They are saved in `data/processed/`
bash scripts/visualize_featurecontribution.sh
The full training data (and pretrained models) can be downloaded here:
We used the fb.resnet.torch package in conjunction with Torch7 to train our models.
The images have been overlaid with labels via the script apply_random_label.py in the scripts directory.
For example, given the ImageNet dataset, the script does the job with the following parameters:
python scripts/apply_random_label.py -d path/to/imagenet -l data/labels/label_list.txt -o path/to/outputfolder
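In spirit, the overlay step looks like the following sketch using Pillow (the function name, drawing position, and color are assumptions for illustration; refer to scripts/apply_random_label.py for the actual behavior):

```python
from PIL import Image, ImageDraw

def overlay_label(image_path, label, out_path):
    """Draw a text label onto an image and save the result.
    Illustrative only; not the repository's exact logic."""
    img = Image.open(image_path).convert("RGB")
    # Render the label in the top-left corner in white.
    ImageDraw.Draw(img).text((10, 10), label, fill=(255, 255, 255))
    img.save(out_path)
```

Applied over a whole dataset with labels drawn at random from a label list (as the `-l` argument suggests), this produces the labeled training images.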