Optim-wip: Improve loss objective testing coverage #951

Open

wants to merge 184 commits into base: optim-wip

184 commits
34bedcf
Add the CLIP ResNet 50x4 model
ProGamerGov Apr 26, 2022
e9598ea
Merge branch 'optim-wip' into optim-wip-clip-vis
ProGamerGov May 13, 2022
599d8e1
Update CLIP model for new testing & linting
ProGamerGov May 13, 2022
f3d3e1d
deployment of pyfmt with usort 1.0
amyreese May 15, 2022
33d2b75
apply import merging for fbcode (8 of 11)
amyreese May 15, 2022
d27e6c2
Add CLIP loss objectives
ProGamerGov May 17, 2022
77850c7
Fix Mypy error
ProGamerGov May 17, 2022
a4eee84
Fix Mypy errors
ProGamerGov May 17, 2022
452979b
Add Optimization With Transparency tutorial
ProGamerGov May 18, 2022
e2e58da
Improve loss objective docs + batch_index
ProGamerGov May 18, 2022
c905352
Fix NeuronActivation docs
ProGamerGov May 18, 2022
a702728
SGD Linear Model Fixes for Lime (#938)
vivekmig May 18, 2022
8c28dad
Fix FacetLoss docs
ProGamerGov May 18, 2022
32fc693
Improve VectorLoss docs
ProGamerGov May 18, 2022
c211032
Fix version check bug (#940)
ProGamerGov May 19, 2022
7b78aaa
Improve loss objective testing
ProGamerGov May 21, 2022
69a73b2
Fix mypy test errors
ProGamerGov May 21, 2022
caffe7c
Fix mypy tests
ProGamerGov May 21, 2022
973aacc
Update test_loss.py
ProGamerGov May 22, 2022
6e6f4e6
Add more tests
ProGamerGov May 22, 2022
221c72b
Fix weird value mismatch
ProGamerGov May 22, 2022
862ddce
Add batch_index tests to new objectives
ProGamerGov May 22, 2022
b196bbe
Miscellaneous Fixes
ProGamerGov May 25, 2022
c656658
Add Model Preparation Tutorial
ProGamerGov May 25, 2022
0e7d0f4
Improve vector function
ProGamerGov May 25, 2022
3b67bb0
Improve the `FacetLoss` objective
ProGamerGov May 28, 2022
4c51ef1
Add CLIP objectives to `__all__`
ProGamerGov May 28, 2022
36df47e
Separate some loss tests
ProGamerGov May 28, 2022
31cb2a9
Fix mistake in FacetLoss docs
ProGamerGov May 28, 2022
cfa9d9f
Update CustomModules tutorial for new changes
ProGamerGov May 31, 2022
264a8ad
Support non default input sizes in FacetLoss
ProGamerGov Jun 2, 2022
00927a1
Improve loss docs
ProGamerGov Jun 2, 2022
16f2177
Add additional tests
ProGamerGov Jun 3, 2022
63843b5
Add packaging library to setup.py
ProGamerGov Jun 5, 2022
9305b10
Add support for captum toplevel-import. (#912)
dkrako Jun 9, 2022
1489f3f
Fix comment in saliency (#970)
ZhiyuanChen Jun 10, 2022
4aa7e48
Fix `set_all_random_seeds` testing function (#974)
ProGamerGov Jun 12, 2022
0ece0af
Add missing type hints in accordance with PEP-0484 (#976)
ProGamerGov Jun 14, 2022
8b82a37
Add alias for ImageTensor.open()
ProGamerGov Jun 16, 2022
afc4759
Update minimum PyTorch version in README (#977)
ProGamerGov Jun 20, 2022
857f26c
Add CompositeLoss to __all__
ProGamerGov Jun 20, 2022
9e9a6be
Add Conv2dSame to __all__
ProGamerGov Jun 21, 2022
0270383
Fix doc formatting for Sphinx
ProGamerGov Jun 21, 2022
73eedd1
Fix docs for Sphinx
ProGamerGov Jun 23, 2022
c45f694
Minor fixes
ProGamerGov Jun 23, 2022
650927e
Add missing input_param attribute to InputOptimization info
ProGamerGov Jun 23, 2022
4cf8cfc
Fix test errors
ProGamerGov Jun 23, 2022
90f9592
Fix mypy error
ProGamerGov Jun 23, 2022
7d77c72
Add ODS Logging to Captum (#971)
vivekmig Jun 24, 2022
5e3a80f
Add docs for loss_fn in sum_loss_list
ProGamerGov Jun 24, 2022
613baa9
Add docs for loss testing helper
ProGamerGov Jun 25, 2022
6f41b20
Remove loss_wrapper requirement for loss objectives
ProGamerGov Jun 25, 2022
5f849aa
Fix Sphinx loss doc duplication bug
ProGamerGov Jun 26, 2022
ca3b5f9
Update _common.py
ProGamerGov Jun 27, 2022
e87c975
Improve ImageTensor, Optimization, & submodule docs for Sphinx
ProGamerGov Jun 27, 2022
2b665f2
Improve CLIP loss docs for Sphinx
ProGamerGov Jun 27, 2022
5837745
Improve loss docs for Sphinx
ProGamerGov Jun 27, 2022
d3a2cca
Improve vector function docs for Sphinx
ProGamerGov Jun 28, 2022
2a592f0
Adjust spacing in doc variables
ProGamerGov Jun 28, 2022
e80b42e
Fix spacing in docs
ProGamerGov Jun 29, 2022
8ceecaf
Improve dataset docs
ProGamerGov Jun 29, 2022
0491cca
Improve Sphinx docs
ProGamerGov Jun 29, 2022
86f24ba
Improve ImageTensor docs
ProGamerGov Jun 29, 2022
975550e
Add 'Feature Visualization' keyword
ProGamerGov Jul 1, 2022
7530ae5
Improve ImageTensor & Dataset docs (#552)
ProGamerGov Jul 2, 2022
4a62c0b
Improve docs
ProGamerGov Jul 5, 2022
e63cee8
Improve docs
ProGamerGov Jul 5, 2022
06db64f
Improve dataset docs
ProGamerGov Jul 5, 2022
b84980a
Add time series visualization function (#980)
smaeland Jul 5, 2022
10d2379
Add missing Places365 InceptionModule docs
ProGamerGov Jul 6, 2022
b376466
Improve Optimization docs
ProGamerGov Jul 9, 2022
1821a2d
http -> https
ProGamerGov Jul 10, 2022
adaf367
Improve InputOptimization docs
ProGamerGov Jul 10, 2022
509accd
Improve loss docs
ProGamerGov Jul 14, 2022
407f769
Improve DeepDream docs
ProGamerGov Jul 15, 2022
6f10b76
Improve doc grammar
ProGamerGov Jul 15, 2022
eb5a961
Fix nn.Module type hints
ProGamerGov Jul 15, 2022
a66e7f5
Fix InputOptimization docs
ProGamerGov Jul 16, 2022
f781265
Fix loss doc type formatting
ProGamerGov Jul 16, 2022
4426003
Fix clip objective doc type formatting
ProGamerGov Jul 16, 2022
936bc84
Update _common.py
ProGamerGov Jul 16, 2022
42b18ca
Add more assert checks
ProGamerGov Jul 17, 2022
96e2f8d
Add aliases to InputOptimization and ImageTensor docs
ProGamerGov Jul 17, 2022
f31b8ca
Improve MaxPool2dRelaxed docs
ProGamerGov Jul 17, 2022
199509e
Improve docstring type formatting
ProGamerGov Jul 18, 2022
2480b69
Fix loss docstring type hint formatting
ProGamerGov Jul 18, 2022
a9eabfd
Fix loss docstring type hint formatting
ProGamerGov Jul 18, 2022
f2f1d5d
Fix bug in skip_layers
ProGamerGov Jul 18, 2022
a61461b
Improve optimization docs
ProGamerGov Jul 20, 2022
0ecff5d
Improve InputOptimization.optimize's docstring
ProGamerGov Jul 20, 2022
aeb058d
Improve InputOptimization docs
ProGamerGov Jul 21, 2022
1faadcd
Fix doc spacing
ProGamerGov Jul 21, 2022
a7fb6d9
Max line length doesn't apply to urls
ProGamerGov Jul 21, 2022
2cfa21b
Add Optim to run_mypy.sh
ProGamerGov Jul 22, 2022
6a40ca6
Don't link directly to arXiv PDF files in algorithms.md (#995)
ProGamerGov Jul 25, 2022
65b4a84
Remove the `insights` module from the main `__init__.py` file (#992)
ProGamerGov Jul 25, 2022
9e1538e
Fix some docstrings (#996)
ProGamerGov Jul 26, 2022
1c50b87
Fix grammar
ProGamerGov Jul 27, 2022
27b702e
Fix spelling
ProGamerGov Jul 27, 2022
7924b87
Remove Optim from run_mypy.sh for now
ProGamerGov Jul 27, 2022
3a2194f
Remove loss_wrapper tests
ProGamerGov Jul 27, 2022
819a0a8
Remove `loss_wrapper`
ProGamerGov Jul 27, 2022
61a0be9
Fix lint errors
ProGamerGov Jul 27, 2022
31b5707
Fix lint errors
ProGamerGov Jul 27, 2022
07c7593
Fix Mypy type hints
ProGamerGov Jul 28, 2022
16dd3cf
Fix formatting
ProGamerGov Jul 28, 2022
f2f7ea5
Fix typehint mistake
ProGamerGov Jul 28, 2022
9107496
Fix failing conda (#1000)
NarineK Jul 28, 2022
7e2dbf9
callable -> Callable
ProGamerGov Jul 28, 2022
ca84f7b
Docstring Improvements
ProGamerGov Jul 29, 2022
1a10252
modify tracin self influence helpers (#994)
99warriors Aug 1, 2022
a08883f
allow self influence iteration options (#1002)
99warriors Aug 1, 2022
fb6db3b
Fix branch
ProGamerGov Aug 1, 2022
a0ee122
Notebook support for tqdm (#1001)
Meghpal Aug 10, 2022
03cea17
callable -> Callable
ProGamerGov Aug 11, 2022
a93a5cd
Add gpu support to tracincp rand projection (#969)
NarineK Aug 12, 2022
9263ae1
update TracInCP tutorial to use `train_dataset` argument
99warriors Aug 17, 2022
37516c1
Improve version checking (#999)
ProGamerGov Aug 17, 2022
312acd8
Fix failing conda test, & fix a GPU test bug (#1009)
ProGamerGov Aug 19, 2022
dbb5a31
Fixed bug in House_Prices_Regression_Interpret tutorial (#1014)
dbish Aug 23, 2022
6565280
change tracin progress test (#1007)
99warriors Aug 23, 2022
12a847b
Added missing `-> None:` type hint to applicable tests (#1006)
ProGamerGov Aug 27, 2022
81858f3
add docstring style in developer guide (#1016)
aobo-y Aug 31, 2022
5a5eb78
doc: rectify layerlrp example (#1017)
YBooks Sep 7, 2022
ff2b403
expose `intermediate_quantities` in `TracInCPFastRandProj`, new
99warriors Sep 15, 2022
b8eff98
Fix multiple Sphinx warnings & docstrings (#985)
ProGamerGov Sep 19, 2022
30a8874
Re-sync with internal repository (#1028)
facebook-github-bot Sep 21, 2022
fd03b9b
add back intro video in website (#1029)
aobo-y Sep 21, 2022
b43c70a
fix docstring - tuple of (#1031)
aobo-y Sep 22, 2022
448eac1
Add layer support for TracInCP (#1030)
NarineK Sep 23, 2022
f9ca198
update doc style (#1032)
aobo-y Sep 24, 2022
38cbd30
Changing parameters for integrated gradient algorithm
Sep 26, 2022
061f2e4
Add logging for captum.robust (#1026)
vivekmig Sep 29, 2022
e2b4455
Reverting integrated gradients
Sep 29, 2022
50b2e98
let `SampleGradientWrapper` accept `layer_modules` (#1036)
Oct 3, 2022
fa4c89e
fix docstring - multiple or (#1034)
aobo-y Oct 13, 2022
6c9045d
check sphinx build in ci (#1033)
aobo-y Oct 13, 2022
35a257c
Fix wheel build (#1021)
randombenj Oct 14, 2022
97a32b0
Assign variables (#1013)
tolgacangoz Oct 17, 2022
cde34a2
Fix GPU test sane-utils install error (#1015)
ProGamerGov Oct 18, 2022
5f878af
refactor feature ablation (#1047)
aobo-y Oct 20, 2022
2689f21
Add support for nested progressbar to tracincp even when tqdm is not …
cyrjano Nov 2, 2022
5543b4a
Return more helpful type error messages in _select_targets(). (#1052)
dkrako Nov 4, 2022
56abc96
Fixes mypy errors. (#1058)
cyrjano Nov 7, 2022
83b60a7
Fix broken unittests. (#1062)
cyrjano Nov 10, 2022
a7610be
convert forward return to tensor in FeatureAblation (#1049)
aobo-y Nov 15, 2022
ada8c0d
move STG to captum module (#1064)
aobo-y Nov 16, 2022
6fc48e8
make clamp optional in STG's get_gate_values (#1070)
aobo-y Nov 16, 2022
4bd7127
add load_pretrained in stg (#1076)
aobo-y Nov 16, 2022
3977f79
Fix failing kernel shap tests (#1061)
NarineK Nov 17, 2022
c076410
Self Influence only on `self_influence` method. (#1069)
cyrjano Nov 18, 2022
dc9a8ce
fix conda env solver config (#1084)
aobo-y Dec 7, 2022
4de6602
Changing Housing Regression tutorial to use California housing datase…
diegoolano Dec 8, 2022
c29155b
improve error msg of invalid input types (#1083)
aobo-y Dec 8, 2022
cb44edd
Avoid calling `flatten` in `TracInCP.self_influence` for efficiency (…
diegoolano Dec 9, 2022
288cd3a
Switch from register_full_backward_hooks to tensor hooks (#979)
vivekmig Dec 10, 2022
508ee12
add test loss (#1073)
Dec 10, 2022
e7b58af
add `compute_intermediate_quantities` to `TracInCP` (#1068)
Dec 10, 2022
dcb87d3
Support different reg_reduction in Captum STG (#1090)
aobo-y Dec 14, 2022
8bc12c8
Fix failing GPU errors for influential examples (#1081)
NarineK Dec 15, 2022
3f9a7f5
check ufmt before flake8 in ci (#1089)
aobo-y Dec 16, 2022
7c228ac
add stg pages in sphinx (#1092)
aobo-y Dec 16, 2022
a1007a8
Mask for Adversarial Attacks (#1043)
jgonik Dec 19, 2022
fe13596
update tracin influence API (#1072)
Dec 19, 2022
ecc81e6
update stg sphinx to include inherited methods (#1095)
aobo-y Dec 19, 2022
761a219
validate forward_fun output shape in FeatureAblation (#1091)
aobo-y Dec 20, 2022
eeb8667
Clean install_via_conda (#1097)
aobo-y Dec 22, 2022
b398d52
Updating version to 0.6.0 (#1094)
aobo-y Dec 23, 2022
cc5f468
Add split_channels parameter to LayerGradCam.attribute (#1086)
dzenanz Jan 4, 2023
2c9dcc1
improve documentation of STG (#1100)
aobo-y Jan 17, 2023
832443f
Replace she/he with they (#1106)
jessijzhao Jan 31, 2023
c978d17
LRP tutorial added for Model Interpretation Tutorial (#1105)
Feb 1, 2023
010f76d
Quiet conda install & remove conda solver (#1108)
aobo-y Feb 3, 2023
50f7bdd
Fix captum's internal failing test cases
NarineK Mar 24, 2023
dd610df
Update setup.py
ProGamerGov Apr 1, 2023
7e3ebae
Merge pull request #635 from ProGamerGov/patch-22
ProGamerGov Mar 30, 2025
040ca9f
Merge pull request #636 from ProGamerGov/optim-wip-clip-vis
ProGamerGov Mar 30, 2025
b98c20e
Merge pull request #637 from ProGamerGov/optim-wip-clip-loss-objectives
ProGamerGov Mar 30, 2025
94b90d6
Merge pull request #638 from ProGamerGov/optim-wip-transparency
ProGamerGov Mar 30, 2025
a1e0633
Merge pull request #639 from ProGamerGov/optim-wip-loss-docs
ProGamerGov Mar 30, 2025
32e4f33
Merge pull request #642 from ProGamerGov/optim-wip-model-tutorial
ProGamerGov Mar 30, 2025
ebf133d
Merge pull request #643 from ProGamerGov/optim-wip-custom-modules
ProGamerGov Mar 30, 2025
613e052
Merge pull request #640 from ProGamerGov/optim-wip-version-check
ProGamerGov Mar 30, 2025
99d5b00
Merge branch 'master-0-new-1' into optim-wip-loss-testing
ProGamerGov Mar 30, 2025
15 changes: 13 additions & 2 deletions .circleci/config.yml
@@ -57,7 +57,7 @@ commands:
steps:
- run:
name: "Run sphinx"
command: sphinx-build -T --keep-going sphinx/source sphinx/build
command: sphinx-build -WT --keep-going sphinx/source sphinx/build

configure_github_bot:
description: "Configure Docusaurus GitHub bot"
@@ -99,6 +99,15 @@ commands:
python -m pip install --upgrade pip
python -m pip install -e .[dev]

build_package:
description: "Build the source and wheel packages"
steps:
- run:
name: "Build captum"
command: |
python -m pip install build
python -m build

py_3_7_setup:
description: "Set python version to 3.7 and install pip and pytest"
steps:
@@ -115,6 +124,7 @@
- run:
name: "Install CUDA"
command: |
sudo dpkg --configure -a
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2004/x86_64/cuda-ubuntu2004.pin
sudo mv cuda-ubuntu2004.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/11.4.2/local_installers/cuda-repo-ubuntu2004-11-4-local_11.4.2-470.57.02-1_amd64.deb
@@ -131,8 +141,8 @@ jobs:
steps:
- checkout
- pip_install
- lint_flake8
- ufmt_check
- lint_flake8
- sphinx

test_py36_pip:
@@ -210,6 +220,7 @@ jobs:
- install_cuda
- py_3_7_setup
- simple_pip_install
- build_package
- unit_tests

auto_deploy_site:
23 changes: 11 additions & 12 deletions README.md
@@ -49,7 +49,7 @@ Captum can also be used by application engineers who are using trained models in

**Installation Requirements**
- Python >= 3.6
- PyTorch >= 1.2
- PyTorch >= 1.6


##### Installing the latest release
@@ -159,8 +159,7 @@ model.eval()
Next, we need to define simple input and baseline tensors.
Baselines belong to the input space and often carry no predictive signal.
A zero tensor can serve as a baseline for many tasks.
Some interpretability algorithms such as `Integrated
Gradients`, `Deeplift` and `GradientShap` are designed to attribute the change
Some interpretability algorithms such as `IntegratedGradients`, `Deeplift` and `GradientShap` are designed to attribute the change
between the input and baseline to a predictive class or a value that the neural
network outputs.
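
To make the paragraph above concrete, here is a minimal sketch of defining input and baseline tensors and attributing with IntegratedGradients; the model (defined and set to eval mode earlier in the README) and the target index 0 are illustrative assumptions:

import torch
from captum.attr import IntegratedGradients

input = torch.rand(1, 3)      # hypothetical input shape, for illustration only
baseline = torch.zeros(1, 3)  # a zero baseline often carries no predictive signal

ig = IntegratedGradients(model)
attributions, delta = ig.attribute(
    input, baselines=baseline, target=0, return_convergence_delta=True
)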

@@ -472,23 +471,23 @@ You can watch the recorded talk [here](https://www.youtube.com/watch?v=ayhBHZYje
* `SmoothGrad`: [SmoothGrad: removing noise by adding noise, Daniel Smilkov et al. 2017](https://arxiv.org/abs/1706.03825)
* `NoiseTunnel`: [Sanity Checks for Saliency Maps, Julius Adebayo et al. 2018](https://arxiv.org/abs/1810.03292)
* `NeuronConductance`: [How Important is a neuron?, Kedar Dhamdhere et al. 2018](https://arxiv.org/abs/1805.12233)
* `LayerConductance`: [Computationally Efficient Measures of Internal Neuron Importance, Avanti Shrikumar et al. 2018](https://arxiv.org/pdf/1807.09946.pdf)
* `DeepLift`, `NeuronDeepLift`, `LayerDeepLift`: [Learning Important Features Through Propagating Activation Differences, Avanti Shrikumar et al. 2017](https://arxiv.org/pdf/1704.02685.pdf) and [Towards better understanding of gradient-based attribution methods for deep neural networks, Marco Ancona et al. 2018](https://openreview.net/pdf?id=Sy21R9JAW)
* `NeuronIntegratedGradients`: [Computationally Efficient Measures of Internal Neuron Importance, Avanti Shrikumar et al. 2018](https://arxiv.org/pdf/1807.09946.pdf)
* `LayerConductance`: [Computationally Efficient Measures of Internal Neuron Importance, Avanti Shrikumar et al. 2018](https://arxiv.org/abs/1807.09946)
* `DeepLift`, `NeuronDeepLift`, `LayerDeepLift`: [Learning Important Features Through Propagating Activation Differences, Avanti Shrikumar et al. 2017](https://arxiv.org/abs/1704.02685) and [Towards better understanding of gradient-based attribution methods for deep neural networks, Marco Ancona et al. 2018](https://openreview.net/pdf?id=Sy21R9JAW)
* `NeuronIntegratedGradients`: [Computationally Efficient Measures of Internal Neuron Importance, Avanti Shrikumar et al. 2018](https://arxiv.org/abs/1807.09946)
* `GradientShap`, `NeuronGradientShap`, `LayerGradientShap`, `DeepLiftShap`, `NeuronDeepLiftShap`, `LayerDeepLiftShap`: [A Unified Approach to Interpreting Model Predictions, Scott M. Lundberg et al. 2017](http://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions)
* `InternalInfluence`: [Influence-Directed Explanations for Deep Convolutional Networks, Klas Leino et al. 2018](https://arxiv.org/pdf/1802.03788.pdf)
* `InternalInfluence`: [Influence-Directed Explanations for Deep Convolutional Networks, Klas Leino et al. 2018](https://arxiv.org/abs/1802.03788)
* `Saliency`, `NeuronGradient`: [Deep Inside Convolutional Networks: Visualising
Image Classification Models and Saliency Maps, K. Simonyan, et. al. 2014](https://arxiv.org/pdf/1312.6034.pdf)
* `GradCAM`, `Guided GradCAM`: [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization, Ramprasaath R. Selvaraju et al. 2017](https://arxiv.org/abs/1610.02391.pdf)
* `Deconvolution`, `Neuron Deconvolution`: [Visualizing and Understanding Convolutional Networks, Matthew D Zeiler et al. 2014](https://arxiv.org/pdf/1311.2901.pdf)
* `Guided Backpropagation`, `Neuron Guided Backpropagation`: [Striving for Simplicity: The All Convolutional Net, Jost Tobias Springenberg et al. 2015](https://arxiv.org/pdf/1412.6806.pdf)
Image Classification Models and Saliency Maps, K. Simonyan, et. al. 2014](https://arxiv.org/abs/1312.6034)
* `GradCAM`, `Guided GradCAM`: [Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization, Ramprasaath R. Selvaraju et al. 2017](https://arxiv.org/abs/1610.02391)
* `Deconvolution`, `Neuron Deconvolution`: [Visualizing and Understanding Convolutional Networks, Matthew D Zeiler et al. 2014](https://arxiv.org/abs/1311.2901)
* `Guided Backpropagation`, `Neuron Guided Backpropagation`: [Striving for Simplicity: The All Convolutional Net, Jost Tobias Springenberg et al. 2015](https://arxiv.org/abs/1412.6806)
* `Feature Permutation`: [Permutation Feature Importance](https://christophm.github.io/interpretable-ml-book/feature-importance.html)
* `Occlusion`: [Visualizing and Understanding Convolutional Networks](https://arxiv.org/abs/1311.2901)
* `Shapley Value`: [A value for n-person games. Contributions to the Theory of Games 2.28 (1953): 307-317](https://apps.dtic.mil/dtic/tr/fulltext/u2/604084.pdf)
* `Shapley Value Sampling`: [Polynomial calculation of the Shapley value based on sampling](https://www.sciencedirect.com/science/article/pii/S0305054808000804)
* `Infidelity and Sensitivity`: [On the (In)fidelity and Sensitivity for Explanations](https://arxiv.org/abs/1901.09392)

More details about the above mentioned [algorithms](https://captum.ai/docs/algorithms) and their pros and cons can be found on our [web-site](https://captum.ai/docs/algorithms_comparison_matrix).
More details about the above mentioned [attribution algorithms](https://captum.ai/docs/attribution_algorithms) and their pros and cons can be found on our [web-site](https://captum.ai/docs/algorithms_comparison_matrix).

## License
Captum is BSD licensed, as found in the [LICENSE](LICENSE) file.
9 changes: 8 additions & 1 deletion captum/__init__.py
@@ -1,3 +1,10 @@
#!/usr/bin/env python3
import captum.attr as attr # noqa
import captum.concept as concept # noqa
import captum.influence as influence # noqa
import captum.log as log # noqa
import captum.metrics as metrics # noqa
import captum.robust as robust # noqa

__version__ = "0.5.0"

__version__ = "0.6.0"
24 changes: 13 additions & 11 deletions captum/_utils/av.py
@@ -47,7 +47,7 @@ def __init__(
identifier: Optional[str] = None,
layer: Optional[str] = None,
num_id: Optional[str] = None,
):
) -> None:
r"""
Loads into memory the list of all activation file paths associated
with the input `model_id`.
@@ -80,7 +80,7 @@ def __getitem__(self, idx: int) -> Union[Tensor, Tuple[Tensor, ...]]:
av = torch.load(fl)
return av

def __len__(self):
def __len__(self) -> int:
return len(self.files)

AV_DIR_NAME: str = "av"
@@ -211,9 +211,9 @@ def save(
AV.generate_dataset_activations from batch index.
It assumes identifier is the same for all layers if a list of
`layers` is provided.
layers (str or List of str): The layer(s) for which the activation vectors
layers (str or list[str]): The layer(s) for which the activation vectors
are computed.
act_tensors (Tensor or List of Tensor): A batch of activation vectors.
act_tensors (tensor or list of tensor): A batch of activation vectors.
This must match the dimension of `layers`.
num_id (str): string representing the batch number for which the activation
vectors are computed
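
A hedged sketch of a save call consistent with the docstring above; the argument names, their order, and all values here are assumptions for illustration, not confirmed by this diff:

# act1 and act2 are hypothetical activation tensors, one per layer
AV.save(
    path="./av",                  # storage directory
    model_id="resnet18-v1",       # name/version of the model
    identifier="train-batch",     # shared across layers
    layers=["layer1", "layer2"],  # one entry per activation tensor
    act_tensors=[act1, act2],     # must match the dimension of `layers`
    num_id="0",                   # batch number as a string
)
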
@@ -299,13 +299,15 @@ def _manage_loading_layers(
for the `layer` are stored.
model_id (str): The name/version of the model for which layer activations
are being computed and stored.
layers (str or List of str): The layer(s) for which the activation vectors
layers (str or list[str]): The layer(s) for which the activation vectors
are computed.
load_from_disk (bool, optional): Whether or not to load from disk.
Default: True
identifier (str or None): An optional identifier for the layer
activations. Can be used to distinguish between activations for
different training batches.
num_id (str): An optional string representing the batch number for which the
activation vectors are computed
num_id (str, optional): An optional string representing the batch number
for which the activation vectors are computed.

Returns:
List of layer names for which activations should be generated
@@ -357,9 +359,9 @@ def _compute_and_save_activations(
define all of its layers as attributes of the model.
model_id (str): The name/version of the model for which layer activations
are being computed and stored.
layers (str or List of str): The layer(s) for which the activation vectors
layers (str or list[str]): The layer(s) for which the activation vectors
are computed.
inputs (tensor or tuple of tensors): Batch of examples for
inputs (Tensor or tuple[Tensor, ...]): Batch of examples for
which influential instances are computed. They are passed to the
input `model`. The first dimension in `inputs` tensor or tuple of
tensors corresponds to the batch size.
@@ -368,7 +370,7 @@
different training batches.
num_id (str): A required string representing the batch number for which the
activation vectors are computed
additional_forward_args (optional): Additional arguments that will be
additional_forward_args (Any, optional): Additional arguments that will be
passed to `model` after inputs.
Default: None
load_from_disk (bool): Forces function to regenerate activations if False.
@@ -433,7 +435,7 @@ def generate_dataset_activations(
define all of its layers as attributes of the model.
model_id (str): The name/version of the model for which layer activations
are being computed and stored.
layers (str or List of str): The layer(s) for which the activation vectors
layers (str or list[str]): The layer(s) for which the activation vectors
are computed.
dataloader (torch.utils.data.DataLoader): DataLoader that yields Dataset
for which influential instances are computed. They are passed to
94 changes: 73 additions & 21 deletions captum/_utils/common.py
@@ -18,6 +18,27 @@
from torch.nn import Module


def _parse_version(v: str) -> Tuple[int, ...]:
"""
Parse version strings into tuples for comparison.

Versions should be in the form of "<major>.<minor>.<patch>", "<major>.<minor>",
or "<major>". The "dev", "post" and other letter portions of the given version will
be ignored.

Args:

v (str): A version string.

Returns:
version_tuple (tuple[int]): A tuple of integer values to use for version
comparison.
"""
v = [n for n in v.split(".") if n.isdigit()]
assert v != []
return tuple(map(int, v))


class ExpansionTypes(Enum):
repeat = 1
repeat_interleave = 2
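
A short usage sketch of _parse_version, following directly from the implementation above; the torch comparison shows why tuple comparison replaces the string comparison removed further down in this diff:

assert _parse_version("1.12.1") == (1, 12, 1)
assert _parse_version("1.9") == (1, 9)
# Non-numeric components such as "dev" or "post" segments are dropped
assert _parse_version("1.13.0.dev20220601") == (1, 13, 0)

# Tuples compare element-wise, avoiding string-comparison pitfalls
# where "1.10" < "1.9" is True
if _parse_version(torch.__version__) >= (1, 9):
    pass  # torch >= 1.9 code path
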
@@ -154,9 +175,10 @@ def _format_tensor_into_tuples(
if inputs is None:
return None
if not isinstance(inputs, tuple):
assert isinstance(
inputs, torch.Tensor
), "`inputs` must have type " "torch.Tensor but {} found: ".format(type(inputs))
assert isinstance(inputs, torch.Tensor), (
"`inputs` must be a torch.Tensor or a tuple[torch.Tensor] "
f"but found: {type(inputs)}"
)
inputs = (inputs,)
return inputs
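
The reworded assert above states the contract; a brief behavior sketch, with names illustrative:

t = torch.ones(2, 3)
out = _format_tensor_into_tuples(t)
assert isinstance(out, tuple) and out[0] is t    # bare tensors get wrapped
assert _format_tensor_into_tuples((t,))[0] is t  # tuples pass through unchanged
assert _format_tensor_into_tuples(None) is None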

@@ -500,9 +522,11 @@ def _select_targets(output: Tensor, target: TargetType) -> Tensor:
]
)
else:
raise AssertionError("Target element type in list is not valid.")
raise AssertionError(
f"Target element type {type(target[0])} in list is not valid."
)
else:
raise AssertionError("Target type %r is not valid." % target)
raise AssertionError(f"Target type {type(target)} is not valid.")


def _contains_slice(target: Union[int, Tuple[Union[int, slice], ...]]) -> bool:
@@ -660,20 +684,48 @@ def _get_module_from_name(model: Module, layer_name: str) -> Any:

def _register_backward_hook(
module: Module, hook: Callable, attr_obj: Any
) -> torch.utils.hooks.RemovableHandle:
# Special case for supporting output attributions for neuron methods
# This can be removed after deprecation of neuron output attributions
# for NeuronDeepLift, NeuronDeconvolution, and NeuronGuidedBackprop
# in v0.6.0
if (
hasattr(attr_obj, "skip_new_hook_layer")
and attr_obj.skip_new_hook_layer == module
):
return module.register_backward_hook(hook)
) -> List[torch.utils.hooks.RemovableHandle]:
grad_out: Dict[device, Tensor] = {}

if torch.__version__ >= "1.9":
# Only supported for torch >= 1.9
return module.register_full_backward_hook(hook)
else:
# Fallback for previous versions of PyTorch
return module.register_backward_hook(hook)
def forward_hook(
module: Module,
inp: Union[Tensor, Tuple[Tensor, ...]],
out: Union[Tensor, Tuple[Tensor, ...]],
) -> None:
nonlocal grad_out
grad_out = {}

def output_tensor_hook(output_grad: Tensor) -> None:
grad_out[output_grad.device] = output_grad

if isinstance(out, tuple):
assert (
len(out) == 1
), "Backward hooks not supported for module with >1 output"
out[0].register_hook(output_tensor_hook)
else:
out.register_hook(output_tensor_hook)

def pre_hook(module, inp):
def input_tensor_hook(input_grad: Tensor):
if len(grad_out) == 0:
return
hook_out = hook(module, input_grad, grad_out[input_grad.device])

if hook_out is not None:
return hook_out[0] if isinstance(hook_out, tuple) else hook_out

if isinstance(inp, tuple):
assert (
len(inp) == 1
), "Backward hooks not supported for module with >1 input"
inp[0].register_hook(input_tensor_hook)
return inp[0].clone()
else:
inp.register_hook(input_tensor_hook)
return inp.clone()

return [
module.register_forward_pre_hook(pre_hook),
module.register_forward_hook(forward_hook),
]
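
A usage sketch of the reworked helper: it now returns two handles (a forward pre-hook and a forward hook) rather than a single backward-hook handle, so callers remove both. Everything here besides _register_backward_hook itself is illustrative:

module = torch.nn.ReLU()

def grad_hook(module, grad_input, grad_output):
    return grad_input * 0.5  # e.g. scale gradients flowing into the module

handles = _register_backward_hook(module, grad_hook, None)
out = module(torch.ones(1, 3, requires_grad=True))
out.sum().backward()  # grad_hook fires via the registered tensor hooks
for handle in handles:
    handle.remove()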