Commit d53be07: Add Sequential IG method (#222)
gsarti authored Oct 18, 2023 (1 parent: 72febbb)
Showing 10 changed files with 506 additions and 20 deletions.
README.md (14 additions & 11 deletions)

@@ -130,27 +130,29 @@ Use the `inseq.list_feature_attribution_methods` function to list all available

#### Gradient-based attribution

-- `saliency`: [Saliency](https://arxiv.org/abs/1312.6034) (Simonyan et al., 2013)
+- `saliency`: [Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps](https://arxiv.org/abs/1312.6034) (Simonyan et al., 2013)

-- `input_x_gradient`: [Input x Gradient](https://arxiv.org/abs/1312.6034) (Simonyan et al., 2013)
+- `input_x_gradient`: [Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps](https://arxiv.org/abs/1312.6034) (Simonyan et al., 2013)

-- `integrated_gradients`: [Integrated Gradients](https://arxiv.org/abs/1703.01365) (Sundararajan et al., 2017)
+- `integrated_gradients`: [Axiomatic Attribution for Deep Networks](https://arxiv.org/abs/1703.01365) (Sundararajan et al., 2017)

-- `deeplift`: [DeepLIFT](https://arxiv.org/abs/1704.02685) (Shrikumar et al., 2017)
+- `deeplift`: [Learning Important Features Through Propagating Activation Differences](https://arxiv.org/abs/1704.02685) (Shrikumar et al., 2017)

-- `gradient_shap`: [Gradient SHAP](https://dl.acm.org/doi/10.5555/3295222.3295230) (Lundberg and Lee, 2017)
+- `gradient_shap`: [A unified approach to interpreting model predictions](https://dl.acm.org/doi/10.5555/3295222.3295230) (Lundberg and Lee, 2017)

-- `discretized_integrated_gradients`: [Discretized Integrated Gradients](https://aclanthology.org/2021.emnlp-main.805/) (Sanyal and Ren, 2021)
+- `discretized_integrated_gradients`: [Discretized Integrated Gradients for Explaining Language Models](https://aclanthology.org/2021.emnlp-main.805/) (Sanyal and Ren, 2021)

+- `sequential_integrated_gradients`: [Sequential Integrated Gradients: a simple but effective method for explaining language models](https://aclanthology.org/2023.findings-acl.477/) (Enguehard, 2023)

#### Internals-based attribution

-- `attention`: [Attention Weight Attribution](https://arxiv.org/abs/1409.0473) (Bahdanau et al., 2014)
+- `attention`: Attention Weight Attribution, from [Neural Machine Translation by Jointly Learning to Align and Translate](https://arxiv.org/abs/1409.0473) (Bahdanau et al., 2014)

#### Perturbation-based attribution

-- `occlusion`: [Occlusion](https://link.springer.com/chapter/10.1007/978-3-319-10590-1_53) (Zeiler and Fergus, 2014)
+- `occlusion`: [Visualizing and Understanding Convolutional Networks](https://link.springer.com/chapter/10.1007/978-3-319-10590-1_53) (Zeiler and Fergus, 2014)

-- `lime`: [LIME](https://arxiv.org/abs/1602.04938) (Ribeiro et al., 2016)
+- `lime`: ["Why Should I Trust You?": Explaining the Predictions of Any Classifier](https://arxiv.org/abs/1602.04938) (Ribeiro et al., 2016)

#### Step functions

@@ -262,9 +264,10 @@ Inseq has been used in various research projects. A list of known publications t
<details>
<summary><b>2023</b></summary>
<ol>
<li> <a href="https://arxiv.org/abs/2302.13942">Inseq: An Interpretability Toolkit for Sequence Generation Models</a> (Sarti et al., 2023) </li>
<li> <a href="https://aclanthology.org/2023.acl-demo.40/">Inseq: An Interpretability Toolkit for Sequence Generation Models</a> (Sarti et al., 2023) </li>
<li> <a href="https://arxiv.org/abs/2302.14220">Are Character-level Translations Worth the Wait? Comparing Character- and Subword-level Models for Machine Translation</a> (Edman et al., 2023) </li>
<li> <a href="https://arxiv.org/abs/2305.15908">Response Generation in Longitudinal Dialogues: Which Knowledge Representation Helps?</a> (Mousavi et al., 2023) </li>
<li> <a href="https://aclanthology.org/2023.nlp4convai-1.1/">Response Generation in Longitudinal Dialogues: Which Knowledge Representation Helps?</a> (Mousavi et al., 2023) </li>
<li> <a href="https://arxiv.org/abs/2310.01188">Quantifying the Plausibility of Context Reliance in Neural Machine Translation</a> (Sarti et al., 2023)</li>
</ol>

</details>
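With this change, the new method slots into the README's existing quickstart pattern. A minimal sketch of selecting it (the checkpoint name is illustrative, and `n_steps` is assumed to be forwarded to the underlying op as for the other integrated-gradients variants):

```python
import inseq

# "sequential_integrated_gradients" is the method_name registered by this commit.
model = inseq.load_model("Helsinki-NLP/opus-mt-en-fr", "sequential_integrated_gradients")

# Attribute a short input; n_steps controls the integration resolution.
out = model.attribute("Hello everyone, welcome to the demo!", n_steps=100)
out.show()
```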
docs/source/main_classes/feature_attribution.rst (4 additions & 3 deletions)

@@ -28,9 +28,6 @@ Gradient Attribution Methods
:members:


-.. warning::
-    The DiscretizedIntegratedGradientsAttribution class is currently exhibiting inconsistent behavior, so usage should be limited until further notice. See PR `#114 <https://github.com/inseq-team/inseq/pull/114>`__ for additional info.

.. autoclass:: inseq.attr.feat.DiscretizedIntegratedGradientsAttribution
:members:

@@ -50,6 +47,10 @@ Gradient Attribution Methods
:members:


+.. autoclass:: inseq.attr.feat.SequentialIntegratedGradientsAttribution
+    :members:


Layer Attribution Methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

inseq/attr/feat/__init__.py (2 additions & 0 deletions)

@@ -11,6 +11,7 @@
    LayerGradientXActivationAttribution,
    LayerIntegratedGradientsAttribution,
    SaliencyAttribution,
+    SequentialIntegratedGradientsAttribution,
)
from .internals_attribution import AttentionWeightsAttribution, InternalsAttributionRegistry
from .perturbation_attribution import (
@@ -37,4 +38,5 @@
"AttentionWeightsAttribution",
"OcclusionAttribution",
"LimeAttribution",
"SequentialIntegratedGradientsAttribution",
]
inseq/attr/feat/gradient_attribution.py (17 additions & 1 deletion)

@@ -32,7 +32,7 @@
from ..attribution_decorators import set_hook, unset_hook
from .attribution_utils import get_source_target_attributions
from .feature_attribution import FeatureAttribution
-from .ops import DiscretetizedIntegratedGradients
+from .ops import DiscretetizedIntegratedGradients, SequentialIntegratedGradients

logger = logging.getLogger(__name__)

@@ -212,6 +212,22 @@ def __init__(self, attribution_model):
        self.method = Saliency(self.attribution_model)


+class SequentialIntegratedGradientsAttribution(GradientAttributionRegistry):
+    """Sequential Integrated Gradients attribution method.
+    Reference: https://aclanthology.org/2023.findings-acl.477/
+    Original implementation: https://github.com/josephenguehard/time_interpret/blob/main/tint/attr/seq_ig.py
+    """
+
+    method_name = "sequential_integrated_gradients"
+
+    def __init__(self, attribution_model, multiply_by_inputs: bool = True, **kwargs):
+        super().__init__(attribution_model)
+        self.method = SequentialIntegratedGradients(self.attribution_model, multiply_by_inputs)
+        self.use_baselines = True


# Layer methods


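Because `GradientAttributionRegistry` keys methods by their `method_name` attribute, defining the class above is all that is needed to make the method selectable. A quick sanity check, assuming the `inseq.list_feature_attribution_methods` helper referenced in the README:

```python
import inseq

# The registry should now include the method added by this commit.
assert "sequential_integrated_gradients" in inseq.list_feature_attribution_methods()
```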
inseq/attr/feat/internals_attribution.py (0 additions & 2 deletions)

@@ -18,7 +18,6 @@

from captum._utils.typing import TensorOrTupleOfTensorsGeneric
from captum.attr._utils.attribution import Attribution
-from captum.log import log_usage

from ...data import MultiDimensionalFeatureAttributionStepOutput
from ...utils import Registry
@@ -43,7 +42,6 @@ class AttentionWeights(Attribution):
    def has_convergence_delta() -> bool:
        return False

-    @log_usage()
    def attribute(
        self,
        inputs: TensorOrTupleOfTensorsGeneric,
inseq/attr/feat/ops/__init__.py (2 additions & 0 deletions)

@@ -1,9 +1,11 @@
from .discretized_integrated_gradients import DiscretetizedIntegratedGradients
from .lime import Lime
from .monotonic_path_builder import MonotonicPathBuilder
+from .sequential_integrated_gradients import SequentialIntegratedGradients

__all__ = [
    "DiscretetizedIntegratedGradients",
    "MonotonicPathBuilder",
    "Lime",
+    "SequentialIntegratedGradients",
]
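The implementation of the imported `SequentialIntegratedGradients` op is not shown in this diff, but the idea from the paper (Enguehard, 2023) is compact: for each token, integrate gradients along a path where only that token is interpolated from a mask baseline to its true embedding. A minimal, unbatched sketch under those assumptions; `forward_fn`, `mask_embed`, and the loop structure are illustrative, not the op's actual code:

```python
import torch

def sequential_ig(embeds, mask_embed, forward_fn, n_steps=50):
    # embeds: (seq_len, dim) input embeddings; mask_embed: (dim,) baseline embedding.
    # forward_fn maps a (seq_len, dim) tensor to a scalar score (e.g. a target logit).
    seq_len, dim = embeds.shape
    attributions = torch.zeros(seq_len, dim)
    for i in range(seq_len):
        # Baseline differs from the input only at position i.
        baseline = embeds.clone()
        baseline[i] = mask_embed
        total_grad = torch.zeros(dim)
        for step in range(1, n_steps + 1):
            alpha = step / n_steps
            point = (baseline + alpha * (embeds - baseline)).requires_grad_(True)
            score = forward_fn(point)
            # Only the gradient at position i contributes to token i's attribution.
            total_grad += torch.autograd.grad(score, point)[0][i]
        # Riemann approximation of the path integral, times (input - baseline).
        attributions[i] = (embeds[i] - mask_embed) * total_grad / n_steps
    return attributions
```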
inseq/attr/feat/ops/discretized_integrated_gradients.py (0 additions & 2 deletions)

@@ -31,7 +31,6 @@
from captum.attr._core.integrated_gradients import IntegratedGradients
from captum.attr._utils.batching import _batch_attribution
from captum.attr._utils.common import _format_input_baseline, _reshape_and_sum, _validate_input
-from captum.log import log_usage
from torch import Tensor

from ....utils import INSEQ_ARTIFACTS_CACHE
@@ -87,7 +86,6 @@ def get_inputs_baselines(scaled_features_tpl: Tuple[Tensor, ...], n_steps: int)
        )
        return inputs, baselines

-    @log_usage()
    def attribute(  # type: ignore
        self,
        inputs: MultiStepEmbeddingsTensor,
inseq/attr/feat/ops/lime.py (0 additions & 1 deletion)

@@ -62,7 +62,6 @@ def __init__(
        )
        self.attribution_model = attribution_model

-    # @log_usage
    def attribute(
        self,
        inputs: TensorOrTupleOfTensorsGeneric,
(Diffs for the remaining changed files did not load.)
