Interpretability for sequence generation models 🐛 🔍 (Python; updated Apr 25, 2025)
Easy-to-use MIRAGE code for faithful answer attribution in RAG applications. Paper: https://aclanthology.org/2024.emnlp-main.347/
Surrogate quantitative interpretability for deep networks.
Attribution (or visual explanation) methods for understanding video classification networks. Demo codes for WACV2021 paper: Towards Visually Explaining Video Understanding Networks with Perturbation.
Code for the paper: Towards Better Understanding Attribution Methods. CVPR 2022.
Metrics for evaluating interpretability methods.
The source code for the journal paper: Spatio-Temporal Perturbations for Video Attribution, TCSVT-2021
Hacking SetFit so that it works with integrated gradients.
SQUID repository for manuscript analysis.
Code for my Master Thesis titled "Exploring Data Augmentation Methods through Attribution".