
PatchSAE: Sparse Autoencoders Reveal Selective Remapping of Visual Concepts During Adaptation

Website & Demo · Paper (OpenReview) · Hugging Face Demo

PatchSAE visualization

🛠 Getting Started

Set up your environment with these simple steps:

# Create and activate environment
conda create --name patchsae python=3.12
conda activate patchsae

# Install dependencies
cd patchsae
pip install -r requirements.txt

When running any script, always set PYTHONPATH to the repository root. For example, to run the demo in app.py:

PYTHONPATH=./ python src/demo/app.py
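If you would rather not prefix every command, conda can attach the variable to the environment itself via its `env config vars` subcommand. Setting PYTHONPATH this way is a convenience sketch, not something the repo requires:

```shell
# Store PYTHONPATH inside the conda env so it is set on every activation
conda activate patchsae
conda env config vars set PYTHONPATH=$(pwd)   # run this from the patchsae/ root
conda deactivate && conda activate patchsae   # re-activate to apply the variable
python src/demo/app.py                        # no PYTHONPATH prefix needed now
```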

🎮 Interactive Demo

Online Demo on Hugging Face 🤗

Explore our pre-computed images and SAE latents without any installation!

💡 The demo may experience slowdowns due to network constraints. For optimal performance, consider disabling your VPN if you encounter any delays.

Demo interface (screen recording)

Local Demo: Try Your Own Images

Want to experiment with your own images? Follow these steps:

1. Setup Local Demo

First, download the necessary files using gdown:

💡 Need gdown? Install it with: conda install conda-forge::gdown or pip install gdown

# Activate environment
conda activate patchsae

# Download necessary files (35MB + 513MB)
gdown 1NJzF8PriKz_mopBY4l8_44R0FVi2uw2g  # out.zip
gdown 1reuDjXsiMkntf1JJPLC5a3CcWuJ6Ji3Z  # data.zip

# Extract files
unzip data.zip
unzip out.zip

Your folder structure should look like:

patchsae/
├── configs/
├── data/      # From data.zip
├── out/       # From out.zip
├── src/
│   └── demo/
│       └── app.py
├── tasks/
├── requirements.txt
└── ... (other files)

2. Launch the Demo

PYTHONPATH=./ python src/demo/app.py

⚠️ Note:

  • First run will automatically download datasets from Hugging Face (about 30 GB in total)
  • Demo runs on CPU by default
  • Access the interface at http://127.0.0.1:7860 (or the URL shown in terminal)
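The default 127.0.0.1:7860 address suggests the demo is a Gradio app; if so, Gradio's standard environment variables can change the host and port. The variable names below are Gradio's, but whether app.py honors them is an assumption about this repo:

```shell
# Assumption: src/demo/app.py launches a Gradio interface, so Gradio's
# standard environment variables apply. Untested against this repo.
export GRADIO_SERVER_NAME=0.0.0.0   # listen on all interfaces (e.g. for remote access)
export GRADIO_SERVER_PORT=7861      # use a different port if 7860 is already taken
PYTHONPATH=./ python src/demo/app.py
```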

📊 PatchSAE Training and Analysis

📝 Status Updates

  • Jan 13, 2025: Training & analysis code works properly. Known minor bug in class-wise data loading when using ImageNet.
  • Jan 09, 2025: Analysis code works. Updated training with evaluation during training, fixed optimizer bug.
  • Jan 07, 2025: Added analysis code. Reproducibility tests completed (trained on ImageNet, tested on Oxford-Flowers).
  • Jan 06, 2025: Training code updated. Reproducibility testing in progress.
  • Jan 02, 2025: Training code incomplete in this version. Updates coming soon.

📜 License & Credits

License Notice

Our code is distributed under the MIT License; see the LICENSE file for details. The NOTICE file lists the licenses of all third-party code included in this repository. Please include the contents of the LICENSE and NOTICE files in all redistributions of this code.


Citation

If you find our code or models useful in your work, please cite our paper:

@inproceedings{lim2025patchsae,
  title={Sparse autoencoders reveal selective remapping of visual concepts during adaptation},
  author={Hyesu Lim and Jinho Choi and Jaegul Choo and Steffen Schneider},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025},
  url={https://openreview.net/forum?id=imT03YXlG2}
}