This is the official implementation of the paper *Unleashing the Power of Visual Prompting At the Pixel Level*.
Clone this repo:

```bash
git clone https://github.com/UCSC-VLAA/EVP
cd EVP
```
Our code is built on:

```
torch>=1.10.1
torchvision>=0.11.2
```

Then install the remaining dependencies with:

```bash
pip install -r requirements.txt
pip install git+https://github.com/openai/CLIP.git
```
See DATASET.md for detailed instructions and tips.
- Train the Enhanced Visual Prompting on CIFAR100 (a sketch of the pixel-level prompting idea follows this list):

```bash
python main.py
```

- Test the Enhanced Visual Prompting:

```bash
python main.py --evaluate
```
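For intuition, here is a minimal sketch of the pixel-level prompting idea the paper builds on: the input image is shrunk and a learnable prompt is padded around it before being passed to the frozen backbone. This is an illustration only; the class name `PadPrompter`, the pad size, and the resizing details are assumptions, not the exact code in `main.py`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PadPrompter(nn.Module):
    """Learnable pixel-level prompt padded around a shrunk input image.

    Minimal illustration of pixel-level visual prompting; `image_size`
    and `pad_size` are assumed values, not the repo's defaults.
    """
    def __init__(self, image_size=224, pad_size=30):
        super().__init__()
        self.image_size = image_size
        self.inner = image_size - 2 * pad_size  # size of the shrunk image
        # A single learnable prompt shared across all images.
        self.prompt = nn.Parameter(torch.zeros(1, 3, image_size, image_size))
        # Mask is 1 on the padded border, 0 over the image region.
        mask = torch.ones(1, 1, image_size, image_size)
        mask[:, :, pad_size:-pad_size, pad_size:-pad_size] = 0
        self.register_buffer("mask", mask)

    def forward(self, x):
        # Shrink the image and place it at the center of the canvas.
        x = F.interpolate(x, size=self.inner, mode="bilinear",
                          align_corners=False)
        canvas = torch.zeros(x.size(0), 3, self.image_size, self.image_size,
                             device=x.device)
        pad = (self.image_size - self.inner) // 2
        canvas[:, :, pad:pad + self.inner, pad:pad + self.inner] = x
        # Add the learnable prompt only on the border region.
        return canvas + self.prompt * self.mask

# Usage: prompted images go into a frozen (e.g. CLIP) image encoder.
prompter = PadPrompter()
images = torch.randn(4, 3, 224, 224)
prompted = prompter(images)  # (4, 3, 224, 224)
```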
We propose a simple pre-processing step that matches the pre-trained classes to the downstream classes for non-CLIP models.
- Train the Enhanced Visual Prompting for the non-CLIP model (a sketch of the class-matching step follows this list):

```bash
python main.py --non_CLIP
```

- Test the Enhanced Visual Prompting for the non-CLIP model:

```bash
python main.py --non_CLIP --evaluate
```
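One simple way to illustrate this kind of pre-processing is frequency-based matching: run the frozen pre-trained model on the downstream training images and assign each downstream class the pre-trained class it is most often predicted as. The helper below (`match_classes`) is a hypothetical sketch under that assumption, not the exact procedure implemented in this repo.

```python
import torch
from collections import Counter

@torch.no_grad()
def match_classes(model, loader, num_downstream, device="cuda"):
    """Map each downstream class to a pre-trained (e.g. ImageNet) class id.

    Hypothetical illustration: each downstream class gets the pre-trained
    class the frozen model predicts most often on its training images.
    """
    model.eval()
    votes = [Counter() for _ in range(num_downstream)]
    for images, labels in loader:
        preds = model(images.to(device)).argmax(dim=1)  # pre-trained ids
        for lbl, pred in zip(labels.tolist(), preds.tolist()):
            votes[lbl][pred] += 1
    # mapping[i] = pre-trained logit index reused for downstream class i
    return [c.most_common(1)[0][0] for c in votes]
```

At test time, the downstream prediction is then read off the pre-trained model's logits at the matched indices.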
```bibtex
@article{wu2024evp,
  title   = {Unleashing the Power of Visual Prompting At the Pixel Level},
  author  = {Wu, Junyang and Li, Xianhang and Wei, Chen and Wang, Huiyu and Yuille, Alan and Zhou, Yuyin and Xie, Cihang},
  journal = {TMLR},
  year    = {2024}
}
```
Junyang Wu
- email: SJTUwjy@sjtu.edu.cn
Xianhang Li
- email: xli421@ucsc.edu
If you have any questions about the code or data, please contact us directly.