
AutoView

Learning Self-Regularized Adversarial Views for Self-Supervised Vision Transformers
Tao Tang∗, Changlin Li∗, Guangrun Wang, Kaicheng Yu, Xiaojun Chang, Xiaodan Liang

(*: equal contribution, ✉: corresponding author)


Introduction

(Figure: AutoView framework overview)

We propose AutoView, a self-regularized adversarial AutoAugment method, to learn views for self-supervised vision transformers.

  • First, we reduce the search cost of AutoView to nearly zero by learning views and network parameters simultaneously in a single forward-backward step: the views are learned to minimize, and the network parameters to maximize, the mutual information among the different augmented views (see the training-step sketch after this list).
  • Then, to avoid the information collapse caused by the lack of label supervision, we propose a self-regularized loss term that guarantees information propagation.
  • Additionally, we present a curated augmentation policy search space for self-supervised learning, obtained by modifying the search space commonly used for supervised learning (an illustrative listing follows the sketch below).
  • On ImageNet, AutoView achieves a remarkable improvement over the RandAug baseline (+10.2% k-NN accuracy) and consistently outperforms state-of-the-art manually tuned view policies by a clear margin. Extensive experiments show that AutoView pretraining also benefits downstream tasks and improves model robustness.
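The single-step adversarial objective and the self-regularizer can be pictured with a short PyTorch sketch. Everything below is an illustrative assumption rather than the repository's actual implementation: the `GradReverse` trick, the toy `ViewPolicy`, the cosine-agreement proxy for mutual information, and the moment-matching form of the self-regularized term are all stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates gradients in the backward pass,
    so one forward-backward step trains encoder and policy adversarially."""

    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output.neg()


class ViewPolicy(nn.Module):
    """Toy differentiable view policy: a few continuous, learnable
    augmentation magnitudes standing in for a relaxed op search space."""

    def __init__(self):
        super().__init__()
        self.mag = nn.Parameter(torch.zeros(3))  # learnable magnitudes

    def forward(self, x):
        m = torch.sigmoid(self.mag)               # keep magnitudes in (0, 1)
        x = x * (0.5 + m[0])                      # brightness-like scaling
        x = x + 0.1 * m[1] * torch.randn_like(x)  # noise jitter
        return (1.0 - 0.5 * m[2]) * x             # contrast-like damping


def train_step(encoder, policy, opt, images, lambda_reg=0.1):
    v1, v2 = policy(images), policy(images)  # two learned views of a batch

    # Gradient reversal sits between policy and encoder: the encoder
    # minimizes the agreement loss (maximizing view agreement), while the
    # same backward pass pushes the policy to maximize it (harder views).
    z1 = encoder(GradReverse.apply(v1))
    z2 = encoder(GradReverse.apply(v2))
    agree = -F.cosine_similarity(z1, z2, dim=-1).mean()  # proxy for -MI

    # Self-regularized term (assumed form): tie the views' first two
    # moments to the clean images so the adversarial policy cannot
    # collapse the views and cut off information propagation.
    reg = (v1.mean() - images.mean()).pow(2) + (v1.std() - images.std()).pow(2)

    opt.zero_grad()
    (agree + lambda_reg * reg).backward()  # single forward-backward step
    opt.step()  # one optimizer over encoder and policy parameters
    return agree.item()
```

With this layout a single optimizer over both parameter groups suffices: the reversal layer flips the sign of the agreement gradient reaching the policy, while the regularizer touches only the policy and leaves the encoder unaffected.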
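The curated search space itself can be imagined as a list of (operation, magnitude range) pairs in the AutoAugment style. Which operations the paper keeps, drops, or rescales relative to the supervised-learning space is specific to the paper, so the entries below are purely illustrative placeholders.

```python
# Illustrative AutoAugment-style policy search space: (op, magnitude range).
# The actual curated op set and ranges are defined by the paper and repo;
# nothing here should be read as the real AutoView search space.
SEARCH_SPACE = [
    ("Brightness", (0.1, 1.9)),
    ("Contrast",   (0.1, 1.9)),
    ("Sharpness",  (0.1, 1.9)),
    ("TranslateX", (0.0, 0.3)),   # fraction of image width
    ("TranslateY", (0.0, 0.3)),   # fraction of image height
    ("Rotate",     (0.0, 30.0)),  # degrees
]
```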

Visualization

(Figure: visualization of learned adversarial views)

Getting Started

```bash
git clone https://github.com/Trent-tangtao/AutoView.git
```

This is a preliminary release; we have not yet fully organized everything.

Citation

If you find AutoView useful in your research or applications, please consider giving us a star 🌟 and citing it with the following BibTeX entry.

```bibtex
@article{tang2022learning,
  title={Learning Self-Regularized Adversarial Views for Self-Supervised Vision Transformers},
  author={Tao Tang and Changlin Li and Guangrun Wang and Kaicheng Yu and Xiaojun Chang and Xiaodan Liang},
  journal={arXiv preprint arXiv:2210.08458},
  year={2022}
}
```
