A curated list of recent papers on efficient vision transformers.
Transformers belong to the broader family of deep learning (deep neural networks). Unsurprisingly, many papers under the umbrella of efficient transformers actually reuse techniques developed earlier for efficient deep learning. Feel free to check out the paper collection on efficient deep learning as well!
- 2020.09-Efficient Transformers: A Survey
2019
2020
2021
- 2021-KDDw-Vision Transformer Pruning
- 2021-TCPS-TPrune: Efficient transformer pruning for mobile devices
- 2021.05-MLPruning: A Multilevel Structured Pruning Framework for Transformer-based Model [Code]
- 2021.07-Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation
- 2021.09-HFSP: A Hardware-friendly Soft Pruning Framework for Vision Transformers
- 2021.11-Pruning Self-attentions into Convolutional Layers in Single Path [Code]
- 2021.11-A Memory-saving Training Framework for Transformers [Code]
2022
- 2022-AAAI-Less is More: Pay Less Attention in Vision Transformers
- 2022-ICLR-Unified Visual Transformer Compression
- 2022-CVPR-Patch Slimming for Efficient Vision Transformers
- 2022-CVPR-MiniViT: Compressing Vision Transformers with Weight Multiplexing
- 2022-ECCV-An Efficient Spatio-Temporal Pyramid Transformer for Action Detection
- 2022-NeurIPS-Fast Vision Transformers with HiLo Attention [Code]
- 2022-NeurIPS-EcoFormer: Energy-Saving Attention with Linear Complexity [Code]
- 2022-NeurIPS-EfficientFormer: Vision Transformers at MobileNet Speed [Code]
- Awesome-Visual-Transformer
- Efficient-Deep-Learning
- Awesome-NAS
- Awesome-Pruning
- Awesome-Knowledge-Distillation
- MS AI-System open course
- caffe-int8-convert-tools
- Neural-Networks-on-Silicon
- Embedded-Neural-Network
- model_compression
- model-compression (in Chinese)
- Efficient-Segmentation-Networks
- AutoML NAS Literature
- Papers with code
- ImageNet Benchmark
- Self-supervised ImageNet Benchmark
- NVIDIA Blog with Sparsity Tag