Aditya Kusupati, Vivek Ramanujan*, Raghav Somani*, Mitchell Wortsman*, Prateek Jain, Sham Kakade and Ali Farhadi
This repository contains code for the CNN experiments presented in the ICML 2020 paper, along with additional functionality.
This code base is built upon the hidden-networks repository, modified for the STR, DNW and GMP experiments.
The RNN experiments in the paper were done by modifying FastGRNNCell in EdgeML using the methods discussed in the paper.
- Clone this repository.
- Using Python 3.6, create a `venv` with `python -m venv myenv` and run `source myenv/bin/activate`. You can also use `conda` to create a virtual environment.
- Install requirements with `pip install -r requirements.txt` for `venv`, or with the appropriate `conda` commands for a `conda` environment.
- Create a data directory `<data-dir>`. To run the ImageNet experiments, there must be a folder `<data-dir>/imagenet` that contains the ImageNet `train` and `val` folders, with the images of each class in a separate sub-folder.
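The expected layout looks like the following (the class folder names are the usual WordNet IDs and are shown only for illustration):

```
<data-dir>/
└── imagenet/
    ├── train/
    │   ├── n01440764/   # one folder per class, containing its images
    │   └── ...
    └── val/
        ├── n01440764/
        └── ...
```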
`STRConv`, along with the other custom convolution modules, can be found in `utils/conv_type.py`. Users can take `STRConv` and use it in most PyTorch-based models, as it inherits from `nn.Conv2d` (also referred to here as `DenseConv`).
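As a minimal sketch of the drop-in replacement (assuming `STRConv` accepts the standard `nn.Conv2d` constructor arguments, which it does by inheritance; note that the repo's conv modules may also read hyperparameters such as the initial threshold from the global config, so standalone construction may need extra setup):

```python
import torch
import torch.nn as nn

from utils.conv_type import STRConv  # the repo's sparse conv module


class TinyNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # STRConv subclasses nn.Conv2d, so it takes the same arguments.
        self.conv1 = STRConv(3, 32, kernel_size=3, padding=1, bias=False)
        self.conv2 = STRConv(32, 64, kernel_size=3, padding=1, bias=False)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Linear(64, num_classes)

    def forward(self, x):
        x = torch.relu(self.conv1(x))
        x = torch.relu(self.conv2(x))
        return self.fc(self.pool(x).flatten(1))


model = TinyNet()
print(model(torch.randn(2, 3, 32, 32)).shape)  # expected: torch.Size([2, 10])
```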
This codebase contains model architectures for ResNet18, ResNet50 and MobileNetV1, and supports training them on ImageNet-1K. We have provided config files for training ResNet50 and MobileNetV1, which can be modified for other architectures and datasets. To support more datasets, please add new dataloaders to the `data` folder, as sketched below.
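A hedged example of what a new dataloader might look like; the exact interface should be matched to the existing classes in the `data` folder, and the class name and the `args` attribute names below are assumptions modeled on typical PyTorch training code:

```python
import os

import torch
import torchvision
from torchvision import transforms


class CIFAR10:
    """Hypothetical dataloader exposing train_loader and val_loader attributes."""

    def __init__(self, args):
        data_root = os.path.join(args.data, "cifar10")
        normalize = transforms.Normalize(
            mean=[0.4914, 0.4822, 0.4465], std=[0.2470, 0.2435, 0.2616]
        )
        train_set = torchvision.datasets.CIFAR10(
            data_root, train=True, download=True,
            transform=transforms.Compose([
                transforms.RandomCrop(32, padding=4),
                transforms.RandomHorizontalFlip(),
                transforms.ToTensor(),
                normalize,
            ]),
        )
        val_set = torchvision.datasets.CIFAR10(
            data_root, train=False, download=True,
            transform=transforms.Compose([transforms.ToTensor(), normalize]),
        )
        self.train_loader = torch.utils.data.DataLoader(
            train_set, batch_size=args.batch_size, shuffle=True,
            num_workers=args.workers, pin_memory=True)
        self.val_loader = torch.utils.data.DataLoader(
            val_set, batch_size=args.batch_size, shuffle=False,
            num_workers=args.workers, pin_memory=True)
```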
Training across multiple GPUs is supported; however, the user should check the minimum number of GPUs required to scale ImageNet-1K.
Train dense baseline models on ImageNet-1K:
ResNet50: `python main.py --config configs/largescale/resnet50-dense.yaml --multigpu 0,1,2,3`
MobileNetV1: `python main.py --config configs/largescale/mobilenetv1-dense.yaml --multigpu 0,1,2,3`
Train models with STR on ImageNet-1K:
ResNet50: `python main.py --config configs/largescale/resnet50-str.yaml --multigpu 0,1,2,3`
MobileNetV1: `python main.py --config configs/largescale/mobilenetv1-str.yaml --multigpu 0,1,2,3`
To reproduce the results in the paper, please modify the config files appropriately, using the hyperparameters from the appendix of the STR paper.
Train baseline models with DNW and GMP on ImageNet-1K:
DNW: `python main.py --config configs/largescale/resnet50-dnw.yaml --multigpu 0,1,2,3`
GMP: `python main.py --config configs/largescale/resnet50-gmp.yaml --multigpu 0,1,2,3`
Please note that the GMP implementation is not thoroughly tested, so caution is advised.
Modify the config files to tweak the performance and sparsity levels in both DNW and GMP.
STR models are not compatible with traditional dense models for simple evaluation and usage as transfer learning backbones; DNW and GMP models are compatible with the dense models.
Every experiment creates a directory inside the `runs` folder (which will be created automatically), containing the tensorboard logs, the initial model state (for LTH experiments) and the best model (`model_best.pth`). The `runs` folder also has CSV dumps of the final and best accuracies, along with layer-wise sparsity distributions and thresholds in the case of STR. The code checkpoints after every epoch, giving a chance to resume training when pre-empted; the extra functionality can be explored through `python main.py -h`.
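A run directory then looks roughly like the following; apart from `model_best.pth`, the file and folder names shown are illustrative rather than exact:

```
runs/
└── <experiment-name>/
    ├── logs/              # tensorboard event files
    ├── initial.state      # initial model state, for LTH experiments
    ├── model_best.pth     # best model so far
    └── results.csv        # final/best accuracies, sparsities, thresholds
```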
Convert pretrained STR models to dense-compatible models:
ResNet50: `python main.py --config configs/largescale/resnet50-dense.yaml --multigpu 0,1,2,3 --pretrained <ResNet50-STR-Model> --dense-conv-model`
MobileNetV1: `python main.py --config configs/largescale/mobilenetv1-dense.yaml --multigpu 0,1,2,3 --pretrained <MobileNetV1-STR-Model> --dense-conv-model`
These models use the names provided in the corresponding config files, but the name can also be overridden with the `--name` argument on the command line.
If you want to evaluate a pretrained STR model provided below, you can either use the model as is or convert it to a dense model and use the dense model evaluation. To encourage uniformity, please try to convert the STR models to dense, or use the dense-compatible models where provided.
Dense Model Evaluation: `python main.py --config configs/largescale/<arch>-dense.yaml --multigpu 0,1,2,3 --pretrained <Dense-Compatible-Model> --evaluate`
STR Model Evaluation: `python main.py --config configs/largescale/<arch>-str.yaml --multigpu 0,1,2,3 --pretrained <STR-Model> --evaluate`
If it is hard to hand-code all the budgets into a method like DNW, you can use the budget transfer functionality of this repo. The pre-trained models provided have to be in the native STR model format, not in a converted/compatible Dense model format; you would have to change this piece of code to support the Dense format as well.
Transfer to DNW: `python main.py --config configs/largescale/<arch>-dnw.yaml --multigpu 0,1,2,3 --pretrained <STR-Model> --ignore-pretrained-weights --use-budget`
Transfer to GMP: `python main.py --config configs/largescale/<arch>-gmp.yaml --multigpu 0,1,2,3 --pretrained <STR-Model> --ignore-pretrained-weights --use-budget`
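For intuition, the budget transferred to a method like DNW or GMP is just the per-layer sparsity of the STR checkpoint. A minimal sketch of that computation (assuming the checkpoint stores its weights under a `state_dict` key, as most PyTorch training checkpoints do; the file name is hypothetical):

```python
import torch

# Hypothetical file name; point this at an actual (non-converted) STR checkpoint.
ckpt = torch.load("resnet50-str.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)  # fall back to a bare state_dict

for name, w in state_dict.items():
    if name.endswith(".weight") and w.dim() == 4:  # 4-D weights: conv layers
        sparsity = 100.0 * (w == 0).float().mean().item()
        print(f"{name}: {sparsity:.2f}% sparse")
```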
You should modify the corresponding config files for DNW and GMP to increase accuracy by changing the hyperparameters.
All the models provided here are trained on ImageNet-1K according to the settings in the paper.
These models are straightforward to train using this repo, and their pre-trained models are available in most of the popular frameworks. For the sake of reproducibility, pretrained dense models are provided.
| Architecture | Params | Sparsity (%) | Top-1 Acc (%) | FLOPs | Model Links |
|---|---|---|---|---|---|
| ResNet50 | 25.6M | 0.00 | 77.01 | 4.09G | Dense |
| MobileNetV1 | 4.21M | 0.00 | 71.95 | 569M | Dense |
We are providing links to 6 models for ResNet50 and 2 models for MobileNetV1; these models represent the sparsity regimes they belong to. Each model comes with two download links: the first is the vanilla STR model, and the second is the STR model converted to be compatible with dense models, for use in transfer learning. Please contact Aditya Kusupati in case you need a specific model and are not able to train it from scratch. All the sparsity budgets for every model in the paper are present in the appendix, in case all you need is the non-uniform sparsity budget.
ResNet50 on ImageNet-1K:

| No. | Params | Sparsity (%) | Top-1 Acc (%) | FLOPs | Model Links |
|---|---|---|---|---|---|
| 1 | 4.47M | 81.27 | 76.12 | 705M | STR, Dense |
| 2 | 2.49M | 90.23 | 74.31 | 343M | STR, Dense |
| 3 | 1.24M | 95.15 | 70.23 | 162M | STR, Dense |
| 4 | 0.99M | 96.11 | 67.78 | 127M | STR, Dense |
| 5 | 0.50M | 98.05 | 61.46 | 73M | STR, Dense |
| 6 | 0.26M | 98.98 | 51.82 | 47M | STR, Dense |
MobileNetV1 on ImageNet-1K:

| No. | Params | Sparsity (%) | Top-1 Acc (%) | FLOPs | Model Links |
|---|---|---|---|---|---|
| 1 | 1.04M | 75.28 | 68.35 | 101M | STR, Dense |
| 2 | 0.46M | 89.01 | 62.10 | 42M | STR, Dense |
Note: If you find any STR model to be 2x the size of its dense-compatible model, it is likely due to an old implementation that replicated the weights.
The `budgets` folder contains CSV files with all the non-uniform sparsity budgets STR learnt for ResNet50 on ImageNet-1K across all the sparsity regimes, along with baseline budgets for 90% sparse ResNet50 on ImageNet-1K. In case you are not able to use the pretrained models to extract sparsity budgets, you can directly import the same budgets from these files.
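For instance, a budget file could be loaded along these lines (a sketch: the file name here is hypothetical, and the column layout should be checked against the actual CSVs):

```python
import csv

# Hypothetical file name; see the budgets folder for the real files.
with open("budgets/resnet50_str_90.csv") as f:
    rows = list(csv.reader(f))

# Skip a header row if the second column of the first row is not numeric.
if rows and not rows[0][1].replace(".", "", 1).isdigit():
    rows = rows[1:]

budget = {row[0]: float(row[1]) for row in rows}  # layer name -> budget value
```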
If you find this project useful in your research, please consider citing:
@inproceedings{Kusupati20,
  author    = {Kusupati, Aditya and Ramanujan, Vivek and Somani, Raghav and Wortsman, Mitchell and Jain, Prateek and Kakade, Sham and Farhadi, Ali},
  title     = {Soft Threshold Weight Reparameterization for Learnable Sparsity},
  booktitle = {Proceedings of the International Conference on Machine Learning},
  month     = {July},
  year      = {2020},
}