An implementation of SENet, proposed in Squeeze-and-Excitation Networks by Jie Hu, Li Shen, and Gang Sun, the winners of the ILSVRC 2017 classification competition.
Currently, SE-ResNet (depths 18, 34, 50, 101, and 152 for ImageNet; 20 and 32 for CIFAR) and SE-Inception-v3 are implemented.
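For reference, here is a minimal sketch of the squeeze-and-excitation block these models are built around. It follows the paper's description (global average pooling followed by a two-layer gate with reduction ratio 16) and is not the exact code used in this repository.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation block: global average pooling (squeeze)
    followed by a two-layer bottleneck gate (excitation)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)   # squeeze: (N, C, H, W) -> (N, C, 1, 1)
        self.fc = nn.Sequential(              # excitation: per-channel gate in [0, 1]
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, _, _ = x.shape
        w = self.pool(x).view(n, c)           # (N, C)
        w = self.fc(w).view(n, c, 1, 1)       # channel weights
        return x * w                          # rescale the feature map


if __name__ == "__main__":
    x = torch.randn(2, 64, 32, 32)
    print(SEBlock(64)(x).shape)               # torch.Size([2, 64, 32, 32])
```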
- `python cifar.py` runs SE-ResNet20 on the CIFAR-10 dataset.
- `python imagenet.py IMAGENET_ROOT` runs SE-ResNet50 on the ImageNet (2012) dataset.
  - You need to prepare the dataset yourself: first download the files, then follow the instructions.
  - The number of GPUs, the number of workers, and the learning rate are fixed in the script, so check and change them if needed (see the sketch after this list).
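If you do need to adjust those settings, the sketch below shows where they typically live in a PyTorch training script (device placement, DataLoader workers, optimizer learning rate). The tiny stand-in model is only there so the snippet runs on its own; the actual variable names inside `cifar.py` / `imagenet.py` may differ.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# Stand-in model so the snippet is self-contained; replace with the repo's SE-ResNet.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(16, 10))

# Number of GPUs: wrap the model in DataParallel (or pass explicit device_ids).
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to("cuda" if torch.cuda.is_available() else "cpu")

# Number of workers: set on the DataLoader that feeds the training loop.
train_set = datasets.CIFAR10(root="./data", train=True, download=True,
                             transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)

# Learning rate: set on the optimizer.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=1e-4)
```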
For SE-Inception-v3, the input size must be 299x299, as in the original Inception.
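A preprocessing pipeline along these lines produces 299x299 inputs; it is a reasonable default, not necessarily the exact transform used in `imagenet.py`.

```python
from torchvision import transforms

# Resize and crop to Inception's expected 299x299 input, then apply the
# standard ImageNet normalization statistics.
preprocess = transforms.Compose([
    transforms.Resize(299),
    transforms.CenterCrop(299),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```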
|                    | ResNet20 | SE-ResNet20 |
|:-------------------|:---------|:------------|
| max. test accuracy | 92%      | 93%         |
The initial learning rate and mini-batch size differ from the original paper because of my limited computational resources (learning rate 0.6 to 0.1 and mini-batch size 1024 to 128, respectively).
|                           | ResNet     | SE-ResNet   |
|:--------------------------|:-----------|:------------|
| max. test accuracy (top1) | 79.26% (*) | 71.66% (**) |
- (*): He, K., Zhang, X., Ren, S., & Sun, J. (2015). Deep Residual Learning for Image Recognition.
- (**): If you need this weight, let me know.