Unofficial PyTorch implementation of:
- CrossTransformers: spatially-aware few-shot transfer
- Carl Doersch, Ankush Gupta, Andrew Zisserman
- NeurIPS 2020
- CrossTransformers architecture (attention sketch below)
- SimCLR episodes (episode-construction sketch below)
- ResNet-34 backbone with dilated convolutions, giving a 14x14 output feature map (backbone sketch below)
- Higher image resolution (224x224)
- Strong data augmentation following [2]
- Normalized gradient descent (optimizer-step sketch below)
- Uniform category sampling for 50% of episodes
- First step: pretrain the feature extractor on the training categories, with early stopping based on linear-classifier accuracy on the validation categories (linear-probe sketch below)
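
Below is a minimal sketch of the CrossTransformers attention as I read the paper: every spatial location of the query image attends over all spatial locations of a class's support images, producing a query-aligned prototype, and the class logit is the negative squared distance to it. The module and variable names (`CrossTransformer`, `key_head`, `value_head`) are my own, not taken from an official codebase.

```python
import torch
import torch.nn as nn

class CrossTransformer(nn.Module):
    """Sketch of query-aligned attention between a query image and one class."""

    def __init__(self, in_dim, key_dim=128, value_dim=128):
        super().__init__()
        # Shared 1x1-conv key/value heads applied to query and support features.
        self.key_head = nn.Conv2d(in_dim, key_dim, kernel_size=1, bias=False)
        self.value_head = nn.Conv2d(in_dim, value_dim, kernel_size=1, bias=False)
        self.scale = key_dim ** -0.5

    def forward(self, query_feats, support_feats):
        # query_feats: (Q, C, H, W); support_feats: (N, C, H, W), one class only.
        _, _, H, W = query_feats.shape
        N = support_feats.shape[0]

        q_keys = self.key_head(query_feats).flatten(2).transpose(1, 2)    # (Q, HW, Dk)
        q_vals = self.value_head(query_feats).flatten(2).transpose(1, 2)  # (Q, HW, Dv)
        s_keys = self.key_head(support_feats).flatten(2).transpose(1, 2).reshape(1, N * H * W, -1)
        s_vals = self.value_head(support_feats).flatten(2).transpose(1, 2).reshape(1, N * H * W, -1)

        # Each query location attends over every support location of the class.
        attn = torch.softmax(torch.einsum('qpd,bnd->qpn', q_keys, s_keys) * self.scale, dim=-1)
        aligned_prototype = torch.einsum('qpn,bnd->qpd', attn, s_vals)     # (Q, HW, Dv)

        # Negative squared distance, summed over locations, gives the class logit.
        return -((q_vals - aligned_prototype) ** 2).sum(dim=(1, 2))        # (Q,)
```

Calling this once per class and stacking the results along a class dimension gives the episode logits, which can be trained with a standard cross-entropy loss over the query labels.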
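
Since torchvision's `replace_stride_with_dilation` option is not available for the BasicBlock-based ResNet-34, one way to get the 14x14 feature map at 224x224 input is to patch `layer4` directly, removing its stride and dilating its 3x3 convolutions. This is an approximation written for illustration, not the repo's exact backbone code.

```python
import torch
import torch.nn as nn
import torchvision

def dilated_resnet34():
    """ResNet-34 trunk producing a 14x14 feature map for 224x224 inputs."""
    net = torchvision.models.resnet34(weights=None)
    for module in net.layer4.modules():
        if isinstance(module, nn.Conv2d):
            if module.stride == (2, 2):
                # Drop the final downsampling so the output stays 14x14.
                module.stride = (1, 1)
            if module.kernel_size == (3, 3):
                # Dilate to roughly preserve the original receptive field.
                module.dilation = (2, 2)
                module.padding = (2, 2)
    # Keep conv1..layer4; drop the average pool and the classification head.
    return nn.Sequential(*list(net.children())[:-2])

feats = dilated_resnet34()(torch.randn(2, 3, 224, 224))
print(feats.shape)  # torch.Size([2, 512, 14, 14])
```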
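
My reading of the "SimCLR episodes" bullet is that some training episodes are replaced by instance-discrimination tasks: each image becomes its own class, and independently augmented views form the support and query sets. The augmentation parameters below are illustrative SimCLR-style defaults, not values taken from the repo.

```python
import torch
from torchvision import transforms

# Illustrative SimCLR-style augmentation pipeline.
simclr_aug = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.2, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.RandomApply([transforms.ColorJitter(0.4, 0.4, 0.4, 0.1)], p=0.8),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

def make_simclr_episode(pil_images):
    """Build an instance-discrimination episode from a list of PIL images.

    Image i defines class i: one augmented view goes into the support set
    and an independently augmented view becomes the corresponding query.
    """
    support = torch.stack([simclr_aug(img) for img in pil_images])
    query = torch.stack([simclr_aug(img) for img in pil_images])
    labels = torch.arange(len(pil_images))
    return support, query, labels
```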
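
One plausible implementation of the "normalized gradient descent" bullet: rescale all gradients to unit global L2 norm before the optimizer step, so only the gradient direction sets the update. Whether the normalization is global, per-layer, or per-parameter is an assumption here.

```python
import torch

def normalized_step(model, optimizer, eps=1e-12):
    """Rescale gradients to unit global L2 norm, then take the optimizer step."""
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    total_norm = torch.linalg.norm(torch.stack([g.norm() for g in grads]))
    for g in grads:
        g.div_(total_norm + eps)
    optimizer.step()

# Typical loop: loss.backward(); normalized_step(model, optimizer); optimizer.zero_grad()
```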
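
The early-stopping signal for pretraining can be estimated with a linear probe: freeze the current feature extractor, embed images from the validation categories, fit a linear classifier on one split and score it on another, and stop when the score stops improving. scikit-learn is used here purely for brevity; the exact probe in the repo may differ.

```python
from sklearn.linear_model import LogisticRegression

def linear_probe_accuracy(fit_feats, fit_labels, eval_feats, eval_labels):
    """Accuracy of a linear classifier trained on frozen embeddings.

    Both splits come from the validation categories; a plateau in this
    score is the early-stopping criterion for backbone pretraining.
    """
    probe = LogisticRegression(max_iter=1000)
    probe.fit(fit_feats, fit_labels)
    return probe.score(eval_feats, eval_labels)
```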
- CTX sanity check on miniImageNet
- CTX on Meta-Dataset [1]
- CTX + SimCLR Eps
- CTX + SimCLR Eps + Aug
The miniImageNet experiments are built on the DN4 codebase.
[1] Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples.
[2] Optimized generic feature learning for few-shot classification across domains.