This repository contains the official PyTorch implementation of the self-supervised pretraining, fine-tuning, and evaluation code for MC-SSL0.0: Towards Multi-Concept Self-Supervised Learning.
To launch self-supervised pretraining on the ImageNet training set across 8 GPUs:

```bash
python -m torch.distributed.launch --nproc_per_node=8 --use_env main_MCSSL.py --batch_size 64 --epochs 800 --data_location 'path/to/imageNet/trainingimgs'
```
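Note that `--batch_size` is per GPU, so 8 GPUs give an effective batch size of 512. On recent PyTorch releases, `torch.distributed.launch` is deprecated in favour of `torchrun`; an equivalent invocation (assuming `main_MCSSL.py` reads `LOCAL_RANK` from the environment, which `--use_env` already implies) would be:

```bash
torchrun --nproc_per_node=8 main_MCSSL.py --batch_size 64 --epochs 800 --data_location 'path/to/imageNet/trainingimgs'
```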
Architecture | # params | Fine-tuning accuracy | Download
---|---|---|---
ViT-S/16 | 22M | 82.4% | checkpoint
ViT-B/16 | 85M | 84.0% | checkpoint
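A minimal sketch of loading a released fine-tuned checkpoint for evaluation. The `timm` model identifier, checkpoint filename, and the `"model"` state-dict key are assumptions for illustration, not confirmed by this repository; inspect the checkpoint's keys if loading fails.

```python
import torch
import timm

# Build a plain ViT-B/16 backbone; "vit_base_patch16_224" is timm's
# identifier for this architecture (the repository's own model
# definition may differ slightly).
model = timm.create_model("vit_base_patch16_224", num_classes=1000)

# Filename and the "model" key are hypothetical placeholders.
checkpoint = torch.load("mcssl_vitb16_finetuned.pth", map_location="cpu")
state_dict = checkpoint.get("model", checkpoint)
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing)
print("unexpected keys:", unexpected)

# Sanity check on a dummy 224x224 RGB input.
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```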
For fine-tuning, we follow the strategy of DeiT.

This repository is built on top of the SiT and DINO repositories.
If you use this code for a paper, please cite:
```bibtex
@article{atito2021mc,
  title={MC-SSL0.0: towards multi-concept self-supervised learning},
  author={Atito, Sara and Awais, Muhammad and Farooq, Ammarah and Feng, Zhenhua and Kittler, Josef},
  journal={arXiv preprint arXiv:2111.15340},
  year={2021}
}
```