- Paper: https://arxiv.org/abs/2011.10043
- PyTorch 1.7.0, CUDA 10.1
- Used 4 GPUs (V100) for training.
- Fix PixContrast modules
-
CLI
git clone https://github.com/Sungman-Cho/Propagate-Yourself-Pytorch.git
source install_packages.sh
-
PixPro training
python train.py --multiprocessing-distributed --batch_size=512 --loss=pixpro
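For reference, below is a minimal PyTorch sketch of the pixel-to-propagation consistency objective that the --loss=pixpro option refers to (see the paper linked above). The function name, tensor shapes, and the precomputed positive mask are illustrative assumptions, not this repository's actual API:

```python
import torch
import torch.nn.functional as F

def pixpro_loss(x, y, pos_mask):
    """Illustrative sketch of a PixPro-style consistency loss.

    x:        (B, C, N) propagated pixel features from view 1 (online branch).
    y:        (B, C, M) pixel features from view 2 (momentum branch).
    pos_mask: (B, N, M) float mask, 1 where pixels i and j map to nearby
              locations in the original image (the paper's distance threshold).
    """
    x = F.normalize(x, dim=1)
    y = F.normalize(y, dim=1)
    # Cosine similarity between every pixel pair across the two views.
    sim = torch.einsum('bcn,bcm->bnm', x, y)
    # Maximize similarity over positive (spatially matched) pairs only.
    return -(sim * pos_mask).sum() / pos_mask.sum().clamp(min=1)
```

In the paper the loss is symmetrized over the two views, and the online branch additionally runs the features through the pixel propagation module; both are omitted here for brevity.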
-
PixContrast training
python train.py --multiprocessing-distributed --batch_size=512 --loss=pixcontrast
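Similarly, a sketch of the pixel-level contrastive variant selected by --loss=pixcontrast: spatially matched pixel pairs act as positives, all other cross-view pixels as negatives. The names, shapes, and temperature default are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def pixcontrast_loss(x, y, pos_mask, tau=0.3):
    """Illustrative sketch of a PixContrast-style loss.

    x, y:     (B, C, N) and (B, C, M) pixel features from the two views.
    pos_mask: (B, N, M) float mask marking spatially matched pixel pairs.
    tau:      temperature hyperparameter.
    """
    x = F.normalize(x, dim=1)
    y = F.normalize(y, dim=1)
    logits = torch.einsum('bcn,bcm->bnm', x, y) / tau   # (B, N, M)
    exp = logits.exp()
    pos = (exp * pos_mask).sum(dim=-1)                  # sum over positives
    denom = exp.sum(dim=-1)                             # positives + negatives
    valid = pos_mask.sum(dim=-1) > 0                    # pixels with >= 1 positive
    return -torch.log(pos[valid] / denom[valid]).mean()
```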
-
Change your current directory to downstream.
-
Convert a trained PixPro model to detectron2's format:
python convert-pretrain-to-detectron2.py '$your_checkpoint.pth.tar' pixpro.pkl
-
Convert a trained PixContrast model to detectron2's format:
python convert-pretrain-to-detectron2.py '$your_checkpoint.pth.tar' pixcontrast.pkl
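convert-pretrain-to-detectron2.py is assumed to follow the key-renaming pattern of the MoCo detection benchmark's converter: keep only the backbone weights, rename torchvision-style ResNet keys to detectron2's naming, and dump them as a pickle with matching heuristics enabled. A sketch of that pattern (the module.encoder_q. prefix is an assumption about this repo's checkpoint keys):

```python
import pickle as pkl
import sys

import torch

if __name__ == "__main__":
    ckpt = torch.load(sys.argv[1], map_location="cpu")
    state_dict = ckpt.get("state_dict", ckpt)

    newmodel = {}
    for k, v in state_dict.items():
        if not k.startswith("module.encoder_q."):
            continue  # keep only the online/query encoder backbone
        k = k.replace("module.encoder_q.", "")
        if "layer" not in k:
            k = "stem." + k  # conv1/bn1 belong to detectron2's stem
        for t in [1, 2, 3, 4]:
            k = k.replace(f"layer{t}", f"res{t + 1}")
        for t in [1, 2, 3]:
            k = k.replace(f"bn{t}", f"conv{t}.norm")
        k = k.replace("downsample.0", "shortcut")
        k = k.replace("downsample.1", "shortcut.norm")
        newmodel[k] = v.numpy()

    res = {"model": newmodel, "__author__": "PixPro", "matching_heuristics": True}
    with open(sys.argv[2], "wb") as f:
        pkl.dump(res, f)
```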
-
Pascal VOC detection
-
Training schedule: 24K iterations
-
Image size: [480, 800] during training, 800 at inference.
-
Backbone: R50-C4
-
Training
# baseline training
source train_voc_base.sh
# pixpro training
source train_voc_pixpro.sh
# pixcontrast training
source train_voc_pixcontrast.sh
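The train_voc_*.sh scripts are assumed to wrap a standard detectron2 run using the settings listed above (R50-C4, 24K iterations, [480, 800] training scales) and the converted .pkl backbone. A minimal sketch of the equivalent configuration; the base config name and exact scale steps are assumptions:

```python
from detectron2 import model_zoo
from detectron2.config import get_cfg

cfg = get_cfg()
# detectron2's stock Pascal VOC Faster R-CNN R50-C4 baseline as a starting point.
cfg.merge_from_file(model_zoo.get_config_file("PascalVOC-Detection/faster_rcnn_R_50_C4.yaml"))
cfg.MODEL.WEIGHTS = "./pixpro.pkl"   # converted self-supervised backbone
cfg.SOLVER.MAX_ITER = 24000          # 24K-iteration schedule noted above
# Multi-scale training in [480, 800], single scale 800 at inference
# (exact step values are an assumption).
cfg.INPUT.MIN_SIZE_TRAIN = (480, 512, 544, 576, 608, 640, 672, 704, 736, 768, 800)
cfg.INPUT.MIN_SIZE_TEST = 800
# Train with e.g. detectron2.engine.DefaultTrainer(cfg) once the VOC datasets
# are registered and available on disk.
```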
-
COCO detection
-
Followed the detectron2 1x schedule settings
-
Backbone: R50-C4
-
Training
# baseline training
source train_coco_base.sh
# pixpro training
source train_coco_pixpro.sh
# pixcontrast training
source train_coco_pixcontrast.sh