PyTorch implementation of Fully Convolutional Networks; the main code is adapted from pytorch-fcn.
- pytorch
- torchvision
- ignite
- yacs
- tensorboardX
- tensorflow (for tensorboard)
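If you use pip, installing the dependencies could look like the command below. The package names are assumptions about the pip distribution names (for example, ignite is published as pytorch-ignite), so adjust them to your environment.
pip install torch torchvision pytorch-ignite yacs tensorboardX tensorflow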
The project structure follows the PyTorch-Project-Template guide; you can check each folder's purpose there.
You can open a terminal and run the following bash command to download the VOC2012 dataset:
bash get_data.sh
or you can copy this URL and download it yourself:
http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
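After downloading, the archive still needs to be unpacked (get_data.sh may already handle this for you). Assuming the tar file is in your current directory, a typical extraction command is:
tar -xf VOCtrainval_11-May-2012.tar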
Most of the configuration files that we provide are in the configs folder. You just need to modify the dataset root, the VGG model weight path, and the output directory. There are a few possibilities:
You can modify train_fcn32s.yml first and then run the following command:
python3 tools/train_fcn.py --config_file='configs/train_fcn32s.yml'
You can also override configuration parameters such as the learning rate or the maximum number of epochs on the command line:
python3 tools/train_fcn.py --config_file='configs/train_fcn32s.yml' SOLVER.BASE_LR 0.0025 SOLVER.MAX_EPOCHS 8
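Training logs are written with tensorboardX, so you can monitor them with TensorBoard. The log location depends on the output directory set in your config; the path below is only a placeholder, replace it with your own output directory.
tensorboard --logdir=path/to/your/output_dir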
We train these models on the VOC2012 train.txt split and evaluate on val.txt, and we use the torchvision pretrained VGG16 rather than the Caffe pretrained weights, so the results may differ from the original paper.
| Model | Epoch | Mean IU |
|---|---|---|
| FCN32s | 13 | 55.1 |
| FCN16s | 8 | 54.8 |
| FCN8s | 7 | 55.7 |
| FCN8sAtOnce | 11 | 53.6 |