Team Members: Jingyuan Ma, Zhi Ye, Yongqi Wang
This repository contains the tools and models for the course project of the Computational Intelligence Lab (Spring 2019): Road Segmentation.
In our setting, the models are run inside a Docker container (using the default tag: latest).
To use the docker container:
docker pull ufoym/deepo:latest
# change the volume mount path below to match your local clone (docker requires an absolute host path)
docker run -it --name cu100 -v $(pwd)/cil-2019:/home/cil-2019 ufoym/deepo bash
docker ps -a
# CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
# ae2f6abc24d5 ufoym/deepo "bash" 2 hours ago Up About an hour 6006/tcp eager_ganguly
docker ps -a -q
# ae2f6abc24d5
# enter a bash shell in the running container
docker exec -it cu100 bash
# follow instructions below
- PyTorch >= 1.0
pip3 install torch torchvision
- Easydict
pip3 install easydict
- tqdm
pip3 install tqdm
A sample workflow:
git clone https://github.com/wyq977/cil-2019.git
cd cil-2019
cd model/final
# train with CUDA device 0
python train.py -d 0
# eval using the last epoch by default
python eval.py -d 0 -p ./val_pred
# generate predicted groundtruth
python pred.py -d 0 -p ./pred
# generate submission.csv
python ../../cil-road-segmentation-2019/mask_to_submission.py --name submission -p ./pred/
# submit the submission.csv generated
Model dir:
├── config.py
├── dataloader.py
├── eval.py
├── network.py
├── pred.py
└── train.py
The datasets are specified by tab-separated files listing the paths of images and groundtruth: train.txt, val.txt, and test.txt.
train.txt or val.txt:
path-of-the-image	path-of-the-groundtruth
Note that in test.txt, the image path is repeated in place of the groundtruth:
path-of-the-image	path-of-the-image
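Reading such a list file only requires splitting each line on the tab character. A minimal sketch (the function name `read_pairs` is illustrative, not part of the repo):

```python
def read_pairs(list_file):
    """Read a tab-separated list file into (image, groundtruth) path pairs."""
    pairs = []
    with open(list_file) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # skip blank lines
            img_path, gt_path = line.split("\t")
            pairs.append((img_path, gt_path))
    return pairs
```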
A handy script (writefile.py) that uses the glob package to generate these files can be found inside the dataset directory.
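The idea behind such a script can be sketched as follows; the directory layout, file extension, and function name here are assumptions for illustration, not the actual dataset layout:

```python
import glob
import os

def write_list(image_dir, gt_dir, out_file):
    """Pair each image with its groundtruth by shared filename and
    write one tab-separated pair per line."""
    with open(out_file, "w") as f:
        for img_path in sorted(glob.glob(os.path.join(image_dir, "*.png"))):
            name = os.path.basename(img_path)
            gt_path = os.path.join(gt_dir, name)
            f.write(f"{img_path}\t{gt_path}\n")
```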
Currently, distributed training via torch.distributed.launch is not supported.
To specify which CUDA device is used for training, pass its index to train.py.
A simple use case using the first CUDA device:
python train.py -d 0
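The actual flag handling lives in train.py; a minimal sketch of how such a `-d` flag is typically parsed with argparse (names are illustrative):

```python
import argparse

def parse_device(argv=None):
    """Parse the CUDA device index from the command line."""
    parser = argparse.ArgumentParser()
    parser.add_argument("-d", "--device", type=int, default=0,
                        help="CUDA device index to train on")
    args = parser.parse_args(argv)
    return args.device
```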
Training can be restored from a saved checkpoint:
python train.py -d 0 -c log/snapshot/epoch-last.pth
Prediction works similarly; point -p at the output directory and -e at a checkpoint:
python pred.py -d 0 -p ../../cil-road-segmentation-2019/pred/ -e log/snapshot/epoch-last.pth
python pred.py -d 0 -p ../../cil-road-segmentation-2019/val_pred/ -e log/snapshot/epoch-last.pth
cd ../../cil-road-segmentation-2019/
python mask_to_submission.py --name submission -p pred/
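mask_to_submission.py is provided with the dataset. The patch-level labeling it performs can be sketched as below; the 16-pixel patch size and 0.25 foreground threshold are our reading of the course's released script, so treat them as assumptions:

```python
def patch_labels(mask, patch=16, threshold=0.25):
    """Collapse a 2D binary road mask (list of rows of 0/1) into per-patch
    labels: 1 if the patch's mean foreground exceeds the threshold."""
    h, w = len(mask), len(mask[0])
    labels = {}
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            block = [mask[y][x]
                     for y in range(i, min(i + patch, h))
                     for x in range(j, min(j + patch, w))]
            mean = sum(block) / len(block)
            labels[(i, j)] = 1 if mean > threshold else 0
    return labels
```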
├── README.md
├── cil-road-segmentation-2019 # datasets and submission script
├── docs
├── utils # helper function and utils for model
├── log
└── model
Under the model directory, one can train a model, predict groundtruth for the test images, and evaluate it; see the usage section above for details.
Helper functions used in model construction, training, evaluation, and PyTorch-related IO can be found under the utils folder. Functions or modules adapted from TorchSeg are clearly marked and referenced in the files.
- Project description
- Road seg
- Road seg kaggle sign in
- Link for dataset.zip
- Course
- How to write paper
- https://scicomp.ethz.ch/wiki/Leonhard
- https://scicomp.ethz.ch/wiki/CUDA_10_on_Leonhard#Available_frameworks
- https://scicomp.ethz.ch/wiki/Using_the_batch_system#GPU
- Submit the final report: https://cmt3.research.microsoft.com/ETHZCIL2019
- Signed form here: http://da.inf.ethz.ch/teaching/2019/CIL/material/Declaration-Originality.pdf
- Kaggle: https://inclass.kaggle.com/c/cil-road-segmentation-2019