
Road segmentation

Team members: Jingyuan Ma, Zhi Ye, Yongqi Wang

This repository contains the tools and models for the course project of the Computational Intelligence Lab (Spring 2019): Road Segmentation.

Prerequisites

In our setting, the models are run inside a Docker container (using the default tag: latest).

To use the Docker container:

docker pull ufoym/deepo:latest
# change the volume mount path as needed (docker requires an absolute host path)
docker run -it --name cu100 -v $(pwd)/cil-2019:/home/cil-2019 ufoym/deepo bash
docker ps -a 
# CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
# ae2f6abc24d5        ufoym/deepo         "bash"              2 hours ago         Up About an hour    6006/tcp            eager_ganguly
docker ps -a -q
# ae2f6abc24d5
# enter a bash shell in the container by its name
docker exec -it cu100 bash
# follow instructions below
  • PyTorch >= 1.0
    • pip3 install torch torchvision
  • Easydict
    • pip3 install easydict
  • tqdm
    • pip3 install tqdm

Usage

A sample workflow:

git clone https://github.com/wyq977/cil-2019.git
cd cil-2019
cd model/final
# train with CUDA device 0
python train.py -d 0
# evaluate using the last epoch's checkpoint by default
python eval.py -d 0 -p ./val_pred
# generate predicted groundtruth
python pred.py -d 0 -p ./pred
# generate submission.csv
python ../../cil-road-segmentation-2019/mask_to_submission.py --name submission -p ./pred/
# submit the submission.csv generated

Model directory:

├── config.py
├── dataloader.py
├── eval.py
├── network.py
├── pred.py
└── train.py

Prepare data

The data are specified via tab-separated files listing the paths of images and groundtruth: train.txt, val.txt, and test.txt.

train.txt or val.txt:

path-of-the-image   path-of-the-groundtruth

Note that in test.txt, the image path is repeated in place of a groundtruth path:

path-of-the-image   path-of-the-image

A handy script (writefile.py) that uses the glob package can be found in the dataset directory.
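As a rough illustration, a script of this kind only needs to pair each image with its groundtruth file and write one tab-separated line per sample. The sketch below is an assumption about how writefile.py might work; the directory names, file extension, and function name are hypothetical, not taken from the repository:

```python
import glob
import os

def write_listing(image_dir, gt_dir, out_path):
    """Write a tab-separated listing: image path, then groundtruth path.

    For test data, pass the image directory as gt_dir so the image path
    is simply repeated, matching the test.txt format described above.
    """
    images = sorted(glob.glob(os.path.join(image_dir, "*.png")))
    with open(out_path, "w") as f:
        for img in images:
            # assume the groundtruth file shares the image's file name
            gt = os.path.join(gt_dir, os.path.basename(img))
            f.write(f"{img}\t{gt}\n")
```

For example, `write_listing("training/images", "training/groundtruth", "train.txt")` would produce a train.txt in the format shown above.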

Training

Currently, distributed training via torch.distributed.launch is not supported.

To specify which CUDA device is used for training, pass its index to train.py.

A simple use case using the first CUDA device:

python train.py -d 0

Training can be restored from a saved checkpoint:

python train.py -d 0 -c log/snapshot/epoch-last.pth

Predict groundtruth labels

Similar to training:

python pred.py -d 0 -p ../../cil-road-segmentation-2019/pred/ -e log/snapshot/epoch-last.pth

Evaluate

python eval.py -d 0 -p ../../cil-road-segmentation-2019/val_pred/ -e log/snapshot/epoch-last.pth

Create submission.csv

cd ../../cil-road-segmentation-2019/
python mask_to_submission.py --name submission -p pred/
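The submission script reduces each predicted mask to per-patch road/background labels before writing submission.csv. The sketch below illustrates the patch-labelling idea only; the 16x16 patch size and the 0.25 foreground threshold are assumptions based on the competition's patch-wise scoring, and the function name is hypothetical (the actual logic lives in mask_to_submission.py):

```python
import numpy as np

PATCH_SIZE = 16            # assumed patch size used for scoring
FOREGROUND_THRESH = 0.25   # assumed fraction of road pixels to call a patch "road"

def patch_labels(mask):
    """Reduce a binary road mask (H x W, values in [0, 1]) to per-patch labels.

    Patches are scanned row by row; a patch is labelled 1 (road) when the
    mean pixel value inside it exceeds the foreground threshold.
    """
    h, w = mask.shape
    labels = []
    for y in range(0, h, PATCH_SIZE):
        for x in range(0, w, PATCH_SIZE):
            patch = mask[y:y + PATCH_SIZE, x:x + PATCH_SIZE]
            labels.append(1 if patch.mean() > FOREGROUND_THRESH else 0)
    return labels
```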

Structure

├── README.md
├── cil-road-segmentation-2019 # datasets and submission script
├── docs
├── utils # helper functions and utilities for the models
├── log
└── model

Under the model directory, one can train a model, predict groundtruth for the test images, and evaluate it; for detailed usage, see the usage section above.

Helper functions used for constructing models, training, evaluation, and PyTorch-related IO operations can be found under the utils folder. Functions and modules adapted from TorchSeg are clearly marked and referenced in the files.

Logistics

Links:

  1. Project description
  2. Road seg
  3. Road seg kaggle sign in
  4. Link for dataset.zip
  5. Course
  6. How to write paper

Computational resources

  1. https://scicomp.ethz.ch/wiki/Leonhard
  2. https://scicomp.ethz.ch/wiki/CUDA_10_on_Leonhard#Available_frameworks
  3. https://scicomp.ethz.ch/wiki/Using_the_batch_system#GPU

Project submission

  1. Submit the final report: https://cmt3.research.microsoft.com/ETHZCIL2019
  2. Signed form here: http://da.inf.ethz.ch/teaching/2019/CIL/material/Declaration-Originality.pdf
  3. Kaggle: https://inclass.kaggle.com/c/cil-road-segmentation-2019
