Keras implementation of LinkNet: Exploiting Encoder Representations for Efficient Semantic Segmentation, ported from the lua-torch (LinkNet) and PyTorch (pytorch-linknet) implementations, both created by the authors of the paper.
| Dataset | Classes¹ | Input resolution | Batch size | Mean IoU (%) |
|---|---|---|---|---|
| CamVid | 12 | 960x480 | 2 | 47.15² |
| Cityscapes | 20 | 1024x512 | 2 | 53.37³ |

¹ Includes the unlabeled/void class.
² Test set.
³ Validation set.
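The mean IoU values above are the average of per-class intersection-over-union scores. The sketch below shows one common way to compute this from a per-class confusion matrix; it is illustrative only and is not the code in this repository's `metrics` package.

```python
# Illustrative sketch of mean IoU from a confusion matrix; the repository's
# own implementation lives in the metrics package and may differ in detail.
import numpy as np

def mean_iou(confusion):
    """confusion[i, j] counts pixels of true class i predicted as class j."""
    tp = np.diag(confusion).astype(np.float64)   # true positives per class
    fp = confusion.sum(axis=0) - tp              # false positives per class
    fn = confusion.sum(axis=1) - tp              # false negatives per class
    denom = tp + fp + fn
    iou = np.where(denom > 0, tp / np.maximum(denom, 1), np.nan)
    return np.nanmean(iou)                       # ignore classes that never appear
```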
- Python 3 and pip.
- Set up a virtual environment (optional, but recommended).
- Install dependencies using pip: `pip install -r requirements.txt`.
Run `main.py`, the main script file used for training and/or testing the model. The following options are supported:
python main.py [-h] [--mode {train,test,full}] [--resume]
[--initial-epoch INITIAL_EPOCH] [--no-pretrained-encoder]
[--weights-path WEIGHTS_PATH] [--batch-size BATCH_SIZE]
[--epochs EPOCHS] [--learning-rate LEARNING_RATE]
[--lr-decay LR_DECAY] [--lr-decay-epochs LR_DECAY_EPOCHS]
[--dataset {camvid,cityscapes}] [--dataset-dir DATASET_DIR]
[--workers WORKERS] [--verbose {0,1,2}] [--name NAME]
[--checkpoint-dir CHECKPOINT_DIR]
For help on the optional arguments run: `python main.py -h`.
Training: `python main.py -m train --checkpoint-dir save/folder/ --name model_name --dataset name --dataset-dir path/root_directory/`

Resuming from a checkpoint: `python main.py -m train --resume --initial-epoch 10 --checkpoint-dir save/folder/ --name model_name --dataset name --dataset-dir path/root_directory/`

Testing: `python main.py -m test --checkpoint-dir save/folder/ --name model_name --dataset name --dataset-dir path/root_directory/`
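The options listed above are ordinary argparse flags. The snippet below is a hypothetical sketch of how a few of them could be declared; the actual definitions live in `args.py` and may differ.

```python
# Hypothetical sketch only; see args.py for the real option definitions.
import argparse

parser = argparse.ArgumentParser(description='LinkNet training/testing script')
parser.add_argument('-m', '--mode', choices=['train', 'test', 'full'], default='train',
                    help='train, test, or both (full)')
parser.add_argument('--resume', action='store_true',
                    help='resume training from the checkpoint given by --name')
parser.add_argument('--initial-epoch', type=int, default=0,
                    help='epoch to start counting from when resuming')
parser.add_argument('--dataset', choices=['camvid', 'cityscapes'], default='camvid')
args = parser.parse_args()
```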
- `data`: Contains code to load the supported datasets.
- `metrics`: Evaluation-related metrics.
- `models`: LinkNet model definition.
- `checkpoints`: By default, `main.py` will save models in this folder. The pre-trained encoder (ResNet18) trained on ImageNet can be found here.
- `args.py`: Contains all command-line options.
- `main.py`: Main script file used for training and/or testing the model.
- `callbacks.py`: Custom callbacks are defined here (see the sketch after this list).
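As an illustration of the kind of callback `callbacks.py` might contain, here is a hedged sketch of a step learning-rate decay callback matching the `--lr-decay`/`--lr-decay-epochs` options. The class name and details are assumptions, not the repository's actual code.

```python
# Assumed sketch of a step learning-rate decay callback; the real callbacks in
# callbacks.py may be implemented differently.
from keras import backend as K
from keras.callbacks import Callback

class StepLRDecay(Callback):
    """Multiplies the learning rate by `decay` every `decay_epochs` epochs."""

    def __init__(self, initial_lr, decay, decay_epochs):
        super().__init__()
        self.initial_lr = initial_lr
        self.decay = decay
        self.decay_epochs = decay_epochs

    def on_epoch_begin(self, epoch, logs=None):
        lr = self.initial_lr * (self.decay ** (epoch // self.decay_epochs))
        K.set_value(self.model.optimizer.lr, lr)  # update the optimizer's lr variable
```

Such a callback would typically be passed to `model.fit(..., callbacks=[StepLRDecay(lr, decay, decay_epochs)])` alongside the usual `ModelCheckpoint`.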