
Baselines for HAIR: A Dataset of Historic Aerial Images of Riverscapes for Semantic Segmentation


SaeidShamsaliei/HAIR


HAIR: A Dataset of Historic Aerial Images of Riverscapes for Semantic Segmentation

Training

MagNet

Training MagNet has two steps:

Training backbone networks

  1. Make the virtual environment with the requirements.
  2. Activate the environment. For example:
cd <path to the directory>/MagNet-main/

source <env name>/bin/activate
  3. Modify the config file at <path to the directory>/MagNet-main/backbone/experiments/deepglobe_river
  4. Run the script:
# To train without Stochastic Weight Averaging:
python train.py --cfg experiments/deepglobe_river/hairs_config_seed4.yaml --seed 4

# To use Stochastic Weight Averaging:
python train_swa.py --cfg experiments/deepglobe_river/hairs_config_seed4.yaml --seed 4
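The config filename encodes the seed, so the two invocations above generalize to other seeds. A sketch of how the command line for a given seed can be assembled; the per-seed config names hairs_config_seed<N>.yaml are an assumption generalizing from the seed-4 example:

```python
def train_command(seed, swa=False):
    """Build the backbone training command for one seed.

    Uses train_swa.py when Stochastic Weight Averaging is requested,
    plain train.py otherwise. The per-seed config filename is assumed
    to follow the hairs_config_seed<N>.yaml pattern shown above.
    """
    script = "train_swa.py" if swa else "train.py"
    cfg = f"experiments/deepglobe_river/hairs_config_seed{seed}.yaml"
    return ["python", script, "--cfg", cfg, "--seed", str(seed)]
```

For example, `train_command(4)` reproduces the first command above, and `train_command(4, swa=True)` the SWA variant.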

Training refinement modules

Modify the parameters in the script <path to the directory>/MagNet-main/scripts/deepglobe_river/train_magnet.sh, then run it.

Inference

Modify the script MagNet-main/prediction_from_dir.py. The parameters are similar to those of the original MagNet.

python prediction_from_dir.py --progressive_flag 1 \
--dataset deepglobe_river \
--scales 612-612,1224-1224,2448-2448 \
--crop_size 612 612 \
--input_size 508 508 \
--model fpn \
--pretrained <your path> \
--pretrained_refinement <your path> \
--num_classes 6 \
--n_points 0.75 \
--n_patches -1 \
--smooth_kernel 11 \
--sub_batch_size 1
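The --scales argument packs three progressively finer resolutions into one string, each dimension doubling from 612 up to the full 2448-pixel tile. A hypothetical helper illustrating how such a string decomposes, not the repository's own parser:

```python
def parse_scales(arg):
    """Split a MagNet-style --scales string such as
    '612-612,1224-1224,2448-2448' into (width, height) tuples."""
    return [tuple(int(v) for v in scale.split("-")) for scale in arg.split(",")]

# parse_scales("612-612,1224-1224,2448-2448")
# -> [(612, 612), (1224, 1224), (2448, 2448)]
```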

All the models except MagNet

Training the networks

  1. Make the environment by building a docker container or setting up a conda environment from the requirements. If using docker:

# move to the directory
cd <path to the directory>/Other_models/

# To build the docker container
docker build -t saeids/segmentation segmentation/

# To start the container
docker run --rm -it -v <path to workspace data>:/workspace/data -v <path to source code>:/workspace/code -v <path to data>:/data -p <your port of choice>:8888 --gpus device=<your device of choice> saeids/segmentation /bin/bash
  2. Activate the environment. For example:
conda activate tf_gpu_env
  3. Install the required libraries:
# If using docker:
sh /workspace/code/Other_models/source/installation.sh
  4. Modify the training script: Other_models/source/run_from_dir_pretrain.py to fine-tune from weights pre-trained on grayscale DeepGlobe, or Other_models/source/run_from_dir_with_args.py to train from scratch.

  5. Train the model:

cd <path to the directory>/Other_models/source
# train
python run_from_dir_pretrain.py
# or
python run_from_dir_with_args.py

Inference

  1. Modify the config file at Other_models/source/configs/prediction_config.yaml

  2. Run the script:

# make sure the directory is correct
cd <path to the directory>/Other_models/source

python run_predictions_costum.py

Evaluation

MagNet

  1. Make sure to follow the inference steps above so that predictions exist for all images.
  2. Modify the paths in the script Other_models/source/Nips_test_miou_magnet.py
  3. Run the script:
# make sure the directory is correct
cd <path to the directory>/Other_models/source

python Nips_test_miou_magnet.py

All the models except MagNet

  1. Modify the paths in the script Other_models/source/Nips_test_miou.py
  2. Run the script:
# make sure the directory is correct
cd <path to the directory>/Other_models/source

python Nips_test_miou.py
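Both evaluation scripts report mean intersection-over-union. For reference, mIoU over the dataset's 6 classes can be computed from flat label sequences as below; this is a generic sketch, not the code in Nips_test_miou.py:

```python
def mean_iou(pred, target, num_classes=6):
    """Mean IoU over classes present in prediction or ground truth.

    pred and target are flat sequences of integer class labels;
    classes absent from both are skipped so they do not distort the mean.
    """
    ious = []
    for c in range(num_classes):
        inter = sum(1 for p, t in zip(pred, target) if p == c and t == c)
        union = sum(1 for p, t in zip(pred, target) if p == c or t == c)
        if union:
            ious.append(inter / union)
    return sum(ious) / len(ious)
```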

Grayscale DeepGlobe Pre-trained weights

The weights of models pre-trained on grayscale DeepGlobe can be found under pretrained weights.
