AdvSegLoss

We provide a PyTorch implementation for:

Adversarial Segmentation Loss for Sketch Colorization [ICIP 2021]

arXiv

AdvSegLoss:

Visual comparison with other methods:

[comparison figure]

FID score comparison with other methods:

[FID comparison figure]

Prerequisites

  • Linux, macOS or Windows
  • Python 3.7
  • CPU or NVIDIA GPU + CUDA CuDNN

Getting Started

Downloading Datasets

Please refer to datasets.md for details.

Installation

  • Clone this repo:
git clone https://github.com/giddyyupp/AdvSegLoss.git
cd AdvSegLoss

  • Install PyTorch 1.7.0 and torchvision, e.g. with conda:
conda install pytorch=1.7.0 torchvision cudatoolkit=11.0 -c pytorch

Or install all dependencies using pip:

pip install -r requirements.txt

Download the panoptic segmentation model from the Detectron2 model zoo, and put it under models/segmentation/detectron2.

After the installation is complete, you need to change two files in detectron2. The required changes are provided in ./models/segmentation/detectron2/changes.diff.
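
A minimal sketch of these two steps, assuming detectron2 is available as a local clone and the downloaded weights file is named model_final_panoptic.pkl (the clone path and the file name are placeholders, not names taken from this repo):

# Place the downloaded panoptic segmentation weights (file name is hypothetical)
mkdir -p models/segmentation/detectron2
mv ~/Downloads/model_final_panoptic.pkl models/segmentation/detectron2/

# Apply the shared changes to the detectron2 sources (clone path is hypothetical)
cd /path/to/detectron2
git apply /path/to/AdvSegLoss/models/segmentation/detectron2/changes.diff
cd -

If git apply rejects the file, running patch -p1 < changes.diff inside the detectron2 clone is a common alternative.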

AdvSegLoss train/test

  • Download a dataset (e.g. bedroom) and generate edge maps:

Follow the steps in datasets.md to download and prepare the datasets; a sketch of the expected directory layout is shown below.
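
Since training uses --dataset_mode "unaligned", the dataroot is expected to follow a CycleGAN-style A/B split like the sketch below; this layout is an assumption, so check datasets.md to see which domain holds the edge maps and which holds the color images:

datasets/ade20k_hed/
    trainA/   # unpaired training images for domain A
    trainB/   # unpaired training images for domain B
    testA/    # test images for domain A
    testB/    # test images for domain B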

  • Train a model with unpaired training:
#!./scripts/train_advsegloss.sh
python train.py --checkpoints_dir ./checkpoints --name ade20k_hed_advsegloss_both --dataroot ./datasets/ade20k_hed --model cycle_gan --segmentation --segmentation_output "both" --direction "AtoB" --dataset_mode "unaligned"
  • To view training results and loss plots, run python -m visdom.server and open the URL http://localhost:8097. To see more intermediate results, check out ./checkpoints/ade20k_hed_advsegloss_both/web/index.html
  • Test the model:
#!./scripts/test_advsegloss.sh
python test.py --checkpoints_dir ./checkpoints --name ade20k_hed_advsegloss_both --dataroot ./datasets/ade20k_hed --model test --segmentation --segmentation_output "both" --direction "AtoB" --dataset_mode "unaligned"

The test results will be saved to an HTML file at ./results/ade20k_hed_advsegloss_both/latest_test/index.html.

You can find more scripts in the scripts directory.

Apply a pre-trained model

  • You can download the pretrained models using the following link.

Put a pretrained model under ./checkpoints/{name}_pretrained/100_net_G.pth.
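
For example, assuming the downloaded generator weights for the ade20k_hed_advsegloss_both experiment are in ~/Downloads (the experiment name and download location are placeholders):

mkdir -p ./checkpoints/ade20k_hed_advsegloss_both_pretrained
mv ~/Downloads/100_net_G.pth ./checkpoints/ade20k_hed_advsegloss_both_pretrained/100_net_G.pth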

  • Then generate the results using
python test.py --dataroot datasets/ade20k_hed/testB --name {name}_pretrained --model test --segmentation --segmentation_output "both" --direction "AtoB" --dataset_mode "unaligned"

The results will be saved at ./results/. Use --results_dir {directory_path_to_save_result} to specify the results directory.
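
For example, to write the outputs to a custom directory instead of ./results/ (the target path below is only an illustration):

python test.py --dataroot datasets/ade20k_hed/testB --name {name}_pretrained --model test --segmentation --segmentation_output "both" --direction "AtoB" --dataset_mode "unaligned" --results_dir ./my_results/ade20k_hed_pretrained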

Best practices for training and testing your models.

Before you post a new question, please first check the existing GitHub issues.

Acknowledgments

Our code is based on GANILLA.

The numerical calculations reported in this work were fully performed at TUBITAK ULAKBIM, High Performance and Grid Computing Center (TRUBA resources).
