A simple implementation of pix2pix for an image-to-image translation task. The reference project is pytorch-CycleGAN-and-pix2pix.
It was trained on the facades and edges2shoes datasets.
To train a model from scratch, put a dataset into the home folder, then run:

```bash
python main.py --from_checkpoint=False --training=True --testing=False
```
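The boolean flags are passed as `True`/`False` strings, so `main.py` presumably converts them explicitly (a plain `type=bool` in argparse would treat any non-empty string as true). Below is a minimal sketch of how such flags might be parsed; the `str2bool` helper and the defaults are assumptions, not the repository's actual code.

```python
import argparse

def str2bool(value):
    """Interpret 'True'/'False' command-line strings as booleans."""
    return str(value).lower() in ("true", "1", "yes")

# Sketch of a possible argument parser; names mirror the commands in this README.
parser = argparse.ArgumentParser(description="pix2pix training/testing entry point (sketch)")
parser.add_argument("--from_checkpoint", type=str2bool, default=True,
                    help="load a downloaded checkpoint instead of starting from scratch")
parser.add_argument("--training", type=str2bool, default=True,
                    help="run the training loop")
parser.add_argument("--testing", type=str2bool, default=False,
                    help="run evaluation on the test split")
args = parser.parse_args()
print(args)
```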
To use a pretrained model, first download a checkpoint. Choose a dataset: `DATASET=facades`, `DATASET=edges`, or `DATASET=both`, then run the following commands:
```bash
cd scripts
bash download_checkpoints.sh
```
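Conceptually, the `DATASET` variable just selects which checkpoint files to fetch. The sketch below illustrates that selection logic in Python; the base URL and file names are placeholders, not the actual hosting location, which is defined in `scripts/download_checkpoints.sh`.

```python
import os

# Placeholder URL; the real checkpoint location is set in scripts/download_checkpoints.sh.
BASE_URL = "https://example.com/pix2pix-checkpoints"
dataset = os.environ.get("DATASET", "facades")

# Map the DATASET choice to the checkpoint names to download.
names = {"facades": ["facades"], "edges": ["edges"], "both": ["facades", "edges"]}[dataset]
os.makedirs("checkpoints", exist_ok=True)
for name in names:
    destination = os.path.join("checkpoints", f"{name}.pth")
    print(f"downloading {BASE_URL}/{name}.pth -> {destination}")
    # urllib.request.urlretrieve(f"{BASE_URL}/{name}.pth", destination)  # enable with a real URL
```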
To evaluate the model, run:

```bash
python main.py --training=False --testing=True
```
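At a high level, evaluation loads the trained generator from a checkpoint and maps each source-modality image to a predicted target-modality image. The sketch below illustrates that flow with a stand-in generator; the checkpoint path, class name, and image size are assumptions, not the repository's actual API.

```python
import torch
from torch import nn

class TinyGenerator(nn.Module):
    """Stand-in for the pix2pix U-Net generator; the real model is much deeper."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # pix2pix outputs images in [-1, 1]
        )

    def forward(self, x):
        return self.net(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
generator = TinyGenerator().to(device)
# Checkpoint path is an assumption; point it at whatever download_checkpoints.sh saved.
# generator.load_state_dict(torch.load("checkpoints/facades.pth", map_location=device))
generator.eval()

with torch.no_grad():
    source = torch.randn(1, 3, 256, 256, device=device)  # source-modality image in [-1, 1]
    predicted = generator(source)                         # predicted target modality
print(predicted.shape)  # torch.Size([1, 3, 256, 256])
```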
Here are several examples obtained with these checkpoints:
- Left image: source modality;
- Middle image: predicted modality;
- Right image: target modality.
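This side-by-side layout can be reproduced by concatenating the three panels into one image; a minimal sketch using torchvision is shown below, with random tensors standing in for real images.

```python
import torch
from torchvision.utils import make_grid, save_image

# Compose source / predicted / target panels into one left-middle-right image.
# Random tensors stand in for real (3, H, W) images in [-1, 1].
source = torch.randn(3, 256, 256)
predicted = torch.randn(3, 256, 256)
target = torch.randn(3, 256, 256)

grid = make_grid([source, predicted, target], nrow=3, normalize=True)
save_image(grid, "example_triptych.png")
```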