This repo holds the official code for the paper "Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer", accepted at CVPR 2021.
Artistic style transfer aims at migrating the style from an example image to a content image. Currently, optimization-based methods have achieved great stylization quality, but their expensive time cost restricts practical applications. Meanwhile, feed-forward methods still fail to synthesize complex styles, especially when holistic global and local patterns exist. Inspired by the common painting process of drawing a draft and revising the details, this paper introduces a novel feed-forward method, Laplacian Pyramid Network (LapStyle). LapStyle first transfers the global style pattern at low resolution via a Drafting Network. It then revises the local details at high resolution via a Revision Network, which hallucinates a residual image according to the draft and the image textures extracted by Laplacian filtering. Higher-resolution details can be easily generated by stacking Revision Networks with multiple Laplacian pyramid levels. The final stylized image is obtained by aggregating the outputs of all pyramid levels. We also introduce a patch discriminator to better learn local patterns adversarially. Experiments demonstrate that our method can synthesize high-quality stylized images in real time, where holistic style patterns are properly transferred.
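The drafting-and-revision design rests on standard Laplacian pyramid arithmetic: a low-resolution base image (the draft) plus per-level high-frequency residuals (the textures the Revision Networks revise), added back level by level to rebuild the full-resolution result. The following is a minimal sketch of that decomposition and reconstruction using OpenCV and NumPy; it illustrates the pyramid math only and is not code from this repository, and the function names are our own.

```python
# Minimal sketch of the Laplacian pyramid arithmetic LapStyle builds on.
# Not repository code: the function names and the use of cv2.pyrDown/pyrUp
# are our own illustration of the decomposition/reconstruction steps.
import cv2
import numpy as np

def laplacian_decompose(img, levels=2):
    """Split an image into a low-resolution base plus one residual per level."""
    residuals, current = [], img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(current)
        up = cv2.pyrUp(down, dstsize=(current.shape[1], current.shape[0]))
        residuals.append(current - up)  # high-frequency textures lost by downsampling
        current = down
    return current, residuals           # coarse base, residuals ordered fine -> coarse

def laplacian_reconstruct(base, residuals):
    """Invert the decomposition: upsample the base and add the residuals back per level."""
    current = base
    for res in reversed(residuals):     # coarse -> fine
        current = cv2.pyrUp(current, dstsize=(res.shape[1], res.shape[0])) + res
    return np.clip(current, 0, 255).astype(np.uint8)

# In LapStyle, the Drafting Network stylizes the coarse base and each Revision
# Network predicts a stylized residual for its level; the final image is
# aggregated in the same way as laplacian_reconstruct above.
```

With `levels=2` and a 512*512 input, the base is 128*128 and the residuals live at 256*256 and 512*512, which matches the three training stages described below.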
Four style images are provided: StarryNew, Stars, Ocean and Circuit.
python applications/tools/lapstyle.py --content_img_path ${PATH_OF_CONTENT_IMG} --style_image_path ${PATH_OF_STYLE_IMG}
- `--content_img_path (str)`: path to the content image.
- `--style_image_path (str)`: path to the style image.
- `--output_path (str)`: path to the output image directory, default value: `output_dir`.
- `--weight_path (str)`: path to the model weight; if `weight_path` is `None`, the pre-trained model will be downloaded automatically, default value: `None`.
- `--style (str)`: style of the output image; if `weight_path` is `None`, `style` can be chosen from `starrynew`, `circuit`, `ocean` and `stars`, default value: `starrynew`.
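For example, with placeholder file names `photo.jpg` (content) and `starrynew.png` (style) in the current directory, the following command stylizes the content image with the automatically downloaded `starrynew` weights and writes the result to `output_dir`:

python applications/tools/lapstyle.py --content_img_path photo.jpg --style_image_path starrynew.png --style starrynew --output_path output_dir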
To train LapStyle, we use the COCO dataset as the content image set; you can change it to your own dataset in the config file. You can choose one style image from starrynew, ocean, stars or circuit, or use any other style image you like. Before training or testing, remember to modify the path of the style image in the config file.
Note that training the LapStyle model is not currently supported on Windows.
(1) Train the Drafting Network of LapStyle at 128*128 resolution:
python -u tools/main.py --config-file configs/lapstyle_draft.yaml
(2) Then, train the first Revision Network of LapStyle at 256*256 resolution:
python -u tools/main.py --config-file configs/lapstyle_rev_first.yaml --load ${PATH_OF_LAST_STAGE_WEIGHT}
(3) Further, you can train the second Revision Network at 512*512 resolution:
python -u tools/main.py --config-file configs/lapstyle_rev_second.yaml --load ${PATH_OF_LAST_STAGE_WEIGHT}
When testing, you need to change the parameter `save_img` under `validate` in the configuration file to `true` to save the output images.
To test the trained model, you can directly test "lapstyle_rev_second", since it also contains the trained weights of the previous stages:
python tools/main.py --config-file configs/lapstyle_rev_second.yaml --evaluate-only --load ${PATH_OF_WEIGHT}
Style | Stylized Results |
---|---|
We also provide several trained models.
model | style | path |
---|---|---|
lapstyle_circuit | circuit | lapstyle_circuit |
lapstyle_ocean | ocean | lapstyle_ocean |
lapstyle_starrynew | starrynew | lapstyle_starrynew |
lapstyle_stars | stars | lapstyle_stars |
@inproceedings{lin2021drafting,
title={Drafting and Revision: Laplacian Pyramid Network for Fast High-Quality Artistic Style Transfer},
author={Lin, Tianwei and Ma, Zhuoqi and Li, Fu and He, Dongliang and Li, Xin and Ding, Errui and Wang, Nannan and Li, Jie and Gao, Xinbo},
booktitle={Computer Vision and Pattern Recognition (CVPR)},
year={2021}
}