
Final Year Project -- Style-Attention-Void-Aware Style Transfer

YouTube | Report

A style-attention-void-aware style transfer model that learns the blank-leaving information during the style transfer.

Overview

(Figure: project overview)

Arbitrary-style-per-model fast neural style transfer has shown great potential in academic research. Although state-of-the-art algorithms achieve impressive visual quality and efficiency, they cannot handle the blank-leaving (or void) information found in certain artworks (e.g. traditional Chinese paintings). Existing algorithms always try to preserve the details of the image before and after the transformation, whereas such details are often deliberately left blank in these artworks.

This is my final year project, which aims to utilize the style attention map to learn voidness information during the style transfer process. The main contributions are a novel self-attention algorithm that extracts the voidness information from the content and style images, and a novel style transfer module, guided by the attention mask, that swaps the style.
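
To make the second contribution concrete, here is a minimal, illustrative sketch of an attention-mask-guided style swap (this is not the project's implementation; the AdaIN-style blending rule, tensor shapes, and names are all assumptions):

    # Hedged sketch: attention-mask-guided style blending (not the SAVA code).
    # Assumption: encoder features are (B, C, H, W) tensors and the void mask is a
    # soft per-pixel weight in [0, 1] marking blank-leaving regions.
    import torch
    import torch.nn.functional as F

    def adain(content_feat, style_feat, eps=1e-5):
        """Re-normalise content features with the style's channel-wise statistics."""
        c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
        c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
        s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
        s_std = style_feat.std(dim=(2, 3), keepdim=True) + eps
        return s_std * (content_feat - c_mean) / c_std + s_mean

    def masked_style_swap(content_feat, style_feat, void_mask):
        """Blend stylised and original features with a soft void mask (assumed rule)."""
        stylised = adain(content_feat, style_feat)
        mask = F.interpolate(void_mask, size=content_feat.shape[2:], mode="bilinear",
                             align_corners=False)
        # Void regions (mask close to 1) keep more of the original content feature.
        return mask * content_feat + (1.0 - mask) * stylised

    # Toy usage with random tensors standing in for encoder features.
    c, s = torch.randn(1, 512, 32, 32), torch.randn(1, 512, 32, 32)
    m = torch.rand(1, 1, 32, 32)
    print(masked_style_swap(c, s, m).shape)  # torch.Size([1, 512, 32, 32])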

Installation

  • Environment: Ubuntu 20.04, NVIDIA GeForce GTX 1080 Ti

    conda env create -f env.yml
    conda activate Sava
  • Download the datasets

    • Content dataset: MS-COCO is used to train the self-attention module and SAVA-Net.
    • Style dataset: WikiArt is used to train the SAVA-Net.

Usage

Test

  1. Clone this repository

    git clone https://github.com/dehezhang2/Final_Year_Project.git
    cd Final_Year_Project
  2. Prepare your content and style images: save the content image to ./testing_data/content and the style image to ./testing_data/style. Some sample images are already provided in these two directories.

  3. Open the graphical user interface

    • Run the following commands

      cd ./codes/transfer/
      streamlit run demo.py
    • Click the URL (or use forwarded ports)

      (Screenshot: local URL printed by Streamlit)

  4. Choose the content and style images

    (Screenshot: content and style image selection)

  5. Click the Start Transfer button. The attention maps, attention masks, and relative frequency maps of the content and style images will be visualised, and the stylised output will be shown.

    (Screenshot: visualised attention maps, masks, frequency maps, and the output)

  6. You can find the transfer output and attention maps in ./testing_data/result.

  7. Feel free to add more images to the ./testing_data/content/ and ./testing_data/style/ folders to explore the results!

Train

  1. Clone this repository

    git clone https://github.com/dehezhang2/Final_Year_Project.git
    cd Final_Year_Project
  2. Download the training datasets and arrange the files as follows

    • All content images should be placed in the directory ./training_data/content_set/val2014
    • All style images should be placed in the directory ./training_data/style_set/val2014
  3. Filter the images using the two Python scripts

    cd ./codes/data_preprocess/
    python filter.py
    python filter_percentage.py
  4. We have two training phases (see the illustrative sketch after this list):

    (Figure: the two training phases)

    • Phase I training: train the self-attention module
    cd ./codes/transfer/
    python train_attn.py --dataset_dir ../../training_data/content_set/val2014
    • Phase II training: train the style transfer module
    python train_sava.py --content_dir ../../training_data/content_set/val2014 --style_dir ../../training_data/style_set/val2014 --save_dir ../../models/sava_training_hard
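
The actual objectives are defined in train_attn.py and train_sava.py. Purely as a rough, hedged illustration of what such a two-phase setup typically optimises (the loss terms, feature layers, and weights below are assumptions, not the project's values):

    # Illustrative two-phase objectives (assumptions, not taken from the repository).
    import torch
    import torch.nn as nn

    mse = nn.MSELoss()

    def phase1_loss(reconstructed, original):
        # Phase I (self-attention module): a reconstruction-style loss on content
        # images only, as in AAMS-like setups.
        return mse(reconstructed, original)

    def phase2_loss(out_feats, content_feats, style_feats, style_weight=10.0):
        # Phase II (style transfer module): content loss on the deepest feature map,
        # style loss on channel-wise statistics of every layer (a common
        # perceptual-loss recipe; the weight is a placeholder).
        content_loss = mse(out_feats[-1], content_feats[-1])
        style_loss = 0.0
        for o, s in zip(out_feats, style_feats):
            style_loss = style_loss + mse(o.mean(dim=(2, 3)), s.mean(dim=(2, 3)))
            style_loss = style_loss + mse(o.std(dim=(2, 3)), s.std(dim=(2, 3)))
        return content_loss + style_weight * style_loss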

Result

Here is a comparison of the self-attention map used in AAMS (a) and our result (b):

(Figure: self-attention map comparison, AAMS (a) vs. ours (b))

Some results on content-style pairs are shown below: (a) is our algorithm with attention masks, (b) is SA-Net:

(Figure: qualitative comparison of stylisation results)

Note

Although this project makes two contributions to style transfer theory, it still has limitations:

  • The principles behind some design choices cannot be fully explained by theory.
    • The feature map projection methods (ZCA whitening for the attention map, AdaIN for style transfer); see the sketch after this list.
    • The method used to train the self-attention module (similar to AAMS).
  • Computational resources were limited.
    • The VGG decoder may not be properly trained.
    • It is difficult to add an attention loss to match the statistics of the style and output attention maps.
    • It is difficult to divide the attention map into more clusters.
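
For reference, the following is a generic sketch of ZCA whitening applied to a feature map, the projection mentioned above for the attention map (AdaIN is sketched in the Overview section); it is an assumption-level illustration, not the project's exact code:

    # Generic ZCA whitening of a (C, H, W) feature map (illustrative only).
    import torch

    def zca_whiten(feat, eps=1e-5):
        c, h, w = feat.shape
        x = feat.reshape(c, -1)
        x = x - x.mean(dim=1, keepdim=True)
        # Channel-wise covariance, regularised for numerical stability.
        cov = x @ x.t() / (h * w - 1) + eps * torch.eye(c)
        evals, evecs = torch.linalg.eigh(cov)
        whitener = evecs @ torch.diag(evals.clamp(min=eps).rsqrt()) @ evecs.t()
        return (whitener @ x).reshape(c, h, w)

    # Toy usage: after whitening, channel correlations are (approximately) removed.
    print(zca_whiten(torch.randn(64, 16, 16)).shape)  # torch.Size([64, 16, 16])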

Acknowledgement

  • I am grateful to the authors of AAMS and SA-Net; this project benefits greatly from both their papers and code.
  • Thanks to Dr. Jing LIAO, who has provided many insightful suggestions, such as the use of style attention, the soft correlation mask, and an attention loss to match the voidness statistics. I would also like to express my sincere appreciation to Kaiwen Xue, who has contributed many intelligent ideas to this project and helped with part of the implementation.

Contact

If you have any questions or suggestions about this project, feel free to contact me by email at dehezhang2@gmail.com.

LICENSE

The code is released under the GPL-3.0 license.
