
Controlling Geometric Abstraction and Texture for Artistic Images

Official PyTorch code for controlling texture and geometric abstraction in artistic images, e.g., the results of neural style transfer.

Controlling Geometric Abstraction and Texture for Artistic Images
Martin Büßemeyer*1, Max Reimann*1, Benito Buchheim1, Amir Semmo2, Jürgen Döllner1, Matthias Trapp1
1Hasso Plattner Institute, University of Potsdam, Germany, 2Digitalmasterpieces GmbH, Germany
*equal contribution
in Cyberworlds 2023

Main Idea

We propose a method to control the geometric abstraction (coarse features) and texture (fine details) of artistic images (such as results of Neural Style Transfer) separately from one another. We implement a stylization pipeline that geometrically abstracts the image in its first stage and adds the texture back in its second stage. The stages make use of the following abstraction and stylization methods:

  • Geometric Abstraction:
    • Image Segmentation (we use SLIC) or
    • Neural Painters (we use the PaintTransformer)
  • Texture Control:
    • Differentiable effect pipelines (WISE). These represent an image style in the artistic control parameters of image filters.
      We introduce a lightweight differentiable "Arbitrary Style Pipeline" that is capable of representing the texture previously "lost" to the geometric abstraction and is convenient to edit.
[Figure: overview of the texture decomposition (tex_decomp)]

To acquire the filter parameters of the texture control stage, texture decomposition is performed either with an optimization-based approach that tunes the parameters until the effect output resembles a target image, or by training a network to predict them (both approaches were introduced by WISE).
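
To make this concrete, here is a minimal PyTorch sketch of the optimization-based variant. The names effect and num_params, and the tensor shapes, are illustrative assumptions rather than the repository's actual API:

import torch

# Minimal sketch: `effect` stands in for a differentiable filter pipeline
# (e.g., the Arbitrary Style Pipeline); its real interface differs.
def decompose_texture(effect, first_stage_img, target_img, n_iterations=500):
    h, w = first_stage_img.shape[-2:]
    # One parameter mask channel per effect parameter, optimized directly.
    params = torch.zeros(1, effect.num_params, h, w, requires_grad=True)
    optimizer = torch.optim.Adam([params], lr=1e-2)
    for _ in range(n_iterations):
        optimizer.zero_grad()
        output = effect(first_stage_img, params)                # render with current masks
        loss = torch.nn.functional.l1_loss(output, target_img)  # L1 reconstruction loss
        loss.backward()
        optimizer.step()
    return params.detach()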

After texture decomposition, various editing steps can be taken (see below).

Features / Capabilities:

  • Optimization-based texture decomposition:
    • The default loss is an L1 reconstruction loss to the target image, but others, such as the Gatys style loss, can be used.
    • Slow and non-interactive, but agnostic to the stylization method.
  • Parameter Prediction Networks (PPNs):
    • The PPN predicts the parameters of the differentiable effect and is trained on one or multiple styles (see the sketch after this list).
    • We demonstrate this for Fast / Feed-Forward NST (Johnson NST) and Arbitrary Style Transfer (SANet).
    • Benefits: fast, usable for interactive applications.
    • Johnson NST is used by the provided editing prototype.
  • Editing and Flexibility:
    • Independent editing of geometric abstraction (edit first stage output) and texture (edit parameter masks of second stage)
    • Editing of styles using text prompts; here we use CLIP-based losses, similar to CLIPStyler.
    • Mixing different styles: use a different style for the geometric abstraction and the texture. Quality depends on the style combination.
    • When using PPNs: automatic adaptation of the texture / parameter masks after changing the geometric abstraction, by repredicting the parameter masks.
    • Some editing use cases: (re)move a misplaced style element, recover content information, locally / globally change the brushstroke texture.
    • The benefits can be explored by using the provided editing prototypes.
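
As a rough illustration of the PPN-based workflow (johnson_nst, first_stage, ppn, and effect are placeholders for this sketch, not the repository's actual modules):

import torch

# All callables are passed in as placeholders for illustration.
def stylize(content_img, johnson_nst, first_stage, ppn, effect):
    with torch.no_grad():
        nst_img = johnson_nst(content_img)      # feed-forward NST of the content image
        abstracted = first_stage(nst_img)       # geometric abstraction, e.g., SLIC segments
        param_masks = ppn(abstracted)           # PPN predicts the effect parameter masks
        return effect(abstracted, param_masks)  # texture stage adds the fine details back

# After editing the abstracted image, calling ppn and effect again repredicts
# the parameter masks so the texture adapts to the edit.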

Differentiable Effect Pipelines:

Pipelines included in this repository are:

  • Arbitrary Style Pipeline (proposed stylization method, the default for this repository)
  • XDoG (from WISE)

The modes of processing are:

  • Parameter Optimization (Parametric Style Transfer and Checks for individual parameters)
  • Parameter Prediction (Predict parameter masks using a PPN)

NST Editing GUI

This GUI enables fine-granular editing of NSTs using our proposed arbitrary style pipeline.

  • It uses a parameter prediction network (PPN) trained on a single style to predict and edit the parameters of a feed-forward style transfer.
  • It lets the user edit the geometric abstraction and the texture separately to influence specific aspects of the image.

Setup

Install the packages from requirements.txt (e.g., pip install -r requirements.txt)

Run python -m streamlit run tools/arbitrary_style_pipeline_editing_app.py

NST Editing Workflow

We provide a near-minimal set of editing tools to show the advantages of our method:

  • Property Masks: To enable quick selection of important regions, we predict a depth and a saliency mask that can be thresholded (see the sketch after this list).
  • Color Palette Matching: Matches the color palette of the source region to the destination region via histogram matching, e.g., to adopt the color scheme of a different region.
  • Color Interpolation: Interpolates the region with a flat color or with the content image.
  • Copy Region: Copies the source region to the center of the destination region, e.g., to copy a style element to another region.
  • Change Level-of-Detail: Executes a re-segmentation of the selected region to control the size of the geometric abstraction.
  • Re-Predict Parameter Masks: Repredicts the parameter masks for the texture stage, so the new masks adapt to the performed geometric abstraction edits.
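
As an illustration, a minimal sketch of how a thresholded property mask could drive the Color Interpolation edit (depth_net, the threshold, and the 50/50 blend are assumptions for this example):

import torch

def interpolate_with_content(stylized, content_img, depth_net, threshold=0.5):
    # Threshold the predicted depth map to select an edit region
    # (`depth_net` is a placeholder for whichever depth predictor is used).
    with torch.no_grad():
        region = (depth_net(content_img) > threshold).float()
    # Blend the selected region halfway towards the content image.
    blended = 0.5 * stylized + 0.5 * content_img
    return region * blended + (1 - region) * stylized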
[Video: demonstration of the NST editing workflow (edit_vid_compressed.mp4)]

Please also see the supplementary video starting at 10:00 for a short demonstration of an editing workflow to correct NST artifacts.

The geometric abstraction editing tools operate on segments predicted by SLIC. For artistic geometry editing, try out the optimization-based prototype.
Note: There is currently no 'revert' functionality in the prototype. Furthermore, the prototype needs a GPU with more than 6 GB of memory for reasonably large images, as several CNNs are loaded into memory.

Optimization-based Editing GUI

This Streamlit app demonstrates optimization-based image synthesis and editing.

Setup

Make sure the packages specified in tools/global_edits/requirements.txt are installed; the exact streamlit and streamlit drawable versions are required. To run the demo, execute the following in the tools/global_edits/ directory:

python -m streamlit run global_editing_streamlit_app.py

Features

The capabilities are grouped in tabs, and include:

  • global parameter tuning
  • geometric abstraction control (PaintTransformer or SLIC)
  • (re-)optimization: optimize with different losses, such as
    • target-image-based: L1
    • image-based style transfer: Gatys Loss, STROTTS Loss
    • text-based style transfer: CLIPStyler Loss
  • prediction with an arbitrary-style parameter prediction network
  • interpolation of parameters using depth and saliency masks (see the sketch below)
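
For instance, the parameter interpolation could conceptually look like this minimal sketch (the variable names and min-max normalization are assumptions):

def interpolate_params(depth, params_near, params_far):
    # Normalize the predicted depth map to [0, 1] and use it as a blend weight:
    # near regions get params_near, far regions get params_far (names assumed).
    mask = (depth - depth.min()) / (depth.max() - depth.min())
    return mask * params_near + (1 - mask) * params_far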
[Video: editing example for geometric editing and CLIP-based texture optimization (tear_drop_optim.mp4)]

Please also see the supplementary video starting at 7:05 for a short demonstration of the global editing app.

Optimization and training

Parameter Optimization

Use the script parameter_optimization/parametric_styletransfer.py. Example usage:

python -m parameter_optimization.parametric_styletransfer --content path/to/content_img --style path/to/style_img 
--img_size 512 --output_dir general/output/dir --experiment_name subfolder/in/output_dir

This will first execute a STROTTS NST with the given content and style. The result is used as input for our arbitrary style pipeline. In the second step, image segmentation is used as the first pipeline stage, and the parameter masks for the texture stage are optimized.

Display script options: python -m parameter_optimization.parametric_styletransfer --help

Important Script Options:

  • --content path/to/content_img Path to the content image that should be used.
  • --style path/to/style_img Path to the style image that should be used.
  • --loss <one of: 'L1', 'CLIPStyler', 'GatysLoss', 'STROTTS'> The loss that should be used as optimization goal.
  • --nst_variant <one of: 'STROTTS', 'JohnsonNST'> Which NST variant should be used for the initial NST of the content image.
  • --first_stage_type <one of: 'PaintTransformer', 'Segmentation'> Which geometric abstraction should be used.
  • --clipstyler_text "My Text Prompt" The text prompt for the CLIPStyler loss.
  • --n_iterations 500 Number of optimization steps to perform. We recommend at least 100.
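
For example, a text-guided optimization run combining the options above might look as follows (the paths and the prompt are placeholders, and the exact flag combination is illustrative):

python -m parameter_optimization.parametric_styletransfer --content path/to/content_img --style path/to/style_img 
--loss CLIPStyler --clipstyler_text "an oil painting with thick brushstrokes" --n_iterations 500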

Pre-Trained Parameter Prediction Network Weights

Currently available via Google Drive: https://drive.google.com/drive/folders/1mB6dhK-qzy6aESSKgMLBIrO9dTYc2eti?usp=sharing

Training a Parameter Prediction Network

You can train your own Parameter Prediction Networks if the pretrained models do not suit you. Keep in mind that this requires at least two GPUs with ~20 GB of GPU RAM. We use the MS COCO dataset for content images during training. Each Single Style Parameter Prediction Network needs a corresponding Johnson NST Network, as this network generates the input for the arbitrary style pipeline.

Training a Johnson NST Network:

python -m parameter_prediction_network.johnson_nst.train --batch_size 16 --lr 5e-4 --logs_dir ./logs 
--style path/to/style --architecture johnson_instance_norm --dataset_path path/to/ms_coco --group_name <group_name> 
--img_size 256 --style_weight 5e10 --grad_clip 1e6 --epochs 12 --disable_logger

Training a Single Style Parameter Prediction Network:

python -m parameter_prediction_network.train --batch_size 16 --lr 5e-4 --logs_dir ./logs 
--style path/to/style_img --architecture johnson_instance_norm --dataset_path path/to/ms_coco --group_name <group_name> 
--img_size 256 --style_weight 5e10 --johnson_nst_model path/to/johnson_nst_weights --num_train_gpus <num_training_gpus - 1> --epochs 12

Important Script Options:

  • --content path/to/content_img Path to the content image that should be used.
  • --style path/to/style_img Path to the style image that should be used.
  • --img_size 256 Size of the content images during training.
  • --style_img_size 256 Size of the style image for loss calculation purposes. Changing this parameter might lead to bad stylization results.
  • --architecture johnson_instance_norm Changes the network architecture of the trained model. See the list of available architectures in parameter_prediction_network/ppn_architectures/__init__.py.
  • --num_train_gpus Number of GPUs to use for training. When training a PPN, this must be at least one lower than the actual GPU count, as one additional GPU is used for the initial Johnson NST of the content images (e.g., pass 2 on a machine with three GPUs).
  • --johnson_nst_model PPN only: Path to the Johnson NST weights that should be used for the initial NST.

Training an Arbitrary Style Parameter Prediction Network works similarly. In addition to the MS COCO dataset, you will need the WikiArt dataset.

Code Acknowledgements

The code of this project is based on the following papers / repositories:

Sources of Images in Experiments Folder

Here is the list of style images we used to train PPNs and in our experiments. The download_style_imgs.sh script will download the files for you and put them into the designated directory experiments/target/popular_styles. The ones we used might be slightly different, since we used a different source depicting the same painting.

Questions?

Please do not hesitate to open an issue :).
