Chandhini Grandhi, cgrandhi@ucsd.edu
In this project, I worked on improving my Project 3 (image-to-image translation using the pix2pix model) and combined it with style transfer to generate stylized images from sketches. The project takes in a dataset of face images obtained from the CUHK dataset. It consists of two phases. The first model is a pix2pix generative adversarial network (GAN) that takes in a sketch, performs the required processing, and generates a photo from it; essentially, this step translates edges to faces. The second model is neural style transfer, whose content image is the face generated by the pix2pix model and whose style image is chosen by the user. The final output is a stylized version of the face image generated from the edges.
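To make the second phase concrete, the sketch below shows the core of a neural style transfer loss in TensorFlow, where the content image would be the face produced by the pix2pix phase and the style image is the one chosen by the user. This is a minimal sketch assuming a VGG19 feature extractor; the layer names and loss weights are illustrative and may differ from the exact settings in style_transfer.ipynb.

```python
import tensorflow as tf

# Layer choices and loss weights are illustrative assumptions, not necessarily
# the exact settings used in style_transfer.ipynb.
CONTENT_LAYERS = ["block5_conv2"]
STYLE_LAYERS = ["block1_conv1", "block2_conv1", "block3_conv1",
                "block4_conv1", "block5_conv1"]

def build_extractor():
    """VGG19 model that returns content-layer and style-layer activations."""
    vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
    vgg.trainable = False
    outputs = [vgg.get_layer(n).output for n in CONTENT_LAYERS + STYLE_LAYERS]
    return tf.keras.Model(vgg.input, outputs)

def gram_matrix(feats):
    """Channel-wise Gram matrix, used to compare style statistics."""
    gram = tf.linalg.einsum("bijc,bijd->bcd", feats, feats)
    hw = tf.cast(tf.shape(feats)[1] * tf.shape(feats)[2], tf.float32)
    return gram / hw

def style_content_loss(extractor, generated, content_img, style_img,
                       content_weight=1e4, style_weight=1e-2):
    """Weighted content + style loss.

    All images are batched float tensors already preprocessed with
    tf.keras.applications.vgg19.preprocess_input.
    """
    n = len(CONTENT_LAYERS)
    gen = extractor(generated)
    content = extractor(content_img)
    style = extractor(style_img)
    content_loss = tf.add_n([tf.reduce_mean(tf.square(g - c))
                             for g, c in zip(gen[:n], content[:n])])
    style_loss = tf.add_n([tf.reduce_mean(tf.square(gram_matrix(g) - gram_matrix(s)))
                           for g, s in zip(gen[n:], style[n:])])
    return content_weight * content_loss + style_weight * style_loss
```

The stylized output is then obtained by optimizing the generated image's pixels (e.g., with gradient descent) to minimize this loss.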
I first built the models and experimented with the dataset. Then I used sketches drawn by my friends (available in data/user-images), generated standalone faces from these user sketches, and performed style transfer on them.
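For the user-drawn sketches, the input has to match what the pix2pix generator was trained on. The snippet below is a hedged sketch of such a preprocessing step with OpenCV, assuming a 256x256 input resolution; the file names and thresholding step are illustrative, not the exact code used to produce data/user-images.

```python
import cv2

def prepare_user_sketch(in_path, out_path, size=256):
    """Convert a hand-drawn sketch into a clean, square image for the pix2pix model."""
    img = cv2.imread(in_path, cv2.IMREAD_GRAYSCALE)            # load the drawing
    img = cv2.resize(img, (size, size))                        # assumed pix2pix input size
    _, img = cv2.threshold(img, 200, 255, cv2.THRESH_BINARY)   # remove scan/photo noise
    img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)                # back to 3 channels
    cv2.imwrite(out_path, img)

# Hypothetical file names for illustration only.
prepare_user_sketch("data/user-images/sketch.jpg", "data/user-images/sketch_prepared.jpg")
```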
The report is available here
Files included in this repository:
- data: input images and generated output images from the pix2pix model
- trained models: single_test contains the trained checkpoint for the pix2pix model
- pix2pixtensorflow: contains the cloned pix2pix-tensorflow repository
Code for generating the project:
- pix2pix model: followed the steps in the pix2pix-tensorflow repository (an example test command is shown after this list)
- Style transfer model: style_transfer.ipynb
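As a pointer for reproducing the edges-to-faces step, the cell below shows how the pix2pix-tensorflow test mode could be invoked from Colab against the trained checkpoint in single_test. The flags follow the pix2pix-tensorflow README, but the input and output paths are assumptions about this repository's layout.

```python
# Example Colab cell; paths are illustrative.
!python pix2pixtensorflow/pix2pix.py --mode test --input_dir data/val --output_dir data/output --checkpoint single_test
```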
Two versions of results are shown below:
- Generated stylized images from the validation dataset during testing
- Generated stylized images from user inputs (my friends)
- The code runs on Google Colab
- The code requires pip and the TensorFlow and OpenCV libraries to run (an example install command is shown below)
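The dependencies can be installed in the Colab environment with a single pip command; the package names below are the standard PyPI names, and versions can be pinned as needed.

```python
# Example Colab cell for installing the required libraries.
!pip install tensorflow opencv-python
```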
References to papers, techniques, and repositories used: