AI-Interior

AI-enabled Interior decoration app, powered by CycleGANs. Built for @NextTechLabAP's #9Hacks!

Using CycleGAN as discussed in [Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks](https://arxiv.org/abs/1703.10593), with source code cloned and modified from [xhujoy/CycleGAN-tensorflow](https://github.com/xhujoy/CycleGAN-tensorflow/).

We created and trained our own texture2interior model for this task.

The Model

The model learns a mapping between unpaired images by extracting the features of one image and transforming and imposing them on another. Learning such a mapping from an input domain to a target domain without paired training examples is very powerful and can be applied to a wide variety of real-life problems. This is achieved by a generative model, specifically a Generative Adversarial Network dubbed CycleGAN. Here are some examples for this model (picture taken from the original paper).

Sample models shown in paper
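To make the cycle-consistency idea concrete, here is a minimal sketch (not code from this repository) of the loss that forces a translation A → B → A to recover the original image. The generator names and the weight `lam=10.0` are assumptions taken from the paper's formulation, not from our code.

```python
import tensorflow as tf

def cycle_consistency_loss(real_a, real_b, generator_g, generator_f, lam=10.0):
    """L1 cycle loss: translating A -> B -> A (and B -> A -> B) should recover the input.

    generator_g maps domain A (e.g. texture) to domain B (interior) and
    generator_f maps B back to A. Both are callables taking and returning
    image batches; all names here are illustrative.
    """
    recovered_a = generator_f(generator_g(real_a))   # A -> B -> A
    recovered_b = generator_g(generator_f(real_b))   # B -> A -> B
    forward = tf.reduce_mean(tf.abs(real_a - recovered_a))
    backward = tf.reduce_mean(tf.abs(real_b - recovered_b))
    return lam * (forward + backward)
```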

Working Principle

Architecture diagrams of the overall model, the generator, and the discriminator (images taken from the original paper).

Our model

We used TensorFlow as the backend for our model and ResNet blocks for extracting features.
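As a rough illustration of the kind of ResNet block used in the generator, here is a simplified TensorFlow 1.x-style sketch. It is an assumption for illustration only: the actual implementation also uses instance normalisation and reflection padding, which are omitted here.

```python
import tensorflow as tf

def resnet_block(x, filters, name):
    """One residual block: two 3x3 convolutions plus a skip connection.

    `filters` must match the channel count of `x` so the skip addition works.
    Instance normalisation and reflection padding used in the real generator
    are left out to keep the sketch short.
    """
    with tf.variable_scope(name):
        y = tf.layers.conv2d(x, filters, kernel_size=3, padding="same", name="conv1")
        y = tf.nn.relu(y)
        y = tf.layers.conv2d(y, filters, kernel_size=3, padding="same", name="conv2")
        return tf.nn.relu(x + y)  # residual connection preserves low-level detail
```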

Generator and Discriminator

The generator produces images by transferring the features of one image onto another, while the discriminator does the proofreading (i.e. it checks whether an image produced by the generator looks real or fake). The only way to fool the discriminator is for the generated recommendation to be close to a real image.

During training we also monitor the generator loss, the discriminator loss, and the cycle-consistency (cyclic) loss.
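Below is a hedged sketch of how these losses can be computed with the least-squares objective that CycleGAN uses; the function and tensor names are illustrative and not taken from the repository.

```python
import tensorflow as tf

def discriminator_loss(d_real, d_fake):
    # The discriminator is pushed to score real images near 1 and generated
    # images near 0 (least-squares GAN objective used by CycleGAN).
    return 0.5 * (tf.reduce_mean(tf.square(d_real - 1.0)) +
                  tf.reduce_mean(tf.square(d_fake)))

def generator_loss(d_fake, cycle_loss):
    # The generator is rewarded when the discriminator rates its output as
    # real, plus the cycle-consistency penalty from the sketch above.
    return tf.reduce_mean(tf.square(d_fake - 1.0)) + cycle_loss
```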

Issues

Right now, due to a lack of computational power, we were only able to train our model for 20 epochs (which took around 10 hours) on a single Nvidia GTX 1080, running TensorFlow 1.4 with Python 3.6 on Windows 10. We will update our code as well as the results when we get better ones.

Right now our model is able to detect edges, extract features, and partially impose them on the input image. The problem is the lack of a dataset for designs and solid colour palettes: the mapping has to go from interiors to solid colour palettes, and a solid colour palette has nothing to extract features from.

Results observed after 10 epochs with the texture palette were reasonably good; features were being extracted and the output kept improving. After 75 epochs we found that the model was refining the features extracted from earlier images, so the more epochs we train, the better the results. Due to the lack of computational power we were limited here; as we improve our model we will provide updated results.

NOTE

We will update this repository as our model gets better.
