
BML_2018

Course project for Bayesian Methods 2018, Skoltech

Paper: Learning Priors for Adversarial Autoencoders

Most deep latent factor models choose simple priors for reasons of simplicity, tractability, or not knowing what prior to use. Recent studies show that the choice of the prior may have a profound effect on the expressiveness of the model, especially when its generative network has limited capacity. In this paper, we propose to learn a proper prior from data for adversarial autoencoders (AAEs). We introduce the notion of code generators to transform manually selected simple priors into ones that can better characterize the data distribution. Experimental results show that the proposed model can generate images of better quality and learn better disentangled representations than AAEs in both supervised and unsupervised settings. Lastly, we demonstrate its ability to perform cross-domain translation in a text-to-image synthesis task.
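For orientation, here is a minimal sketch of the code-generator idea: a small network maps samples from a simple prior (a standard Gaussian) into a learned prior that the AAE discriminator then matches against encoder codes. This is an illustrative PyTorch sketch, not the paper's exact architecture; the layer sizes, dimensions, and names are assumptions.

```python
import torch
import torch.nn as nn

class CodeGenerator(nn.Module):
    """Transforms samples from a simple prior into a learned prior
    (illustrative sketch; sizes are assumptions, not the paper's values)."""
    def __init__(self, noise_dim=8, code_dim=8, hidden_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, code_dim),
        )

    def forward(self, noise):
        return self.net(noise)

# Usage: draw from the simple prior, then transform it into the learned prior.
code_gen = CodeGenerator()
simple_prior_samples = torch.randn(16, 8)           # z ~ N(0, I)
learned_prior_samples = code_gen(simple_prior_samples)
# In the AAE setup, the discriminator distinguishes learned_prior_samples
# from encoder outputs, while the code generator is trained adversarially
# so that the transformed prior better characterizes the data distribution.
```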

Our goal is to reproduce the paper.

Team members:

  • Alexander Safin
  • Natalia Pavlovskaia
  • Polina Belozerova

The project report (PDF)

This repository contains three branches, one per team member.
