# BML_2018
Course project for the Bayesian Methods course (Skoltech, 2018).

Paper: [LEARNING PRIORS FOR ADVERSARIAL AUTOENCODERS]

Abstract of the paper:

> Most deep latent factor models choose simple priors for simplicity or tractability,
> or because it is unclear which prior to use. Recent studies show that the choice
> of the prior may have a profound effect on the expressiveness of the model,
> especially when its generative network has limited capacity. In this paper, we
> propose to learn a proper prior from data for adversarial autoencoders (AAEs).
> We introduce the notion of code generators to transform manually selected simple
> priors into ones that better characterize the data distribution. Experimental
> results show that the proposed model can generate images of better quality and
> learn better disentangled representations than AAEs in both supervised and
> unsupervised settings. Lastly, we demonstrate its ability to perform cross-domain
> translation in a text-to-image synthesis task.
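The central idea is the code generator: instead of forcing the encoder's codes toward a fixed Gaussian, the AAE discriminator matches them to the output of a small network fed with Gaussian noise, so the prior itself is learned. Below is a minimal PyTorch sketch of one training step that we wrote to illustrate this structure; the network sizes, names (`mlp`, `code_gen`), and stand-in data are our own choices, and the paper's additional image-space GAN loss used to train the code generator is omitted.

```python
# Sketch only, not the authors' code: an AAE whose prior is produced by a
# "code generator" network applied to N(0, I) noise.
import torch
import torch.nn as nn

latent_dim, code_dim, batch = 64, 8, 32

def mlp(d_in, d_out):
    return nn.Sequential(nn.Linear(d_in, 128), nn.ReLU(), nn.Linear(128, d_out))

encoder = mlp(784, code_dim)          # x -> z
decoder = mlp(code_dim, 784)          # z -> x_hat
code_gen = mlp(latent_dim, code_dim)  # eps ~ N(0, I) -> learned-prior sample
disc = nn.Sequential(mlp(code_dim, 1), nn.Sigmoid())

bce, mse = nn.BCELoss(), nn.MSELoss()
opt_ae = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

x = torch.rand(batch, 784)  # stand-in for a batch of flattened images

# 1) Reconstruction phase: ordinary autoencoder loss.
recon = mse(decoder(encoder(x)), x)
opt_ae.zero_grad(); recon.backward(); opt_ae.step()

# 2) Regularization phase: the discriminator separates learned-prior
#    samples (code generator output) from encoder codes.
z_prior = code_gen(torch.randn(batch, latent_dim)).detach()
d_loss = (bce(disc(z_prior), torch.ones(batch, 1))
          + bce(disc(encoder(x).detach()), torch.zeros(batch, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 3) Adversarial phase: the encoder is pushed to match the learned prior.
g_loss = bce(disc(encoder(x)), torch.ones(batch, 1))
opt_ae.zero_grad(); g_loss.backward(); opt_ae.step()
```

In a vanilla AAE, `z_prior` would simply be `torch.randn(batch, code_dim)`; the only structural change here is that it passes through `code_gen` first.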


Our goal is to reproduce the results of this paper.

Team members:
- Alexander Safin
- Natalia Pavlovskaia
- Polina Belozerova

Our results are summarized in [The report PDF].

This repository contains three branches, one per team member.

[LEARNING PRIORS FOR ADVERSARIAL AUTOENCODERS]: https://github.com/ne-bo/BML_2018/blob/master/texts/Learn%20prior%20for%20baeysian%20model%20in%20adversarial%20setting.pdf
[The report PDF]: https://github.com/ne-bo/BML_2018/blob/master/texts/project_report.pdf