The variational autoencoder is an innovation in the field of unsupervised machine learning; its architecture combines stochastic encoder-decoder modules with deep learning.

faltysad/bachelor-thesis-text

Variational autoencoder and latent space observation tasks

The variational autoencoder is an innovation in the field of unsupervised machine learning; its architecture combines stochastic encoder-decoder modules with deep learning. The model's internal representation of the input data as latent variables gives rise to its latent space. As the variational autoencoder is trained, structures that convey the semantic meaning of the input data form in this latent space. Moreover, the model captures only the salient features of the input data, thereby reducing its dimensionality. The variational autoencoder is used in a wide range of generative modeling tasks and can synthesize entirely new data. This thesis introduces the theory of the variational autoencoder and surveys its current state of the art. It then presents possible applications of the variational autoencoder in selected problem domains and demonstrates its use through a practical implementation of an illustrative generative modeling task on image data.

Keywords: variational autoencoder, latent space, generative modeling, machine learning

An illustrative implementation of a generative modeling task using a variational autoencoder can be found here.
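The core mechanism the abstract describes — an encoder mapping inputs to latent variables, with a differentiable sampling step — can be sketched minimally as below. This is not code from the thesis: the linear "encoder", the toy dimensions, and all names (`encode`, `reparameterize`, `W_mu`, `W_logvar`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Hypothetical linear "encoder": maps input x to the parameters
    # (mean, log-variance) of a diagonal Gaussian over the latent z.
    mu = x @ W_mu
    log_var = x @ W_logvar
    return mu, log_var

def reparameterize(mu, log_var):
    # Reparameterization trick: z = mu + sigma * eps keeps the
    # stochastic sampling step differentiable w.r.t. mu and sigma.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # Closed-form KL(q(z|x) || N(0, I)) for a diagonal Gaussian;
    # this is the regularization term of the VAE training objective.
    return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=-1)

# Toy example: three 4-dimensional inputs compressed to 2 latent dimensions,
# illustrating the dimensionality reduction mentioned in the abstract.
x = rng.standard_normal((3, 4))
W_mu = rng.standard_normal((4, 2)) * 0.1
W_logvar = rng.standard_normal((4, 2)) * 0.1

mu, log_var = encode(x, W_mu, W_logvar)
z = reparameterize(mu, log_var)            # one latent code per input row
kl = kl_divergence(mu, log_var)            # per-example KL penalty
```

A full model would add a decoder reconstructing `x` from `z` and train both modules by maximizing the evidence lower bound (reconstruction term minus the KL term above); sampling `z` from the prior and decoding it yields the new data synthesis the abstract refers to.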
