
Variational Auto-Encoder for MNIST

  • PyTorch: 0.4+
  • Python: 3.6+

A PyTorch implementation of the variational auto-encoder (VAE) for MNIST described in the paper Auto-Encoding Variational Bayes (Kingma and Welling, 2013).

This repository is based on the Tensorflow-mnist-vae project.

Results

NOTICE: TensorFlow's tf.nn.dropout takes a keep probability, while PyTorch's torch.nn.Dropout takes a drop probability, so the two calls below are equivalent:

tf.nn.dropout(keep_prob=0.9)
torch.nn.Dropout(p=1-keep_prob)
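
A minimal sketch of the conversion (the tensor shape is illustrative):

```python
import torch
import torch.nn as nn

keep_prob = 0.9                         # TensorFlow-style keep probability
dropout = nn.Dropout(p=1 - keep_prob)   # PyTorch drops with probability 0.1

x = torch.randn(4, 500)
y = dropout(x)                          # ~10% of activations zeroed in training mode
```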

Reproduce

A well-trained VAE should be able to reproduce its input image.
Figure 5 in the paper shows the reconstruction performance of learned generative models for different dimensionalities of the latent space.
The following results can be reproduced with command:

python run_main.py --dim_z <each value> --num_epochs 60
(Results: input image and its reconstructions with 2-D, 5-D, 10-D, and 20-D latent spaces.)
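
Reconstruction amounts to encoding the input to a posterior over the latent code, sampling via the reparameterization trick, and decoding. A minimal sketch, assuming encoder and decoder callables (the names do not necessarily match this repo's code):

```python
import torch

def reconstruct(encoder, decoder, x):
    """Encode x to q(z|x), sample z with the reparameterization trick, decode."""
    mu, logvar = encoder(x)               # mean and log-variance of q(z|x)
    std = torch.exp(0.5 * logvar)
    z = mu + std * torch.randn_like(std)  # z = mu + sigma * eps, eps ~ N(0, I)
    return decoder(z)                     # reconstructed image
```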

Denoising

During training, salt-and-pepper noise is added to the input images, so that the VAE learns to remove the noise and restore the original input.
The following results can be reproduced with command:

python run_main.py --dim_z 20 --add_noise True --num_epochs 40
(Results: original input image, input image with noise, and image restored by the VAE.)
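
A minimal sketch of salt-and-pepper corruption for pixel values in [0, 1] (the noise ratio is an assumed parameter, not necessarily the value used here):

```python
import torch

def salt_and_pepper(x, ratio=0.1):
    """Set a random fraction of pixels to 0 (pepper) or 1 (salt)."""
    noisy = x.clone()
    corrupt = torch.rand_like(x) < ratio   # which pixels to corrupt
    salt = torch.rand_like(x) < 0.5        # half salt, half pepper
    noisy[corrupt & salt] = 1.0
    noisy[corrupt & ~salt] = 0.0
    return noisy
```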

Learned MNIST manifold

Visualizations of the data manifold learned by generative models with a 2-D latent space are given in Figure 4 of the paper.
The following results can be reproduced with command:

python run_main.py --dim_z 2 --num_epochs 60 --PMLR True
(Results: learned MNIST manifold and the distribution of labeled data in the latent space.)
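
The manifold plot is produced by decoding a regular grid of 2-D latent codes. A minimal sketch, assuming a trained decoder (the grid range is an illustrative choice):

```python
import torch

def decode_manifold(decoder, n=20, lim=2.0):
    """Decode an n-by-n grid of 2-D latent points into images."""
    axis = torch.linspace(-lim, lim, n)
    z = torch.cartesian_prod(axis, axis)   # all (z1, z2) pairs, shape (n*n, 2)
    return decoder(z)                      # n*n decoded images
```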

Usage

Prerequisites

  1. Pytorch
  2. Python packages: numpy, scipy, PIL (or Pillow), matplotlib

Command

python run_main.py --dim_z <latent vector dimension>

Example: python run_main.py --dim_z 20

Arguments

Required:

  • --dim_z: Dimension of latent vector. Default: 20

Optional:

  • --results_path: File path of output images. Default: results
  • --add_noise: Boolean for adding salt & pepper noise to input image. Default: False
  • --n_hidden: Number of hidden units in MLP. Default: 500
  • --learn_rate: Learning rate for Adam optimizer. Default: 1e-3
  • --num_epochs: The number of epochs to run. Default: 20
  • --batch_size: Batch size. Default: 128
  • --PRR: Boolean for plot-reproduce-result. Default: True
  • --PRR_n_img_x: Number of images along x-axis. Default: 10
  • --PRR_n_img_y: Number of images along y-axis. Default: 10
  • --PRR_resize_factor: Resize factor for each displayed image. Default: 1.0
  • --PMLR: Boolean for plot-manifold-learning-result. Default: False
  • --PMLR_n_img_x: Number of images along x-axis. Default: 20
  • --PMLR_n_img_y: Number of images along y-axis. Default: 20
  • --PMLR_resize_factor: Resize factor for each displayed image. Default: 1.0
  • --PMLR_n_samples: Number of samples in order to get distribution of labeled data. Default: 5000
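
The plotting flags can be combined with the training options; for example, to train a denoising VAE with a 2-D latent space and also plot the learned manifold:

python run_main.py --dim_z 2 --add_noise True --PMLR True --num_epochs 60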

References

The implementation is based on the following projects:
[1] https://github.com/oduerr/dl_tutorial/tree/master/tensorflow/vae
[2] https://github.com/fastforwardlabs/vae-tf/tree/master
[3] https://github.com/kvfrans/variational-autoencoder
[4] https://github.com/altosaar/vae
