A GAN (Generative Adversarial Network) is a type of generative model that learns a data distribution and then samples new data from it. The generator does this work, while a second component, the discriminator, checks whether its input comes from the actual data distribution or was produced by the generator. That is, it classifies the input as REAL or FAKE. Trained adversarially against each other, the two networks let GANs generate images very similar to real-life images. The original paper is *Generative Adversarial Networks* (Goodfellow et al., 2014).
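The adversarial setup can be summarized by the two losses the networks optimize. A minimal sketch (my own illustration, not code from this repository): the discriminator minimizes the negative log-likelihood of classifying real inputs as real and generated inputs as fake, while the generator uses the non-saturating loss from the paper, maximizing log D(G(z)).

```python
import numpy as np

def d_loss(d_real, d_fake):
    # Discriminator maximizes log D(x) + log(1 - D(G(z)));
    # equivalently, it minimizes the negative of that sum.
    return -np.mean(np.log(d_real) + np.log(1.0 - d_fake))

def g_loss(d_fake):
    # Non-saturating generator loss: minimize -log D(G(z)),
    # i.e. push the discriminator's score on fakes toward 1.
    return -np.mean(np.log(d_fake))
```

For example, when the discriminator outputs 0.5 everywhere (it cannot tell real from fake), `d_loss` equals 2·log 2 and `g_loss` equals log 2, the equilibrium values described in the paper.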
This repository contains my implementation of the original GAN paper. As in the paper, I use an MLP (Multi-Layer Perceptron) as the generator, mapping a 100-dimensional latent (noise) vector to an image, which is then fed into the discriminator, also an MLP. The model was trained on the built-in MNIST dataset of handwritten digits. Replacing the MLPs with convolutional networks (as in DCGAN) would likely improve sample quality, though MLPs also give quite satisfactory results. The hyperparameter k (the number of discriminator updates per generator update) was set to 1, as the authors did, since it is the least computationally expensive option. TF-GAN can also be used for highly efficient implementations.
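The shape flow described above can be sketched with tiny NumPy forward passes. This is a hedged illustration of the architecture, not the repository's code: the hidden width (128), weight initialization, and activations are assumptions, but the 100-dimensional noise input and 28×28 MNIST output match the description.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, HIDDEN, IMG_DIM = 100, 128, 28 * 28  # hidden width is an assumption

def init(n_in, n_out):
    # Small random weights, zero biases (illustrative initialization)
    return rng.normal(0.0, 0.02, (n_in, n_out)), np.zeros(n_out)

# Generator MLP: noise z -> image pixels in (0, 1)
Wg1, bg1 = init(LATENT_DIM, HIDDEN)
Wg2, bg2 = init(HIDDEN, IMG_DIM)

def generator(z):
    h = np.maximum(0.0, z @ Wg1 + bg1)            # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ Wg2 + bg2))) # sigmoid pixel outputs

# Discriminator MLP: image -> probability the input is real
Wd1, bd1 = init(IMG_DIM, HIDDEN)
Wd2, bd2 = init(HIDDEN, 1)

def discriminator(x):
    h = np.maximum(0.0, x @ Wd1 + bd1)
    return 1.0 / (1.0 + np.exp(-(h @ Wd2 + bd2)))

z = rng.normal(size=(32, LATENT_DIM))  # a batch of 32 noise vectors
fake = generator(z)                    # (32, 784) generated images
score = discriminator(fake)            # (32, 1) real/fake probabilities
```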
These are some results from the training process. I trained the model for 80,000 iterations with a batch size of 32 and sampled images from the generator every 200 iterations.
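The alternating training schedule with k = 1 can be sketched as a loop skeleton. The structure below (counters in place of the actual SGD updates, which are elided) is an illustration of the schedule, not the repository's training code; the constants mirror the settings stated above.

```python
K = 1              # discriminator updates per generator update, as in the paper
TOTAL_STEPS = 80_000
SAMPLE_EVERY = 200
BATCH_SIZE = 32

def train(total_steps=TOTAL_STEPS):
    d_updates = g_updates = samples = 0
    for step in range(1, total_steps + 1):
        for _ in range(K):
            # Sample a minibatch of noise and a minibatch of real MNIST
            # images, then take one SGD step on the discriminator loss.
            d_updates += 1
        # Sample a fresh minibatch of noise, then take one SGD step
        # on the generator loss.
        g_updates += 1
        if step % SAMPLE_EVERY == 0:
            samples += 1  # save a grid of generated images for inspection
    return d_updates, g_updates, samples
```

With k = 1 the two networks are updated in lockstep; over the full run this schedule produces one saved sample grid every 200 iterations.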