This repo is a work in progress; code cleanup and refactoring are still pending.
This is a TensorFlow implementation of the paper Polyphonic Music Generation with Sequence Generative Adversarial Networks.
Hard-forked from the official SeqGAN code.
Python 2.7
TensorFlow 1.4 or newer (tested on 1.9)
pip packages: music21 4.1.0, pyyaml, nltk, pathos
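Assuming pip is available in a Python 2.7 environment, the dependencies above could be installed roughly like this (the version pins follow the list above; adjust as needed):

```shell
# Install the packages listed above (Python 2.7 environment assumed).
pip install "tensorflow>=1.4" music21==4.1.0 pyyaml nltk pathos
```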
python music_seqgan.py
for a full training run.
SeqGAN.yaml contains (almost) all hyperparameters that you can play with.
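For illustration only, a SeqGAN-style hyperparameter file tends to look something like the sketch below; the key names and values here are hypothetical and the actual contents of SeqGAN.yaml may differ:

```yaml
# Illustrative sketch only -- these key names are NOT taken from SeqGAN.yaml.
generator:
  embedding_dim: 32
  hidden_dim: 32
  sequence_length: 100
discriminator:
  dropout_keep_prob: 0.75
training:
  batch_size: 64
  total_epochs: 200
```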
5 sample MIDI sequences are automatically generated per epoch.
The model uses a MIDI version of the Nottingham database (http://abc.sourceforge.net/NMD/) as its dataset.
Preprocessed musical word tokens are included in the "dataset" folder.
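SeqGAN-style token datasets are commonly stored as one sequence per line of whitespace-separated integer token IDs; the sketch below shows how such a file could be read. This is an assumption for illustration — the actual format of the files in the "dataset" folder may differ.

```python
# Hypothetical sketch: reads a SeqGAN-style token file where each line is
# one sequence of whitespace-separated integer token IDs. The real files in
# the "dataset" folder may use a different layout.
def load_token_sequences(path):
    """Return a list of token-ID lists, one per non-empty line of the file."""
    sequences = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line:
                sequences.append([int(tok) for tok in line.split()])
    return sequences

if __name__ == "__main__":
    # Write a tiny example file and read it back.
    with open("example_tokens.txt", "w") as f:
        f.write("12 7 42\n3 3 9 1\n")
    print(load_token_sequences("example_tokens.txt"))
    # → [[12, 7, 42], [3, 3, 9, 1]]
```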