Transformer-based symbolic music generation, based on Music Transformer and using the REMI MIDI encoding.
Uses MidiTok for the encoding, and the model implementation from here.
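To illustrate what the REMI encoding looks like, here is a toy sketch of REMI-style tokens for a single note. This is an illustration only, not MidiTok's actual implementation, and the exact token names and quantization bins are assumptions (MidiTok's real vocabulary may differ):

```python
def encode_note(bar_start, position, pitch, velocity, duration):
    """Return REMI-style tokens for one note (toy sketch, not MidiTok).

    position: onset within the bar in 16th-note steps,
    pitch: MIDI pitch number,
    velocity / duration: quantized bin indices.
    """
    tokens = []
    if bar_start:
        tokens.append("Bar_None")          # marks the start of a new bar
    tokens.append(f"Position_{position}")  # metrical onset inside the bar
    tokens.append(f"Pitch_{pitch}")
    tokens.append(f"Velocity_{velocity}")
    tokens.append(f"Duration_{duration}")
    return tokens

# C4 quarter note on the downbeat of a new bar:
print(encode_note(True, 0, 60, 16, 4))
# → ['Bar_None', 'Position_0', 'Pitch_60', 'Velocity_16', 'Duration_4']
```

The key idea of REMI is that bar and position tokens make the metrical grid explicit in the sequence, instead of the raw time-shift events of the original Music Transformer encoding.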
Dataset used: Lakh MIDI Dataset.
Framework: PyTorch.
This is my first project in transformer-based music generation. I received lots of help from the above research, and especially code from MusicTransformer-Pytorch. Also inspired by PopMAG, which may be my next music generation attempt!
See the examples directory for MIDI files of varying lengths.
Anaconda, PyTorch >= 1.2.0, Python >= 3.6
Install dependencies with: conda env create --file environment.yaml
Download and unzip LMD-full from the Lakh MIDI Dataset.
Then:
./preprocess.py <midi_files_directory> <processed_dataset_directory>
./train.py <processed_dataset_directory> <checkpoints_directory>
./generate.py <processed_dataset_directory> <checkpoints_directory> --l <max_sequence_length>
To change the hyperparameters below, edit utils/constants.py
batch_size = 16
validation_split = .9
shuffle_dataset = True
random_seed = 42
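A sketch of how these three settings typically combine into a deterministic train/validation split. This is an assumption about how the repo uses them (in particular, whether 0.9 is the train or the validation fraction isn't stated above; 90% train is assumed here):

```python
import random

def split_indices(n_samples, validation_split=0.9, shuffle=True, seed=42):
    """Deterministic train/validation index split (sketch, not the repo's code).

    Assumes validation_split is the *training* fraction (0.9 -> 90% train);
    the project's convention may be the reverse.
    """
    indices = list(range(n_samples))
    if shuffle:
        random.Random(seed).shuffle(indices)  # seeded, so the split is reproducible
    cut = int(n_samples * validation_split)
    return indices[:cut], indices[cut:]

train_idx, val_idx = split_indices(1000)
print(len(train_idx), len(val_idx))  # → 900 100
```

Seeding a dedicated `random.Random` instance (rather than the global RNG) keeps the split stable even if other code consumes random numbers.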
n_layers = 6
num_heads = 8
d_model = 512
dim_feedforward = 512
dropout = 0.1
max_sequence = 2048
rpr = True
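For a sense of scale, the dimensions above imply roughly the following weight-matrix parameter count per encoder layer. This back-of-the-envelope count ignores biases, layer norms, embeddings, and the relative position representations (rpr), so it is a lower bound, not the model's exact size:

```python
d_model, dim_feedforward, n_layers = 512, 512, 6

# Weight matrices only; biases, layer norms, embeddings and the
# relative position representations are ignored (lower bound).
attn_params = 4 * d_model * d_model         # Q, K, V and output projections
ffn_params = 2 * d_model * dim_feedforward  # the two feed-forward matrices
per_layer = attn_params + ffn_params
total = n_layers * per_layer
print(f"{per_layer:,} per layer, {total:,} total")
# → 1,572,864 per layer, 9,437,184 total
```

So the transformer stack itself is on the order of 9–10M parameters before embeddings and output projection.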
ADAM_BETA_1 = 0.9
ADAM_BETA_2 = 0.98
ADAM_EPSILON = 10e-9