This repository contains recipes for training generative music models on top of the Descript Audio Codec.
You can try VampNet in a co-creative looper called unloop; see https://github.com/hugofloresgarcia/unloop.
You'll need a Python 3.9 environment to run VampNet, due to a known issue with madmom. For example, using conda:

```bash
conda create -n vampnet python=3.9
conda activate vampnet
```
Install VampNet:

```bash
git clone https://github.com/hugofloresgarcia/vampnet.git
pip install -e ./vampnet
```
This repository relies on argbind to manage CLIs and config files. Config files are stored in the `conf/` folder.
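For context on how argbind ties the two together: a keyword argument of a bound function becomes both a CLI flag and a YAML key, and the built-in `--args.load` flag fills those values from a config file. A minimal sketch of that pattern (illustrative only, not VampNet's actual training code):

```python
import argbind

# Each keyword argument becomes a CLI flag (--train.lr 1e-3) and a
# YAML key (train.lr: 1e-3) that --args.load can supply from a config file.
@argbind.bind()
def train(lr: float = 1e-4, batch_size: int = 32):
    print(f"training with lr={lr}, batch_size={batch_size}")

if __name__ == "__main__":
    args = argbind.parse_args()  # parses CLI flags and any --args.load config
    with argbind.scope(args):    # applies the parsed values to bound functions
        train()
```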
The weights for the models are licensed CC BY-NC-SA 4.0. Likewise, any VampNet models fine-tuned on the pretrained models are also licensed CC BY-NC-SA 4.0.
Download the pretrained models from this link, then extract them to the `models/` folder.
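After extracting, you can quickly confirm where the checkpoint files landed (purely a convenience, not a required step):

```python
from pathlib import Path

# Print every file that was extracted under models/ so you know the
# checkpoint paths on disk.
for path in sorted(Path("models").rglob("*")):
    if path.is_file():
        print(path)
```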
You can launch a Gradio UI to play with VampNet:

```bash
python app.py --args.load conf/interface.yml --Interface.device cuda
```
To train a model, run the following script:

```bash
python scripts/exp/train.py --args.load conf/vampnet.yml --save_path /path/to/checkpoints
```

For multi-GPU training, use torchrun (`--nproc_per_node gpu` launches one process per available GPU):

```bash
torchrun --nproc_per_node gpu scripts/exp/train.py --args.load conf/vampnet.yml --save_path path/to/ckpt
```
You can edit `conf/vampnet.yml` to change the dataset paths or any training hyperparameters. For coarse2fine models, you can use `conf/c2f.yml` as a starting configuration. See `python scripts/exp/train.py -h` for a list of options.
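The conf files are plain YAML, so you can also inspect one directly to see which keys are available to tune; a small sketch, assuming pyyaml is installed:

```python
import yaml  # pip install pyyaml

# Print the key/value pairs in a training config to see what can be tuned,
# either by editing the file or (for argbind-bound arguments) by passing
# the matching CLI flag.
with open("conf/vampnet.yml") as f:
    cfg = yaml.safe_load(f)

for key, value in sorted(cfg.items()):
    print(f"{key}: {value}")
```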
To debug training, it's easiest to run with a single GPU and 0 dataloader workers:

```bash
CUDA_VISIBLE_DEVICES=0 python -m pdb scripts/exp/train.py --args.load conf/vampnet.yml --save_path /path/to/checkpoints --num_workers 0
```
To fine-tune a model, use the script `scripts/exp/fine_tune.py` to generate three configuration files: `c2f.yml`, `coarse.yml`, and `interface.yml`. The first two are used to fine-tune the coarse2fine and coarse models; the last one is used to launch the Gradio interface.
```bash
python scripts/exp/fine_tune.py "/path/to/audio1.mp3 /path/to/audio2/ /path/to/audio3.wav" <fine_tune_name>
```

This will create a folder under `conf/generated/<fine_tune_name>/` with the three configuration files. The save paths will be set to `runs/<fine_tune_name>/coarse` and `runs/<fine_tune_name>/c2f`.
Launch the coarse job:

```bash
python scripts/exp/train.py --args.load conf/generated/<fine_tune_name>/coarse.yml
```

This will save the coarse model to `runs/<fine_tune_name>/coarse/ckpt/best/`.
Launch the c2f job:

```bash
python scripts/exp/train.py --args.load conf/generated/<fine_tune_name>/c2f.yml
```
Launch the interface:

```bash
python app.py --args.load conf/generated/<fine_tune_name>/interface.yml
```