Help with the trained AudioLM and with the MulanEmbedQuantizer (mulan_embed_quantizer) #62
JoaquinCE202 asked this question in Q&A · Unanswered · 0 replies
I need help with the last part of the Usage section. I am trying to implement it in Colab, and the last cell is:
```python
# you need the trained AudioLM (audio_lm) from above
# with the MuLaNEmbedQuantizer (mulan_embed_quantizer)

from musiclm_pytorch import MusicLM
from audiolm_pytorch import AudioLM

musiclm = MusicLM(
    audio_lm = audio_lm,                 # `AudioLM` from https://github.com/lucidrains/audiolm-pytorch
    mulan_embed_quantizer = quantizer    # the `MuLaNEmbedQuantizer` from above
)

music = musiclm('the crystalline sounds of the piano in a ballroom', num_samples = 4) # sample 4 and pick the top match with mulan
```
I can't figure out how to create or import `audio_lm` and MuLaN's `quantizer`. If anyone knows how to do it, it would be very helpful.
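In case it helps to frame the question, here is how I understand those two objects are supposed to be built, pieced together from the musiclm-pytorch and audiolm-pytorch READMEs. This is only a sketch: the dimensions are placeholders, and the commented-out `AudioLM` keyword arguments are assumptions that may differ by version.

```python
# a minimal sketch, assuming the usage shown in the musiclm-pytorch README;
# dims and hyperparameters below are placeholders

from musiclm_pytorch import MuLaN, MuLaNEmbedQuantizer, AudioSpectrogramTransformer, TextTransformer

# 1. MuLaN is built from an audio and a text transformer, then trained on <audio, text> pairs
audio_transformer = AudioSpectrogramTransformer(
    dim = 512,
    depth = 6,
    heads = 8,
    dim_head = 64,
    spec_n_fft = 128,
    spec_win_length = 24,
    spec_aug_stretch_factor = 0.8
)

text_transformer = TextTransformer(
    dim = 512,
    depth = 6,
    heads = 8,
    dim_head = 64
)

mulan = MuLaN(
    audio_transformer = audio_transformer,
    text_transformer = text_transformer
)

# 2. the quantizer wraps the (trained) MuLaN; this is the `quantizer`
#    that is later passed to MusicLM as `mulan_embed_quantizer`
quantizer = MuLaNEmbedQuantizer(
    mulan = mulan,
    conditioning_dims = (1024, 1024, 1024),      # assumed to match the dims of the three AudioLM transformers
    namespaces = ('semantic', 'coarse', 'fine')
)

# 3. `audio_lm` would then be an AudioLM instance assembled from the semantic,
#    coarse and fine transformers trained with `audio_conditioner = quantizer`
#    (see the audiolm-pytorch README); the exact keyword arguments depend on the
#    installed audiolm-pytorch version, so this part is left as a comment:
#
# from audiolm_pytorch import AudioLM
# audio_lm = AudioLM(
#     wav2vec = wav2vec,
#     codec = soundstream,
#     semantic_transformer = semantic_transformer,
#     coarse_transformer = coarse_transformer,
#     fine_transformer = fine_transformer
# )
```

If this is roughly right, my remaining problem is only the last step: getting a trained `audio_lm` into the Colab notebook so it can be passed to `MusicLM`.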