Interesting development using vLLM on top of xtts v2 #432
Replies: 1 comment
-
@vlrevolution It looks interesting, but I'm not sure of all the ins and outs of what they've done. I've at least linked the people maintaining the Coqui scripts to see if it's something of interest. As for pulling that one straight into AllTalk, there is a big chain of question marks caused by Auralis, because its installation forces torch==2.5.1 and torchaudio==2.5.1, which is the absolute latest PyTorch. There's no immediate way to just jump to Torch 2.5.x and not have things break, so it will be a lot of trial-and-error testing. Or perhaps their requirements are too strict and it doesn't actually need 2.5.1; maybe 2.3 is OK or something... I'll have to put it in the "to be looked at when things are calmer" pile #74 Thanks
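One cheap way to probe whether the ==2.5.1 pin is genuinely required is to install Auralis with pip's `--no-deps` flag into an environment that already has an older Torch, then run its tests. A minimal sketch of the version check involved; the relaxed range `>=2.3,<2.6` here is purely an assumption to trial, not Auralis's documented compatibility range:

```python
# Sketch: decide whether an installed Torch version falls inside a
# hypothetical relaxed range, instead of letting pip force an upgrade
# to the pinned 2.5.1. The bounds are guesses to be confirmed by testing.

def parse(version: str) -> tuple:
    """Turn a dotted version string like '2.5.1' into (2, 5, 1)."""
    return tuple(int(part) for part in version.split("."))

def satisfies(installed: str, low: str, high: str) -> bool:
    """True if low <= installed < high (tuple comparison is ordered)."""
    return parse(low) <= parse(installed) < parse(high)

# Auralis pins torch==2.5.1; check which older releases a relaxed
# constraint would admit before doing the actual trial install.
for candidate in ["2.3.1", "2.4.0", "2.5.1"]:
    print(candidate, "in relaxed range:", satisfies(candidate, "2.3.0", "2.6.0"))
```

In practice that would pair with something like `pip install --no-deps <auralis-package>` (exact package name unverified here) in a Torch 2.3 environment, then exercising the TTS path to see what actually breaks.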
-
Check this out:
https://www.astramind.ai/post/auralis
Anything we could incorporate from this into AllTalk? It seems like it could really improve performance, but I'm not an ML expert.