I attempted to load up flan-ul2 4-bit 128g GPTQ, but it looks like T5ForConditionalGeneration isn't supported, or perhaps encoder/decoder-type LLMs in general.
In particular, https://github.com/qwopqwop200/transformers-t5 would also likely be needed to provide support for quantized T5.
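For reference, a minimal sketch of why a loader built around decoder-only models trips over flan-ul2: the model config reports an encoder/decoder architecture, so it has to go through the Seq2Seq auto class rather than the causal-LM one. This assumes a standard transformers install and uses the unquantized `google/flan-ul2` Hub id as a stand-in; the 4-bit 128g GPTQ weights would additionally need the quantized T5 support from the linked fork.

```python
# Sketch: pick a loading path based on whether the checkpoint is encoder/decoder.
from transformers import AutoConfig, AutoModelForCausalLM, AutoModelForSeq2SeqLM

model_name = "google/flan-ul2"  # assumption: unquantized base checkpoint on the HF Hub
config = AutoConfig.from_pretrained(model_name)

if config.is_encoder_decoder:
    # T5/UL2-style models (T5ForConditionalGeneration) live behind this auto class.
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
else:
    # Decoder-only models (LLaMA, GPT-J, ...) go through the causal-LM path instead.
    model = AutoModelForCausalLM.from_pretrained(model_name)
```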
If possible, I'd rather submit a pull request than burden you with a feature request. Since I'm not familiar with KoboldAI's codebase, a pointer to the files/functions I should look at to add support for a new model type would be enough to get me started.
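One generation-side difference that any such change would probably have to account for (a hedged sketch, using the small unquantized flan-t5 checkpoint as a lightweight stand-in and an arbitrary prompt): with encoder/decoder models, `generate()` takes the prompt as encoder input and returns only the newly generated tokens, whereas decoder-only models return the prompt plus the continuation, so prompt-stripping logic written for causal LMs doesn't carry over directly.

```python
# Sketch of the generation-side difference for encoder/decoder models.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

inputs = tokenizer("Translate to German: The house is small.", return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)

# The output holds only the generated continuation, not the input prompt,
# unlike the usual decoder-only behavior.
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```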