camenduru/text-generation-webui-colab

🐣 Please follow me for new updates https://twitter.com/camenduru
πŸ”₯ Please join our discord server https://discord.gg/k5BwmmvJJU
πŸ₯³ Please join my patreon community https://patreon.com/camenduru

🚦 WIP 🚦

πŸ¦’ Colab

| Colab | Info | Model Page |
| --- | --- | --- |
| Open In Colab | vicuna-13b-GPTQ-4bit-128g | https://vicuna.lmsys.org |
| Open In Colab | vicuna-13B-1.1-GPTQ-4bit-128g | https://vicuna.lmsys.org |
| Open In Colab | stable-vicuna-13B-GPTQ-4bit-128g | https://huggingface.co/CarperAI/stable-vicuna-13b-delta |
| Open In Colab | gpt4-x-alpaca-13b-native-4bit-128g | https://huggingface.co/chavinlo/gpt4-x-alpaca |
| Open In Colab | pyg-7b-GPTQ-4bit-128g | https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b |
| Open In Colab | koala-13B-GPTQ-4bit-128g | https://bair.berkeley.edu/blog/2023/04/03/koala |
| Open In Colab | oasst-llama13b-GPTQ-4bit-128g | https://open-assistant.io |
| Open In Colab | wizard-lm-uncensored-7b-GPTQ-4bit-128g | https://github.com/nlpxucan/WizardLM |
| Open In Colab | mpt-storywriter-7b-GPTQ-4bit-128g | https://www.mosaicml.com |
| Open In Colab | wizard-lm-uncensored-13b-GPTQ-4bit-128g | https://github.com/nlpxucan/WizardLM |
| Open In Colab | pyg-13b-GPTQ-4bit-128g | https://huggingface.co/PygmalionAI/pygmalion-13b |
| Open In Colab | falcon-7b-instruct-GPTQ-4bit | https://falconllm.tii.ae/ |
| Open In Colab | wizard-lm-13b-1.1-GPTQ-4bit-128g | https://github.com/nlpxucan/WizardLM |
| Open In Colab | llama-2-7b-chat-GPTQ-4bit (4bit) | https://ai.meta.com/llama/ |
| Open In Colab | llama-2-13b-chat-GPTQ-4bit (4bit) | https://ai.meta.com/llama/ |

🚦 WIP 🚦 In the meantime, please try llama-2-13b-chat, llama-2-7b-chat, or llama-2-7b-chat-GPTQ-4bit.

| Colab | Info | Model Page |
| --- | --- | --- |
| Open In Colab | llama-2-7b-chat (16bit) | https://ai.meta.com/llama/ |
| Open In Colab | llama-2-13b-chat (8bit) | https://ai.meta.com/llama/ |
| Open In Colab | redmond-puffin-13b-GPTQ-4bit (4bit) | https://huggingface.co/NousResearch/Redmond-Puffin-13B |
| Open In Colab | stable-beluga-7b (16bit) | https://huggingface.co/stabilityai/StableBeluga-7B |
| Open In Colab | doctor-gpt-7b (16bit) | https://ai.meta.com/llama/ (https://github.com/llSourcell/DoctorGPT) |
| Open In Colab | code-llama-7b (16bit) | https://github.com/facebookresearch/codellama |
| Open In Colab | code-llama-instruct-7b (16bit) | https://github.com/facebookresearch/codellama |
| Open In Colab | code-llama-python-7b (16bit) | https://github.com/facebookresearch/codellama |
| Open In Colab | mistral-7b-Instruct-v0.1-8bit (8bit) | https://mistral.ai/ |
| Open In Colab | mytho-max-l2-13b-GPTQ (4bit) | https://huggingface.co/Gryphe/MythoMax-L2-13b |
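
Each badge above opens a ready-made notebook. If you are curious what such a notebook roughly does, the cell below is a minimal sketch of a typical launcher, not the exact contents of any notebook in this repo: the clone URL, the example model id (TheBloke/Llama-2-7b-Chat-GPTQ), and the server.py flags (--model, --chat, --share) are assumptions based on 2023-era oobabooga/text-generation-webui, so always prefer the commands in the linked notebook.

```python
# Minimal sketch of a Colab launcher cell (assumptions: 2023-era
# oobabooga/text-generation-webui layout and CLI flags; not the exact
# contents of the notebooks linked above).
!git clone https://github.com/oobabooga/text-generation-webui
%cd text-generation-webui
!pip install -q -r requirements.txt

# Fetch a quantized checkpoint into models/ (the model id is only an example).
!python download-model.py TheBloke/Llama-2-7b-Chat-GPTQ

# Start the web UI with a public Gradio link so it is reachable from Colab.
# Depending on the webui version, a GPTQ model may also need a loader flag
# (e.g. --loader autogptq) or --wbits 4 --groupsize 128.
!python server.py --chat --share --model TheBloke_Llama-2-7b-Chat-GPTQ
```

With --share, Gradio prints a public *.gradio.live URL in the cell output; open it in a new tab once the model has finished loading.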

πŸ¦’ Colab Pro

According to the Facebook Research LLaMA license (a non-commercial bespoke license), we may not be able to use this model with a Colab Pro account. However, Yann LeCun said "GPL v3" (https://twitter.com/ylecun/status/1629189925089296386), so I am a little confused. Is it possible to use this with a paid Colab Pro account?

Tutorial

https://www.youtube.com/watch?v=kgA7eKU1XuA

⚠ If you encounter an IndexError: list index out of range, set the model's instruction template (one way to do this at launch is sketched below).

[Screenshot: setting the model's instruction template in the web UI]
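
If you would rather not pick the template by hand in the UI every session (it normally lives under the Parameters / Chat settings tab), you can preselect it at launch. The cell below is a minimal sketch: it assumes the --settings flag and the instruction_template settings key of 2023-era text-generation-webui, and 'Llama-v2' is only an example name that must match a template shipped with your webui version.

```python
# Minimal sketch: preselect an instruction template via a settings file.
# Assumptions: 2023-era text-generation-webui supports --settings and an
# instruction_template key; 'Llama-v2' is an example template name.
with open("settings.yaml", "w") as f:
    f.write("instruction_template: 'Llama-v2'\n")

# Relaunch the UI with the settings file applied (flags as in the sketch above).
!python server.py --chat --share --model TheBloke_Llama-2-7b-Chat-GPTQ --settings settings.yaml
```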

Text Generation Web UI

https://github.com/oobabooga/text-generation-webui (Thanks to @oobabooga ❀)

Model Licenses

| Model | License |
| --- | --- |
| vicuna-13b-GPTQ-4bit-128g | From https://vicuna.lmsys.org: "The online demo is a research preview intended for non-commercial use only, subject to the model License of LLaMA, Terms of Use of the data generated by OpenAI, and Privacy Practices of ShareGPT. Please contact us if you find any potential violation. The code is released under the Apache License 2.0." |
| gpt4-x-alpaca-13b-native-4bit-128g | https://huggingface.co/chavinlo/alpaca-native -> https://huggingface.co/chavinlo/alpaca-13b -> https://huggingface.co/chavinlo/gpt4-x-alpaca |
| llama-2 | https://ai.meta.com/llama/ Llama 2 is available free of charge for research and commercial use. πŸ₯³ |

Special Thanks

Thanks to facebookresearch ❀ for https://github.com/facebookresearch/llama
Thanks to lmsys ❀ for https://huggingface.co/lmsys/vicuna-13b-delta-v0
Thanks to anon8231489123 ❀ for https://huggingface.co/anon8231489123/vicuna-13b-GPTQ-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/lmsys/vicuna-13b-delta-v0)
Thanks to tatsu-lab ❀ for https://github.com/tatsu-lab/stanford_alpaca
Thanks to chavinlo ❀ for https://huggingface.co/chavinlo/gpt4-x-alpaca
Thanks to qwopqwop200 ❀ for https://github.com/qwopqwop200/GPTQ-for-LLaMa
Thanks to tsumeone ❀ for https://huggingface.co/tsumeone/gpt4-x-alpaca-13b-native-4bit-128g-cuda (GPTQ 4bit quantization of: https://huggingface.co/chavinlo/gpt4-x-alpaca)
Thanks to transformers ❀ for https://github.com/huggingface/transformers
Thanks to gradio-app ❀ for https://github.com/gradio-app/gradio
Thanks to TheBloke ❀ for https://huggingface.co/TheBloke/stable-vicuna-13B-GPTQ
Thanks to Neko-Institute-of-Science ❀ for https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b
Thanks to gozfarb ❀ for https://huggingface.co/gozfarb/pygmalion-7b-4bit-128g-cuda (GPTQ 4bit quantization of: https://huggingface.co/Neko-Institute-of-Science/pygmalion-7b)
Thanks to young-geng ❀ for https://huggingface.co/young-geng/koala
Thanks to TheBloke ❀ for https://huggingface.co/TheBloke/koala-13B-GPTQ-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/young-geng/koala)
Thanks to dvruette ❀ for https://huggingface.co/dvruette/oasst-llama-13b-2-epochs
Thanks to gozfarb ❀ for https://huggingface.co/gozfarb/oasst-llama13b-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/dvruette/oasst-llama-13b-2-epochs)
Thanks to ehartford ❀ for https://huggingface.co/ehartford/WizardLM-7B-Uncensored
Thanks to TheBloke ❀ for https://huggingface.co/TheBloke/WizardLM-7B-uncensored-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/ehartford/WizardLM-7B-Uncensored)
Thanks to mosaicml ❀ for https://huggingface.co/mosaicml/mpt-7b-storywriter
Thanks to OccamRazor ❀ for https://huggingface.co/OccamRazor/mpt-7b-storywriter-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/mosaicml/mpt-7b-storywriter)
Thanks to ehartford ❀ for https://huggingface.co/ehartford/WizardLM-13B-Uncensored
Thanks to ausboss ❀ for https://huggingface.co/ausboss/WizardLM-13B-Uncensored-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/ehartford/WizardLM-13B-Uncensored)
Thanks to PygmalionAI ❀ for https://huggingface.co/PygmalionAI/pygmalion-13b
Thanks to notstoic ❀ for https://huggingface.co/notstoic/pygmalion-13b-4bit-128g (GPTQ 4bit quantization of: https://huggingface.co/PygmalionAI/pygmalion-13b)
Thanks to WizardLM ❀ for https://huggingface.co/WizardLM/WizardLM-13B-V1.1
Thanks to TheBloke ❀ for https://huggingface.co/TheBloke/WizardLM-13B-V1.1-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/WizardLM/WizardLM-13B-V1.1)
Thanks to meta-llama ❀ for https://huggingface.co/meta-llama/Llama-2-7b-chat-hf
Thanks to TheBloke ❀ for https://huggingface.co/TheBloke/Llama-2-7b-Chat-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
Thanks to meta-llama ❀ for https://huggingface.co/meta-llama/Llama-2-13b-chat-hf
Thanks to localmodels ❀ for https://huggingface.co/localmodels/Llama-2-13B-Chat-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/meta-llama/Llama-2-13b-chat-hf)
Thanks to NousResearch ❀ for https://huggingface.co/NousResearch/Redmond-Puffin-13B
Thanks to TheBloke ❀ for https://huggingface.co/TheBloke/Redmond-Puffin-13B-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/NousResearch/Redmond-Puffin-13B)
Thanks to llSourcell ❀ for https://huggingface.co/llSourcell/medllama2_7b
Thanks to MetaAI ❀ for https://ai.meta.com/research/publications/code-llama-open-foundation-models-for-code/
Thanks to TheBloke ❀ for https://huggingface.co/TheBloke/CodeLlama-7B-fp16
Thanks to TheBloke ❀ for https://huggingface.co/TheBloke/CodeLlama-7B-Instruct-fp16
Thanks to TheBloke ❀ for https://huggingface.co/TheBloke/CodeLlama-7B-Python-fp16
Thanks to MistralAI ❀ for https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1
Thanks to Gryphe ❀ for https://huggingface.co/Gryphe/MythoMax-L2-13b
Thanks to TheBloke ❀ for https://huggingface.co/TheBloke/MythoMax-L2-13B-GPTQ (GPTQ 4bit quantization of: https://huggingface.co/Gryphe/MythoMax-L2-13b)

Medical Advice Disclaimer

DISCLAIMER: THIS WEBSITE DOES NOT PROVIDE MEDICAL ADVICE. The information, including but not limited to text, graphics, images, and other material contained on this website, is for informational purposes only. No material on this site is intended to be a substitute for professional medical advice, diagnosis, or treatment. Always seek the advice of your physician or another qualified health care provider with any questions you may have regarding a medical condition or treatment and before undertaking a new health care regimen, and never disregard professional medical advice or delay in seeking it because of something you have read on this website.