A Gradio web UI for Large Language Models.
Its goal is to become the AUTOMATIC1111/stable-diffusion-webui of text generation.
- 3 interface modes: default (two columns), notebook, and chat
 - Multiple model backends: transformers, llama.cpp, ExLlama, AutoGPTQ, GPTQ-for-LLaMa, ctransformers
 - Dropdown menu for quickly switching between different models
 - LoRA: load and unload LoRAs on the fly, train a new LoRA using QLoRA
 - Precise instruction templates for chat mode, including Llama-2-chat, Alpaca, Vicuna, WizardLM, StableLM, and many others
 - 4-bit, 8-bit, and CPU inference through the transformers library
 - Use llama.cpp models with transformers samplers (llamacpp_HF loader)
 - Multimodal pipelines, including LLaVA and MiniGPT-4
 - Extensions framework
 - Custom chat characters
 - Very efficient text streaming
 - Markdown output with LaTeX rendering, to use for instance with GALACTICA
 - API, including endpoints for websocket streaming (see the examples)
 
To learn how to use the various features, check out the Documentation: https://github.com/oobabooga/text-generation-webui/tree/main/docs
| Windows | Linux | macOS | WSL | 
|---|---|---|---|
| oobabooga-windows.zip | oobabooga-linux.zip | oobabooga-macos.zip | oobabooga-wsl.zip | 
Just download the zip above, extract it, and double-click on "start". The web UI and all its dependencies will be installed in the same folder.
- The source codes and more information can be found here: https://github.com/oobabooga/one-click-installers
 - There is no need to run the installers as admin.
 - Huge thanks to @jllllll, @ClayShoaf, and @xNul for their contributions to these installers.
 
Manual installation with Conda is recommended if you have some experience with the command line.
Install Conda: https://docs.conda.io/en/latest/miniconda.html
On Linux or WSL, it can be automatically installed with these two commands (source):
curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > "Miniconda3.sh"
bash Miniconda3.sh
Create a new conda environment and activate it:
conda create -n textgen python=3.10.9
conda activate textgen
Then install PyTorch using the command for your system:

| System | GPU | Command | 
|---|---|---|
| Linux/WSL | NVIDIA | pip3 install torch torchvision torchaudio | 
| Linux/WSL | CPU only | pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu | 
| Linux | AMD | pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2 | 
| MacOS + MPS | Any | pip3 install torch torchvision torchaudio | 
| Windows | NVIDIA | pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117 | 
| Windows | CPU only | pip3 install torch torchvision torchaudio | 
The up-to-date commands can be found here: https://pytorch.org/get-started/locally/.
- MacOS users: oobabooga#393
 - AMD users: https://rentry.org/eq3hg
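Optionally, you can verify that PyTorch detects your GPU before continuing. This quick check is not part of the official instructions; on NVIDIA systems it should print True:
python -c "import torch; print(torch.cuda.is_available())"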
 
Then clone the repository and install the web UI's requirements:
git clone https://github.com/oobabooga/text-generation-webui
cd text-generation-webui
pip install -r requirements.txt
Precompiled wheels are included for CPU-only and NVIDIA GPUs (cuBLAS). For AMD, Metal, and some specific CPUs, you need to uninstall those wheels and compile llama-cpp-python yourself.
To uninstall:
pip uninstall -y llama-cpp-python llama-cpp-python-cuda
To compile: https://github.com/abetlen/llama-cpp-python#installation-with-openblas--cublas--clblast--metal
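As an illustration, the llama-cpp-python documentation describes building against a hardware-specific backend by setting CMAKE_ARGS. The flags below (Metal on macOS, ROCm/hipBLAS on AMD) are taken from that documentation and may change between versions, so verify them against the link above:
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir
CMAKE_ARGS="-DLLAMA_HIPBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir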
bitsandbytes >= 0.39 may not work. In that case, to use --load-in-8bit, you may have to downgrade like this:
- Linux: pip install bitsandbytes==0.38.1
 - Windows: pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl
Alternatively, you can run the web UI with Docker:
ln -s docker/{Dockerfile,docker-compose.yml,.dockerignore} .
cp docker/.env.example .env
# Edit .env and set TORCH_CUDA_ARCH_LIST based on your GPU model
docker compose up --build
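TORCH_CUDA_ARCH_LIST should match your GPU's compute capability. As an illustration (the value below assumes an RTX 30-series card; look up the correct value for your GPU on NVIDIA's CUDA GPUs page):
# line to set in .env
TORCH_CUDA_ARCH_LIST=8.6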
- You need to have docker compose v2.17 or higher installed. See this guide for instructions.
 - For additional docker files, check out this repository.
 
From time to time, the requirements.txt changes. To update, use these commands:
conda activate textgen
cd text-generation-webui
pip install -r requirements.txt --upgrade
Models should be placed in the text-generation-webui/models folder. They are usually downloaded from Hugging Face.
- Transformers or GPTQ models are made of several files and must be placed in a subfolder. Example:
 
text-generation-webui
├── models
│   ├── lmsys_vicuna-33b-v1.3
│   │   ├── config.json
│   │   ├── generation_config.json
│   │   ├── pytorch_model-00001-of-00007.bin
│   │   ├── pytorch_model-00002-of-00007.bin
│   │   ├── pytorch_model-00003-of-00007.bin
│   │   ├── pytorch_model-00004-of-00007.bin
│   │   ├── pytorch_model-00005-of-00007.bin
│   │   ├── pytorch_model-00006-of-00007.bin
│   │   ├── pytorch_model-00007-of-00007.bin
│   │   ├── pytorch_model.bin.index.json
│   │   ├── special_tokens_map.json
│   │   ├── tokenizer_config.json
│   │   └── tokenizer.model
In the "Model" tab of the UI, those models can be automatically downloaded from Hugging Face. You can also download them via the command-line with python download-model.py organization/model.
- GGML models are a single file and should be placed directly into models. Example:
text-generation-webui
├── models
│   ├── llama-13b.ggmlv3.q4_K_M.bin
Those models must be downloaded manually, as they are not currently supported by the automated downloader.
GPT-4chan instructions:
GPT-4chan has been shut down from Hugging Face, so you need to download it elsewhere. You have two options:
The 32-bit version is only relevant if you intend to run the model in CPU mode. Otherwise, you should use the 16-bit version.
After downloading the model, follow these steps:
- Place the files under models/gpt4chan_model_float16 or models/gpt4chan_model.
 - Place GPT-J 6B's config.json file in that same folder: config.json.
 - Download GPT-J 6B's tokenizer files (they will be automatically detected when you attempt to load GPT-4chan):
 
python download-model.py EleutherAI/gpt-j-6B --text-only
When you load this model in the default or notebook modes, the "HTML" tab will show the generated text in 4chan format.
To start the web UI, run:
conda activate textgen
cd text-generation-webui
python server.py
Then browse to http://localhost:7860/?__theme=dark
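For example, to start the UI with a specific model preloaded and the API extension enabled (the model name matches the example folder above; both flags are documented in the tables below):
python server.py --model lmsys_vicuna-33b-v1.3 --api
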
Optionally, you can use the following command-line flags:
| Flag | Description | 
|---|---|
| -h, --help | Show this help message and exit. |
| --multi-user | Multi-user mode. Chat histories are not saved or automatically loaded. WARNING: this is highly experimental. |
| --character CHARACTER | The name of the character to load in chat mode by default. |
| --model MODEL | Name of the model to load by default. |
| --lora LORA [LORA ...] | The list of LoRAs to load. If you want to load more than one LoRA, write the names separated by spaces. |
| --model-dir MODEL_DIR | Path to directory with all the models. |
| --lora-dir LORA_DIR | Path to directory with all the loras. |
| --model-menu | Show a model menu in the terminal when the web UI is first launched. |
| --settings SETTINGS_FILE | Load the default interface settings from this yaml file. See settings-template.yaml for an example. If you create a file called settings.yaml, this file will be loaded by default without the need to use the --settings flag. |
| --extensions EXTENSIONS [EXTENSIONS ...] | The list of extensions to load. If you want to load more than one extension, write the names separated by spaces. |
| --verbose | Print the prompts to the terminal. |

| Flag | Description | 
|---|---|
| --loader LOADER | Choose the model loader manually, otherwise, it will get autodetected. Valid options: transformers, autogptq, gptq-for-llama, exllama, exllama_hf, llamacpp, rwkv, ctransformers |

| Flag | Description | 
|---|---|
| --cpu | Use the CPU to generate text. Warning: Training on CPU is extremely slow. |
| --auto-devices | Automatically split the model across the available GPU(s) and CPU. |
| --gpu-memory GPU_MEMORY [GPU_MEMORY ...] | Maximum GPU memory in GiB to be allocated per GPU. Example: --gpu-memory 10 for a single GPU, --gpu-memory 10 5 for two GPUs. You can also set values in MiB like --gpu-memory 3500MiB. |
| --cpu-memory CPU_MEMORY | Maximum CPU memory in GiB to allocate for offloaded weights. Same as above. |
| --disk | If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk. |
| --disk-cache-dir DISK_CACHE_DIR | Directory to save the disk cache to. Defaults to cache/. |
| --load-in-8bit | Load the model with 8-bit precision (using bitsandbytes). |
| --bf16 | Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU. |
| --no-cache | Set use_cache to False while generating text. This reduces the VRAM usage a bit with a performance cost. |
| --xformers | Use xformers' memory-efficient attention. This should increase your tokens/s. |
| --sdp-attention | Use torch 2.0's sdp attention. |
| --trust-remote-code | Set trust_remote_code=True while loading a model. Necessary for ChatGLM and Falcon. |

| Flag | Description | 
|---|---|
| --load-in-4bit | Load the model with 4-bit precision (using bitsandbytes). |
| --compute_dtype COMPUTE_DTYPE | compute dtype for 4-bit. Valid options: bfloat16, float16, float32. |
| --quant_type QUANT_TYPE | quant_type for 4-bit. Valid options: nf4, fp4. |
| --use_double_quant | use_double_quant for 4-bit. |

| Flag | Description | 
|---|---|
| --threads | Number of threads to use. |
| --n_batch | Maximum number of prompt tokens to batch together when calling llama_eval. |
| --n-gpu-layers N_GPU_LAYERS | Number of layers to offload to the GPU. Only works if llama-cpp-python was compiled with BLAS. Set this to 1000000000 to offload all layers to the GPU. |
| --n_ctx N_CTX | Size of the prompt context. |

| Flag | Description | 
|---|---|
| --no-mmap | Prevent mmap from being used. |
| --mlock | Force the system to keep the model in RAM. |
| --mul_mat_q | Activate new mulmat kernels. |
| --cache-capacity CACHE_CAPACITY | Maximum cache capacity. Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed. |
| --tensor_split TENSOR_SPLIT | Split the model across multiple GPUs, comma-separated list of proportions, e.g. 18,17 |
| --llama_cpp_seed SEED | Seed for llama-cpp models. Default 0 (random). |
| --n_gqa N_GQA | grouped-query attention. Must be 8 for llama-2 70b. |
| --rms_norm_eps RMS_NORM_EPS | 5e-6 is a good value for llama-2 models. |
| --cpu | Use the CPU version of llama-cpp-python instead of the GPU-accelerated version. |

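As an illustration, several of the llama.cpp flags above can be combined in one launch command. The model file below matches the GGML example shown earlier, and the numeric values are placeholders to adjust for your hardware:
python server.py --model llama-13b.ggmlv3.q4_K_M.bin --loader llamacpp --threads 8 --n-gpu-layers 35 --n_ctx 2048
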
| Flag | Description | 
|---|---|
| --model_type MODEL_TYPE | Model type of pre-quantized model. Currently gpt2, gptj, gptneox, falcon, llama, mpt, starcoder (gptbigcode), dollyv2, and replit are supported. |

| Flag | Description | 
|---|---|
| --triton | Use triton. |
| --no_inject_fused_attention | Disable the use of fused attention, which will use less VRAM at the cost of slower inference. |
| --no_inject_fused_mlp | Triton mode only: disable the use of fused MLP, which will use less VRAM at the cost of slower inference. |
| --no_use_cuda_fp16 | This can make models faster on some systems. |
| --desc_act | For models that don't have a quantize_config.json, this parameter is used to define whether to set desc_act or not in BaseQuantizeConfig. |
| --disable_exllama | Disable ExLlama kernel, which can improve inference speed on some systems. |

| Flag | Description | 
|---|---|
| --gpu-split | Comma-separated list of VRAM (in GB) to use per GPU device for model layers, e.g. 20,7,7 |
| --max_seq_len MAX_SEQ_LEN | Maximum sequence length. |

| Flag | Description | 
|---|---|
| --wbits WBITS | Load a pre-quantized model with specified precision in bits. 2, 3, 4 and 8 are supported. |
| --model_type MODEL_TYPE | Model type of pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported. |
| --groupsize GROUPSIZE | Group size. |
| --pre_layer PRE_LAYER [PRE_LAYER ...] | The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. For multi-gpu, write the numbers separated by spaces, e.g. --pre_layer 30 60. |
| --checkpoint CHECKPOINT | The path to the quantized checkpoint file. If not specified, it will be automatically detected. |
| --monkey-patch | Apply the monkey patch for using LoRAs with quantized models. |

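For instance, a pre-quantized 4-bit model with group size 128 could be loaded like this (the model folder name is a hypothetical placeholder for a model you have downloaded):
python server.py --model my-llama-13b-4bit-128g --loader gptq-for-llama --wbits 4 --groupsize 128 --model_type llama
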
| Flag | Description | 
|---|---|
| --deepspeed | Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration. |
| --nvme-offload-dir NVME_OFFLOAD_DIR | DeepSpeed: Directory to use for ZeRO-3 NVME offloading. |
| --local_rank LOCAL_RANK | DeepSpeed: Optional argument for distributed setups. |

| Flag | Description | 
|---|---|
| --rwkv-strategy RWKV_STRATEGY | RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8". |
| --rwkv-cuda-on | RWKV: Compile the CUDA kernel for better performance. |

| Flag | Description | 
|---|---|
| --alpha_value ALPHA_VALUE | Positional embeddings alpha factor for NTK RoPE scaling. Use either this or compress_pos_emb, not both. |
| --compress_pos_emb COMPRESS_POS_EMB | Positional embeddings compression factor. Should typically be set to max_seq_len / 2048. |

| Flag | Description | 
|---|---|
| --listen | Make the web UI reachable from your local network. |
| --listen-host LISTEN_HOST | The hostname that the server will use. |
| --listen-port LISTEN_PORT | The listening port that the server will use. |
| --share | Create a public URL. This is useful for running the web UI on Google Colab or similar. |
| --auto-launch | Open the web UI in the default browser upon launch. |
| --gradio-auth USER:PWD | Set Gradio authentication in the form "username:password", or comma-delimit multiple credentials like "u1:p1,u2:p2,u3:p3". |
| --gradio-auth-path GRADIO_AUTH_PATH | Set the Gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3" |
| --ssl-keyfile SSL_KEYFILE | The path to the SSL certificate key file. |
| --ssl-certfile SSL_CERTFILE | The path to the SSL certificate cert file. |

| Flag | Description | 
|---|---|
| --api | Enable the API extension. |
| --public-api | Create a public URL for the API using Cloudflare. |
| --public-api-id PUBLIC_API_ID | Tunnel ID for named Cloudflare Tunnel. Use together with the public-api option. |
| --api-blocking-port BLOCKING_PORT | The listening port for the blocking API. |
| --api-streaming-port STREAMING_PORT | The listening port for the streaming API. |

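Once the server is started with --api, the blocking endpoint accepts JSON requests over HTTP. A minimal sketch, assuming the default blocking port of 5000 and the /api/v1/generate route used by the bundled examples (check the api-examples in the repository for the authoritative request format):
curl http://localhost:5000/api/v1/generate -H "Content-Type: application/json" -d '{"prompt": "Hello, my name is", "max_new_tokens": 50}'
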
| Flag | Description | 
|---|---|
| --multimodal-pipeline PIPELINE | The multimodal pipeline to use. Examples: llava-7b, llava-13b. |

Inference settings presets can be created under presets/ as yaml files. These files are detected automatically at startup.
The presets that are included by default are the result of a contest that received 7215 votes. More details can be found here.
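A preset is simply a yaml file of generation parameters. As a hypothetical example (the parameter names below are common sampling settings and the values are placeholders; check the files shipped in presets/ for the exact keys that are supported):
cat > presets/MyPreset.yaml <<'EOF'
temperature: 0.7
top_p: 0.9
top_k: 40
repetition_penalty: 1.15
EOF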
If you would like to contribute to the project, check out the Contributing guidelines.
- Subreddit: https://www.reddit.com/r/oobabooga/
 - Discord: https://discord.gg/jwZCF2dPQN
 




