Merge pull request oobabooga#5927 from oobabooga/dev
Merge dev branch
oobabooga authored Apr 24, 2024
2 parents a4b732c + c9b0df1 commit ad12236
Showing 25 changed files with 105 additions and 389 deletions.
20 changes: 8 additions & 12 deletions README.md
@@ -107,16 +107,13 @@ pip install -r <requirements file according to table below>

Requirements file to use:

| GPU | CPU | requirements file to use |
|--------|---------|---------|
| NVIDIA | has AVX2 | `requirements.txt` |
| NVIDIA | no AVX2 | `requirements_noavx2.txt` |
| AMD | has AVX2 | `requirements_amd.txt` |
| AMD | no AVX2 | `requirements_amd_noavx2.txt` |
| CPU only | has AVX2 | `requirements_cpu_only.txt` |
| CPU only | no AVX2 | `requirements_cpu_only_noavx2.txt` |
| Apple | Intel | `requirements_apple_intel.txt` |
| Apple | Apple Silicon | `requirements_apple_silicon.txt` |
| GPU | requirements file to use |
|--------|---------|
| NVIDIA | `requirements.txt` |
| AMD | `requirements_amd.txt` |
| CPU only | `requirements_cpu_only.txt` |
| Apple Intel | `requirements_apple_intel.txt` |
| Apple Silicon | `requirements_apple_silicon.txt` |
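For readers who script their setup, the sketch below is a hypothetical helper (not part of this repository) that maps a machine to the simplified table above; the GPU vendor string is assumed to be supplied by the user.

```python
# Hypothetical helper (not in the repo): choose a requirements file from the
# simplified table above. The GPU vendor is assumed to be known already.
import platform

def pick_requirements(gpu: str = "CPU") -> str:
    if platform.system() == "Darwin":
        return ("requirements_apple_silicon.txt"
                if platform.machine() == "arm64"
                else "requirements_apple_intel.txt")
    return {
        "NVIDIA": "requirements.txt",
        "AMD": "requirements_amd.txt",
        "CPU": "requirements_cpu_only.txt",
    }.get(gpu, "requirements.txt")

print(pick_requirements("AMD"))  # requirements_amd.txt
```

The chosen file is then passed to `pip install -r` as described above.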

### Start the web UI

@@ -132,7 +129,7 @@ Then browse to

##### AMD GPU on Windows

1) Use `requirements_cpu_only.txt` or `requirements_cpu_only_noavx2.txt` in the command above.
1) Use `requirements_cpu_only.txt` in the command above.

2) Manually install llama-cpp-python using the appropriate command for your hardware: [Installation from PyPI](https://github.com/abetlen/llama-cpp-python#installation-with-hardware-acceleration).
* Use the `LLAMA_HIPBLAS=on` toggle.
@@ -255,7 +252,6 @@ List of command-line flags

| Flag | Description |
|-------------|-------------|
| `--tensorcores` | Use llama-cpp-python compiled with tensor cores support. This increases performance on RTX cards. NVIDIA only. |
| `--n_ctx N_CTX` | Size of the prompt context. |
| `--threads` | Number of threads to use. |
| `--threads-batch THREADS_BATCH` | Number of threads to use for batches/prompt processing. |
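The flags tabulated above are plain command-line options; the sketch below is illustrative only (the project's real flag definitions are not reproduced here) and shows how such options are typically declared and parsed with argparse.

```python
# Illustrative sketch only: declaring flags like the ones listed above with
# argparse. These are not the project's actual definitions.
import argparse

parser = argparse.ArgumentParser(description="llama.cpp loader flags (sketch)")
parser.add_argument("--n_ctx", type=int, default=2048,
                    help="Size of the prompt context.")
parser.add_argument("--threads", type=int, default=0,
                    help="Number of threads to use.")
parser.add_argument("--threads-batch", type=int, default=0,
                    help="Number of threads for batches/prompt processing.")

args = parser.parse_args(["--n_ctx", "4096", "--threads", "8"])
print(args.n_ctx, args.threads, args.threads_batch)  # 4096 8 0
```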
17 changes: 7 additions & 10 deletions docker/amd/docker-compose.yml
@@ -5,16 +5,13 @@ services:
context: .
args:
# Requirements file to use:
# | GPU | CPU | requirements file to use |
# |--------|---------|---------|
# | NVIDIA | has AVX2 | `requirements.txt` |
# | NVIDIA | no AVX2 | `requirements_noavx2.txt` |
# | AMD | has AVX2 | `requirements_amd.txt` |
# | AMD | no AVX2 | `requirements_amd_noavx2.txt` |
# | CPU only | has AVX2 | `requirements_cpu_only.txt` |
# | CPU only | no AVX2 | `requirements_cpu_only_noavx2.txt` |
# | Apple | Intel | `requirements_apple_intel.txt` |
# | Apple | Apple Silicon | `requirements_apple_silicon.txt` |
# | GPU | requirements file to use |
# |--------|---------|
# | NVIDIA | `requirements.txt` |
# | AMD | `requirements_amd.txt` |
# | CPU only | `requirements_cpu_only.txt` |
# | Apple Intel | `requirements_apple_intel.txt` |
# | Apple Silicon | `requirements_apple_silicon.txt` |
# Default: requirements.txt`
# BUILD_REQUIREMENTS: requirements.txt

17 changes: 7 additions & 10 deletions docker/cpu/docker-compose.yml
@@ -5,16 +5,13 @@ services:
context: .
args:
# Requirements file to use:
# | GPU | CPU | requirements file to use |
# |--------|---------|---------|
# | NVIDIA | has AVX2 | `requirements.txt` |
# | NVIDIA | no AVX2 | `requirements_noavx2.txt` |
# | AMD | has AVX2 | `requirements_amd.txt` |
# | AMD | no AVX2 | `requirements_amd_noavx2.txt` |
# | CPU only | has AVX2 | `requirements_cpu_only.txt` |
# | CPU only | no AVX2 | `requirements_cpu_only_noavx2.txt` |
# | Apple | Intel | `requirements_apple_intel.txt` |
# | Apple | Apple Silicon | `requirements_apple_silicon.txt` |
# | GPU | requirements file to use |
# |--------|---------|
# | NVIDIA | `requirements.txt` |
# | AMD | `requirements_amd.txt` |
# | CPU only | `requirements_cpu_only.txt` |
# | Apple Intel | `requirements_apple_intel.txt` |
# | Apple Silicon | `requirements_apple_silicon.txt` |
# Default: requirements.txt`
# BUILD_REQUIREMENTS: requirements.txt

21 changes: 9 additions & 12 deletions docker/intel/docker-compose.yml
@@ -5,22 +5,19 @@ services:
context: .
args:
# Requirements file to use:
# | GPU | CPU | requirements file to use |
# |--------|---------|---------|
# | NVIDIA | has AVX2 | `requirements.txt` |
# | NVIDIA | no AVX2 | `requirements_noavx2.txt` |
# | AMD | has AVX2 | `requirements_amd.txt` |
# | AMD | no AVX2 | `requirements_amd_noavx2.txt` |
# | CPU only | has AVX2 | `requirements_cpu_only.txt` |
# | CPU only | no AVX2 | `requirements_cpu_only_noavx2.txt` |
# | Apple | Intel | `requirements_apple_intel.txt` |
# | Apple | Apple Silicon | `requirements_apple_silicon.txt` |
# | GPU | requirements file to use |
# |--------|---------|
# | NVIDIA | `requirements.txt` |
# | AMD | `requirements_amd.txt` |
# | CPU only | `requirements_cpu_only.txt` |
# | Apple Intel | `requirements_apple_intel.txt` |
# | Apple Silicon | `requirements_apple_silicon.txt` |
# Default: requirements.txt`
# BUILD_REQUIREMENTS: requirements.txt

# Extension requirements to build:
# BUILD_EXTENSIONS:

# specify which cuda version your card supports: https://developer.nvidia.com/cuda-gpus
TORCH_CUDA_ARCH_LIST: ${TORCH_CUDA_ARCH_LIST:-7.5}
BUILD_EXTENSIONS: ${BUILD_EXTENSIONS:-}
2 changes: 1 addition & 1 deletion docker/nvidia/Dockerfile
@@ -18,4 +18,4 @@ COPY CMD_FLAGS.txt /home/app/text-generation-webui/
EXPOSE ${CONTAINER_PORT:-7860} ${CONTAINER_API_PORT:-5000} ${CONTAINER_API_STREAM_PORT:-5005}
WORKDIR /home/app/text-generation-webui
# set umask to ensure group read / write at runtime
CMD umask 0002 && export HOME=/home/app/text-generation-webui && ./start_linux.sh
CMD umask 0002 && export HOME=/home/app/text-generation-webui && ./start_linux.sh --listen
21 changes: 9 additions & 12 deletions docker/nvidia/docker-compose.yml
@@ -5,22 +5,19 @@ services:
context: .
args:
# Requirements file to use:
# | GPU | CPU | requirements file to use |
# |--------|---------|---------|
# | NVIDIA | has AVX2 | `requirements.txt` |
# | NVIDIA | no AVX2 | `requirements_noavx2.txt` |
# | AMD | has AVX2 | `requirements_amd.txt` |
# | AMD | no AVX2 | `requirements_amd_noavx2.txt` |
# | CPU only | has AVX2 | `requirements_cpu_only.txt` |
# | CPU only | no AVX2 | `requirements_cpu_only_noavx2.txt` |
# | Apple | Intel | `requirements_apple_intel.txt` |
# | Apple | Apple Silicon | `requirements_apple_silicon.txt` |
# | GPU | requirements file to use |
# |--------|---------|
# | NVIDIA | `requirements.txt` |
# | AMD | `requirements_amd.txt` |
# | CPU only | `requirements_cpu_only.txt` |
# | Apple Intel | `requirements_apple_intel.txt` |
# | Apple Silicon | `requirements_apple_silicon.txt` |
# Default: requirements.txt`
# BUILD_REQUIREMENTS: requirements.txt

# Extension requirements to build:
# BUILD_EXTENSIONS:

# specify which cuda version your card supports: https://developer.nvidia.com/cuda-gpus
TORCH_CUDA_ARCH_LIST: ${TORCH_CUDA_ARCH_LIST:-7.5}
BUILD_EXTENSIONS: ${BUILD_EXTENSIONS:-}
4 changes: 1 addition & 3 deletions docs/04 - Model Tab.md
@@ -21,7 +21,7 @@ Options:
* **alpha_value**: Used to extend the context length of a model with a minor loss in quality. I have measured 1.75 to be optimal for 1.5x context, and 2.5 for 2x context. That is, with alpha = 2.5 you can make a model with 4096 context length go to 8192 context length.
* **rope_freq_base**: Originally another way to write "alpha_value", it ended up becoming a necessary parameter for some models like CodeLlama, which was fine-tuned with this set to 1000000 and hence needs to be loaded with it set to 1000000 as well.
* **compress_pos_emb**: The first and original context-length extension method, discovered by [kaiokendev](https://kaiokendev.github.io/til). When set to 2, the context length is doubled, 3 and it's tripled, etc. It should only be used for models that have been fine-tuned with this parameter set to different than 1. For models that have not been tuned to have greater context length, alpha_value will lead to a smaller accuracy loss.
* **cpu**: Loads the model in CPU mode using Pytorch. The model will be loaded in 32-bit precision, so a lot of RAM will be used. CPU inference with transformers is older than llama.cpp and it works, but it's a lot slower. Note: this parameter has a different interpretation in the llama.cpp loader (see below).
* **cpu**: Loads the model in CPU mode using Pytorch. The model will be loaded in 32-bit precision, so a lot of RAM will be used. CPU inference with transformers is older than llama.cpp and it works, but it's a lot slower.
* **load-in-8bit**: Load the model in 8-bit precision using bitsandbytes. The 8-bit kernel in that library has been optimized for training and not inference, so load-in-8bit is slower than load-in-4bit (but more accurate).
* **bf16**: Use bfloat16 precision instead of float16 (the default). Only applies when quantization is not used.
* **auto-devices**: When checked, the backend will try to guess a reasonable value for "gpu-memory" to allow you to load a model with CPU offloading. I recommend just setting "gpu-memory" manually instead. This parameter is also needed for loading GPTQ models, in which case it needs to be checked before loading the model.
@@ -84,9 +84,7 @@ Example: https://huggingface.co/TheBloke/Llama-2-7b-Chat-GGUF
* **n_batch**: Batch size for prompt processing. Higher values are supposed to make generation faster, but I have never obtained any benefit from changing this value.
* **threads**: Number of threads. Recommended value: your number of physical cores.
* **threads_batch**: Number of threads for batch processing. Recommended value: your total number of cores (physical + virtual).
* **tensorcores**: Use llama.cpp compiled with "tensor cores" support, which improves performance on NVIDIA RTX cards in most cases.
* **streamingllm**: Experimental feature to avoid re-evaluating the entire prompt when part of it is removed, for instance, when you hit the context length for the model in chat mode and an old message is removed.
* **cpu**: Force a version of llama.cpp compiled without GPU acceleration to be used. Can usually be ignored. Only set this if you want to use CPU only and llama.cpp doesn't work otherwise.
* **no_mul_mat_q**: Disable the mul_mat_q kernel. This kernel usually improves generation speed significantly. This option to disable it is included in case it doesn't work on some system.
* **no-mmap**: Loads the model into memory at once, possibly preventing I/O operations later on at the cost of a longer load time.
* **mlock**: Force the system to keep the model in RAM rather than swapping or compressing (no idea what this means, never used it).
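As a rough companion to the options above, here is a minimal sketch of how several of them map onto llama-cpp-python's `Llama` constructor; the model path and the numeric values are assumptions for illustration, not project defaults.

```python
# Minimal sketch (assumed model path and values): approximate mapping from the
# UI options above to llama-cpp-python's Llama constructor arguments.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-2-7b-chat.Q4_K_M.gguf",  # assumed local GGUF file
    n_ctx=4096,          # prompt context size
    n_batch=512,         # n_batch: batch size for prompt processing
    n_threads=8,         # threads: number of physical cores
    n_threads_batch=16,  # threads_batch: physical + virtual cores
    use_mmap=True,       # set to False for the "no-mmap" behaviour
    use_mlock=False,     # "mlock": keep the model pinned in RAM
)

out = llm("Q: Name a planet. A:", max_tokens=8)
print(out["choices"][0]["text"])
```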
18 changes: 2 additions & 16 deletions modules/llama_cpp_python_hijack.py
@@ -1,25 +1,11 @@
from typing import Sequence

import llama_cpp
from tqdm import tqdm

from modules import shared
from modules.cache_utils import process_llamacpp_cache

try:
import llama_cpp
except:
llama_cpp = None

try:
import llama_cpp_cuda
except:
llama_cpp_cuda = None

try:
import llama_cpp_cuda_tensorcores
except:
llama_cpp_cuda_tensorcores = None


def eval_with_progress(self, tokens: Sequence[int]):
"""
@@ -81,7 +67,7 @@ def my_generate(self, *args, **kwargs):
lib.Llama.generate = my_generate


for lib in [llama_cpp, llama_cpp_cuda, llama_cpp_cuda_tensorcores]:
for lib in [llama_cpp]:
if lib is not None:
lib.Llama.eval = eval_with_progress
monkey_patch_generate(lib)
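For context, this module patches methods directly onto the `llama_cpp.Llama` class; the sketch below shows the same class-level monkey-patching pattern with a stand-in class, so the names are illustrative only.

```python
# Stand-in sketch of the monkey-patching pattern used above; FakeLlama is
# illustrative, not the real llama_cpp.Llama.
class FakeLlama:
    def eval(self, tokens):
        return f"evaluated {len(tokens)} tokens"

_original_eval = FakeLlama.eval  # keep a handle on the original method

def eval_with_progress(self, tokens):
    # the real hijack re-implements eval and shows a tqdm progress bar;
    # here we just log and delegate to the original
    print(f"processing {len(tokens)} tokens...")
    return _original_eval(self, tokens)

FakeLlama.eval = eval_with_progress  # patch at class level, as the hijack does

print(FakeLlama().eval([1, 2, 3]))
# processing 3 tokens...
# evaluated 3 tokens
```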
31 changes: 3 additions & 28 deletions modules/llamacpp_hf.py
@@ -2,6 +2,7 @@
from pathlib import Path
from typing import Any, Dict, Optional, Union

import llama_cpp
import torch
from torch.nn import CrossEntropyLoss
from transformers import GenerationConfig, PretrainedConfig, PreTrainedModel
@@ -10,32 +11,6 @@
from modules import RoPE, llama_cpp_python_hijack, shared
from modules.logging_colors import logger

try:
import llama_cpp
except:
llama_cpp = None

try:
import llama_cpp_cuda
except:
llama_cpp_cuda = None

try:
import llama_cpp_cuda_tensorcores
except:
llama_cpp_cuda_tensorcores = None


def llama_cpp_lib():
if shared.args.cpu and llama_cpp is not None:
return llama_cpp
elif shared.args.tensorcores and llama_cpp_cuda_tensorcores is not None:
return llama_cpp_cuda_tensorcores
elif llama_cpp_cuda is not None:
return llama_cpp_cuda
else:
return llama_cpp


class LlamacppHF(PreTrainedModel):
def __init__(self, model, path):
@@ -57,7 +32,7 @@ def __init__(self, model, path):
'n_tokens': self.model.n_tokens,
'input_ids': self.model.input_ids.copy(),
'scores': self.model.scores.copy(),
'ctx': llama_cpp_lib().llama_new_context_with_model(model.model, model.context_params)
'ctx': llama_cpp.llama_new_context_with_model(model.model, model.context_params)
}

def _validate_model_class(self):
@@ -220,7 +195,7 @@ def from_pretrained(cls, pretrained_model_name_or_path: Optional[Union[str, os.P
'split_mode': 1 if not shared.args.row_split else 2
}

Llama = llama_cpp_lib().Llama
Llama = llama_cpp.Llama
model = Llama(**params)

return LlamacppHF(model, model_file)
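The removed `llama_cpp_lib()` helper picked one of three optional wheels based on the `--cpu` and `--tensorcores` flags; for reference, the sketch below shows a generic version of that fallback-import pattern (the flag handling is omitted, so the priority order here is only illustrative).

```python
# Generic sketch of the optional-import fallback that this commit removes.
# The old code also consulted the --cpu / --tensorcores flags; omitted here.
import importlib

def import_first_available(*names):
    for name in names:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    return None

backend = import_first_available(
    "llama_cpp_cuda_tensorcores", "llama_cpp_cuda", "llama_cpp"
)
print(backend.__name__ if backend else "no llama-cpp-python wheel installed")
```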
35 changes: 5 additions & 30 deletions modules/llamacpp_model.py
@@ -1,6 +1,7 @@
import re
from functools import partial

import llama_cpp
import numpy as np
import torch

@@ -9,32 +10,6 @@
from modules.logging_colors import logger
from modules.text_generation import get_max_prompt_length

try:
import llama_cpp
except:
llama_cpp = None

try:
import llama_cpp_cuda
except:
llama_cpp_cuda = None

try:
import llama_cpp_cuda_tensorcores
except:
llama_cpp_cuda_tensorcores = None


def llama_cpp_lib():
if shared.args.cpu and llama_cpp is not None:
return llama_cpp
elif shared.args.tensorcores and llama_cpp_cuda_tensorcores is not None:
return llama_cpp_cuda_tensorcores
elif llama_cpp_cuda is not None:
return llama_cpp_cuda
else:
return llama_cpp


def ban_eos_logits_processor(eos_token, input_ids, logits):
logits[eos_token] = -float('inf')
@@ -60,8 +35,8 @@ def __del__(self):
@classmethod
def from_pretrained(self, path):

Llama = llama_cpp_lib().Llama
LlamaCache = llama_cpp_lib().LlamaCache
Llama = llama_cpp.Llama
LlamaCache = llama_cpp.LlamaCache

result = self()
cache_capacity = 0
@@ -126,12 +101,12 @@ def load_grammar(self, string):
if string != self.grammar_string:
self.grammar_string = string
if string.strip() != '':
self.grammar = llama_cpp_lib().LlamaGrammar.from_string(string)
self.grammar = llama_cpp.LlamaGrammar.from_string(string)
else:
self.grammar = None

def generate(self, prompt, state, callback=None):
LogitsProcessorList = llama_cpp_lib().LogitsProcessorList
LogitsProcessorList = llama_cpp.LogitsProcessorList
prompt = prompt if type(prompt) is str else prompt.decode()

# Handle truncation
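With the fallback gone, the loader takes `LlamaGrammar`, `LlamaCache`, and `LogitsProcessorList` straight from the `llama_cpp` namespace; the sketch below (assuming `llama-cpp-python` is installed) builds the two helper objects used above, with an arbitrary example grammar.

```python
# Brief sketch (assumes llama-cpp-python is installed): the helper objects the
# loader now pulls directly from the llama_cpp namespace.
from functools import partial

import llama_cpp

# An arbitrary GBNF grammar constraining output to "yes" or "no".
grammar = llama_cpp.LlamaGrammar.from_string('root ::= "yes" | "no"')

# A logits processor shaped like ban_eos_logits_processor above: the EOS token
# id is bound with functools.partial before the list is built.
def ban_eos(eos_token_id, input_ids, logits):
    logits[eos_token_id] = -float("inf")
    return logits

processors = llama_cpp.LogitsProcessorList([partial(ban_eos, 2)])
print(type(grammar).__name__, len(processors))  # LlamaGrammar 1
```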
4 changes: 0 additions & 4 deletions modules/loaders.py
@@ -41,11 +41,9 @@
'alpha_value',
'rope_freq_base',
'compress_pos_emb',
'cpu',
'numa',
'no_offload_kqv',
'row_split',
'tensorcores',
'streaming_llm',
'attention_sink_size',
],
@@ -62,15 +60,13 @@
'alpha_value',
'rope_freq_base',
'compress_pos_emb',
'cpu',
'numa',
'cfg_cache',
'trust_remote_code',
'no_use_fast',
'logits_all',
'no_offload_kqv',
'row_split',
'tensorcores',
'streaming_llm',
'attention_sink_size',
'llamacpp_HF_info',
8 changes: 4 additions & 4 deletions modules/models.py
@@ -179,7 +179,7 @@ def huggingface_loader(model_name):

# DeepSpeed ZeRO-3
elif shared.args.deepspeed:
model = LoaderClass.from_pretrained(path_to_model, torch_dtype=params['torch_dtype'], trust_remote_code=params['trust_remote_code'])
model = LoaderClass.from_pretrained(path_to_model, torch_dtype=params['torch_dtype'], trust_remote_code=params.get('trust_remote_code'))
model = deepspeed.initialize(model=model, config_params=ds_config, model_parameters=None, optimizer=None, lr_scheduler=None)[0]
model.module.eval() # Inference
logger.info(f'DeepSpeed ZeRO-3 is enabled: {is_deepspeed_zero3_enabled()}')
@@ -215,15 +215,15 @@ def huggingface_loader(model_name):
else:
params['quantization_config'] = BitsAndBytesConfig(load_in_8bit=True)

if params['max_memory'] is not None:
if params.get('max_memory') is not None:
with init_empty_weights():
model = LoaderClass.from_config(config, trust_remote_code=params['trust_remote_code'])
model = LoaderClass.from_config(config, trust_remote_code=params.get('trust_remote_code'))

model.tie_weights()
params['device_map'] = infer_auto_device_map(
model,
dtype=torch.int8,
max_memory=params['max_memory'],
max_memory=params.get('max_memory'),
no_split_module_classes=model._no_split_modules
)
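The switch from `params['max_memory']` to `params.get('max_memory')` (and likewise for `trust_remote_code`) means a missing key yields `None` instead of raising `KeyError`; a tiny self-contained illustration with a made-up params dict:

```python
# Tiny illustration (made-up dict): dict.get() returns None for a missing key
# instead of raising KeyError, which is why the loader now uses params.get().
params = {'torch_dtype': 'float16'}  # 'max_memory' intentionally absent

print(params.get('max_memory'))         # None -- no exception
print(params.get('trust_remote_code'))  # None

if params.get('max_memory') is not None:
    print('would infer a device map here')
else:
    print('no max_memory given; skipping device-map inference')

try:
    params['max_memory']
except KeyError as exc:
    print('direct indexing raises:', type(exc).__name__)  # KeyError
```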
