
[Usage]: Can I use vllm.LLM(quantization="bitsandbytes"...) now that bitsandbytes is supported in v0.5.0? #5480

Closed
cywuuuu opened this issue Jun 13, 2024 · 12 comments
Labels
stale, usage (How to use vllm)

Comments

@cywuuuu

cywuuuu commented Jun 13, 2024

Your current environment

The output of `python collect_env.py`
PyTorch version: 2.3.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.29.3
Libc version: glibc-2.35

Python version: 3.10.8 (main, Nov 24 2022, 14:13:03) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-91-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA L20
Nvidia driver version: 550.54.14
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                       x86_64
CPU op-mode(s):                     32-bit, 64-bit
Address sizes:                      52 bits physical, 57 bits virtual
Byte Order:                         Little Endian
CPU(s):                             180
On-line CPU(s) list:                0-179
Vendor ID:                          GenuineIntel
Model name:                         Intel(R) Xeon(R) Platinum 8457C
CPU family:                         6
Model:                              143
Thread(s) per core:                 2
Core(s) per socket:                 45
Socket(s):                          2
Stepping:                           8
BogoMIPS:                           5200.00
Flags:                              fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx_vnni avx512_bf16 wbnoinvd arat avx512vbmi umip pku ospke waitpkg avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid cldemote movdiri movdir64b fsrm md_clear serialize tsxldtrk arch_lbr amx_bf16 avx512_fp16 amx_tile amx_int8 arch_capabilities
Hypervisor vendor:                  KVM
Virtualization type:                full
L1d cache:                          4.2 MiB (90 instances)
L1i cache:                          2.8 MiB (90 instances)
L2 cache:                           180 MiB (90 instances)
L3 cache:                           195 MiB (2 instances)
NUMA node(s):                       2
NUMA node0 CPU(s):                  0-89
NUMA node1 CPU(s):                  90-179
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit:        Not affected
Vulnerability L1tf:                 Not affected
Vulnerability Mds:                  Not affected
Vulnerability Meltdown:             Not affected
Vulnerability Mmio stale data:      Unknown: No mitigations
Vulnerability Retbleed:             Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass:    Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:           Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:           Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds:                Not affected
Vulnerability Tsx async abort:      Mitigation; TSX disabled

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.3
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] sentence-transformers==2.7.0
[pip3] torch==2.3.0
[pip3] torchvision==0.16.2+cu121
[pip3] transformers==4.40.0
[pip3] triton==2.3.0
[pip3] vllm-nccl-cu12==2.18.1.0.4.0
[conda] numpy                     1.26.3                   pypi_0    pypi
[conda] nvidia-nccl-cu12          2.20.5                   pypi_0    pypi
[conda] sentence-transformers     2.7.0                    pypi_0    pypi
[conda] torch                     2.1.2+cu121              pypi_0    pypi
[conda] torchvision               0.16.2+cu121             pypi_0    pypi
[conda] transformers              4.40.0                   pypi_0    pypi
[conda] triton                    2.3.0                    pypi_0    pypi
[conda] vllm-nccl-cu12            2.18.1.0.4.0             pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.4.2
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    NIC0    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      SYS     90-179  1               N/A
NIC0    SYS      X 

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0

How would you like to use vllm

I want to run inference of a Mixtral model fine-tuned with QLoRA using bitsandbytes, but I don't know how to integrate it with vLLM.
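
Concretely, this is roughly what I was hoping to be able to run (a rough, untested sketch; the Mixtral model id is only a placeholder, and I am assuming the quantization/load_format values advertised for v0.5.0):

from vllm import LLM, SamplingParams

# Rough sketch (assumptions, not verified): load the base model with
# bitsandbytes quantization applied at load time. The model id below is
# only a placeholder.
llm = LLM(
    model="mistralai/Mixtral-8x7B-Instruct-v0.1",
    quantization="bitsandbytes",   # assuming this value is accepted in v0.5.0
    load_format="bitsandbytes",    # assuming weights are quantized on load
    dtype="half",
)

outputs = llm.generate(
    ["Give me a short introduction to vLLM."],
    SamplingParams(temperature=0.0, max_tokens=64),
)
print(outputs[0].outputs[0].text)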

@cywuuuu cywuuuu added the usage How to use vllm label Jun 13, 2024
@cywuuuu
Author

cywuuuu commented Jun 13, 2024

But actually I didn't see it supported in vllm/entrypoints/llm.py:

from contextlib import contextmanager
from typing import ClassVar, List, Optional, Sequence, Union, cast, overload

from tqdm import tqdm
from transformers import PreTrainedTokenizer, PreTrainedTokenizerFast

from vllm.engine.arg_utils import EngineArgs
from vllm.engine.llm_engine import LLMEngine
from vllm.inputs import (PromptInputs, PromptStrictInputs, TextPrompt,
                         TextTokensPrompt, TokensPrompt,
                         parse_and_batch_prompt)
from vllm.logger import init_logger
from vllm.lora.request import LoRARequest
from vllm.outputs import EmbeddingRequestOutput, RequestOutput
from vllm.pooling_params import PoolingParams
from vllm.sampling_params import SamplingParams
from vllm.transformers_utils.tokenizer import get_cached_tokenizer
from vllm.usage.usage_lib import UsageContext
from vllm.utils import Counter, deprecate_kwargs

logger = init_logger(__name__)


class LLM:
    """An LLM for generating texts from given prompts and sampling parameters.

    This class includes a tokenizer, a language model (possibly distributed
    across multiple GPUs), and GPU memory space allocated for intermediate
    states (aka KV cache). Given a batch of prompts and sampling parameters,
    this class generates texts from the model, using an intelligent batching
    mechanism and efficient memory management.

    Args:
        model: The name or path of a HuggingFace Transformers model.
        tokenizer: The name or path of a HuggingFace Transformers tokenizer.
        tokenizer_mode: The tokenizer mode. "auto" will use the fast tokenizer
            if available, and "slow" will always use the slow tokenizer.
        skip_tokenizer_init: If true, skip initialization of tokenizer and
            detokenizer. Expect valid prompt_token_ids and None for prompt
            from the input.
        trust_remote_code: Trust remote code (e.g., from HuggingFace) when
            downloading the model and tokenizer.
        tensor_parallel_size: The number of GPUs to use for distributed
            execution with tensor parallelism.
        dtype: The data type for the model weights and activations. Currently,
            we support `float32`, `float16`, and `bfloat16`. If `auto`, we use
            the `torch_dtype` attribute specified in the model config file.
            However, if the `torch_dtype` in the config is `float32`, we will
            use `float16` instead.
        quantization: The method used to quantize the model weights. Currently,
            we support "awq", "gptq", "squeezellm", and "fp8" (experimental).
            If None, we first check the `quantization_config` attribute in the
            model config file. If that is None, we assume the model weights are
            not quantized and use `dtype` to determine the data type of
            the weights.

@orange-jams

FYI: https://github.com/vllm-project/vllm/blob/v0.5.0/examples/lora_with_quantization_inference.py#L82

It seems that bitsandbytes support in vLLM currently only covers the Llama model.
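
Roughly, the relevant part of that script does something like the following (a condensed sketch, not a verbatim copy; the model and adapter paths are placeholders, and the exact keyword arguments should be checked against the linked example):

from vllm import LLM, SamplingParams
from vllm.lora.request import LoRARequest

# Condensed sketch of bitsandbytes + QLoRA inference, loosely following the
# v0.5.0 example script. The model id and adapter path are placeholders.
llm = LLM(
    model="huggyllama/llama-7b",
    quantization="bitsandbytes",
    load_format="bitsandbytes",
    enable_lora=True,
    qlora_adapter_name_or_path="path/to/qlora-adapter",
)

outputs = llm.generate(
    ["Hello, my name is"],
    SamplingParams(temperature=0.0, max_tokens=64),
    lora_request=LoRARequest("qlora-adapter", 1, "path/to/qlora-adapter"),
)
print(outputs[0].outputs[0].text)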

@jeejeelee
Collaborator

ping @chenqianfzh

@QwertyJack
Contributor

Does it support llama3?

@cywuuuu
Author

cywuuuu commented Jun 14, 2024

Does it support llama3?

I am not sure. I am trying to do this with Mixtral, and it does not seem to work.

@jeejeelee
Collaborator

Does it support llama3?

I am not sure. I am trying to do this with Mixtral, and it does not seem to work.

Currently, Mixtral does not support bitsandbytes, but Llama 3 should work.

@simeneide

When I try to load 'meta-llama/Meta-Llama-3-70B-Instruct' I get the following error:

File ~/.local/lib/python3.10/site-packages/vllm/model_executor/models/llama.py:435, in LlamaForCausalLM.load_weights(self, weights)
    433     else:
    434         name = remapped_kv_scale_name
--> 435 param = params_dict[name]
    436 weight_loader = getattr(param, "weight_loader",
    437                         default_weight_loader)
    438 weight_loader(param, loaded_weight)

KeyError: 'model.layers.46.mlp.down_proj.weight'

@QwertyJack
Contributor

When loading Llama3-8B-Instruct I got garbage output: #5569

@ekmekovski

When you use arguments for the engine like the ones below:
python3 -m vllm.entrypoints.openai.api_server --host x.x.x.x --port xxxx --model xxxx/Llama3-8b --quantization bitsandbytes --enforce_eager --dtype half
Which quantization method will this make it use?
Thanks a lot.


This issue has been automatically marked as stale because it has not had any activity within 90 days. It will be automatically closed if no further activity occurs within 30 days. Leave a comment if you feel this issue should remain open. Thank you!

@github-actions github-actions bot added the stale label Nov 20, 2024

This issue has been automatically closed due to inactivity. Please feel free to reopen if you feel it is still relevant. Thank you!

@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) Dec 21, 2024