[Bug]: BitsandBytes quantization is not working as expected #5569
cc @mgoin |
Thanks for reporting this issue @QwertyJack! I have diagnosed the first issue: bitsandbytes does not seem to function with CUDAGraphs enabled. We have a test case for the format, but it always runs with eager mode enforced, so this was never caught. The second issue is that enforcing eager mode isn't sufficient to produce good results: I still see gibberish in the output of Llama 3 8B even with it enabled. Example:
Client:
|
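For readers hitting the CUDAGraph problem described above, a minimal workaround sketch with the offline Python API (the model name and prompt are illustrative, not taken from the elided example):

```python
from vllm import LLM, SamplingParams

# Sketch only: load an unquantized checkpoint with in-flight bitsandbytes quantization
# and skip CUDAGraph capture, which bitsandbytes does not yet support.
llm = LLM(
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    quantization="bitsandbytes",
    load_format="bitsandbytes",
    enforce_eager=True,
)
outputs = llm.generate(["Hi! How are you?"], SamplingParams(max_tokens=64))
print(outputs[0].outputs[0].text)
```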
Thanks for confirming! In addition, my testing indicates that Llama3-8B-Instruct works fine under transformers with both BnB 8-bit and 4-bit quantization:

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('/models/Meta-Llama-3-8B-Instruct')
model = AutoModelForCausalLM.from_pretrained('/models/Meta-Llama-3-8B-Instruct', load_in_4bit=True)
messages = [{"role": "user", "content": "Hi!"}]
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(input_ids)
response = outputs[0][input_ids.shape[-1]:]
print(tokenizer.decode(response))
# Will output:
#
# Hi! It's nice to meet you. Is there something I can help you with, or would you like to chat?

Btw, can I specify 8-bit or 4-bit for BnB quant in the vLLM serving API? |
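On the 8-bit vs. 4-bit question: in transformers the choice is made explicitly through BitsAndBytesConfig, while vLLM's bitsandbytes path only implements the 4-bit (NF4) scheme at this point, as other replies in this thread suggest. A sketch of the transformers side for comparison (the local model path is the same assumption as above):

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit NF4, the scheme vLLM's bitsandbytes path uses; for 8-bit in transformers,
# use BitsAndBytesConfig(load_in_8bit=True) instead.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    "/models/Meta-Llama-3-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
```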
Hello @QwertyJack, @mgoin, I also had a problem with Llama 3 using bitsandbytes quantization via the OpenAI endpoint:

python3 -m vllm.entrypoints.openai.api_server --model meta-llama/Meta-Llama-3-8B-Instruct --load-format bitsandbytes --quantization bitsandbytes --enforce-eager --gpu-memory-utilization 0.85

Then, using
However, as tested by @chenqianfzh in lora_with_quantization_inference.py,

python3 -m vllm.entrypoints.openai.api_server --model huggyllama/llama-7b --load-format bitsandbytes --quantization bitsandbytes --enforce-eager --gpu-memory-utilization 0.85

Then,
I installed

My current environment
Hope it helps |
Same here. It works when I use the LLM class directly, but I get the same error when I use |
+1 to this question. It seems like currently only 4-bit on bitsandbytes is supported? |
It appears that the model isn't being quantized properly. I used the script below and printed the parameters of the loaded model. The linear layers (mlp, attention) are quantized, but others are not.

from vllm import LLM, SamplingParams

llm = LLM(model="meta-llama/Meta-Llama-3-70B-Instruct", quantization="bitsandbytes", load_format="bitsandbytes", enforce_eager=True)
print(llm.llm_engine.model_executor.driver_worker.model_runner.model.state_dict())

The output shows the following:
The |
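A rough way to see which parameters actually got quantized, building on the script above: with bitsandbytes 4-bit, packed weights are stored as uint8 tensors (an assumption carried over from how bitsandbytes itself represents them), while untouched parameters keep a floating-point dtype. Sketch only; it reuses the `llm` object from the script above:

```python
import torch

# Print each parameter's dtype and shape; packed uint8 tensors indicate bnb-quantized layers.
state = llm.llm_engine.model_executor.driver_worker.model_runner.model.state_dict()
for name, tensor in state.items():
    tag = "packed uint8 (bnb)" if tensor.dtype == torch.uint8 else str(tensor.dtype)
    print(f"{name:70s} {tag:20s} {tuple(tensor.shape)}")
```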
@chenqianfzh can you please look into this issue? I agree this looks like it might be a culprit |
@QwertyJack |
@QwertyJack @vrdn-23 |
@chenweize1998 |
@lixuechenAlfred Got it. I haven't worked with quantized models much before; thanks for bringing it up. Then the problem must be caused by something else. Do you have any idea why serving quantized Llama 3 8B in vLLM gives poor results? |
If it helps, I might add that I read somewhere that one of the reasons Llama 3 8B is so good is because it's "over-trained", but that a downside to this means that (a) it doesn't fine-tune well and (b) it doesn't quantize to int4 or int8 well. Not sure if that's helpful, but it could just be a model limitation. |
@K-Mistele I think the issue being discussed here is more along the lines of "seeing different behavior" between bitsandbytes quantization through vLLM and quantization directly through Hugging Face. I've also been seeing differences in output quality with a fine-tuned model of my own between the vLLM-hosted version and using it directly from Hugging Face, so there might be a deeper bug somewhere here! |
The issue of bnb with Llama 3 has been root-caused: it is a bug in processing GQA. I found that PR #5753 from @thesues fixes this issue. In particular, the following change in loader.py does the job:

Yet, PR #5753 does more than fix that bug; it also covers loading pre-quantized bnb models. Could you take a look and upstream it if it looks right to you? Thanks. |
BTW, the issue that gibberish is output when eager_mode == False looks like it is caused by something else. I am working on it now. |
@chenweize1998 I believe the reason why quantized Llama 3 gives poor results lies in GQA, and now it seems the contributor @chenqianfzh confirms my opinion. Please refer to his reply. |
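To make the GQA point concrete: with grouped-query attention, the fused QKV weight consists of shards of different sizes (Q spans all query heads, K and V only the KV heads), so the bnb shard offsets must be derived from each shard's own shape rather than from an equal three-way split. A back-of-the-envelope sketch using Llama 3 8B's published attention dimensions (the pack ratio of 2 for 4-bit weights is an assumption):

```python
import math

import numpy as np

# Llama 3 8B attention dimensions (public config values).
hidden_size, head_dim = 4096, 128
num_q_heads, num_kv_heads = 32, 8
pack_ratio = 2  # two 4-bit values packed per byte (assumption)

# Per-shard shapes of the fused QKV projection weight: Q, then K, then V.
shard_shapes = [
    (num_q_heads * head_dim, hidden_size),   # Q: (4096, 4096)
    (num_kv_heads * head_dim, hidden_size),  # K: (1024, 4096)
    (num_kv_heads * head_dim, hidden_size),  # V: (1024, 4096)
]
num_elements = [math.prod(shape) // pack_ratio for shape in shard_shapes]
offsets = np.concatenate(([0], np.cumsum(num_elements)))
print(offsets)  # -> 0, 8388608, 10485760, 12582912: unequal shards under GQA
```

If a loader instead assumed equal Q/K/V shard sizes, which is one plausible reading of the GQA bug described above, the K and V boundaries would land in the wrong place for any GQA model.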
Tried to load aya 23 35B, but getting
|
You are right, only Llama is supported for now. More models to come. |
Thanks for your reply! Unfortunately, even with @thesues 's #5753 the issue still exists:

$ grep -A 5 -B 2 'for seq, quant_state in' $PYTHON_SITE_PACKAGE_PATH/vllm/model_executor/model_loader/loader.py
num_elements = [0] * len(quant_states)
for seq, quant_state in quant_states.items():
num_elements[seq] = math.prod(
quant_state.shape) // pack_ratio
offsets = np.concatenate(([0], np.cumsum(num_elements)))
set_weight_attrs(param, {"bnb_shard_offsets": offsets})
$ python -m vllm.entrypoints.openai.api_server --dtype half --kv-cache-dtype fp8 --served-model-name llama3-8b --model /data/models/llama3-bnb-nf4 --load-format bitsandbytes --quantization bitsandbytes
INFO 07-16 06:35:43 api_server.py:212] vLLM API server version 0.5.2
INFO 07-16 06:35:43 api_server.py:213] args: Namespace(host=None, port=8000, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key=None, lora_modules=None, prompt_adapters=None, chat_template=None, response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], model='/data/models/llama3-bnb-nf4', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, download_dir=None, load_format='bitsandbytes', dtype='half', kv_cache_dtype='fp8', quantization_param_path=None, max_model_len=None, guided_decoding_backend='outlines', distributed_executor_backend=None, worker_use_ray=False, pipeline_parallel_size=1, tensor_parallel_size=1, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=16, enable_prefix_caching=False, disable_sliding_window=False, use_v2_block_manager=False, num_lookahead_slots=0, seed=0, swap_space=4, gpu_memory_utilization=1.0, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=256, max_logprobs=20, disable_log_stats=False, quantization='bitsandbytes', rope_scaling=None, rope_theta=None, enforce_eager=False, max_context_len_to_capture=None, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, enable_lora=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', scheduler_delay_factor=0.0, enable_chunked_prefill=False, speculative_model=None, num_speculative_tokens=None, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, model_loader_extra_config=None, preemption_mode=None, served_model_name=['llama3-8b'], qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, engine_use_ray=False, disable_log_requests=False, max_log_len=None)
WARNING 07-16 06:35:43 config.py:241] bitsandbytes quantization is not fully optimized yet. The speed can be slower than non-quantized models.
...
INFO: Started server process [1782426]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
$ curl http://localhost:8000/v1/chat/completions \
-H "Content-Type: application/json" -d '{"model": "llama3-8b", "messages": [{"role": "user", "content": "Hi! How are you?"}], "max_tokens": 128}'
{"id":"cmpl-96f358a0c68a4fc6a790b055061ce8ae","object":"chat.completion","created":1721108466,"model":"llama3-8b","choices":[{"index":0,"message":{"role":"assistant","content":"I!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!","tool_calls":[]},"logprobs":null,"finish_reason":"length","stop_reason":null}],"usage":{"prompt_tokens":15,"total_tokens":143,"completion_tokens":128}}

It seems that its output is full of |
I'm observing the same issue. |
Can you try the latest vLLM, which includes commit 87525fa? |
I compiled vLLM from the latest main branch, i.e. #443c7cf4, and in short the latest commit does not help. Here are the detailed results:

Pre-quant with BnB 8-bit: fails to load
Pre-quant with BnB 4-bit: repeated output
Original checkpoint with dynamic BnB quant: repeated output
|
In addition, the same issue happens with the pre-quant llama3.1-8b. |
@QwertyJack
no gibberish found
and for pre-quant, I think I tested the model below; it works.
BTW, @chenqianfzh shall we enforce eager mode for bitsandbytes in the code? |
@thesues and I had an offline discussion earlier today. We will work together on bitsandbytes in vLLM. The following is the list of items that we are working on or will work on:
Root cause unknown yet. I just sent out a workaround PR, #6846, to enforce eager mode with bnb temporarily. Hope it causes less confusion while we are still working on the bug.
Please let us know if you see any problems with this plan. Thanks. |
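For readers following along, the temporary workaround in #6846 amounts to falling back to eager mode whenever bitsandbytes is selected. A simplified, hypothetical sketch of that kind of guard (not the actual vLLM code; the function and config attribute names are assumptions):

```python
import logging

logger = logging.getLogger(__name__)


def apply_bnb_eager_workaround(model_config) -> None:
    """Force eager execution when bitsandbytes quantization is requested.

    Hypothetical helper illustrating the idea behind the temporary workaround;
    `model_config` is assumed to expose `quantization` and `enforce_eager`.
    """
    if model_config.quantization == "bitsandbytes" and not model_config.enforce_eager:
        logger.warning(
            "CUDA graphs are not supported with bitsandbytes yet; "
            "falling back to eager mode."
        )
        model_config.enforce_eager = True
```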
@chenqianfzh This sounds good to me. Thank you for sharing the plan! |
You are right!
Actually, I mean

Anyway, many thanks for your great work! |
#7320 adds bitsandbytes fp4 support, please review |
Same error with qwen2:

vllm serve /ai/qwen2-7b --host 0.0.0.0 --port 10860 --max-model-len 4096 --trust-remote-code --tensor-parallel-size 1 --dtype=half --quantization bitsandbytes --load-format bitsandbytes --enforce-eager |
This issue, bitsandbytes-foundation/bitsandbytes#1308, got fixed. Will this make it possible not to enforce eager mode? @chenqianfzh @kylesayrs |
Hi there, is the version of bitsandbytes with the fixes already included in vLLM? I'm able to reproduce this issue on the latest vLLM Docker image (vllm/vllm-openai:v0.6.3.post1). The following two models generate gibberish when used:
The problem is only resolved if I set --enforce-eager. Bear in mind that without this flag the issue seems to become more common as the number of context tokens increases (e.g., a very short message might be processed correctly, but as tokens are added, the likelihood of gibberish output is higher). |
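For anyone who prefers to reproduce this programmatically rather than with curl, a small client sketch against the OpenAI-compatible endpoint (the base URL, API key, and served model name are assumptions), contrasting a short prompt with a longer one since the report above suggests longer contexts trigger the gibberish more often:

```python
from openai import OpenAI

# Assumes a vLLM OpenAI-compatible server on localhost:8000 serving "llama3-8b".
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

for prompt in ["Hi!", "Hi! How are you? " * 50]:  # short vs. longer context
    resp = client.chat.completions.create(
        model="llama3-8b",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=128,
    )
    print(repr(resp.choices[0].message.content)[:200])
```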
Your current environment
🐛 Describe the bug
With the latest bitsandbytes quantization feature, the official Llama3-8B-Instruct produces garbage.
Start the server:
Test the service: