[Bug]: vllm serve works incorrectly for (some) Vision LM models #10286

Closed
1 task done
Aktsvigun opened this issue Nov 13, 2024 · 28 comments · Fixed by #9919
Labels
bug Something isn't working

Comments

@Aktsvigun

Your current environment

The output of `python collect_env.py`
Collecting environment information...
PyTorch version: 2.4.0+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.5 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.35

Python version: 3.11.0rc1 (main, Aug 12 2022, 10:02:14) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.15.0-124-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: 
GPU 0: NVIDIA H100 80GB HBM3
GPU 1: NVIDIA H100 80GB HBM3
GPU 2: NVIDIA H100 80GB HBM3
GPU 3: NVIDIA H100 80GB HBM3

Nvidia driver version: 565.57.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                         x86_64
CPU op-mode(s):                       32-bit, 64-bit
Address sizes:                        43 bits physical, 57 bits virtual
Byte Order:                           Little Endian
CPU(s):                               80
On-line CPU(s) list:                  0-79
Vendor ID:                            GenuineIntel
Model name:                           Intel Xeon Processor (Icelake)
CPU family:                           6
Model:                                106
Thread(s) per core:                   2
Core(s) per socket:                   40
Socket(s):                            1
Stepping:                             0
BogoMIPS:                             4200.00
Flags:                                fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced fsgsbase bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 wbnoinvd arat avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq la57 rdpid fsrm md_clear arch_capabilities
Hypervisor vendor:                    KVM
Virtualization type:                  full
L1d cache:                            2.5 MiB (80 instances)
L1i cache:                            2.5 MiB (80 instances)
L2 cache:                             160 MiB (40 instances)
L3 cache:                             16 MiB (1 instance)
NUMA node(s):                         1
NUMA node0 CPU(s):                    0-79
Vulnerability Gather data sampling:   Unknown: Dependent on hypervisor status
Vulnerability Itlb multihit:          Not affected
Vulnerability L1tf:                   Not affected
Vulnerability Mds:                    Not affected
Vulnerability Meltdown:               Not affected
Vulnerability Mmio stale data:        Vulnerable: Clear CPU buffers attempted, no microcode; SMT Host state unknown
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed:               Not affected
Vulnerability Spec rstack overflow:   Not affected
Vulnerability Spec store bypass:      Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1:             Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2:             Mitigation; Enhanced / Automatic IBRS; IBPB conditional; RSB filling; PBRSB-eIBRS SW sequence; BHI SW loop, KVM SW loop
Vulnerability Srbds:                  Not affected
Vulnerability Tsx async abort:        Not affected

Versions of relevant libraries:
[pip3] mypy-extensions==1.0.0
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.1.3.1
[pip3] nvidia-cuda-cupti-cu12==12.1.105
[pip3] nvidia-cuda-nvrtc-cu12==12.1.105
[pip3] nvidia-cuda-runtime-cu12==12.1.105
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.0.2.54
[pip3] nvidia-curand-cu12==10.3.2.106
[pip3] nvidia-cusolver-cu12==11.4.5.107
[pip3] nvidia-cusparse-cu12==12.1.0.106
[pip3] nvidia-ml-py==12.560.30
[pip3] nvidia-nccl-cu12==2.20.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.1.105
[pip3] pynvml==11.5.3
[pip3] pytorch-triton==3.1.0+cf34004b8a
[pip3] pyzmq==26.2.0
[pip3] torch==2.4.0
[pip3] torchaudio==2.5.0.dev20241105+cu121
[pip3] torchvision==0.19.0
[pip3] transformers==4.46.2
[pip3] triton==3.0.0
[conda] Could not collect
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.6.3.post1
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV18    NV18    NV18    0-79    0               N/A
GPU1    NV18     X      NV18    NV18    0-79    0               N/A
GPU2    NV18    NV18     X      NV18    0-79    0               N/A
GPU3    NV18    NV18    NV18     X      0-79    0               N/A

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

LD_LIBRARY_PATH=/mnt/share/ai_studio/.venv/lib/python3.11/site-packages/cv2/../../lib64:/usr/local/cuda-12.1/lib64:
CUDA_MODULE_LOADING=LAZY

Model Input Dumps

No response

🐛 Describe the bug

I am running the Vision LM model llava-hf/llava-1.5-13b-hf via vllm serve, and it produces strange outputs: the official script from the vllm examples, with top_p lowered for better determinism, outputs only '\n' tokens:

image_url = "https://wallpapers.com/images/featured/high-resolution-gfinds1akzwf6vcq.jpg"
chat_completion_from_url = client.chat.completions.create(
    messages=[{
        "role":
        "user",
        "content": [
            {
                "type": "text",
                "text": "hey"
            },
            {
                "type": "image_url",
                "image_url": {
                    "url": image_url
                },
            },
        ],
    }],
    model="llava-hf/llava-1.5-13b-hf",
    max_tokens=32,
    top_p=0.1
)

result = chat_completion_from_url.choices[0].message.content
print("Chat completion output from image url:", result)

# This outputs the '\n' token 32 times.

I launch the vllm server according to this official script:

vllm serve llava-hf/llava-1.5-13b-hf --chat-template template_llava.jinja
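
(For reference, the client object used in the snippet above is the standard OpenAI-compatible client pointed at the local vLLM server; a minimal sketch, assuming the default host and port:)

from openai import OpenAI

# Point the OpenAI client at the local vLLM server.
# The API key is not checked by default, but the client requires one.
client = OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
)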

Crucially, running vLLM directly in a Jupyter notebook yields completely normal outputs, which coincide with the outputs obtained via HuggingFace's transformers in the official Llava example:

from vllm import LLM, SamplingParams
from PIL import Image
import requests

image_url = "https://wallpapers.com/images/featured/high-resolution-gfinds1akzwf6vcq.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)

llm = LLM(model="llava-hf/llava-1.5-13b-hf")
sampling_params = SamplingParams(top_p=0.1, max_tokens=32)

prompt = "USER: <image>\nhey\nASSISTANT:"

outputs = llm.generate(
    {
        "prompt": prompt,
        "multi_modal_data": {"image": image},
    },
    sampling_params=sampling_params
)
print(outputs[0].outputs[0].text)

# This outputs "The image features a beautiful landscape with a large body of water, such as a lake or a river, surrounded by lush green trees and mountains. The water"

The inputs to the text encoder are completely normal, according to the logs:

Received request chat-7832348944684bcf9d8abb7197872fab: prompt: '<s>USER: <image>\nhey\nASSISTANT:\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.7, top_p=0.1, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=16, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), guided_decoding=GuidedDecodingParams(json=None, regex=None, choice=None, grammar=None, json_object=None, backend=None, whitespace_pattern=None), prompt_token_ids: [1, 3148, 1001, 29901, 29871, 32000, 29871, 13, 354, 29891, 13, 22933, 9047, 13566, 29901, 13], lora_request: None, prompt_adapter_request: None

Hence, I suspect there is a bug in how the image is processed when the server is launched via vllm serve. Could you please investigate?

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
Aktsvigun added the bug label on Nov 13, 2024
@DarkLight1337
Member

For easier debugging, can you try using the offline chat method (LLM.chat) and see if you get similar issues?

@Aktsvigun
Author

Thanks for the tip! It does better than vllm serve (see the output below); however, it does not replicate the llm.generate results (which I consider correct, since they coincide with the results via transformers).

CHAT_TEMPLATE = """
{%- if messages[0]['role'] == 'system' -%}
    {%- set system_message = messages[0]['content'] -%}
    {%- set messages = messages[1:] -%}
{%- else -%}
    {% set system_message = '' -%}
{%- endif -%}

{{ bos_token + system_message }}
{%- for message in messages -%}
    {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) -%}
        {{ raise_exception('Conversation roles must alternate user/assistant/user/assistant/...') }}
    {%- endif -%}

    {%- if message['role'] == 'user' -%}
        {{ 'USER: ' + message['content'] + '\n' }}
    {%- elif message['role'] == 'assistant' -%}
        {{ 'ASSISTANT: ' + message['content'] + eos_token + '\n' }}
    {%- endif -%}
{%- endfor -%}

{%- if add_generation_prompt -%}
    {{ 'ASSISTANT:' }}
{% endif %}
""".strip()

prompt = "USER: <image>\nhey\nASSISTANT:"

outputs = llm.chat(
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "hey"},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
    sampling_params=sampling_params,
    chat_template=CHAT_TEMPLATE
)
print(outputs[0].outputs[0].text)

### Outputs "This image showcases a beautiful landscape featuring a large lake surrounded by lush green trees and mountains. The lake is situated in a valley, with the mountains"

@DarkLight1337
Member

To eliminate randomness, can you set temperature=0 for all cases?
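
(A minimal sketch of doing this on both paths, assuming the client, messages, and SamplingParams usage from the snippets above; the values are illustrative:)

from vllm import SamplingParams

# Offline path: temperature=0 makes decoding greedy and deterministic.
sampling_params = SamplingParams(temperature=0, max_tokens=32)

# Server path: the same setting goes through the OpenAI-compatible API.
chat_completion = client.chat.completions.create(
    model="llava-hf/llava-1.5-13b-hf",
    messages=messages,  # same message list as in the original report
    max_tokens=32,
    temperature=0,
)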

@Aktsvigun
Author

I set top_p=0.1; I can confirm that temperature=0 leads to exactly the same results.

@DarkLight1337
Member

Can you post the image here? I can't seem to access that URL locally.

@Aktsvigun
Author

Sure

[image attached]

@Aktsvigun
Author

@DarkLight1337 btw I can confirm this issue is not model-specific: hosting Qwen/Qwen2-VL-72B-Instruct-AWQ with vllm serve also leads to poorer results compared to hosting it locally via LLM.

@DarkLight1337
Member

DarkLight1337 commented Nov 13, 2024

Received request chat-7832348944684bcf9d8abb7197872fab: prompt: '<s>USER: <image>\nhey\nASSISTANT:\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.7, top_p=0.1, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=16, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None), guided_decoding=GuidedDecodingParams(json=None, regex=None, choice=None, grammar=None, json_object=None, backend=None, whitespace_pattern=None), prompt_token_ids: [1, 3148, 1001, 29901, 29871, 32000, 29871, 13, 354, 29891, 13, 22933, 9047, 13566, 29901, 13], lora_request: None, prompt_adapter_request: None

Here, it looks like there is an extra BOS token at the start and a newline at the end of the prompt. This might affect the result.

After I added those special tokens to the generate prompt, the generate and chat outputs are the same for offline inference. Unfortunately, we don't currently support passing add_special_tokens=False for offline inference. On the other hand, using the original chat prompt with add_special_tokens=False results in erroneous output.
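
(One way to double-check this from the logged request is to decode the token ids with the model's own tokenizer; a sketch, with the ids copied from the log above:)

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llava-hf/llava-1.5-13b-hf")

# prompt_token_ids from the logged chat request above
ids = [1, 3148, 1001, 29901, 29871, 32000, 29871, 13,
       354, 29891, 13, 22933, 9047, 13566, 29901, 13]

# Keeping special tokens visible shows the leading <s> (BOS) and the
# trailing "\n" that are absent from the plain generate prompt.
print(repr(tokenizer.decode(ids, skip_special_tokens=False)))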

@Aktsvigun
Author

Thanks, this indeed leads to the same results! But what about vllm serve? It still leads to different (and much poorer) results compared to the chat method output.

@DarkLight1337
Member

DarkLight1337 commented Nov 13, 2024

On the other hand, using the original chat prompt with add_special_tokens=False results in erroneous output.

You can try setting add_special_tokens=True, since the example chat template doesn't add those tokens and the default is False for the Chat Completions API.

Tbh I'm not completely sure about this since those tokens are logged in the request...
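
(A sketch of passing this through the OpenAI client; add_special_tokens is a vLLM-specific field, so it goes through extra_body rather than a standard argument. The client and messages are the ones from the original report:)

chat_completion = client.chat.completions.create(
    model="llava-hf/llava-1.5-13b-hf",
    messages=messages,  # same message list as in the original report
    max_tokens=32,
    top_p=0.1,
    # vLLM-specific parameter, not part of the standard Chat Completions schema
    extra_body={"add_special_tokens": True},
)
print(chat_completion.choices[0].message.content)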

@Aktsvigun
Author

Adding add_special_tokens=True does not change anything in the logged prompt, but it indeed fixes the problem, thanks a lot! I suspect this may be related to the special tokens used for images.

Just in case you know, there is another similar issue with Qwen/Qwen2-VL-72B-Instruct-AWQ:

  • Hosting it with vllm serve inserts the image before the text, even if I provide the image after the text in the messages:
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
<|vision_start|><|image_pad|><|vision_end|>
...here goes the user prompt...<|im_end|>
<|im_start|>assistant

This leads to much poorer results compared to when the text goes first (I can run it in such a setting locally):

<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
...here goes the user prompt...<|vision_start|><|image_pad|><|vision_end|><|im_end|>
<|im_start|>assistant

Do you maybe know a way to fix the ordering when running it via vllm serve? Setting add_special_tokens=True does not help in this case 😀

@DarkLight1337
Member

You can try out #9919 which should fix the format of the chat template.

@Aktsvigun
Author

Doesn't help, unfortunately. To verify, I also ran llm.chat with the chat_template argument provided (chat_template = AutoTokenizer.from_pretrained("Qwen/Qwen2-VL-72B-Instruct-AWQ").chat_template), which also places the image first (and that ordering, btw, does not come from the chat_template itself).
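
(For completeness, a sketch of that verification, assuming the llm, messages, and sampling_params objects from the earlier snippets:)

from transformers import AutoTokenizer

# Reuse the model's own chat template instead of a custom .jinja file.
chat_template = AutoTokenizer.from_pretrained(
    "Qwen/Qwen2-VL-72B-Instruct-AWQ"
).chat_template

outputs = llm.chat(
    messages=messages,
    sampling_params=sampling_params,
    chat_template=chat_template,
)
# The image placeholder block still appears before the user text.
print(outputs[0].prompt)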

@DarkLight1337
Member

Hosting it with vllm serve inserts the image before the text even if in the messages I provide the image after the text:

Is this the logged input or the actual input to the model? Based on our discussion, there seems to be some discrepancy between the two...

@Aktsvigun
Author

Sure, to make it clear:

Case 1: llm.generate

outputs = llm.generate(
    {
        "prompt": text,
        "multi_modal_data": {"image": image},
    },
    sampling_params=sampling_params
)
print(outputs[0].prompt)
### Text goes first (before image) here

print(outputs[0].outputs[0].text)
### Adequate output

Case 2: llm.chat

outputs = llm.chat(
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": prompt
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{encode_image_to_base64('tmp/download.jpeg')}"
                    },
                },
            ],
        },
    ],
    sampling_params=sampling_params,
    chat_template=chat_template
)

print(outputs[0].prompt)
### Image goes first here

print(outputs[0].outputs[0].text)
### Gibberish output

Case 3: vllm serve

resp = client.chat.completions.create(
    model="qwen/Qwen2-VL-72B-Instruct-AWQ/",
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": prompt
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{encode_image_to_base64('tmp/download.jpeg')}"
                    },
                },
            ],
        },
    ],
    max_tokens=32,
    top_p=0.1,
    extra_body=dict(add_special_tokens=True, chat_template=chat_template),
)
print(resp.choices[0].message.content)
### Gibberish output

### Taking the input from the logs shows the image goes first:
Received request chat-8e007db210f34fe982f9f6ec2c22265b: prompt: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|> ...

@Aktsvigun
Author

Aktsvigun commented Nov 13, 2024

@DarkLight1337 and another thing I've just realised: even though adding add_special_tokens=True for Llava leads to matching results, those results differ somewhat from the output via transformers. Given that the original llm.generate output coincides with the transformers output, it seems adding special tokens only "somewhat fixes" the vllm serve bug rather than solving it completely.

@DarkLight1337
Member

DarkLight1337 commented Nov 13, 2024

Oh, I just realized #9919 had a bug in the chat template detection; it is fixed now. Can you try it again? It should now place the image after the text prompt.

@Aktsvigun
Author

Apologies for my late reply; I was setting up the environment. This does not help either, unfortunately. I launch the server with:

VLLM_LOGGING_LEVEL=DEBUG vllm serve ../cache/models/Qwen2-VL-72B-Instruct-AWQ/ --chat-template chat_template.jinja --chat-template-content-format string

According to the logs, the image still goes first in the prompt:

... Received request chatcmpl-2c16dc9aa6f74e739901555b4ba9d704: prompt: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>\n...

This happens regardless of the value for add_special_tokens.

@DarkLight1337
Member

Does this also happen for offline chat?

@DarkLight1337
Member

Can you post the full logs?

@Aktsvigun
Author

Aktsvigun commented Nov 14, 2024

Does this also happen for offline chat?

It does; the behavior is exactly the same.

Can you post the full logs?

Sure:

DEBUG 11-14 11:53:01 client.py:165] Heartbeat successful.
INFO 11-14 11:53:02 logger.py:37] Received request chatcmpl-7f03fc87daa147df85c9c279d29324c9: prompt: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>\n[MY_USER_PROMPT_GOES_HERE]<|im_end|>\n<|im_start|>assistant\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.7, top_p=0.1, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=32, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 11-14 11:53:02 preprocess.py:215] Your model uses the legacy input pipeline instead of the new multi-modal processor. Please note that the legacy pipeline will be removed in a future release. For more details, see: https://github.com/vllm-project/vllm/issues/10114
INFO 11-14 11:53:02 engine.py:267] Added request chatcmpl-7f03fc87daa147df85c9c279d29324c9.
INFO:     127.0.0.1:47236 - "POST /v1/chat/completions HTTP/1.1" 200 OK

@DarkLight1337
Member

Hmm, you don't seem to be using #9919. In that PR, there should be a log message about the chat template content format.

@Aktsvigun
Author

Apologies, indeed my bad! Here is the log for your PR's version:

INFO 11-14 12:58:39 chat_utils.py:325] Detected the chat template content format to be 'string'. You can set `--chat-template-content-format` to override this.
INFO 11-14 12:58:39 logger.py:37] Received request chatcmpl-aab220f3fc3e4eea9f29a3580b55f82d: prompt: '<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n<|im_start|>user\n<|vision_start|><|image_pad|><|vision_end|>\nAct as an experienced data extractor. Please read all the data & text from the image below and write it down.\n\nFor the text, first detect its language; without mentioning it, write the text down as it is. Try to preserve the formatting of the text (bold text etc.). For other data sources, transform them into a text if they can be transformed. For example, if you see an image of a salmon, write: "*An image of a salmon*". Otherwise, just ignore the data source.<|im_end|>\n<|im_start|>assistant\n', params: SamplingParams(n=1, presence_penalty=0.0, frequency_penalty=0.0, repetition_penalty=1.0, temperature=0.7, top_p=0.1, top_k=-1, min_p=0.0, seed=None, stop=[], stop_token_ids=[], bad_words=[], include_stop_str_in_output=False, ignore_eos=False, max_tokens=32, min_tokens=0, logprobs=None, prompt_logprobs=None, skip_special_tokens=True, spaces_between_special_tokens=True, truncate_prompt_tokens=None, guided_decoding=None), prompt_token_ids: None, lora_request: None, prompt_adapter_request: None.
INFO 11-14 12:58:39 preprocess.py:215] Your model uses the legacy input pipeline instead of the new multi-modal processor. Please note that the legacy pipeline will be removed in a future release. For more details, see: https://github.com/vllm-project/vllm/issues/10114
INFO 11-14 12:58:41 engine.py:267] Added request chatcmpl-aab220f3fc3e4eea9f29a3580b55f82d.

Again, I've launched it with

VLLM_LOGGING_LEVEL=DEBUG vllm serve ../cache/models/Qwen2-VL-72B-Instruct-AWQ/ --chat-template chat_template.jinja --chat-template-content-format string

@DarkLight1337
Member

Are you on the latest version of the branch? I'm running this code:

from vllm import LLM, SamplingParams
from vllm.multimodal.utils import encode_image_base64
from PIL import Image

image = Image.open("debug.png")
base64 = encode_image_base64(image, format="PNG")

llm = LLM(model="Qwen/Qwen2-VL-7B-Instruct", tensor_parallel_size=2)
sampling_params = SamplingParams(temperature=0, top_p=0.1, max_tokens=32)

outputs = llm.chat(
    messages=[
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": "What is in this image?"
                },
                {
                    "type": "image_url",
                    "image_url": {
                        "url": f"data:image/jpeg;base64,{base64}"
                    },
                },
            ],
        },
    ],
    sampling_params=sampling_params,
)

print(outputs[0].prompt)
### Image goes first here

print(outputs[0].outputs[0].text)
### Gibberish output

and it successfully detects the chat template format as "openai".

@Aktsvigun
Author

Checked, many thanks @DarkLight1337, this indeed fixes the issue! Could you please tell me whether your PR will be included in the upcoming vllm release?

P.S. I'm unsure whether I should close this issue, since we revealed that the prompt in the console logs doesn't always coincide with the genuine input to the model. Please tell me if it's better to close this one.

@DarkLight1337
Member

P.S. I'm unsure whether I should close this issue, since we revealed that the prompt in the console logs doesn't always coincide with the genuine input to the model. Please tell me if it's better to close this one.

Let's create a separate issue for this. I'll edit my PR to close this one once it is merged.

@DarkLight1337
Member

Checked, many thanks @DarkLight1337, this indeed fixes the issue! Could you please tell me whether your PR will be included in the upcoming vllm release?

The PR is basically done, so it will probably make it into the next release, provided that someone approves it.

@DarkLight1337
Member

DarkLight1337 commented Nov 16, 2024

Unfortunately this didn't make it into v0.6.4.post1. For now, you'll have to install the latest code.
