[Bug]: Error loading microsoft/Phi-3.5-vision-instruct #7718
Comments
Can you check out #7710 and see if it fixes your issue?

@DarkLight1337 is this currently fixed?

Which version of vLLM are you using?
Text-only inference works fine for me (just text messages, without any images), but I am still getting the following errors with image inputs:
This also happens with the

You may have to increase the

I tried with larger
@DarkLight1337 this is the exact error I have. I get it both inside Docker and outside of it.
@Isotr0py since you have a CPU-only environment (and also implemented this model), can you help investigate this? Thanks!
Ok, I will investigate this tonight. |
Small addition @DarkLight1337 @Isotr0py:

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams
from vllm.assets.image import ImageAsset
from vllm.utils import FlexibleArgumentParser

llm = LLM(
    model="microsoft/Phi-3.5-vision-instruct",
    trust_remote_code=True,
)
```

Image inputs work without any issues when I use the LLM as above with
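For context, offline inference with this model needs the question embedded in Phi-3.5-vision's chat template, which uses numbered image placeholder tokens. A minimal sketch of assembling such a prompt (`build_phi_vision_prompt` is a hypothetical helper; the template shape follows the model card and should be treated as an assumption):

```python
def build_phi_vision_prompt(question: str, num_images: int = 1) -> str:
    """Assemble a Phi-3.5-vision style chat prompt with numbered image placeholders."""
    # One <|image_N|> placeholder per image, numbered from 1, each on its own line.
    placeholders = "".join(f"<|image_{i}|>\n" for i in range(1, num_images + 1))
    return f"<|user|>\n{placeholders}{question}<|end|>\n<|assistant|>\n"
```
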
Please note that multi-image input is not yet supported by the OpenAI-compatible server. Can you provide a minimal reproducible example?
Sure, after running with the above instructions, run the following:

```python
from openai import OpenAI

openai_api_key = "EMPTY"
# Make sure this port is correct; I changed it to 8001 in the server.
openai_api_base = "http://localhost:8001/v1"

client = OpenAI(
    api_key=openai_api_key,
    base_url=openai_api_base,
)

chat_response = client.chat.completions.create(
    model="microsoft/Phi-3.5-vision-instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {
                "type": "image_url",
                "image_url": {
                    "url": "https://upload.wikimedia.org/wikipedia/commons/thumb/d/dd/Gfp-wisconsin-madison-the-nature-boardwalk.jpg/2560px-Gfp-wisconsin-madison-the-nature-boardwalk.jpg",
                },
            },
        ],
    }],
)
print("Chat response:", chat_response)
```
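The message shape used in the repro above can be sketched as a small helper, which makes the expected structure explicit (`build_image_message` is a hypothetical name, not part of vLLM or the OpenAI SDK):

```python
def build_image_message(text: str, image_url: str) -> dict:
    """Return an OpenAI-style user message mixing a text part and an image_url part."""
    return {
        "role": "user",
        "content": [
            # Each content part declares its type; the server dispatches on it.
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }
```
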
@berkecanrizai I have created #7916 to fix this. Please take a look :)

Thanks, that was fast :D
Your current environment
vLLM version: 0.5.4
🐛 Describe the bug
Repro command:
Error: