[Usage]: How to use the image datasets sharegpt4v provided in benchmark_serving? #14418

@DK-DARKmatter

Description

Your current environment

The output of `python collect_env.py`

How would you like to use vllm

I want to run a multi-modal benchmark with vLLM, starting with image inputs. I downloaded the ShareGPT4V dataset as provided in the README, but I found that if I run benchmark_serving with:

--dataset-name sharegpt

the program actually reads only the text prompts and ignores the image inputs. How can we use benchmark_serving with this dataset to test basic performance on combined image and prompt inputs?
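As a workaround while benchmark_serving only consumes the text side of the dataset, one can send image requests directly to a vLLM OpenAI-compatible server using the standard chat-completions vision format (a content list mixing `text` and `image_url` parts, with the image inlined as a base64 data URL). The sketch below only builds such a payload; the model name is a placeholder and the actual HTTP call to a running server is left out:

```python
import base64
import json


def build_image_request(prompt: str, image_bytes: bytes, model: str) -> dict:
    """Build an OpenAI-style chat-completion payload carrying an inline
    base64-encoded image, the format vLLM's OpenAI-compatible server accepts
    for vision-language models."""
    b64 = base64.b64encode(image_bytes).decode("utf-8")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                # Mixed content: one text part and one image part.
                "content": [
                    {"type": "text", "text": prompt},
                    {
                        "type": "image_url",
                        # Inline the image as a data URL instead of a link.
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            }
        ],
    }


# Placeholder model name and dummy image bytes for illustration only.
payload = build_image_request(
    "Describe this image.", b"\xff\xd8\xff", "llava-hf/llava-1.5-7b-hf"
)
print(json.dumps(payload)[:40])
```

Posting one such payload per ShareGPT4V sample to the server's `/v1/chat/completions` endpoint (e.g. with `requests` or `aiohttp`) gives a rough image-plus-prompt load test until the benchmark script handles images natively.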

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.

Metadata

Labels

stale (Over 90 days of inactivity), usage (How to use vllm)
