Your current environment
The output of `python collect_env.py`
How would you like to use vllm
I want to run a multi-modal benchmark with vLLM, starting with image inputs. I downloaded the ShareGPT4V dataset as described in the README, but when I ran `benchmark_serving` with:
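(My exact command wasn't captured here, so the following is a reconstruction of the kind of invocation I used; the flag names may vary by vLLM version, and the model name and dataset path are placeholders.)

```bash
# Hypothetical reconstruction: serve a vision-language model, then run the
# benchmark against the ShareGPT4V JSON (model and path are placeholders).
python benchmarks/benchmark_serving.py \
  --backend openai-chat \
  --model llava-hf/llava-1.5-7b-hf \
  --dataset-name sharegpt \
  --dataset-path ./sharegpt4v_instruct_gpt4-vision_cap100k.json \
  --num-prompts 100
```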
the program read only the text prompts and never loaded the images. How can I use `benchmark_serving` with this dataset to measure basic performance on combined image and text inputs?
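For reference, this is the kind of request I'd like the benchmark to generate against each prompt: a single image plus text sent to vLLM's OpenAI-compatible chat endpoint. A minimal sketch (the model name and image URL are placeholders) looks like:

```bash
# Hypothetical sanity check: one image+text request to the running server.
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llava-hf/llava-1.5-7b-hf",
    "messages": [{
      "role": "user",
      "content": [
        {"type": "text", "text": "Describe this image."},
        {"type": "image_url", "image_url": {"url": "https://example.com/cat.jpg"}}
      ]
    }]
  }'
```

Ideally the benchmark would send requests shaped like this for every image/prompt pair in the dataset, so the measurements reflect image handling and not just text throughput.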
Before submitting a new issue...