docs/source/models/vlm.rst
4 lines changed: 4 additions & 0 deletions
@@ -242,6 +242,10 @@ To consume the server, you can use the OpenAI client like in the example below:
 
 A full code example can be found in `examples/openai_chat_completion_client_for_multimodal.py <https://github.com/vllm-project/vllm/blob/main/examples/openai_chat_completion_client_for_multimodal.py>`_.
 
+.. tip::
+    Loading from local file paths is also supported in vLLM: you can specify the allowed local media path via ``--allowed-local-media-path`` when launching the API server/engine,
+    and pass the file path as ``url`` in the API request.
+
 .. tip::
     There is no need to place image placeholders in the text content of the API request - they are already represented by the image content.
     In fact, you can place image placeholders in the middle of the text by interleaving text and image content.
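Going by the added tip, a request using a local file might look like the sketch below. It is illustrative only: the model name, directory, and ``file://`` URL form are assumptions, not taken from this diff; the canonical usage is in the ``examples/openai_chat_completion_client_for_multimodal.py`` example referenced above.

.. code-block:: python

    # Sketch only: assumes the server was started with something like
    #   vllm serve llava-hf/llava-1.5-7b-hf --allowed-local-media-path /data/images
    # The model name, directory, and file name are placeholders.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8000/v1",
        api_key="EMPTY",  # the vLLM OpenAI-compatible server accepts any key by default
    )

    chat_response = client.chat.completions.create(
        model="llava-hf/llava-1.5-7b-hf",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                # Per the tip, the local file path is passed as ``url``; the file
                # must live under the --allowed-local-media-path directory.
                {"type": "image_url",
                 "image_url": {"url": "file:///data/images/example.png"}},
            ],
        }],
    )
    print(chat_response.choices[0].message.content)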
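The pre-existing tip about interleaving can be made concrete with a similar sketch. Again, the model name and image URLs are placeholders, and whether a given model accepts multiple images per request depends on the model being served.

.. code-block:: python

    # Sketch of interleaved text and image content, per the existing tip:
    # no placeholder tokens are written into the text; images sit directly
    # between the text parts. Model name and URLs are assumptions.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

    chat_response = client.chat.completions.create(
        model="microsoft/Phi-3.5-vision-instruct",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Compare this image:"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/a.png"}},
                {"type": "text", "text": "with this one:"},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/b.png"}},
            ],
        }],
    )
    print(chat_response.choices[0].message.content)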