Description
Your current environment
The output of `python collect_env.py`
Your output of `python collect_env.py` here
🐛 Describe the bug
(APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] AsyncLLM output_handler failed.
(APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] Traceback (most recent call last):
(APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] File "/opt/conda/lib/python3.10/site-packages/vllm/v1/engine/async_llm.py", line 457, in output_handler
(APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] processed_outputs = output_processor.process_outputs(
(APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] File "/opt/conda/lib/python3.10/site-packages/vllm/v1/engine/output_processor.py", line 435, in process_outputs
(APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] stop_string = req_state.detokenizer.update(
(APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] File "/opt/conda/lib/python3.10/site-packages/vllm/v1/engine/detokenizer.py", line 118, in update (APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] self.output_text += self.decode_next(new_token_id) (APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] File "/opt/conda/lib/python3.10/site-packages/vllm/v1/engine/detokenizer.py", line 219, in decode_next (APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] token = self._protected_step(next_token_id) (APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] File "/opt/conda/lib/python3.10/site-packages/vllm/v1/engine/detokenizer.py", line 241, in _protected_step (APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] raise e (APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] File "/opt/conda/lib/python3.10/site-packages/vllm/v1/engine/detokenizer.py", line 233, in _protected_step (APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] token = self.stream.step(self.tokenizer, next_token_id) (APIServer pid=10005) ERROR 09-28 05:39:47 [async_llm.py:480] TypeError: argument 'id': StreamInput must be either an integer or a list of integers (APIServer pid=10005) INFO: 127.0.0.1:49204 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error (APIServer pid=10005) INFO: 127.0.0.1:39430 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
(APIServer pid=10005) INFO: 127.0.0.1:39440 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error (APIServer pid=10005) INFO: 127.0.0.1:39456 - "POST /v1/chat/completions HTTP/1.1" 500 Internal Server Error
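For context, the frame where the `TypeError` is raised (`self.stream.step(self.tokenizer, next_token_id)`) appears to be the streaming detokenizer handing a token id to `DecodeStream.step` from the Hugging Face `tokenizers` package, whose Rust binding rejects anything that is not a plain Python `int` or list of ints. Below is a minimal, hedged sketch of that API used on its own (assuming `tokenizers >= 0.21`; the `"gpt2"` tokenizer is just an illustrative choice, not from this issue), showing the call that succeeds when the id is a plain `int`:

```python
# Sketch only: exercises tokenizers' DecodeStream.step, the call the
# traceback above ends in. Assumes tokenizers >= 0.21 is installed.
from tokenizers import Tokenizer
from tokenizers.decoders import DecodeStream

tokenizer = Tokenizer.from_pretrained("gpt2")  # example tokenizer, not from the issue
stream = DecodeStream(skip_special_tokens=True)

for token_id in tokenizer.encode("Hello world").ids:
    # step() expects a plain Python int (or list of ints); other types are
    # rejected by the Rust binding with the same TypeError seen in the log.
    piece = stream.step(tokenizer, token_id)
    if piece is not None:
        print(piece, end="")
print()
```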
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.