I get the following error: list index out of range #67

Open
ErykCh opened this issue Oct 21, 2024 · 14 comments
Labels
question Further information is requested

Comments

@ErykCh

ErykCh commented Oct 21, 2024

Hi,

in LibreChat, connected to the optillm proxy served via Docker, I get the following error:

2024-10-21 06:56:48 error: [handleAbortError] AI response error; aborting request: 500 "list index out of range"

prompt:

<optillm_approach>bon|moa|mcts|cot_reflection</optillm_approach>

There are two hippos in front of a some hippo, two hippos behind a some hippo and a some hippo in the middle. How many hippos are there?

The same happens with most other approaches.

It looks like optillm is returning messages that are not compliant with the OpenAI API standard.

ErykCh changed the title from "Using vLLM I get the following error: list index out of range" to "I get the following error: list index out of range" on Oct 21, 2024
@ErykCh
Author

ErykCh commented Oct 21, 2024

And there is one more error in the logs:

2024-10-21 07:39:53 error: [OpenAIClient] Known OpenAI error: Error: missing role for choice 0
2024-10-21 07:44:46 warn: [OpenAIClient.chatCompletion][stream] API error

When this error occurs, LibreChat still displays the message, but it would be good to fix it as well.

@ErykCh
Author

ErykCh commented Oct 21, 2024

This error is related to moa:

<optillm_approach>moa</optillm_approach>
There are two hippos in front of a some hippo, two hippos behind a some hippo and a some hippo in the middle. How many hippos are there?

and it occurs with vLLM as the model inference backend:

2024-10-21 08:12:42,987 - INFO - Received request to /v1/chat/completions
2024-10-21 08:12:43,047 - INFO - Using approach(es) ['moa'], operation SINGLE, with model Qwen2.5
2024-10-21 08:12:54,009 - INFO - HTTP Request: POST http://localhost:8000/v1/chat/completions "HTTP/1.1 200 OK"
2024-10-21 08:12:54,011 - ERROR - Error processing request: list index out of range
2024-10-21 08:12:54,011 - INFO - 10.155.25.104 - - [21/Oct/2024 08:12:54] "POST /v1/chat/completions HTTP/1.1" 500 -
2024-10-21 08:12:54,517 - INFO - Received request to /v1/chat/completions
2024-10-21 08:12:54,568 - INFO - Using approach(es) ['moa'], operation SINGLE, with model Qwen2.5
2024-10-21 08:13:01,626 - INFO - HTTP Request: POST http://localhost:8000/v1/chat/completions "HTTP/1.1 200 OK"
2024-10-21 08:13:01,627 - ERROR - Error processing request: list index out of range
2024-10-21 08:13:01,627 - INFO - 10.155.25.104 - - [21/Oct/2024 08:13:01] "POST /v1/chat/completions HTTP/1.1" 500 -
2024-10-21 08:13:02,473 - INFO - Received request to /v1/chat/completions
2024-10-21 08:13:02,528 - INFO - Using approach(es) ['moa'], operation SINGLE, with model Qwen2.5
2024-10-21 08:13:10,743 - INFO - HTTP Request: POST http://localhost:8000/v1/chat/completions "HTTP/1.1 200 OK"
2024-10-21 08:13:10,745 - ERROR - Error processing request: list index out of range
2024-10-21 08:13:10,745 - INFO - 10.155.25.104 - - [21/Oct/2024 08:13:10] "POST /v1/chat/completions HTTP/1.1" 500 -

@codelion
Owner

Does vLLM support returning multiple responses from the /v1/chat/completions endpoint? For moa we get 3 generations from the model.

Can you try another technique like cot_reflection? Do you get the same error? Unfortunately, I cannot test vLLM locally, as I am on a Mac M3 and vLLM doesn't support it (vllm-project/vllm#2081).
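
For reference, here is a minimal sketch (not optillm's actual code) of why a backend that ignores n breaks moa: moa asks for three candidates in one call and then indexes all three. The base URL and model name below are taken from the logs in this thread; everything else is illustrative.

```python
from openai import OpenAI

# Illustrative only: point the client at the local vLLM server from the logs.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Qwen2.5",
    messages=[{"role": "user", "content": "How many hippos are there?"}],
    n=3,  # moa-style approaches expect several candidate completions
)

# If the backend ignores n and returns a single choice, indexing the
# second or third candidate raises IndexError: list index out of range.
candidates = [response.choices[i].message.content for i in range(3)]
print(candidates)
```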

@ErykCh
Author

ErykCh commented Oct 21, 2024

Yes, it does.

cot_reflection is ok.
mcts is ok (but I will create another ticket; it seems there is a problem with the mcts configuration).

@codelion
Owner

codelion commented Oct 21, 2024

Do you have the same problem (running moa) with another model? Or are you calling it with the right chat template?

Error: missing role for choice 0
This error occurs if the response message doesn't include a role.
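
For reference, a Chat Completions response as the OpenAI API defines it looks roughly like this (values made up); "missing role for choice 0" means choices[0].message arrived without the role field:

```python
# Illustrative response shape per the OpenAI Chat Completions API
# (field values are made up, not from this thread).
compliant_response = {
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "model": "Qwen2.5",
    "choices": [
        {
            "index": 0,
            "message": {
                "role": "assistant",  # the field LibreChat reports as missing
                "content": "There are three hippos.",
            },
            "finish_reason": "stop",
        }
    ],
}
```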

@ErykCh
Author

ErykCh commented Oct 21, 2024

I was sending too many different approaches at once (bon|moa|mcts|cot_reflection), so to sort it out:

The problem with vLLM and the error (the following error is in the optillm logs)
Error processing request: list index out of range
refers to moa.

The problem with what optillm returns to LibreChat (the following error is in the LibreChat logs)
Known OpenAI error: Error: missing role for choice 0
refers to mcts, bon, and cot_reflection.
I've also tried z3 now and it has this error too; luckily, the answer itself appears in the chat for all of them.

@ErykCh
Author

ErykCh commented Oct 21, 2024

Known OpenAI error: Error: missing role for choice 0

A direct connection from LibreChat to vLLM doesn't cause this error.

@codelion
Owner

Known OpenAI error: Error: missing role for choice 0

Direct connection from LibreChat to vLLM doesn't cause such an error

This particular error looks like a known issue with LibreChat - danny-avila/LibreChat#1222

@ErykCh
Author

ErykCh commented Oct 21, 2024

OK, so only 'list index out of range' is left.

@codelion
Owner

OK, so only 'list index out of range' is left.

For that, can you please run moa with vLLM with the changes I made here: 0df5291

I added more logging to help figure out where it is failing.
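
Roughly, the kind of extra logging that helps here looks like the sketch below (hypothetical names only, not necessarily what 0df5291 adds):

```python
import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger("optillm.moa")  # hypothetical logger name


def check_choices(response, expected: int = 3) -> None:
    """Log how many choices the backend actually returned before moa indexes them."""
    received = len(response.choices)
    logger.debug("moa: requested n=%d, received %d choice(s)", expected, received)
    if received < expected:
        logger.warning(
            "backend returned fewer choices than requested; "
            "indexing choices[%d] would raise 'list index out of range'",
            received,
        )
```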

@ErykCh
Author

ErykCh commented Oct 21, 2024

I have --log debug set, but I don't see all the debug logs.

[screenshot]

@ErykCh
Author

ErykCh commented Oct 21, 2024

So this is a problem with vLLM: it returns 1 choice instead of 3, even though the n parameter is visible in the vLLM logs.

[screenshot]

@codelion
Owner

Yeah, vLLM is not returning 3 responses. Can you get 3 responses if you set n directly in your OpenAI client and call vLLM?
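
Something like the following, run outside optillm with the openai Python client pointed at the vLLM server from the logs (the api_key value is arbitrary for a local server), would confirm it:

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="Qwen2.5",
    messages=[{"role": "user", "content": "Say hello."}],
    n=3,
)
# Expected: 3 if vLLM honors n on /v1/chat/completions, 1 otherwise.
print(len(resp.choices))
```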

codelion added the question (Further information is requested) label on Oct 28, 2024
@codelion
Owner

codelion commented Nov 4, 2024

We will implement a fallback for this in #83.
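
A rough sketch of one possible fallback (issue n single-completion requests when the backend returns fewer choices than requested; not necessarily how #83 will implement it):

```python
def complete_n(client, model, messages, n=3):
    """Request n completions; if the backend ignores n and returns fewer
    choices, fall back to n separate single-completion requests."""
    response = client.chat.completions.create(model=model, messages=messages, n=n)
    if len(response.choices) >= n:
        return [c.message.content for c in response.choices[:n]]
    # Fallback path: slower, but works with backends that ignore n.
    return [
        client.chat.completions.create(model=model, messages=messages)
        .choices[0].message.content
        for _ in range(n)
    ]
```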
