
Wrong API call to openai's reasoning models? #6629

Closed
PlanetMacro opened this issue Feb 5, 2025 · 5 comments

Comments

@PlanetMacro

System: Debian latest

When I choose a reasoning model from OpenAI (any o1 or o3 model), it seems the API is not called correctly. With a non-reasoning model like gpt-4o, everything works fine.

Error:

22:21:21 - openhands:ERROR: retry_mixin.py:55 - litellm.APIError: APIError: OpenAIException - Error code: 500 - {'error': {'message': 'The model produced invalid content. Consider modifying your prompt if you are seeing this error persistently.', 'type': 'model_error', 'param': None, 'code': None}}. Attempt #1 | You can customize retry values in the configuration.

Full terminal output:

MY_TERMINAL: $ docker run -it --rm --pull=always -e SANDBOX_RUNTIME_CONTAINER_IMAGE=docker.all-hands.dev/all-hands-ai/runtime:main-nikolaik -e LOG_ALL_EVENTS=true -v /var/run/docker.sock:/var/run/docker.sock -v ~/.openhands-state:/.openhands-state -p 3000:3000 --add-host host.docker.internal:host-gateway --name openhands-app docker.all-hands.dev/all-hands-ai/openhands:main
main: Pulling from all-hands-ai/openhands
Digest: sha256:4c1ce0995992f687fce4406fd574a79075083c8c52dceb12b565667c70e1515f
Status: Image is up to date for docker.all-hands.dev/all-hands-ai/openhands:main
Starting OpenHands...
Running OpenHands as root
22:19:47 - openhands:INFO: server_config.py:39 - Using config class None
INFO: Started server process [10]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
INFO: 172.17.0.1:59080 - "GET / HTTP/1.1" 200 OK
INFO: 172.17.0.1:59080 - "GET /favicon.ico HTTP/1.1" 200 OK
INFO: 172.17.0.1:59080 - "GET /locales/en/translation.json HTTP/1.1" 200 OK
INFO: 172.17.0.1:48242 - "GET /api/options/config HTTP/1.1" 200 OK
INFO: 172.17.0.1:48242 - "GET /api/settings HTTP/1.1" 200 OK
INFO: 172.17.0.1:48242 - "GET /api/settings HTTP/1.1" 200 OK
22:20:17 - openhands:INFO: manage_conversations.py:132 - Initializing new conversation
22:20:17 - openhands:INFO: manage_conversations.py:52 - Loading settings
22:20:17 - openhands:INFO: manage_conversations.py:55 - Settings loaded
22:20:17 - openhands:INFO: manage_conversations.py:78 - Loading conversation store
22:20:17 - openhands:INFO: manage_conversations.py:80 - Conversation store loaded
22:20:17 - openhands:INFO: manage_conversations.py:86 - New conversation ID: 11e3e12114ae4b249111a83c7e26db5c
22:20:17 - openhands:INFO: manage_conversations.py:93 - Saving metadata for conversation 11e3e12114ae4b249111a83c7e26db5c
22:20:17 - openhands:INFO: manage_conversations.py:103 - Starting agent loop for conversation 11e3e12114ae4b249111a83c7e26db5c
22:20:17 - openhands:INFO: standalone_conversation_manager.py:192 - maybe_start_agent_loop:11e3e12114ae4b249111a83c7e26db5c
22:20:17 - openhands:INFO: standalone_conversation_manager.py:195 - start_agent_loop:11e3e12114ae4b249111a83c7e26db5c
22:20:17 - openhands:INFO: standalone_conversation_manager.py:222 - _get_event_stream:11e3e12114ae4b249111a83c7e26db5c
22:20:17 - openhands:INFO: standalone_conversation_manager.py:225 - found_local_agent_loop:11e3e12114ae4b249111a83c7e26db5c
22:20:17 - openhands:INFO: manage_conversations.py:121 - Finished initializing conversation 11e3e12114ae4b249111a83c7e26db5c
INFO: 172.17.0.1:44972 - "POST /api/conversations HTTP/1.1" 200 OK
INFO: ('172.17.0.1', 44982) - "WebSocket /socket.io/?latest_event_id=-1&conversation_id=11e3e12114ae4b249111a83c7e26db5c&EIO=4&transport=websocket" [accepted]
INFO: 172.17.0.1:44972 - "GET /api/conversations/11e3e12114ae4b249111a83c7e26db5c HTTP/1.1" 200 OK
INFO: 172.17.0.1:44992 - "GET /api/settings HTTP/1.1" 200 OK
22:20:17 - openhands:INFO: listen_socket.py:30 - sio:connect: wzV7bojMdQfuFHNTAAAB
22:20:17 - openhands:INFO: standalone_conversation_manager.py:92 - join_conversation:11e3e12114ae4b249111a83c7e26db5c:wzV7bojMdQfuFHNTAAAB
22:20:17 - openhands:INFO: standalone_conversation_manager.py:222 - _get_event_stream:11e3e12114ae4b249111a83c7e26db5c
22:20:17 - openhands:INFO: standalone_conversation_manager.py:225 - found_local_agent_loop:11e3e12114ae4b249111a83c7e26db5c
22:20:18 - openhands:INFO: docker_runtime.py:139 - [runtime 11e3e12114ae4b249111a83c7e26db5c] Starting runtime with image: docker.all-hands.dev/all-hands-ai/runtime:main-nikolaik
22:20:18 - openhands:INFO: docker_runtime.py:143 - [runtime 11e3e12114ae4b249111a83c7e26db5c] Container started: openhands-runtime-11e3e12114ae4b249111a83c7e26db5c. VSCode URL: None
22:20:18 - openhands:INFO: docker_runtime.py:154 - [runtime 11e3e12114ae4b249111a83c7e26db5c] Waiting for client to become ready at http://host.docker.internal:38944...
22:20:42 - openhands:INFO: docker_runtime.py:160 - [runtime 11e3e12114ae4b249111a83c7e26db5c] Runtime is ready.
22:20:42 - openhands:INFO: base.py:272 - [runtime 11e3e12114ae4b249111a83c7e26db5c] Selected repo: None, loading microagents from /workspace/.openhands/microagents (inside runtime)
22:20:42 - USER_ACTION
[Agent Controller 11e3e12114ae4b249111a83c7e26db5c] MessageAction (source=EventSource.USER)
CONTENT: Write a hello world program in RUST
22:20:42 - openhands:INFO: agent_controller.py:451 - [Agent Controller 11e3e12114ae4b249111a83c7e26db5c] Setting agent(CodeActAgent) state from AgentState.LOADING to AgentState.RUNNING

==============
[Agent Controller 11e3e12114ae4b249111a83c7e26db5c] LEVEL 0 LOCAL STEP 0 GLOBAL STEP 0

22:20:43 - openhands:INFO: standalone_conversation_manager.py:83 - Conversation 11e3e12114ae4b249111a83c7e26db5c connected in 0.017739534378051758 seconds
22:20:43 - openhands:INFO: standalone_conversation_manager.py:64 - Reusing active conversation 11e3e12114ae4b249111a83c7e26db5c
INFO: 172.17.0.1:50840 - "GET /api/conversations/11e3e12114ae4b249111a83c7e26db5c/vscode-url HTTP/1.1" 200 OK
INFO: 172.17.0.1:50842 - "GET /api/conversations/11e3e12114ae4b249111a83c7e26db5c/list-files HTTP/1.1" 200 OK

22:21:21 - openhands:ERROR: retry_mixin.py:55 - litellm.APIError: APIError: OpenAIException - Error code: 500 - {'error': {'message': 'The model produced invalid content. Consider modifying your prompt if you are seeing this error persistently.', 'type': 'model_error', 'param': None, 'code': None}}. Attempt #1 | You can customize retry values in the configuration.
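The retry hint at the end of the error ("You can customize retry values in the configuration") refers to OpenHands' LLM settings. A minimal `config.toml` sketch is below; the key names are assumed from OpenHands' `[llm]` section and are worth double-checking against the current documentation:

```toml
# Hypothetical sketch of retry tuning in OpenHands' config.toml.
[llm]
num_retries = 4      # how many times a failed LLM call is retried
retry_min_wait = 5   # seconds to wait before the first retry
retry_max_wait = 30  # upper bound on the backoff between retries
```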

@PlanetMacro
Author

Reasoning models are prompted differently: https://platform.openai.com/docs/guides/reasoning#advice-on-prompting
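The linked guide suggests o-series models expect requests shaped differently from gpt-4o: sampling knobs like `temperature` are not accepted, the `system` role is replaced by a `developer` role, and an explicit `reasoning_effort` can be set. A minimal illustrative sketch of that adaptation follows; the helper name is hypothetical and not part of OpenHands or litellm:

```python
# Hypothetical helper showing how a chat request built for gpt-4o might
# be adapted for OpenAI's o-series reasoning models, based on OpenAI's
# reasoning guide (drop sampling knobs, use "developer" instead of
# "system", set a reasoning effort). Not the actual OpenHands fix.

def adapt_request_for_reasoning_model(request: dict) -> dict:
    """Return a copy of `request` rewritten for o1/o3-style models."""
    adapted = dict(request)
    # Reasoning models reject classic sampling parameters.
    for unsupported in ("temperature", "top_p", "presence_penalty"):
        adapted.pop(unsupported, None)
    # Swap the "system" role for the "developer" role.
    adapted["messages"] = [
        {**m, "role": "developer"} if m.get("role") == "system" else m
        for m in request.get("messages", [])
    ]
    # o-series models take an explicit reasoning-effort setting.
    adapted.setdefault("reasoning_effort", "medium")
    return adapted

request = {
    "model": "o3-mini",
    "temperature": 0.0,
    "messages": [
        {"role": "system", "content": "You are a coding agent."},
        {"role": "user", "content": "Write a hello world program in Rust."},
    ],
}
print(adapt_request_for_reasoning_model(request))
```

Sending the adapted dict (e.g. via `litellm.completion(**adapted)`) would then avoid the parameters a reasoning model refuses; whether this particular 500 error stems from that is an open question in this thread.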

@mamoodi
Collaborator

mamoodi commented Feb 6, 2025

CC @enyst in case he can provide some insight.

@PlanetMacro
Author

Is anyone able to run the o3 model? If so, then maybe it's just me.

@mamoodi
Collaborator

mamoodi commented Feb 9, 2025

Not sure. I know some team members tried o3-mini and a PR was created to fix/add it, but I can't speak to o3 itself.

@PlanetMacro
Author

Latest version worked.
