Conversation
@OpenHands Extract NonExecutableActionEvent from this PR, make a new branch from main, and apply it there. Then make a PR. Understand well what role it has, and understand that on the main branch we don't have the Responses path. Make sure that NonExecutableActionEvent fulfills its role correctly on the Completions path. Add unit tests to that new PR.
I'm on it! enyst can track my progress at all-hands.dev
This works! View and send back reasoning on the stateless variant, and it cleared up some of the code. At a price (llm.py is still a bit unhappy, and telemetry.py will need another pass), but IMHO it's mostly easier to read and follow the execution path for one API or the other. I kinda like that the Agent only chooses; all the rest of its code is now without
The agent is working on porting tests. I think llm.py needs a heavier refactoring... I'm thinking of making a facade for normalize_kwargs and format_messages_for() and taking them out; not sure it'd reduce redundancies, but more for readability and more reliance on types. 🤔 This PR doesn't do a conversion; no conversion the other way around either.
Did a quick skim through! I actually think the structure LGTM and makes sense! (and agree that we might need to do a good refactor afterwards)
Happy to take another look when we merge that NonExecutableActionEvent and when tests are in and passed!
🙏
Co-authored-by: openhands <openhands@all-hands.dev>
OpenHands-GPT-5 here. Concise summary of test coverage for the Responses API work:
Thanks for this! I'll try to take a look sometime in the next two days!
…ol calls

- Remove NonExecutableActionEvent usage and exports
- Represent non-exec tool calls as ActionEvent(action=None)
- Keep ActionEvent.visualize minimal for action=None; preserve __str__
- Update View.filter_unmatched_tool_calls to pair by tool_call_id
- Agent emits ActionEvent(action=None) then AgentErrorEvent for missing tool/invalid args
- Update tests and visualizer accordingly

Co-authored-by: openhands <openhands@all-hands.dev>
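The pairing logic the commit message describes can be sketched as follows. This is a simplified, hypothetical illustration of representing non-executable tool calls as `ActionEvent(action=None)` and pairing actions with observations by `tool_call_id`; the real SDK classes are much richer.

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ActionEvent:
    tool_call_id: str
    action: Any = None  # action=None marks a non-executable tool call

@dataclass
class ObservationEvent:
    tool_call_id: str
    content: str = ""

def filter_unmatched_tool_calls(events: list) -> list:
    """Drop ActionEvents whose tool_call_id has no matching observation;
    keep everything else as-is."""
    observed = {e.tool_call_id for e in events if isinstance(e, ObservationEvent)}
    return [
        e for e in events
        if not isinstance(e, ActionEvent) or e.tool_call_id in observed
    ]

events = [
    ActionEvent("call_1", action="run_ls"),
    ObservationEvent("call_1", "file.txt"),
    ActionEvent("call_2", action=None),  # non-executable call, no observation
]
paired = filter_unmatched_tool_calls(events)  # the unmatched call_2 is dropped
```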
Merged NEA and tests!
xingyaoww
left a comment
I made some changes along the way, but this PR LGTM! There are still some changes for simplification/cleanup, but we can leave those for future PRs.
One last issue I'm trying to figure out now: https://community.openai.com/t/schema-additionalproperties-must-be-false-when-strict-is-true/929996
# Keep the tasks short for demo purposes
^^^^^^^^^^^^^^^^^^
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/openhands/sdk/conversation/impl/local_conversation.py", line 244, in run
self.agent.step(self._state, on_event=self._on_event)
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/openhands/sdk/agent/agent.py", line 209, in step
raise e
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/openhands/sdk/agent/agent.py", line 178, in step
llm_response = self.llm.responses(
^^^^^^^^^^^^^^^^^^^
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/openhands/sdk/llm/llm.py", line 593, in responses
resp: ResponsesAPIResponse = _one_attempt()
^^^^^^^^^^^^^^
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 338, in wrapped_f
return copy(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 477, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 378, in iter
result = action(retry_state)
^^^^^^^^^^^^^^^^^^^
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 400, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.11/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.11/Frameworks/Python.framework/Versions/3.12/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 480, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/openhands/sdk/llm/llm.py", line 569, in _one_attempt
ret = litellm_responses(
^^^^^^^^^^^^^^^^^^
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1343, in wrapper
raise e
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/.venv/lib/python3.12/site-packages/litellm/utils.py", line 1218, in wrapper
result = original_function(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/.venv/lib/python3.12/site-packages/litellm/responses/main.py", line 655, in responses
raise litellm.exception_type(
^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 2301, in exception_type
raise e
File "/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/.venv/lib/python3.12/site-packages/litellm/litellm_core_utils/exception_mapping_utils.py", line 391, in exception_type
raise BadRequestError(
litellm.exceptions.BadRequestError: litellm.BadRequestError: OpenAIException - {"error":{"message":"litellm.BadRequestError: OpenAIException - {\n "error": {\n "message": "Invalid schema for function 'execute_bash': In context=(), 'additionalProperties' is required to be supplied and to be false.",\n "type": "invalid_request_error",\n "param": "tools[0].parameters",\n "code": "invalid_function_parameters"\n }\n}. Received Model Group=gpt-5-mini\nAvailable Model Group Fallbacks=None","type":null,"param":null,"code":"400"}}
/Users/xingyaow/Projects/All-Hands-AI/openhands-v1-dev/agent-sdk.worktree/worktree2/.venv/lib/python3.12/site-packages/litellm/llms/custom_httpx/async_client_cleanup.py:66: DeprecationWarning: There is no current event loop
loop = asyncio.get_event_loop()
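The `BadRequestError` above is OpenAI's strict function-calling validation: with `strict: true`, every object in a tool's `parameters` must set `"additionalProperties": false` and list all properties as required. A minimal sketch (hypothetical helper, not SDK code) of a schema shape that satisfies the constraint:

```python
def make_strict_tool_schema(name: str, description: str,
                            properties: dict, required: list) -> dict:
    """Build an OpenAI-style function tool schema that passes strict-mode
    validation: additionalProperties must be explicitly false."""
    return {
        "type": "function",
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
            "additionalProperties": False,  # mandatory when strict is true
        },
        "strict": True,
    }

schema = make_strict_tool_schema(
    "execute_bash",
    "Run a bash command.",
    {"command": {"type": "string", "description": "Command to run."}},
    ["command"],
)
```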
return message_dict
# Backward-compatible alias
Let's clean this up in future PRs? We can just remove this?
)
@classmethod
def from_litellm_message(cls, message: LiteLLMMessage) -> "Message":
Let's remove this too in a follow-up PR
@OpenHands create a github issue that we should remove all definition and usage of to_llm_dict and from_litellm_message in favor of the new API.
I'm on it! xingyaoww can track my progress at all-hands.dev
I've successfully created a GitHub issue to track the removal of deprecated API methods in favor of the new responses API.
Summary
GitHub Issue Created: #639 - Remove deprecated to_llm_dict() and from_litellm_message() in favor of new responses API
The issue documents:
- The new API introduced in PR Add native responses #622:
  - `from_llm_chat_message()` - Parse Chat Completions messages
  - `from_llm_responses_output()` - Parse Responses API output
  - `to_chat_dict()` - Serialize to Chat Completions format
  - `to_responses_dict()` - Serialize to Responses API format
- Deprecated methods to remove:
  - `from_litellm_message()` - Backward-compatible alias
  - `to_llm_dict()` - Old serialization method
- Scope of work: Identified 5 files across production code, tests, and fixtures that need updating
- Migration tasks: Checklist of specific actions needed to complete the migration
The issue has been labeled with refactoring and technical-debt tags for easy filtering and prioritization.
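The deprecation pattern the issue targets can be sketched with a self-contained toy. The method names come from the conversation above; the class body is hypothetical and only illustrates how the old names would delegate to the new ones until they are removed:

```python
import warnings

class Message:
    """Toy stand-in for the SDK Message class (bodies are illustrative)."""

    def __init__(self, role: str, content: str):
        self.role = role
        self.content = content

    # --- new API (names from PR #622) ---
    @classmethod
    def from_llm_chat_message(cls, d: dict) -> "Message":
        """Parse a Chat Completions message dict."""
        return cls(d["role"], d.get("content", ""))

    def to_chat_dict(self) -> dict:
        """Serialize to Chat Completions format."""
        return {"role": self.role, "content": self.content}

    # --- deprecated aliases slated for removal (issue #639) ---
    @classmethod
    def from_litellm_message(cls, d: dict) -> "Message":
        warnings.warn("use from_llm_chat_message()", DeprecationWarning, stacklevel=2)
        return cls.from_llm_chat_message(d)

    def to_llm_dict(self) -> dict:
        warnings.warn("use to_chat_dict()", DeprecationWarning, stacklevel=2)
        return self.to_chat_dict()
```

Callers migrate by a mechanical rename; the aliases exist only so both names round-trip identically until the cleanup PR lands.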
@xingyaoww If it helps, actually we had it unspecified most of the time
@xingyaoww Works for me with I think that's fine?
xingyaoww
left a comment
Confirmed it works locally for me! Let's get this in!
@OpenHands create another GitHub issue: Tool.to_mcp_schema() outputs an MCP schema whose "properties" contains "kind", which is NOT supposed to be in the tool schema we send to the LLM. We should consider keeping
I'm on it! xingyaoww can track my progress at all-hands.dev
Summary
I've successfully created a GitHub issue for the problem you described: Issue #640: Tool.to_mcp_schema() includes 'kind' field in properties which should not be sent to LLMs

What the issue documents:
The issue has been labeled with
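One way to address the issue above is to strip internal discriminator fields from the MCP schema before handing it to the LLM. A minimal sketch under that assumption (the helper name and call site are hypothetical, not the SDK's actual fix):

```python
import copy

def strip_internal_fields(mcp_schema: dict, internal=("kind",)) -> dict:
    """Return a copy of an MCP-style input schema with internal fields
    (e.g. 'kind') removed from properties and required before the schema
    is sent to the LLM. The original schema is left untouched."""
    schema = copy.deepcopy(mcp_schema)
    for field in internal:
        schema.get("properties", {}).pop(field, None)
    if "required" in schema:
        schema["required"] = [r for r in schema["required"] if r not in internal]
    return schema

raw = {
    "type": "object",
    "properties": {"kind": {"type": "string"}, "command": {"type": "string"}},
    "required": ["kind", "command"],
}
clean = strip_internal_fields(raw)  # 'kind' removed; raw is unchanged
```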
@enyst it seems that gpt-5-mini is crashing because of this fix, because GPT-5 does NOT natively support the Responses API.
I suspect LiteLLM has some wonky criteria where it doesn't recognize "litellm_proxy/openai/gpt-5" as supporting Responses, but it does recognize "openai/gpt-5", so it reroutes the first. We can look more into it tomorrow! I'm pretty sure GPT-5 does natively support the Responses API. The call does go to OpenAI on

Native responses API, take two (or four 😅)
This PR proposes an approach where:

- `from_llm_chat_message`
- `from_llm_responses_output`
- `to_chat_dict`
- `to_responses_dict`

Agent Server images for this PR
• GHCR package: https://github.com/All-Hands-AI/agent-sdk/pkgs/container/agent-server
Variants & Base Images

- `golang:1.21-bookworm`
- `eclipse-temurin:17-jdk`
- `nikolaik/python-nodejs:python3.12-nodejs22`

Pull (multi-arch manifest)
Run
All tags pushed for this build
The `400c58d` tag is a multi-arch manifest (amd64/arm64); your client pulls the right arch automatically.