feat: support ollama.com structured outputs #1305
Conversation
Overall looks good!
However, I failed to run the example: the model failed to produce valid JSON and the pydantic validation failed. Other reviewers can double-check this.
Thanks @willshang76! It seems the example code currently can't run as expected:
```
Ollama server started on http://localhost:11434/v1 for llama3.2 model.
2024-12-11 21:53:27,666 - httpx - INFO - HTTP Request: POST http://localhost:11434/v1/chat/completions "HTTP/1.1 200 OK"
2024-12-11 21:53:27,674 - camel.models.model_manager - ERROR - Error processing with model: <camel.models.ollama_model.OllamaModel object at 0x10628fb60>
2024-12-11 21:53:27,674 - camel.agents.chat_agent - ERROR - An error occurred while running model llama3.2, index: 0
Traceback (most recent call last):
  File "/Users/enrei/Desktop/camel_1127/camel/camel/agents/chat_agent.py", line 965, in _step_model_response
    response = self.model_backend.run(openai_messages)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Desktop/camel_1127/camel/camel/models/model_manager.py", line 211, in run
    raise exc
  File "/Users/enrei/Desktop/camel_1127/camel/camel/models/model_manager.py", line 201, in run
    response = self.current_model.run(messages)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Desktop/camel_1127/camel/camel/models/ollama_model.py", line 142, in run
    response = self._client.beta.chat.completions.parse(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Library/Caches/pypoetry/virtualenvs/camel-ai-gf_ewcJG-py3.12/lib/python3.12/site-packages/openai/resources/beta/chat/completions.py", line 156, in parse
    return self._post(
           ^^^^^^^^^^^
  File "/Users/enrei/Library/Caches/pypoetry/virtualenvs/camel-ai-gf_ewcJG-py3.12/lib/python3.12/site-packages/openai/_base_client.py", line 1280, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Library/Caches/pypoetry/virtualenvs/camel-ai-gf_ewcJG-py3.12/lib/python3.12/site-packages/openai/_base_client.py", line 957, in request
    return self._request(
           ^^^^^^^^^^^^^^
  File "/Users/enrei/Library/Caches/pypoetry/virtualenvs/camel-ai-gf_ewcJG-py3.12/lib/python3.12/site-packages/openai/_base_client.py", line 1063, in _request
    return self._process_response(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Library/Caches/pypoetry/virtualenvs/camel-ai-gf_ewcJG-py3.12/lib/python3.12/site-packages/openai/_base_client.py", line 1162, in _process_response
    return api_response.parse()
           ^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Library/Caches/pypoetry/virtualenvs/camel-ai-gf_ewcJG-py3.12/lib/python3.12/site-packages/openai/_response.py", line 319, in parse
    parsed = self._options.post_parser(parsed)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Library/Caches/pypoetry/virtualenvs/camel-ai-gf_ewcJG-py3.12/lib/python3.12/site-packages/openai/resources/beta/chat/completions.py", line 150, in parser
    return _parse_chat_completion(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Library/Caches/pypoetry/virtualenvs/camel-ai-gf_ewcJG-py3.12/lib/python3.12/site-packages/openai/lib/_parsing/_completions.py", line 110, in parse_chat_completion
    "parsed": maybe_parse_content(
              ^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Library/Caches/pypoetry/virtualenvs/camel-ai-gf_ewcJG-py3.12/lib/python3.12/site-packages/openai/lib/_parsing/_completions.py", line 161, in maybe_parse_content
    return _parse_content(response_format, message.content)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Library/Caches/pypoetry/virtualenvs/camel-ai-gf_ewcJG-py3.12/lib/python3.12/site-packages/openai/lib/_parsing/_completions.py", line 221, in _parse_content
    return cast(ResponseFormatT, model_parse_json(response_format, content))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Library/Caches/pypoetry/virtualenvs/camel-ai-gf_ewcJG-py3.12/lib/python3.12/site-packages/openai/_compat.py", line 169, in model_parse_json
    return model.model_validate_json(data)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Library/Caches/pypoetry/virtualenvs/camel-ai-gf_ewcJG-py3.12/lib/python3.12/site-packages/pydantic/main.py", line 625, in model_validate_json
    return cls.__pydantic_validator__.validate_json(json_data, strict=strict, context=context)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
pydantic_core._pydantic_core.ValidationError: 1 validation error for PetList
  Invalid JSON: expected ident at line 1 column 2 [type=json_invalid, input_value="It sounds like you have ...ties in your household?", input_type=str]
    For further information visit https://errors.pydantic.dev/2.9/v/json_invalid
Traceback (most recent call last):
  File "/Users/enrei/Desktop/camel_1127/camel/examples/models/ollama_model_example.py", line 83, in <module>
    assistant_response = agent.step(user_msg)
                         ^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Desktop/camel_1127/camel/camel/agents/chat_agent.py", line 540, in step
    ) = self._step_model_response(openai_messages, num_tokens)
        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/enrei/Desktop/camel_1127/camel/camel/agents/chat_agent.py", line 976, in _step_model_response
    raise ModelProcessingError(
camel.models.model_manager.ModelProcessingError: Unable to process messages: none of the provided models run succesfully.
(camel-ai-py3.12)
```
Based on the Ollama doc (https://ollama.com/blog/structured-outputs), you need to install the latest version of Ollama to make it work.
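For anyone hitting the same error: the traceback bottoms out in pydantic's model_validate_json, which receives the model's prose reply instead of JSON when the local Ollama build predates structured outputs. A minimal sketch of that parsing step, assuming a hypothetical PetList schema (the field names below are illustrative, not necessarily those in the CAMEL example):

```python
# Sketch of the parsing step that failed in the traceback above.
# NOTE: Pet/PetList fields here are assumptions for illustration.
from pydantic import BaseModel, ValidationError


class Pet(BaseModel):
    name: str
    animal: str


class PetList(BaseModel):
    pets: list[Pet]


# With structured outputs, the server constrains generation to
# schema-valid JSON, so validation succeeds:
good = '{"pets": [{"name": "Luna", "animal": "cat"}]}'
pets = PetList.model_validate_json(good)
print(pets.pets[0].name)  # Luna

# Without structured outputs, the model replies in prose, and
# model_validate_json raises the same json_invalid error as the log:
bad = "It sounds like you have pets in your household?"
try:
    PetList.model_validate_json(bad)
except ValidationError as err:
    print(err.errors()[0]["type"])  # json_invalid
```

This reproduces the failure locally without a running Ollama server, which makes it easy to confirm the schema itself is fine before debugging the server version.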
Thanks @willshang76!
Description
Ollama now supports structured outputs; this PR adds support for it in CAMEL's Ollama model backend.
Motivation and Context
Closes #1290
Types of changes
What types of changes does your code introduce? Put an x in all the boxes that apply:
Implemented Tasks
Checklist
Go over all the following points, and put an x in all the boxes that apply.
If you are unsure about any of these, don't hesitate to ask. We are here to help!