Describe the bug
When interfacing with the fireworks.ai API using the `AssistantAgent` and `UserProxyAgent` classes, I encountered a compatibility issue caused by the mandatory `name` field these classes include in messages. The fireworks.ai API rejects the request with an `InvalidRequestError` because the `name` field is not expected.
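For reference, a minimal sketch of the payload shape involved; the field values here are illustrative, not copied from my actual run:

```python
# Illustrative payload shape only (values are made up).
payload = {
    "model": "mixtral-8x7b-instruct",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        # autogen attaches the agent's name to each message; fireworks.ai
        # rejects this extra key with 'extra fields not permitted'.
        {"role": "user", "content": "Hello", "name": "user_proxy"},
    ],
}
```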
Steps to reproduce
- Use `AssistantAgent` or `UserProxyAgent` with the `name` field included in the request payload (a minimal repro sketch follows this list).
- Send a request to the fireworks.ai API endpoint.
- The API responds with an `InvalidRequestError`, indicating the `name` field is extraneous.
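A minimal reproduction sketch in autogen 0.1.x style; the fireworks.ai base URL and model id below are my assumptions and may need adjusting for your account:

```python
from autogen import AssistantAgent, UserProxyAgent

# Assumed OpenAI-compatible endpoint and model id for fireworks.ai.
config_list = [
    {
        "model": "accounts/fireworks/models/mixtral-8x7b-instruct",
        "api_base": "https://api.fireworks.ai/inference/v1",
        "api_key": "<FIREWORKS_API_KEY>",
    }
]

assistant = AssistantAgent("assistant", llm_config={"config_list": config_list})
user_proxy = UserProxyAgent(
    "user_proxy",
    human_input_mode="NEVER",
    code_execution_config=False,
)

# The chat request built from this call carries a "name" field per message,
# which fireworks.ai rejects with InvalidRequestError.
user_proxy.initiate_chat(assistant, message="Say hello.")
```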
Expected Behavior
The request should be compatible with the fireworks.ai API: either the `name` field should be accepted, or there should be a way to exclude it from requests depending on the API's requirements.
Actual Behavior
The request is rejected by the fireworks.ai API, and the following error message is received:
openai.error.InvalidRequestError: [{'loc': ('body', 'messages', 1, 'name'), 'msg': 'extra fields not permitted', 'type': 'value_error.extra'}]
Possible Solutions
- Allow conditional inclusion of the `name` field in the request payload based on the API being used.
- Modify the `AssistantAgent` and `UserProxyAgent` classes to make the `name` field optional, or provide a method to exclude it for certain APIs (see the workaround sketch below).
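As a stopgap on my side (not a proposed official fix), something like the following monkey-patch of the openai<1.0 client that autogen 0.1.x calls could strip the field before the request is sent; this is only a sketch and would have to run before any autogen call:

```python
import openai

# Keep a reference to the original (bound) classmethod.
_original_create = openai.ChatCompletion.create

def _create_without_name(*args, **kwargs):
    # Drop the per-message "name" key that fireworks.ai rejects.
    messages = kwargs.get("messages")
    if messages:
        kwargs["messages"] = [
            {k: v for k, v in m.items() if k != "name"} for m in messages
        ]
    return _original_create(*args, **kwargs)

openai.ChatCompletion.create = _create_without_name
```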
Concerns
While a temporary fix could involve modifying the request information within the classes, this could lead to additional issues upon future updates. Thus, a permanent solution from the autogen team would be preferable to ensure consistent support for various models.
Additional Context
- The issue occurred with the models `mixtral-8x7b-instruct` and `llama-v2-7b-chat`.
- I am willing to implement a temporary fix by modifying the request data myself, but due to work commitments it may take 3-5 days before I can address it.
- There is a concern that manual code modifications might introduce new issues with future version updates.
Request
I would appreciate guidance on how this issue might be resolved at an official level. It is not clear whether this is a configuration issue on fireworks.ai's side or whether the model itself does not support the `name` field. I hope that autogen can officially support multiple models to ensure broad compatibility.
Screenshots and logs
Traceback (most recent call last):
File "/home/zyl/chatdev/autogen/sqlFormat/demo.py", line 57, in
user_proxy.initiate_chat(
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 531, in initiate_chat
self.send(self.generate_init_message(**context), recipient, silent=silent)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 334, in send
recipient.receive(message, self, request_reply, silent)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 462, in receive
reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 781, in generate_reply
final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/groupchat.py", line 162, in run_chat
speaker = groupchat.select_speaker(speaker, self)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/groupchat.py", line 91, in select_speaker
final, name = selector.generate_oai_reply(
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py", line 606, in generate_oai_reply
response = oai.ChatCompletion.create(
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/oai/completion.py", line 803, in create
response = cls.create(
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/oai/completion.py", line 834, in create
return cls._get_response(params, raise_on_ratelimit_or_timeout=raise_on_ratelimit_or_timeout)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/autogen/oai/completion.py", line 224, in _get_response
response = openai_completion.create(request_timeout=request_timeout, **config)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 157, in create
response, _, api_key = requestor.request(
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/openai/api_requestor.py", line 299, in request
resp, got_stream = self._interpret_response(result, stream)
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/openai/api_requestor.py", line 713, in _interpret_response
self._interpret_response_line(
File "/home/zyl/miniconda3/envs/pyautogen/lib/python3.10/site-packages/openai/api_requestor.py", line 779, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: [{'loc': ('body', 'messages', 1, 'name'), 'msg': 'extra fields not permitted', 'type': 'value_error.extra'}]
Additional Information
AutoGen Version: 0.1.14
Operating System: Linux version 5.15.133.1-microsoft-standard-WSL2 (root@1c602f52c2e4) (gcc (GCC) 11.2.0, GNU ld (GNU Binutils) 2.37) #1 SMP Thu Oct 5 21:02:42 UTC 2023 Ubuntu 22.04.3 LTS
Python Version: Python 3.10.13
Related Issues: not found
Looking forward to your response and a possible solution. Thank you!