InvalidRequestError: Invalid URL (POST /v1/openai/deployments/gpt-35-turbo/chat/completions) #573

Description

@AugustusLZJ

I am seeing the following issue in several notebooks; ./notebook/agentchat_MathChat.ipynb is one of them.

I am setting:

config_list = [
    {
        'model': 'gpt-3.5-turbo',
        'api_key': token_response.json()["access_token"],
        'base_url': 'https://chat-my.linkxyz.com/',
        'api_type': "azure",
        'api_version': "2023-08-01-preview"
    },
]
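
For comparison, here is a minimal sketch of the two config shapes I believe are expected here (the endpoints, keys, and names below are placeholders, not my real values): with `api_type: "azure"` the request is routed to an Azure-style deployment path, while without it the request goes to a plain `/chat/completions` path.

```python
# Sketch only -- placeholder endpoints/keys, assuming pyautogen on the openai 0.x SDK.

# Azure OpenAI resource: the error URL suggests the model name is used as the
# deployment segment, i.e. .../openai/deployments/<model>/chat/completions?api-version=...
azure_config = {
    "model": "gpt-35-turbo",                             # deployment name (placeholder)
    "api_key": "<azure-api-key>",                        # placeholder
    "api_base": "https://<resource>.openai.azure.com/",  # placeholder Azure endpoint
    "api_type": "azure",
    "api_version": "2023-08-01-preview",
}

# OpenAI-compatible endpoint: the SDK posts to <api_base>/chat/completions,
# so the base usually ends in /v1 and no deployment segment is added.
openai_style_config = {
    "model": "gpt-3.5-turbo",
    "api_key": "<access-token>",                         # placeholder
    "api_base": "https://chat-my.linkxyz.com/v1",        # guess, not verified
}
```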

The AssistantAgent seems to be created fine, but when calling:

math_problem = "Find all $x$ that satisfy the inequality $(2x+10)(x+3)<(3x+9)(x+8)$. Express your answer in interval notation."
mathproxyagent.initiate_chat(assistant, problem=math_problem)

the following error shows up:

Let's use Python to solve a math problem.

Query requirements:
You should always use the 'print' function for the output and use fractions/radical forms instead of decimals.
You can use packages like sympy to help you.
You must follow the formats below to write your code:

```python
# your code
```

First state the key idea to solve the problem. You may choose from three ways to solve the problem:
Case 1: If the problem can be solved with Python code directly, please write a program to solve it. You can enumerate all possible arrangements if needed.
Case 2: If the problem is mostly reasoning, you can solve it by yourself directly.
Case 3: If the problem cannot be handled in the above two ways, please follow this process:
1. Solve the problem step by step (do not over-divide the steps).
2. Take out any queries that can be asked through Python (for example, any calculations or equations that can be calculated).
3. Wait for me to give the results.
4. Continue if you think the result is correct. If the result is invalid or unexpected, please correct your query or reasoning.

After all the queries are run and you get the answer, put the answer in \boxed{}.

Problem:
Find all $x$ that satisfy the inequality $(2x+10)(x+3)<(3x+9)(x+8)$. Express your answer in interval notation.

--------------------------------------------------------------------------------
---------------------------------------------------------------------------
InvalidRequestError                       Traceback (most recent call last)
/Path/autogen/notebook/agentchat_MathChat.ipynb Cell 11 line 5
      1 # given a math problem, we use the mathproxyagent to generate a prompt to be sent to the assistant as the initial message.
      2 # the assistant receives the message and generates a response. The response will be sent back to the mathproxyagent for processing.
      3 # The conversation continues until the termination condition is met, in MathChat, the termination condition is the detect of "\boxed{}" in the response.
      4 math_problem = "Find all $x$ that satisfy the inequality $(2x+10)(x+3)<(3x+9)(x+8)$. Express your answer in interval notation."
----> 5 mathproxyagent.initiate_chat(assistant, problem=math_problem)

File /Path/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:531, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, **context)
    517 """Initiate a chat with the recipient agent.
    518 
    519 Reset the consecutive auto reply counter.
   (...)
    528         "message" needs to be provided if the `generate_init_message` method is not overridden.
    529 """
    530 self._prepare_chat(recipient, clear_history)
--> 531 self.send(self.generate_init_message(**context), recipient, silent=silent)

File /Path/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:334, in ConversableAgent.send(self, message, recipient, request_reply, silent)
    332 valid = self._append_oai_message(message, "assistant", recipient)
    333 if valid:
--> 334     recipient.receive(message, self, request_reply, silent)
    335 else:
    336     raise ValueError(
    337         "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
    338     )

File /Path/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:462, in ConversableAgent.receive(self, message, sender, request_reply, silent)
    460 if request_reply is False or request_reply is None and self.reply_at_receive[sender] is False:
    461     return
--> 462 reply = self.generate_reply(messages=self.chat_messages[sender], sender=sender)
    463 if reply is not None:
    464     self.send(reply, sender, silent=silent)

File /Path/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:781, in ConversableAgent.generate_reply(self, messages, sender, exclude)
    779     continue
    780 if self._match_trigger(reply_func_tuple["trigger"], sender):
--> 781     final, reply = reply_func(self, messages=messages, sender=sender, config=reply_func_tuple["config"])
    782     if final:
    783         return reply

File /Path/pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:606, in ConversableAgent.generate_oai_reply(self, messages, sender, config)
    603     messages = self._oai_messages[sender]
    605 # TODO: #1143 handle token limit exceeded error
--> 606 response = oai.ChatCompletion.create(
    607     context=messages[-1].pop("context", None), messages=self._oai_system_message + messages, **llm_config
    608 )
    609 return True, oai.ChatCompletion.extract_text_or_function_call(response)[0]

File /Path/pyautogen/lib/python3.10/site-packages/autogen/oai/completion.py:803, in Completion.create(cls, context, use_cache, config_list, filter_func, raise_on_ratelimit_or_timeout, allow_format_str_template, **config)
    801     base_config["max_retry_period"] = 0
    802 try:
--> 803     response = cls.create(
    804         context,
    805         use_cache,
    806         raise_on_ratelimit_or_timeout=i < last or raise_on_ratelimit_or_timeout,
    807         **base_config,
    808     )
    809     if response == -1:
    810         return response

File /Path/pyautogen/lib/python3.10/site-packages/autogen/oai/completion.py:834, in Completion.create(cls, context, use_cache, config_list, filter_func, raise_on_ratelimit_or_timeout, allow_format_str_template, **config)
    832 with diskcache.Cache(cls.cache_path) as cls._cache:
    833     cls.set_cache(seed)
--> 834     return cls._get_response(params, raise_on_ratelimit_or_timeout=raise_on_ratelimit_or_timeout)

File /Path/pyautogen/lib/python3.10/site-packages/autogen/oai/completion.py:224, in Completion._get_response(cls, config, raise_on_ratelimit_or_timeout, use_cache)
    222         response = openai_completion.create(**config)
    223     else:
--> 224         response = openai_completion.create(request_timeout=request_timeout, **config)
    225 except (
    226     ServiceUnavailableError,
    227     APIConnectionError,
    228 ):
    229     # transient error
    230     logger.info(f"retrying in {retry_wait_time} seconds...", exc_info=1)

File /Path/pyautogen/lib/python3.10/site-packages/openai/api_resources/chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
     23 while True:
     24     try:
---> 25         return super().create(*args, **kwargs)
     26     except TryAgain as e:
     27         if timeout is not None and time.time() > start + timeout:

File /Path/pyautogen/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py:155, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
    129 @classmethod
    130 def create(
    131     cls,
   (...)
    138     **params,
    139 ):
    140     (
    141         deployment_id,
    142         engine,
   (...)
    152         api_key, api_base, api_type, api_version, organization, **params
    153     )
--> 155     response, _, api_key = requestor.request(
    156         "post",
    157         url,
    158         params=params,
    159         headers=headers,
    160         stream=stream,
    161         request_id=request_id,
    162         request_timeout=request_timeout,
    163     )
    165     if stream:
    166         # must be an iterator
    167         assert not isinstance(response, OpenAIResponse)

File /Path/pyautogen/lib/python3.10/site-packages/openai/api_requestor.py:299, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
    278 def request(
    279     self,
    280     method,
   (...)
    287     request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
    288 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
    289     result = self.request_raw(
    290         method.lower(),
    291         url,
   (...)
    297         request_timeout=request_timeout,
    298     )
--> 299     resp, got_stream = self._interpret_response(result, stream)
    300     return resp, got_stream, self.api_key

File /Path/pyautogen/lib/python3.10/site-packages/openai/api_requestor.py:710, in APIRequestor._interpret_response(self, result, stream)
    702     return (
    703         self._interpret_response_line(
    704             line, result.status_code, result.headers, stream=True
    705         )
    706         for line in parse_stream(result.iter_lines())
    707     ), True
    708 else:
    709     return (
--> 710         self._interpret_response_line(
    711             result.content.decode("utf-8"),
    712             result.status_code,
    713             result.headers,
    714             stream=False,
    715         ),
    716         False,
    717     )

File /Path/pyautogen/lib/python3.10/site-packages/openai/api_requestor.py:775, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
    773 stream_error = stream and "error" in resp.data
    774 if stream_error or not 200 <= rcode < 300:
--> 775     raise self.handle_error_response(
    776         rbody, rcode, resp.data, rheaders, stream_error=stream_error
    777     )
    778 return resp

InvalidRequestError: Invalid URL (POST /v1/openai/deployments/gpt-35-turbo/chat/completions)
mathproxyagent (to assistant):

Let's use Python to solve a math problem.

Query requirements:
You should always use the 'print' function for the output and use fractions/radical forms instead of decimals.
You can use packages like sympy to help you.
You must follow the formats below to write your code:
```python
# your code
```

First state the key idea to solve the problem. You may choose from three ways to solve the problem:
Case 1: If the problem can be solved with Python code directly, please write a program to solve it. You can enumerate all possible arrangements if needed.
Case 2: If the problem is mostly reasoning, you can solve it by yourself directly.
Case 3: If the problem cannot be handled in the above two ways, please follow this process:

  1. Solve the problem step by step (do not over-divide the steps).
  2. Take out any queries that can be asked through Python (for example, any calculations or equations that can be calculated).
  3. Wait for me to give the results.
  4. Continue if you think the result is correct. If the result is invalid or unexpected, please correct your query or reasoning.

After all the queries are run and you get the answer, put the answer in \boxed{}.

Problem:
For what negative value of $k$ is there exactly one solution to the system of equations \begin{align*}
...
y &= -x + 4?
\end{align*}



InvalidRequestError Traceback (most recent call last)
/Path/autogen/notebook/agentchat_MathChat.ipynb Cell 13 line 2
1 math_problem = "For what negative value of $k$ is there exactly one solution to the system of equations \begin{align*}\ny &= 2x^2 + kx + 6 \\\ny &= -x + 4?\n\end{align*}"
----> 2 mathproxyagent.initiate_chat(assistant, problem=math_problem)

File pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:531, in ConversableAgent.initiate_chat(self, recipient, clear_history, silent, **context)
    517 """Initiate a chat with the recipient agent.
    518 
    519 Reset the consecutive auto reply counter.
   (...)
    528         "message" needs to be provided if the `generate_init_message` method is not overridden.
    529 """
    530 self._prepare_chat(recipient, clear_history)
--> 531 self.send(self.generate_init_message(**context), recipient, silent=silent)

File pyautogen/lib/python3.10/site-packages/autogen/agentchat/conversable_agent.py:334, in ConversableAgent.send(self, message, recipient, request_reply, silent)
    332 valid = self._append_oai_message(message, "assistant", recipient)
    333 if valid:
--> 334     recipient.receive(message, self, request_reply, silent)
    335 else:
    336     raise ValueError(
    337         "Message can't be converted into a valid ChatCompletion message. Either content or function_call must be provided."
    338     )
...
    776         rbody, rcode, resp.data, rheaders, stream_error=stream_error
    777     )
    778 return resp

InvalidRequestError: Invalid URL (POST /v1/openai/deployments/gpt-35-turbo/chat/completions)
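
If it helps with triage, here is a quick sketch (assuming openai==0.28.x, the 0.x SDK shown in the traceback) that prints the path the SDK builds for each `api_type`; as far as I can tell, `class_url` is the helper used to build the request path. The `/openai/deployments/...` segment in the error looks like the azure code path, while the leading `/v1` matches the SDK's default `api_base` ("https://api.openai.com/v1"), which may mean my `base_url` setting is not reaching the request (the `EngineAPIResource.create` signature in the traceback takes `api_base`).

```python
# Sketch, assuming openai==0.28.x is installed; values mirror my config.
import openai

# Azure-style path -- matches the "/openai/deployments/gpt-35-turbo/..." part of the error.
print(openai.ChatCompletion.class_url(
    engine="gpt-35-turbo", api_type="azure", api_version="2023-08-01-preview"
))
# -> /openai/deployments/gpt-35-turbo/chat/completions?api-version=2023-08-01-preview

# Plain OpenAI-style path -- the /v1 prefix normally comes from api_base,
# not from this path.
print(openai.ChatCompletion.class_url(api_type="open_ai"))
# -> /chat/completions
```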
