
BUG: Tool Calls report Invalid input. Please specify the prompt #780

Closed

rainsoft opened this issue Dec 18, 2023 · 3 comments · Fixed by #794
Labels
bug Something isn't working
Milestone
v0.7.3

Comments

@rainsoft

Describe the bug

When I test the function call, xinference reports "Invalid input. Please specify the prompt".

To Reproduce

import json

from colorama import init, Fore
from loguru import logger
from openai import OpenAI

from tool_register import get_tools, dispatch_tool

init(autoreset=True)
client = OpenAI(
    base_url="http://127.0.0.1:9997/v1",
    api_key="xxx"
)

functions = get_tools()


def run_conversation(query: str, stream=False, functions=None, max_retry=5):
    params = dict(model="chatglm3", messages=[{"role": "user", "content": query}],
                  stream=stream)
    if functions:
        params["tools"] = functions
    response = client.chat.completions.create(**params)

    for _ in range(max_retry):
        if not stream:
            response_message = response.choices[0].message
            tool_calls = response_message.tool_calls
            if tool_calls:
                params["messages"].append(response_message)
                for tool_call in tool_calls:
                    logger.info(f"Function Call Response: {tool_call.model_dump()}")

                    function_args = json.loads(tool_call.function.arguments)
                    tool_response = dispatch_tool(tool_call.function.name, function_args)
                    logger.info(f"Tool Call Response: {tool_response}")

                    params["messages"].append(
                        {
                            "tool_call_id": tool_call.id,
                            "role": "tool",
                            "name": tool_call.function.name,
                            "content": tool_response,  # 调用函数返回结果
                        }
                    )
            else:
                reply = response.choices[0].message.content
                logger.info(f"Final Reply: \n{reply}")
                return

        else:
            output = ""
            for chunk in response:
                content = chunk.choices[0].delta.content or ""
                print(Fore.BLUE + content, end="", flush=True)
                output += content

                if chunk.choices[0].finish_reason == "stop":
                    return

                elif chunk.choices[0].finish_reason == "tool_calls":
                    params["messages"].append(
                        {
                            "role": "assistant",
                            "content": output
                        }
                    )
                    tool_calls = chunk.choices[0].delta.tool_calls  # stream chunks expose a delta, not a message
                    for tool_call in tool_calls:
                        logger.info(f"Function Call Response: {tool_call.model_dump()}")

                        function_args = json.loads(tool_call.function.arguments)
                        tool_response = dispatch_tool(tool_call.function.name, function_args)
                        logger.info(f"Tool Call Response: {tool_response}")
                        params["messages"].append(
                            {
                                "role": "function",
                                "name": tool_call.function.name,
                                "content": tool_response,  # 调用函数返回结果
                            }
                        )

                    break

        response = client.chat.completions.create(**params)  # send tool results back for the next round


if __name__ == "__main__":
    query = "你是谁"
    run_conversation(query, stream=True)

    logger.info("\n=========== next conversation ===========")

    query = "帮我查询北京的天气怎么样"
    run_conversation(query, functions=functions, stream=False)

tool_register.py

import inspect
import traceback
from copy import deepcopy
from pprint import pformat
from types import GenericAlias
from typing import get_origin, Annotated

_TOOL_HOOKS = {}
_TOOL_DESCRIPTIONS = {}


def register_tool(func: callable):
    tool_name = func.__name__
    tool_description = inspect.getdoc(func).strip()
    python_params = inspect.signature(func).parameters
    properties = {}  # JSON Schema "properties" is an object keyed by parameter name
    required_params = []
    for name, param in python_params.items():
        annotation = param.annotation
        if annotation is inspect.Parameter.empty:
            raise TypeError(f"Parameter `{name}` missing type annotation")
        if get_origin(annotation) != Annotated:
            raise TypeError(f"Annotation type for `{name}` must be typing.Annotated")

        typ, (description, required) = annotation.__origin__, annotation.__metadata__
        typ: str = str(typ) if isinstance(typ, GenericAlias) else typ.__name__
        if not isinstance(description, str):
            raise TypeError(f"Description for `{name}` must be a string")
        if not isinstance(required, bool):
            raise TypeError(f"Required for `{name}` must be a bool")

        if required:
            required_params.append(name)

        properties[name] = {"type": typ, "description": description}
    tool_def = {
        "type": "function",
        "function": {
            "name": tool_name,
            "description": tool_description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": requiredes
            }
        }
    }

    print("[registered tool] " + pformat(tool_def))
    _TOOL_HOOKS[tool_name] = func
    _TOOL_DESCRIPTIONS[tool_name] = tool_def

    return func


def dispatch_tool(tool_name: str, tool_params: dict) -> str:
    if tool_name not in _TOOL_HOOKS:
        return f"Tool `{tool_name}` not found. Please use a provided tool."
    tool_call = _TOOL_HOOKS[tool_name]
    try:
        ret = tool_call(**tool_params)
    except Exception:
        ret = traceback.format_exc()
    return str(ret)


def get_tools() -> list:
    return list(deepcopy(_TOOL_DESCRIPTIONS).values())


# Tool Definitions

@register_tool
def random_number_generator(
        seed: Annotated[int, 'The random seed used by the generator', True],
        range: Annotated[tuple[int, int], 'The range of the generated numbers', True],
) -> int:
    """
    Generates a random number x, s.t. range[0] <= x < range[1]
    """
    if not isinstance(seed, int):
        raise TypeError("Seed must be an integer")
    if not isinstance(range, tuple):
        raise TypeError("Range must be a tuple")
    if not isinstance(range[0], int) or not isinstance(range[1], int):
        raise TypeError("Range must be a tuple of integers")

    import random
    return random.Random(seed).randint(*range)


@register_tool
def get_weather(
        city_name: Annotated[str, 'The name of the city to be queried', True],
) -> str:
    """
    Get the current weather for `city_name`
    """

    if not isinstance(city_name, str):
        raise TypeError("City name must be a string")

    key_selection = {
        "current_condition": ["temp_C", "FeelsLikeC", "humidity", "weatherDesc", "observation_time"],
    }
    import requests
    try:
        resp = requests.get(f"https://wttr.in/{city_name}?format=j1")
        resp.raise_for_status()
        resp = resp.json()
        ret = {k: {_v: resp[k][0][_v] for _v in v} for k, v in key_selection.items()}
    except Exception:
        import traceback
        ret = "Error encountered while fetching weather data!\n" + traceback.format_exc()

    return str(ret)


if __name__ == "__main__":
    print(dispatch_tool("get_weather", {"city_name": "beijing"}))
    print(get_tools())
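
For reference, a sketch of what get_tools() should return for get_weather after registration (shape inferred from register_tool above, not captured from a run; note the "type" values are Python type names such as "str", where strict JSON Schema consumers expect "string"):

# Expected shape of the generated tool definition for get_weather.
[
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for `city_name`",
            "parameters": {
                "type": "object",
                "properties": {
                    "city_name": {
                        "type": "str",  # Python type name, not JSON Schema "string"
                        "description": "The name of the city to be queried",
                    },
                },
                "required": ["city_name"],
            },
        },
    },
]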
  1. Your Python version.
     Python 3.10
  2. The version of xinference you use.
     0.7.2
  3. Versions of crucial packages.
  4. Full stack of the error.
     Traceback (most recent call last):
       File "/work/miniconda3/envs/chatglm3/lib/python3.10/site-packages/openai/_base_client.py", line 885, in _request
         raise self._make_status_error_from_response(err.response) from None
     openai.BadRequestError: Error code: 400 - {'detail': 'Invalid input. Please specify the prompt.'}
  5. Minimized code to reproduce the error.

Expected behavior

The second-round request containing the tool results should be accepted, and the model should produce a final reply instead of a 400 error.

Additional context

Per OpenAI's function-call spec, the second round's messages contain three messages: user, assistant, and tool, but xinference rejects the request.
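
For illustration, the rejected second-round payload looks roughly like this (a minimal sketch; the tool_call_id and arguments below are placeholders, not values captured from an actual run):

# Second-round messages per the OpenAI tool-call flow (placeholder ids/values).
messages = [
    {"role": "user", "content": "帮我查询北京的天气怎么样"},  # "Check the weather in Beijing for me"
    {
        "role": "assistant",
        "content": "",
        "tool_calls": [
            {
                "id": "call_0",  # placeholder id
                "type": "function",
                "function": {"name": "get_weather", "arguments": '{"city_name": "beijing"}'},
            }
        ],
    },
    {
        "role": "tool",
        "tool_call_id": "call_0",
        "name": "get_weather",
        "content": "...",  # tool output as a string
    },
]
# Sending this list back via client.chat.completions.create(**params) is the
# request that xinference answered with 400 "Invalid input. Please specify the prompt."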

@XprobeBot XprobeBot added the bug Something isn't working label Dec 18, 2023
@XprobeBot XprobeBot added this to the v0.7.3 milestone Dec 18, 2023
@codingl2k1
Contributor

Multi-turn conversations containing tool messages are not yet supported. We will add this feature as soon as possible.

@rainsoft
Author

Thanks.

@robator0127

robator0127 commented Dec 3, 2024

I encountered the error again with xinference version 0.16.1. After I changed the source code, the error was gone.

Source file: **/site-packages/xinference/api/restful_api.py, in the function "async def create_chat_completion". There are some lines (starting at line 1895) as follows:

if not messages or messages[-1].get("role") not in ["user", "system", "tool"]:
    raise HTTPException(
        status_code=400, detail="Invalid input. Please specify the prompt."
    )

It requires the role of the last message to be one of user, system, or tool. Sometimes this condition can't be satisfied, and then the error occurs. To change it, delete `or messages[-1].get("role") not in ["user", "system", "tool"]`; the new lines are:

if not messages:
    raise HTTPException(
        status_code=400, detail="Invalid input. Please specify the prompt."
    )

After the modification, restart xinference and try the prompt again; it may work.
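
For anyone working around this from the client side instead: the check above accepts a trailing "tool" message, so a minimal sketch (assuming the OpenAI-style fields used earlier in this thread) is to send tool results with role "tool" rather than the deprecated "function" role:

# Client-side sketch: end the message list with a role the check allows.
params["messages"].append({
    "role": "tool",                 # in ["user", "system", "tool"], so it passes the check
    "tool_call_id": tool_call.id,   # links the result to the assistant's tool call
    "name": tool_call.function.name,
    "content": tool_response,
})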
