
[Bug]: openai agent uses invalid tool name to lookup function #9688

Closed
vjr opened this issue Dec 25, 2023 · 6 comments
Labels
bug (Something isn't working), triage (Issue needs to be triaged/prioritized)

Comments


vjr commented Dec 25, 2023

Bug Description

The OpenAIAgent, when given QueryEngineTools created from SQLTableRetrieverQueryEngines, sometimes looks up functions by invalid tool names: if the available tools are named Tool_A and Tool_B, it occasionally uses Tool_A.Tool_A or Tool_A.Tool_B when the intended tool name is just Tool_A.

This happens with the gpt-4-1106-preview model but not if I use gpt-3.5-turbo.

Version

0.9.21

Steps to Reproduce

Create an OpenAIAgent with a few tools and run a query() or chat() call to observe the problem when using the gpt-4-1106-preview model.

Relevant Logs/Tracebacks

File "test.py", line 23, in query
    return str(self.agent.chat(query_str))
  File ".venv/lib/python3.10/site-packages/llama_index/callbacks/utils.py", line 39, in wrapper
    return func(self, *args, **kwargs)
  File ".venv/lib/python3.10/site-packages/llama_index/agent/runner/base.py", line 473, in chat
    chat_response = self._chat(
  File ".venv/lib/python3.10/site-packages/llama_index/agent/runner/base.py", line 431, in _chat
    cur_step_output = self._run_step(task.task_id, mode=mode)
  File ".venv/lib/python3.10/site-packages/llama_index/agent/runner/base.py", line 293, in _run_step
    cur_step_output = self.agent_worker.run_step(step, task, **kwargs)
  File ".venv/lib/python3.10/site-packages/llama_index/callbacks/utils.py", line 39, in wrapper
    return func(self, *args, **kwargs)
  File ".venv/lib/python3.10/site-packages/llama_index/agent/openai/step.py", line 573, in run_step
    return self._run_step(
  File ".venv/lib/python3.10/site-packages/llama_index/agent/openai/step.py", line 470, in _run_step
    self._call_function(
  File ".venv/lib/python3.10/site-packages/llama_index/agent/openai/step.py", line 357, in _call_function
    EventPayload.TOOL: get_function_by_name(
  File ".venv/lib/python3.10/site-packages/llama_index/agent/openai/step.py", line 51, in get_function_by_name
    raise ValueError(f"Tool with name {name} not found")
ValueError: Tool with name MyToolName.MyToolName not found
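One way to make the failing lookup tolerate this failure mode would be to fall back to the last dotted segment of the name before raising. The helper below is a self-contained sketch of that idea; the `SimpleNamespace` objects stand in for `QueryEngineTool` instances (which expose `.metadata.name`), and this is not LlamaIndex's actual `get_function_by_name` implementation:

```python
from types import SimpleNamespace


def get_function_by_name(tools, name):
    """Defensive lookup: tolerate dotted names like 'Tool_A.Tool_A'."""
    tools_by_name = {t.metadata.name: t for t in tools}
    if name in tools_by_name:
        return tools_by_name[name]
    # Fall back to the last dotted segment, e.g. "Tool_A.Tool_A" -> "Tool_A".
    candidate = name.split(".")[-1]
    if candidate in tools_by_name:
        return tools_by_name[candidate]
    raise ValueError(f"Tool with name {name} not found")


# Stand-ins for QueryEngineTool objects, which expose .metadata.name.
tool_a = SimpleNamespace(metadata=SimpleNamespace(name="Tool_A"))
tool_b = SimpleNamespace(metadata=SimpleNamespace(name="Tool_B"))
tools = [tool_a, tool_b]
```

Note the fallback is ambiguous for a name like `Tool_A.Tool_B` (it would resolve to `Tool_B`), so this only papers over the model's misbehavior rather than fixing it.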
@vjr vjr added bug Something isn't working triage Issue needs to be triaged/prioritized labels Dec 25, 2023

dosubot bot commented Dec 25, 2023

🤖

Hello again, @vjr! Hope you're having a good day! Nice to see you back in the LlamaIndex repo. Let's delve into this issue. 🌟

Thank you for reporting this issue. It seems like the OpenAIAgent is trying to look up a tool using an incorrect name format, which is causing the ValueError you're seeing. This could be due to a bug in the way the OpenAIAgent is handling tool names, especially when using the gpt-4-1106-preview model.

As a workaround, you could try to ensure that the tool names are correctly formatted before they are used in the OpenAIAgent. This could be done by adding a check in the _call_function method in the OpenAIAgent class to ensure that the tool name is correctly formatted before it is used to look up a function.

Here's a rough example of what this could look like:

def _call_function(self, function_name, *args, **kwargs):
    # Ensure the tool name is correctly formatted: keep only the part
    # before any stray "." the model may have appended.
    tool_name = function_name.split('.')[0]
    if tool_name not in self.tools:
        raise ValueError(f"Tool with name {tool_name} not found")
    # Rest of the function...
This is just a workaround and the root cause of the issue still needs to be addressed. It would be helpful if you could provide more information about the specific circumstances under which this issue occurs. For example, does it happen with all tools or only certain ones? Does it happen every time you use the gpt-4-1106-preview model or only under certain conditions?

This information could help us to better understand the issue and find a more permanent solution. In the meantime, I hope the workaround above can help you to avoid the ValueError and continue using the OpenAIAgent with the gpt-4-1106-preview model.

Please let me know if you have any questions or if there's anything else I can do to help. You can find more information about the OpenAIAgent and QueryEngineTool classes in the LlamaIndex documentation.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.


vjr commented Dec 25, 2023

@dosu-bot

The problem happens intermittently with the gpt-4-1106-preview model. It does not happen with the gpt-3.5-turbo model.


dosubot bot commented Dec 25, 2023

🤖

Based on the information you've provided, it seems like the issue you're experiencing is similar to a previously resolved issue in the LlamaIndex repository. The issue was titled [Bug]: Validation Error for tools in OpenAI Assistant Agent example and was resolved by a pull request: one line fix for wrapping of custom function tools to create OpenAI assistant.

Unfortunately, the exact changes made in the pull request to resolve the issue are not detailed in the repository. However, given the title of the pull request, it seems like the issue was related to how custom function tools were being wrapped when creating the OpenAI assistant.

The OpenAIAssistantAgent class in the LlamaIndex codebase is a wrapper around the OpenAI assistant API. It provides methods to interact with the OpenAI assistant, including creating new assistants, retrieving existing ones, running the assistant, and interacting with it through chat. The error you're experiencing is being raised from the get_function_by_name function, which is likely used to handle function calls required by the assistant.

Given this information, it's possible that the issue you're experiencing is due to how the OpenAIAssistantAgent is handling function calls when using the gpt-4-1106-preview model. It might be incorrectly wrapping the tool names, resulting in invalid tool names like Tool_A.Tool_A or Tool_A.Tool_B.

To resolve this issue, you might want to look into how the OpenAIAssistantAgent is wrapping the tool names when using the gpt-4-1106-preview model. You could also try to replicate the changes made in the aforementioned pull request, as they might be applicable to your issue.

If you're still experiencing issues after trying these steps, please provide more information about your setup and the exact steps you're taking when the issue occurs. This will help in diagnosing and resolving the issue.



stepdi commented Jan 2, 2024

Hey, @vjr!

I tried to reproduce this but was unable to. I ran it several times and the function (tool) names were always correct.
Is this code good enough to reproduce the issue?

from llama_index.agent import OpenAIAgent
from llama_index.llms import OpenAI
from llama_index.tools import QueryEngineTool, ToolMetadata

from my_query_engine import query_engine  # SQLTableRetrieverQueryEngine

query_engine_tools = [
    QueryEngineTool(
        query_engine=query_engine,
        metadata=ToolMetadata(
            name="Tool_A",
            description="Use it for salaries stats",
        ),
    ),
    QueryEngineTool(
        query_engine=query_engine,
        metadata=ToolMetadata(
            name="Tool_B",
            description="Use it for weather data",
        ),
    ),
]

llm = OpenAI(model="gpt-4-1106-preview")
agent = OpenAIAgent.from_tools(query_engine_tools, llm=llm, verbose=True)

agent.query("what about salaries and weather? ")

Got output:

=== Calling Function ===
Calling function: Tool_A with args: {"input": "salaries"}
...
Calling function: Tool_B with args: {"input": "weather"}


vjr commented Jan 8, 2024

Hi @stepdi, yes, the issue does not seem to be occurring any longer.

I had monkey-patched (is that the right phrase?) a couple of the call_function APIs to fix the tool names, and have now removed those patches.

I suspect it was the OpenAI LLM that was sending the incorrect tool names, and perhaps that has been fixed on their side?

Thank you.
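For the record, the kind of monkey-patch described above can be sketched generically: wrap the agent's function-calling entry point so malformed tool names are normalized before the lookup. The `Agent` class here is a stand-in for illustration only; the real patch would target LlamaIndex's internal `_call_function`, whose exact signature may differ:

```python
def normalize_tool_name(name, valid_names):
    # Pass exact matches through; otherwise try the last dotted segment,
    # so "Tool_A.Tool_A" resolves to "Tool_A".
    if name in valid_names:
        return name
    last = name.split(".")[-1]
    return last if last in valid_names else name


class Agent:  # stand-in for the real agent worker class
    def __init__(self, tools):
        self.tools = tools  # dict: tool name -> callable

    def _call_function(self, name, *args, **kwargs):
        return self.tools[name](*args, **kwargs)


# Monkey-patch: keep a reference to the original method, then replace it
# with a wrapper that normalizes the name before delegating.
_original_call_function = Agent._call_function


def _patched_call_function(self, name, *args, **kwargs):
    name = normalize_tool_name(name, self.tools)
    return _original_call_function(self, name, *args, **kwargs)


Agent._call_function = _patched_call_function
```

Keeping a reference to the original method makes the patch easy to revert (reassign `_original_call_function`), which matches removing the workaround once the upstream fix landed.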

@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Apr 8, 2024

dosubot bot commented Apr 8, 2024

Hi, @vjr,

I'm helping the LlamaIndex team manage their backlog and am marking this issue as stale. From what I understand, the issue was reported regarding the OpenAIAgent using incorrect tool names with the gpt-4-1106-preview model. I provided a workaround to ensure correct tool name formatting, and you confirmed that the issue no longer occurs, suspecting that the problem may have been fixed in the OpenAI llm.

Could you please confirm if this issue is still relevant to the latest version of the LlamaIndex repository? If it is, please let the LlamaIndex team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and cooperation. If you have any further questions or need assistance, feel free to reach out.

@dosubot dosubot bot closed this as not planned (won't fix, can't repro, duplicate, or stale) Apr 15, 2024
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Apr 15, 2024