
Allow client_tools to be defined only once #142

Open · wants to merge 2 commits into main

Conversation

@MichaelClifford (Author)

What does this PR do?

This PR aims to address an issue I noticed where client_tools has to be declared twice, in two different ways, in order to work properly. It has to be declared in the AgentConfig with something like tool.get_tool_definition(), as well as in the Agent. See the example below.

agent_config = AgentConfig(
    client_tools=[tool.get_tool_definition() for tool in client_tools],
    ...
)

agent = Agent(
    client=client,
    agent_config=agent_config,
    client_tools=client_tools,
)

This PR updates the Agent class initialization to set agent_config["client_tools"] based on the Agent class's client_tools parameter so that the user only needs to declare client_tools once and not worry about the .get_tool_definition() list comprehension.
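
For reference, here is a minimal sketch of what the updated initialization could look like; the constructor signature and the name-to-tool mapping are assumptions pieced together from the diff and discussion below, not the exact code in this PR:

from typing import Iterable

class Agent:
    def __init__(self, client, agent_config, client_tools: Iterable = ()):
        self.client = client
        self.agent_config = agent_config
        # Convert each ClientTool into its JSON tool definition so the caller
        # no longer needs to write the list comprehension themselves.
        self.agent_config["client_tools"] = [
            tool.get_tool_definition() for tool in client_tools
        ]
        # Assumed detail (from the discussion below): keep a name -> tool map
        # for dispatching tool calls back to the ClientTool objects.
        self.client_tools = {tool.get_name(): tool for tool in client_tools}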

Test Plan

I've confirmed that these code changes work as expected using the llamastack/distribution-ollama:latest image as the local Llama Stack server. You can run the code snippet below to verify.

from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.client_tool import client_tool
from llama_stack_client.lib.agents.event_logger import EventLogger
from llama_stack_client.lib.agents.agent import Agent
from llama_stack_client.types.agent_create_params import AgentConfig


client = LlamaStackClient(base_url="http://localhost:8321")

@client_tool
def torchtune(query: str = "torchtune"):
    """
    Answer information about torchtune.

    :param query: The query to use for querying the internet
    :returns: Information about torchtune
    """
    dummy_response = """
            torchtune is a PyTorch library for easily authoring, finetuning and experimenting with LLMs.

            torchtune provides:

            PyTorch implementations of popular LLMs from Llama, Gemma, Mistral, Phi, and Qwen model families
            Hackable training recipes for full finetuning, LoRA, QLoRA, DPO, PPO, QAT, knowledge distillation, and more
            Out-of-the-box memory efficiency, performance improvements, and scaling with the latest PyTorch APIs
            YAML configs for easily configuring training, evaluation, quantization or inference recipes
            Built-in support for many popular dataset formats and prompt templates
    """
    return dummy_response

agent_config = AgentConfig(
    model="meta-llama/Llama-3.1-8B-Instruct",
    enable_session_persistence=False,
    instructions="You are a helpful assistant.",
    tool_choice="auto",
    tool_prompt_format="json",
)

agent = Agent(
    client=client,
    agent_config=agent_config,
    client_tools=[torchtune],
)

session_id = agent.create_session("test")
response = agent.create_turn(
    messages=[{"role": "user", "content": "What is torchtune?"}],
    session_id=session_id,
)

for r in EventLogger().log(response):
    r.print()

You should see output like the following, showing that the CustomTool was called correctly.

inference> {"type": "function", "name": "torchtune", "parameters": {"query": "What is torchtune?"}}
CustomTool> "\n            torchtune is a PyTorch library for easily authoring, finetuning and experimenting with LLMs.\n\n            torchtune provides:\n\n            PyTorch implementations of popular LLMs from Llama, Gemma, Mistral, Phi, and Qwen model families\n            Hackable training recipes for full finetuning, LoRA, QLoRA, DPO, PPO, QAT, knowledge distillation, and more\n            Out-of-the-box memory efficiency, performance improvements, and scaling with the latest PyTorch APIs\n            YAML configs for easily configuring training, evaluation, quantization or inference recipes\n            Built-in support for many popular dataset formats and prompt templates\n    "
inference> This response is based on the provided function `torchtune` which returns information about torchtune.

@yanxi0830 (Contributor)

Thanks! LGTM to improve SDK ergonomics.

@@ -29,6 +29,7 @@ def __init__(
     ):
         self.client = client
         self.agent_config = agent_config
+        self.agent_config["client_tools"] = [client_tool.get_tool_definition() for client_tool in client_tools]
@ehhuang (Contributor)

validate that there's no conflict if agent_config already sets this value?

@MichaelClifford (Author) · Feb 24, 2025

Thanks for the feedback @ehhuang! Would a simple if statement, where we only update the agent_config if client_tools does not already exist, work? Like this:

if "client_tools" not in self.agent_config.keys():
    self.agent_config["client_tools"] = [client_tool.get_tool_definition() for client_tool in client_tools]

@MichaelClifford (Author) · Feb 24, 2025

After reading the other threads, I agree it would be nicer to have the tools defined in the agent_config instead of the agent. We could modify the Agent initialization with something like:

if "client_tools" in self.agent_config.keys():
 self.client_tools = {t.get_name(): t for t in agent_config["client_tools"]}
 self.agent_config["client_tools"] = [tool.get_tool_definition() for tool in agent_config["client_tools"]]

That way it just derives and sets the client tools values from whatever is in the agent_config. WDYT?

@ehhuang (Contributor)

LGTM. Could you please go ahead and remove instances of the old API from the codebase?

@MichaelClifford (Author)

@ehhuang Sounds good. It looks like the only place in this repo where old instances of the API need to be removed is react/agent.py. I can also make another PR to llama-stack-apps to update its usage there as well; I think it's only in a couple of spots. And I do not think this impacts the llama-stack repo at all. Let me know if you think there's somewhere else that needs to be updated too. Thanks!

@MichaelClifford (Author)

Update: Went ahead and updated the PR so that there is no longer a client_tools param for the Agent, to prevent conflicts. Now a user simply puts the list of ClientTool objects into the AgentConfig and they are processed correctly during Agent initialization, as sketched below.
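
A sketch of the usage this revision intends, abridged from the test script above (treat the exact parameter set as illustrative, not the PR's canonical example):

agent_config = AgentConfig(
    model="meta-llama/Llama-3.1-8B-Instruct",
    instructions="You are a helpful assistant.",
    # ClientTool objects now go directly into AgentConfig; Agent converts
    # them to JSON tool definitions during initialization.
    client_tools=[torchtune],
)

agent = Agent(client=client, agent_config=agent_config)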

@@ -83,7 +83,7 @@ def get_tool_defs():
         model=model,
         instructions=instruction,
         toolgroups=builtin_toolgroups,
-        client_tools=[client_tool.get_tool_definition() for client_tool in client_tools],
+        client_tools=client_tools,
@yanxi0830 (Contributor) · Feb 27, 2025

I don't think this will work, as client_tools is a list of ClientTool objects, whereas AgentConfig assumes these are in JSON format.

Could you test the change with a script? E.g. https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/react_agent.py

@MichaelClifford (Author)

I can confirm that this does work with https://github.com/meta-llama/llama-stack-apps/blob/main/examples/agents/react_agent.py. It works because this PR converts each CustomTool into JSON format during Agent initialization, which is what lets AgentConfig (rather than Agent) carry the client_tools. But I admit this essentially violates the current expected type for client_tools in AgentConfig (which is defined as an Iterable[ToolDefParam]).

@yanxi0830 (Contributor) left a comment

See comment in https://github.com/meta-llama/llama-stack-client-python/pull/142/files#r1974435609

I think your previous version makes more sense.

@MichaelClifford (Author)

Thanks for the feedback @yanxi0830 😄 After reading #160, it sounds like there are still some outstanding discussions on the best way to simplify the use of client tools with agents. For now I can go ahead and revert to the previous version and move any further discussion into that issue. It sounds like a proper fix might be a bit more involved and require some additional changes in LlamaStack.

Signed-off-by: Michael Clifford <mcliffor@redhat.com>
Signed-off-by: Michael Clifford <mcliffor@redhat.com>