Agent run calling MCP list tools every time before calling the model provider #3218

@psg2

Initial Checks

Description

When running an agent connected to an MCPServerStreamableHTTP server, I noticed in our Datadog traces that the agent calls MCP `tools/list` before every request to the model provider, even within a single run.

Also, if the `tools/list` call fails, the entire run fails with an `McpError` or `HTTPStatusError`.

Earlier today our agent failed with:

Server error '520 <none>' for url 'https://api.fireflies.ai/mcp'
For more information check: https://developer.mozilla.org/en-US/docs/Web/HTTP/Status/520
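
Since a 520 from the upstream MCP server is typically transient, one workaround (not a pydantic-ai API; a minimal stdlib sketch with a hypothetical `with_retries` helper and a stand-in `TransientServerError`) is to retry the failing call with exponential backoff rather than letting a single failed `tools/list` abort the whole run:

```python
import asyncio
import random


class TransientServerError(Exception):
    """Stand-in for McpError / HTTPStatusError on a 5xx such as the 520 above."""


async def with_retries(coro_factory, attempts=3, base_delay=0.5):
    """Retry an async call on transient errors with exponential backoff + jitter."""
    for attempt in range(attempts):
        try:
            return await coro_factory()
        except TransientServerError:
            if attempt == attempts - 1:
                raise  # out of attempts, surface the original error
            await asyncio.sleep(base_delay * (2 ** attempt) * (1 + random.random()))


async def main():
    calls = 0

    async def flaky_list_tools():
        # Simulates the MCP tools/list call failing twice, then succeeding.
        nonlocal calls
        calls += 1
        if calls < 3:
            raise TransientServerError("Server error '520 <none>'")
        return ["mcp_fireflies_search"]

    tools = await with_retries(flaky_list_tools, base_delay=0.01)
    return tools, calls


tools, calls = asyncio.run(main())
print(tools, calls)  # ['mcp_fireflies_search'] 3
```

This only papers over the symptom; the underlying issue is that the run depends on `tools/list` succeeding on every model-provider call.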

Running the example code below, the debug output shows two `tools/list` calls for a single run:

DEBUG:mcp.client.streamable_http:Sending client message: root=JSONRPCRequest(method='tools/list', params=None, jsonrpc='2.0', id=1)
DEBUG:mcp.client.streamable_http:Sending client message: root=JSONRPCRequest(method='tools/list', params=None, jsonrpc='2.0', id=2)
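
The behaviour I'd expect instead is for the tool list to be fetched once and reused within a run (or for some TTL). A minimal sketch of that idea, assuming a hypothetical `CachingToolLister` wrapper (this is illustrative only, not how pydantic-ai is structured internally):

```python
import asyncio
import time


class CachingToolLister:
    """Memoise a tools/list result with a TTL so repeated model-provider
    requests don't each trigger a fresh fetch. Hypothetical wrapper."""

    def __init__(self, fetch, ttl=60.0):
        self._fetch = fetch          # coroutine factory doing the real tools/list
        self._ttl = ttl
        self._cached = None
        self._fetched_at = 0.0

    async def list_tools(self):
        now = time.monotonic()
        if self._cached is None or now - self._fetched_at > self._ttl:
            self._cached = await self._fetch()
            self._fetched_at = now
        return self._cached


async def demo():
    fetches = 0

    async def fetch():
        nonlocal fetches
        fetches += 1
        return ["mcp_fireflies_search_transcripts"]

    lister = CachingToolLister(fetch, ttl=60.0)
    await lister.list_tools()
    await lister.list_tools()  # served from cache, no second fetch
    return fetches


fetches = asyncio.run(demo())
print(fetches)  # 1
```

With something like this, the two `tools/list` requests in the logs above would collapse into one per TTL window.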

Slack thread: https://pydanticlogfire.slack.com/archives/C083V7PMHHA/p1761072691821499

Example Code

# type: ignore
import asyncio
import logging
import os
from datetime import date

import dotenv

from pydantic_ai import Agent, RunContext
from pydantic_ai.mcp import MCPServerStreamableHTTP

logging.basicConfig(level=logging.WARNING)
mcp_logger = logging.getLogger("mcp.client.streamable_http")
mcp_logger.setLevel(logging.DEBUG)

dotenv.load_dotenv(".env.local")

FIREFLIES_API_KEY = os.getenv("FIREFLIES_API_KEY")

fireflies_server = MCPServerStreamableHTTP(
    "https://api.fireflies.ai/mcp",
    tool_prefix="mcp_fireflies",
    headers={"Authorization": f"Bearer {FIREFLIES_API_KEY}"},
    timeout=30,
)

weather_agent = Agent(
    "bedrock:us.anthropic.claude-3-5-haiku-20241022-v1:0",
    system_prompt="Provide a weather forecast for the locations the user provides.",
    toolsets=[fireflies_server],
)


@weather_agent.tool
async def weather_forecast(
    ctx: RunContext,
    location: str,
    forecast_date: date,
) -> str:
    return f"The forecast in {location} on {forecast_date} is 24°C and sunny."



async def main():
    user_prompt = "What will the weather be like in Paris on Tuesday?"

    result = await weather_agent.run(user_prompt=user_prompt, deps=None)

    print(result.output)


if __name__ == "__main__":
    asyncio.run(main())

Python, Pydantic AI & LLM client version

Python 3.12.11
Pydantic AI Slim 1.2.1
AWS Bedrock
