This repository demonstrates a basic implementation of a Pydantic AI agent that uses the Model Context Protocol (MCP) to access and execute tools. This example shows how to bridge the gap between MCP tools and Pydantic AI's agent framework.
NOTE that this is a very basic implementation. Over the next month I'll be expanding it in much more depth: making it fully conversational, adding easier management of MCP clients, supporting multiple MCP clients, and more.
The Model Context Protocol (MCP) is a standardized protocol for AI model interactions, allowing models to access external tools and context. This example demonstrates how to:
- Connect to an MCP server
- Convert MCP tools to Pydantic AI tools
- Create an interactive agent that can use these tools
- MCP Client: Connects to an MCP server using stdio communication
- Tool Conversion: Transforms MCP tools into Pydantic AI compatible tools
- Interactive Agent: Provides a chat interface to interact with the AI agent
- `pydantic_mcp_agent.py`: Main application file that sets up the MCP client and runs the agent
- `mcp_tools.py`: Utility module that converts MCP tools to Pydantic AI tools
- `requirements.txt`: Dependencies required to run the application
The `mcp_tools.py` file contains the core functionality for converting MCP tools to Pydantic AI tools:
```python
async def mcp_tools(session: ClientSession) -> List[PydanticTool]:
    """Convert MCP tools to pydantic_ai Tools."""
    await session.initialize()
    tools = (await session.list_tools()).tools
    return [create_tool_instance(session, tool) for tool in tools]
```
This function:
- Initializes the MCP session
- Retrieves the list of available tools from the MCP server
- Converts each MCP tool to a Pydantic AI tool
The `create_tool_instance` function handles the actual conversion:
```python
def create_tool_instance(session: ClientSession, tool: MCPTool) -> PydanticTool:
    """Initialize a Pydantic AI Tool from an MCP Tool."""
    async def execute_tool(**kwargs: Any) -> Any:
        return await session.call_tool(tool.name, arguments=kwargs)

    async def prepare_tool(ctx: RunContext, tool_def: ToolDefinition) -> ToolDefinition | None:
        tool_def.parameters_json_schema = tool.inputSchema
        return tool_def

    return PydanticTool(
        execute_tool,
        name=tool.name,
        description=tool.description or "",
        takes_ctx=False,
        prepare=prepare_tool,
    )
```
This function:
- Creates an execution function that calls the MCP tool
- Creates a preparation function that sets up the tool's schema
- Returns a Pydantic AI Tool with the appropriate configuration
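The effect of the `prepare` hook can be illustrated without the libraries installed. The sketch below uses simplified stand-in dataclasses (the real `ToolDefinition` and MCP tool classes carry more fields) to show how the hook replaces the locally generated parameter schema with the schema the MCP server advertises:

```python
import asyncio
from dataclasses import dataclass, field

# Simplified stand-ins for pydantic_ai's ToolDefinition and an MCP tool;
# the real classes have more fields than shown here.
@dataclass
class FakeToolDefinition:
    name: str
    description: str
    parameters_json_schema: dict = field(default_factory=dict)

@dataclass
class FakeMCPTool:
    name: str
    description: str
    inputSchema: dict

def make_prepare(tool: FakeMCPTool):
    """Build a prepare hook that overrides the auto-generated schema
    with the schema advertised by the MCP server."""
    async def prepare_tool(tool_def: FakeToolDefinition) -> FakeToolDefinition:
        tool_def.parameters_json_schema = tool.inputSchema
        return tool_def
    return prepare_tool

# Demo: the MCP server's schema wins over the (empty) local default.
mcp_tool = FakeMCPTool(
    name="read_file",
    description="Read a file from disk",
    inputSchema={"type": "object", "properties": {"path": {"type": "string"}}},
)
tool_def = FakeToolDefinition(name="read_file", description="Read a file from disk")
prepared = asyncio.run(make_prepare(mcp_tool)(tool_def))
print(prepared.parameters_json_schema)
```

This matters because Pydantic AI normally derives a tool's schema from the Python function signature; since `execute_tool` only accepts `**kwargs`, the MCP server's own `inputSchema` must be injected for the model to see the real parameters.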
The main application file sets up the MCP client and initializes the agent:
```python
async def main() -> None:
    server_params = StdioServerParameters(
        command=os.getenv("NPX_COMMAND", "npx"),
        args=["-y", "@modelcontextprotocol/server-filesystem", str(pathlib.Path(__file__).parent)],
    )
    async with stdio_client(server_params) as (read, write):
        async with ClientSession(read, write) as session:
            tools = await mcp_tools.mcp_tools(session)
            agent = Agent(model="openai:gpt-4o-mini", tools=tools)
            # Start the interactive chat loop
            await chat_loop(agent)
```
This function:
- Configures the MCP server parameters
- Establishes a connection to the MCP server
- Converts MCP tools to Pydantic AI tools
- Creates a Pydantic AI agent with the converted tools
- Starts an interactive chat loop
To run this example you'll need:
- Python 3.9+
- Node.js (for the MCP server, which is launched via `npx`)
1. Clone this repository

2. Set up a virtual environment:

   Windows:
   ```
   python -m venv venv
   venv\Scripts\activate
   ```

   macOS/Linux:
   ```
   python3 -m venv venv
   source venv/bin/activate
   ```

3. Install the Python dependencies:
   ```
   pip install -r requirements.txt
   ```

4. Create a `.env` file with your OpenAI API key:
   ```
   OPENAI_API_KEY=your_api_key_here
   ```

5. Run the agent:
   ```
   python pydantic_mcp_agent.py
   ```

6. Interact with the agent through the command line interface
You can extend this example by adding more MCP clients for the agent. I will be diving into this more myself soon!
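One way to support multiple MCP servers is to open a session per server and concatenate the converted tools before handing them all to the agent. Below is a sketch under stated assumptions: `gather_tools` is a hypothetical helper (not part of this repository), and the commented usage assumes the `mcp_tools` helper and imports shown earlier:

```python
import asyncio

async def gather_tools(sessions, convert):
    """Convert and concatenate tools from several MCP sessions.

    `convert` is the per-session converter (e.g. mcp_tools.mcp_tools above).
    """
    tool_lists = await asyncio.gather(*(convert(s) for s in sessions))
    return [tool for tools in tool_lists for tool in tools]

# In the real application this might look roughly like (untested sketch,
# using contextlib.AsyncExitStack to keep every connection open at once):
#
# async with AsyncExitStack() as stack:
#     sessions = []
#     for params in all_server_params:  # one StdioServerParameters per server
#         read, write = await stack.enter_async_context(stdio_client(params))
#         sessions.append(await stack.enter_async_context(ClientSession(read, write)))
#     tools = await gather_tools(sessions, mcp_tools.mcp_tools)
#     agent = Agent(model="openai:gpt-4o-mini", tools=tools)
```

Since the agent just receives a flat list of tools, nothing else in the setup needs to change; the main caveat is that tool names must be unique across servers.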
This example relies on the following key packages:
- `pydantic-ai`: Framework for building AI agents with Pydantic
- `mcp`: Model Context Protocol client library
- `openai`: OpenAI API client for accessing AI models