MCP example for SSE #354

Merged 1 commit on Mar 26, 2025
2 changes: 1 addition & 1 deletion examples/mcp/filesystem_example/README.md
@@ -10,7 +10,7 @@

## Details

-The example uses the `MCPServerStdio` class from `agents`, with the command:
+The example uses the `MCPServerStdio` class from `agents.mcp`, with the command:

```bash
npx -y "@modelcontextprotocol/server-filesystem" <samples_directory>
```
5 changes: 3 additions & 2 deletions examples/mcp/git_example/README.md
@@ -10,16 +10,17 @@

## Details

-The example uses the `MCPServerStdio` class from `agents`, with the command:
+The example uses the `MCPServerStdio` class from `agents.mcp`, with the command:

```bash
uvx mcp-server-git
```

Prior to running the agent, the user is prompted to provide a local directory path to their git repo. Using that, the Agent can invoke Git MCP tools like `git_log` to inspect the git commit log.

Under the hood:

1. The server is spun up in a subprocess, and exposes a bunch of tools like `git_log()`
2. We add the server instance to the Agent via `mcp_servers`.
3. Each time the agent runs, we call out to the MCP server to fetch the list of tools via `server.list_tools()`. The result is cached.
4. If the LLM chooses to use an MCP tool, we call the MCP server to run the tool via `server.run_tool()`.
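The caching in steps 3 and 4 can be sketched with a small stand-in class (`FakeMCPServer` below is purely illustrative, not the SDK's real server classes):

```python
import asyncio


class FakeMCPServer:
    """Stand-in for an MCP server connection, to illustrate tool-list caching."""

    def __init__(self) -> None:
        self.fetch_count = 0       # how many round-trips actually hit the "server"
        self._cached_tools = None  # populated on the first list_tools() call

    async def list_tools(self) -> list[str]:
        # Step 3: fetch the tool list once, then serve it from the cache.
        if self._cached_tools is None:
            self.fetch_count += 1
            self._cached_tools = ["git_log", "git_status"]
        return self._cached_tools

    async def run_tool(self, name: str, args: dict) -> str:
        # Step 4: forward a tool invocation chosen by the LLM to the server.
        return f"ran {name} with {args}"


async def demo() -> tuple[int, str]:
    server = FakeMCPServer()
    await server.list_tools()  # first agent run: real fetch
    await server.list_tools()  # second agent run: served from cache
    output = await server.run_tool("git_log", {"max_count": 5})
    return server.fetch_count, output


fetches, output = asyncio.run(demo())
print(fetches, output)  # → 1 ran git_log with {'max_count': 5}
```

Two agent runs cost only one `list_tools` round-trip, which is the point of the cache.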
8 changes: 2 additions & 6 deletions examples/mcp/git_example/main.py
@@ -30,12 +30,8 @@ async def main():
     directory_path = input("Please enter the path to the git repository: ")

     async with MCPServerStdio(
-        params={
-            "command": "uvx",
-            "args": [
-                "mcp-server-git"
-            ]
-        }
+        cache_tools_list=True,  # Cache the tools list, for demonstration
+        params={"command": "uvx", "args": ["mcp-server-git"]},
     ) as server:
         with trace(workflow_name="MCP Git Example"):
             await run(server, directory_path)
13 changes: 13 additions & 0 deletions examples/mcp/sse_example/README.md
@@ -0,0 +1,13 @@
# MCP SSE Example

This example uses a local SSE server in [server.py](server.py).

Run the example via:

```bash
uv run python examples/mcp/sse_example/main.py
```

## Details

The example uses the `MCPServerSse` class from `agents.mcp`. The server runs in a subprocess at `http://localhost:8000/sse`.
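On the wire, `MCPServerSse` consumes standard Server-Sent Events from that URL. As a rough, stdlib-only illustration of the transport (`parse_sse` is a hand-rolled sketch, not the class's actual parser), an SSE stream is a sequence of `data:` lines with a blank line terminating each event:

```python
def parse_sse(stream: str) -> list[str]:
    """Extract the data payloads from a Server-Sent Events stream."""
    events, buffer = [], []
    for line in stream.splitlines():
        if line.startswith("data:"):
            buffer.append(line[len("data:"):].strip())
        elif line == "" and buffer:
            # A blank line terminates the current event.
            events.append("\n".join(buffer))
            buffer = []
    return events


sample = 'data: {"msg": "hello"}\n\ndata: {"msg": "world"}\n\n'
print(parse_sse(sample))  # → ['{"msg": "hello"}', '{"msg": "world"}']
```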
83 changes: 83 additions & 0 deletions examples/mcp/sse_example/main.py
@@ -0,0 +1,83 @@
import asyncio
import os
import shutil
import subprocess
import time
from typing import Any

from agents import Agent, Runner, gen_trace_id, trace
from agents.mcp import MCPServer, MCPServerSse
from agents.model_settings import ModelSettings


async def run(mcp_server: MCPServer):
    agent = Agent(
        name="Assistant",
        instructions="Use the tools to answer the questions.",
        mcp_servers=[mcp_server],
        model_settings=ModelSettings(tool_choice="required"),
    )

    # Use the `add` tool to add two numbers
    message = "Add these numbers: 7 and 22."
    print(f"Running: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)

    # Run the `get_current_weather` tool
    message = "What's the weather in Tokyo?"
    print(f"\n\nRunning: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)

    # Run the `get_secret_word` tool
    message = "What's the secret word?"
    print(f"\n\nRunning: {message}")
    result = await Runner.run(starting_agent=agent, input=message)
    print(result.final_output)


async def main():
    async with MCPServerSse(
        name="SSE Python Server",
        params={
            "url": "http://localhost:8000/sse",
        },
    ) as server:
        trace_id = gen_trace_id()
        with trace(workflow_name="SSE Example", trace_id=trace_id):
            print(f"View trace: https://platform.openai.com/traces/{trace_id}\n")
            await run(server)


if __name__ == "__main__":
    # Let's make sure the user has uv installed
    if not shutil.which("uv"):
        raise RuntimeError(
            "uv is not installed. Please install it: https://docs.astral.sh/uv/getting-started/installation/"
        )

    # We'll run the SSE server in a subprocess. Usually this would be a remote server, but for this
    # demo, we'll run it locally at http://localhost:8000/sse
    process: subprocess.Popen[Any] | None = None
    try:
        this_dir = os.path.dirname(os.path.abspath(__file__))
        server_file = os.path.join(this_dir, "server.py")

        print("Starting SSE server at http://localhost:8000/sse ...")

        # Run `uv run server.py` to start the SSE server
        process = subprocess.Popen(["uv", "run", server_file])
        # Give it 3 seconds to start
        time.sleep(3)

        print("SSE server started. Running example...\n\n")
    except Exception as e:
        print(f"Error starting SSE server: {e}")
        exit(1)

    try:
        asyncio.run(main())
    finally:
        if process:
            process.terminate()
33 changes: 33 additions & 0 deletions examples/mcp/sse_example/server.py
@@ -0,0 +1,33 @@
import random

import requests
from mcp.server.fastmcp import FastMCP

# Create server
mcp = FastMCP("Echo Server")


@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    print(f"[debug-server] add({a}, {b})")
    return a + b


@mcp.tool()
def get_secret_word() -> str:
    print("[debug-server] get_secret_word()")
    return random.choice(["apple", "banana", "cherry"])


@mcp.tool()
def get_current_weather(city: str) -> str:
    print(f"[debug-server] get_current_weather({city})")

    endpoint = "https://wttr.in"
    response = requests.get(f"{endpoint}/{city}")
    return response.text


if __name__ == "__main__":
    mcp.run(transport="sse")
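Under the hood, a client invokes one of these tools by sending a JSON-RPC 2.0 `tools/call` request over the transport. A minimal sketch of that payload, hand-built with the stdlib (`make_tools_call` is illustrative; the SDK constructs and sends this for you):

```python
import json


def make_tools_call(request_id: int, tool: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 `tools/call` request for an MCP server."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


payload = make_tools_call(1, "add", {"a": 7, "b": 22})
request = json.loads(payload)
print(request["method"], request["params"]["name"])  # → tools/call add
```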