Possible to launch a server and client in separate processes? #84
I am not sure I fully understand. MCP is a client/server architecture: clients and servers operate independently and speak to each other over a transport.

**Implementing a client**

If you want to implement a client, you likely want to connect to servers via either STDIO or HTTP+SSE. For STDIO, the client spawns the server executable as a subprocess. Since you mentioned "in the same script": a stdio client does need to spawn a server, but that server is not the same script. STDIO servers are just programs that listen for JSON-RPC messages on stdin and write JSON-RPC messages to stdout. It might help to take https://modelcontextprotocol.io/llms-full.txt, put it into Claude, and ask the model for help understanding the concept.

**Implementing a server**

If you are only interested in implementing a server, you never need to start a client. I would recommend a framework like https://github.com/jlowin/fastmcp for ease of use.

I hope this helps. Let me know if you have more questions.
@dsp-ant yea I guess I'm just confused. The example in the readme is something like:
But in this example, it seems like the client depends on variables that you can only get from launching the server? How do you launch them separately? I may have missed where this was documented. An example for the target usage I am looking for:
If we can achieve something similar to the above, I would love to add it to llama-index, but so far I haven't found a way to do it using the current mcp package/docs.
I couldn't find an example of the SSE (Server-Sent Events) server either. However, the script below might help you get things set up. Navigate to the script's directory and run:

```
uv run --script client.py
```

If the script hangs indefinitely, verify that the server starts successfully with the following command:

```
uv run mcp-simple-tool --transport sse --port 8000
```

```python
# /// script
# requires-python = ">=3.10"
# dependencies = ["mcp"]
# [tool.uv.sources]
# mcp = { path = "../../..", editable = true }
# ///
import asyncio
import subprocess

from mcp.client.session import ClientSession
from mcp.client.sse import sse_client


def start_server():
    """Start the SSE server as a separate process and wait until it is ready."""
    process = subprocess.Popen(
        ["uv", "run", "mcp-simple-tool", "--transport", "sse", "--port", "8000"],
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
        text=True,
    )
    # Uvicorn logs its startup banner to stderr; block until the server is up.
    for line in process.stderr:
        if "Uvicorn running" in line:
            print(line)
            break
    return process


async def client_logic():
    async with sse_client(url="http://0.0.0.0:8000/sse") as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # List available tools
            tools = await session.list_tools()
            print(tools)

            # Call the fetch tool
            result = await session.call_tool("fetch", {"url": "https://example.com"})
            print(result)


def main():
    # Start the server in its own process
    server_process = start_server()
    try:
        # Run the client logic
        asyncio.run(client_logic())
    finally:
        # Terminate the server process
        server_process.terminate()
        print("Server terminated.")


if __name__ == "__main__":
    main()
```
When looking at the examples, it seems like you always need to launch the server and client in the same script, because they share the read/write variables.
Is there a way to launch these two pieces in their own scripts? If so, is it documented? This feels like an extremely common use case that might be missing.
I'm trying to write an MCP server integration for tools in llama-index and realized I can't figure it out.
Thanks for any help!