What would you like to be added?
Add MCP Client Sampling support to Gemini CLI, following the MCP Specification.
This would enable MCP servers to call language models through Gemini CLI, similar to how VS Code with GitHub Copilot already supports this feature.
Key capabilities needed:
- Handle `sampling/createMessage` requests from MCP servers
- Support model preferences (hints, cost/speed/intelligence priorities)
- Forward requests to appropriate Gemini models based on those preferences
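For reference, a `sampling/createMessage` request from an MCP server looks roughly like this per the MCP specification (the field values here are illustrative, not from any real server):

```typescript
// Shape of an MCP sampling/createMessage request (JSON-RPC 2.0),
// as defined by the MCP specification. Values are illustrative only.
const exampleRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "sampling/createMessage",
  params: {
    messages: [
      {
        role: "user",
        content: { type: "text", text: "Does this diff follow our style guide?" },
      },
    ],
    modelPreferences: {
      hints: [{ name: "gemini-1.5-flash" }], // soft suggestion; the client picks the actual model
      costPriority: 0.8,        // 0..1: how much to favor cheaper models
      speedPriority: 0.5,       // 0..1: how much to favor lower latency
      intelligencePriority: 0.3, // 0..1: how much to favor capability
    },
    systemPrompt: "You are a code style checker.",
    maxTokens: 256,
  },
};
```

Per the spec, the client (here, Gemini CLI) would validate the request, map the preferences to a concrete model, call it, and return a result containing `role`, `content`, `model`, and `stopReason`.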
Why is this needed?
Enables Agentic MCP Tools: Without sampling support, MCP servers can only provide simple deterministic tools. With sampling, they become intelligent agents that can reason and make decisions.
Practical Example - Background Code Style Checker:
While coding, you start an MCP style-checker tool that returns immediately and then works silently in the background.
It monitors file saves and uses AI (via sampling) to analyze violations in context, understanding your project's conventions and complexity patterns.
All of this reasoning happens in the background while you continue working, and the MCP server leverages Gemini's AI through sampling without needing its own API key.
Benefits:
- Users don't need separate API keys for each MCP server
- Users control which Gemini model is used (balancing cost vs intelligence)
- Enables intelligent, context-aware MCP tools that can reason about complex tasks
- Provides feature parity with VS Code (which already supports this)
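One way the client side could honor model preferences when forwarding a request: check the server's hints first, then weigh intelligence against cost/speed priorities. The function name, model IDs, and heuristic below are a hypothetical sketch, not Gemini CLI's actual logic:

```typescript
// Hypothetical helper: pick a Gemini model from MCP model preferences.
interface ModelPreferences {
  hints?: { name?: string }[];
  costPriority?: number;         // 0..1, higher = prefer cheaper
  speedPriority?: number;        // 0..1, higher = prefer faster
  intelligencePriority?: number; // 0..1, higher = prefer more capable
}

// Illustrative candidate models; users would configure the real mapping.
const KNOWN_MODELS = ["gemini-1.5-flash", "gemini-1.5-pro"];

function pickGeminiModel(prefs: ModelPreferences): string {
  // Honor the first hint that matches a known model substring
  // (hints are advisory in the MCP spec, never binding).
  for (const hint of prefs.hints ?? []) {
    const match = KNOWN_MODELS.find((m) => hint.name && m.includes(hint.name));
    if (match) return match;
  }
  // Otherwise weigh capability against cost/speed.
  const wantSmart = prefs.intelligencePriority ?? 0;
  const wantCheapFast = Math.max(prefs.costPriority ?? 0, prefs.speedPriority ?? 0);
  return wantSmart > wantCheapFast ? "gemini-1.5-pro" : "gemini-1.5-flash";
}
```

Keeping this mapping on the client side is what gives users control: the server only expresses priorities, while the user's configuration decides which model (and whose quota) is actually used.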
Additional context
Live demo with sampling supported:
