
Add MCP Support with LiteLLM #3937

Closed

quinlanjager wants to merge 26 commits into Aider-AI:main from quinlanjager:feature/litellm-mcp

Conversation

quinlanjager commented May 3, 2025

Overview

Related: #2525

This PR integrates Model Context Protocol (MCP) servers with Aider using LiteLLM's MCP bridge implementation. Server tools are provided to the underlying model by the Coder, so all Coders will have access to them.

Configuration follows the standard MCP server configuration JSON schema used by Claude and Cursor. All server operations are run on the main thread with coroutines, and Coders execute requests to multiple servers concurrently.

Configuration

MCP servers can be configured in multiple ways:

  1. Command Line: Directly specify server configurations as a JSON string:

    aider --mcp-servers '{"mcpServers":{"git":{"command":"uvx","args":["mcp-server-git"]}}}'
  2. Config File: Use a separate JSON configuration file:

    aider --mcp-servers-file mcp.json
  3. YAML Config: Add to your .aider.conf.yml:

    mcp-servers: |
      {
        "mcpServers": {
          "git": {
            "command": "uvx",
            "args": ["mcp-server-git"]
          }
        }
      }
    
    # mcp-servers-file: /path/to/mcp.json

Implementation Details

The integration leverages LiteLLM's experimental_mcp_client module to load tools from configured servers and provide them to OpenAI-compatible models.

The McpServer class manages stdio transport connections via the Python MCP SDK.

The Coder class has been extended to initialize and use MCP tools, process tool calls in streaming responses, and execute tools concurrently across multiple servers. At most 25 tool calls can be made while generating a single reply.
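For context, here is a minimal sketch of the underlying flow, based on LiteLLM's documented experimental_mcp_client API. The model name and prompt are placeholders; this is not the PR's actual code:

    import asyncio

    import litellm
    from litellm import experimental_mcp_client
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def demo():
        # Spawn the MCP server as a subprocess and talk to it over stdio
        params = StdioServerParameters(command="uvx", args=["mcp-server-git"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Convert the server's tools into OpenAI function-calling format
                tools = await experimental_mcp_client.load_mcp_tools(session=session, format="openai")
                resp = await litellm.acompletion(
                    model="gpt-4o",  # placeholder model
                    messages=[{"role": "user", "content": "Are there any uncommitted changes?"}],
                    tools=tools,
                )
                # Execute any tool calls the model made against the MCP server
                for tool_call in resp.choices[0].message.tool_calls or []:
                    result = await experimental_mcp_client.call_openai_tool(session=session, openai_tool=tool_call)
                    print(result)

    asyncio.run(demo())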

Limitations

Currently, stdio is the only supported server transport. This was a scoping decision. The Python MCP SDK has an SSE server transport, so implementation should be possible if desired.

It would be nice to maintain persistent server connections throughout Aider's runtime. Currently, connections only exist for the duration of each request. I've been using this quite a bit and it is reasonably fast, but I admit it is not ideal. Managing connection contexts at the top level with the with statement would be a more efficient approach to connection handling.
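For illustration, one hypothetical shape for persistent connections uses contextlib.AsyncExitStack to keep sessions open for Aider's whole runtime. The class and method names here are invented, not part of this PR:

    from contextlib import AsyncExitStack

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    class PersistentMcpSessions:
        """Hypothetical holder that keeps MCP sessions open across requests."""

        def __init__(self):
            self._stack = AsyncExitStack()
            self.sessions = {}

        async def connect(self, name, params: StdioServerParameters):
            # Enter the transport and session contexts once, at startup
            read, write = await self._stack.enter_async_context(stdio_client(params))
            session = await self._stack.enter_async_context(ClientSession(read, write))
            await session.initialize()
            self.sessions[name] = session

        async def aclose(self):
            # Tear all connections down once, at shutdown
            await self._stack.aclose()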

CLAassistant commented May 3, 2025

CLA assistant check
All committers have signed the CLA.

quinlanjager force-pushed the feature/litellm-mcp branch from f52ddb9 to 10ea9ba on May 3, 2025 08:03
quinlanjager mentioned this pull request on May 3, 2025
@imunique-ZJ

I've been testing this PR, and overall it seems to be working quite well. Great job on this!

However, while testing, I encountered an error message: container not running: No such process when using mcpServers added via containers, configured like this:

 "mcpServers": {
   "sequentialthinking": {
     "command": "docker",
     "args": [
       "run",
       "--rm",
       "-i",
       "mcp/sequentialthinking"
     ]
   }
 }

I'm not entirely sure if this is expected behavior for this specific configuration, or if I might have missed something in the setup?

Additionally, I was also wondering about the feasibility of running aider itself within a container, and using containerized mcpServers.

@quinlanjager (Author)

@imunique-ZJ Thank you for giving this a spin! 😄 I tested your config outside of a docker container. I was able to start and connect to the server. It sounds like you're running the project in a Docker container which might be the problem.

When using the MCP SDK's STDIO transport, Aider is responsible for starting and stopping the server processes. This means Aider has to have access to the executables and dependencies needed to start the server (in this case the docker process). So if you're running Aider inside of a Docker container and want to use MCP tools started with Docker you will have to configure some "Docker-in-Docker" solution. I think this might be beyond the scope of this PR though.

@imunique-ZJ

Thanks for the detailed explanation and for testing this out!

I also tested outside of a container environment, and I did occasionally encounter the same error. However, it didn't seem to affect the core functionality, and I was still able to get the expected results. So, it's not a blocking issue for me. 👍

Regarding the Docker-in-Docker or Podman-in-Podman approach: since it would be outside the scope of this particular PR, I won't go into further discussion here.

Thanks again.

quinlanjager force-pushed the feature/litellm-mcp branch from f979394 to 44b385c on May 5, 2025 06:00
quinlanjager force-pushed the feature/litellm-mcp branch from 44b385c to 282b349 on May 5, 2025 06:06
quinlanjager (Author) commented May 5, 2025

I updated the implementation to allow partial (or total) failure when initializing MCP servers. Even if a user's configured MCP servers fail to initialize, we should allow them to continue using Aider but let them know something has gone wrong.

Partial Failure

Screenshot 2025-05-04 at 11 03 11 PM

Total Failure

Screenshot 2025-05-04 at 11 03 54 PM

Multiple Total Failures

Screenshot 2025-05-04 at 11 09 20 PM

quinlanjager mentioned this pull request on May 5, 2025
strawberrymelonpanda commented May 5, 2025

I apologize if I'm missing something obvious, but have you been able to test this using local LLMs? If so, could you describe your setup please?

I tried using Qwen3 30B-A3B (which has good agentic support) with both Llama.cpp (using --jinja in LCPP and stream=false in Aider, since streaming isn't supported) and with Ollama, but wasn't having success. I quite possibly have something set up incorrectly.

With Llama.cpp, Qwen3-30B-A3B, the --jinja flag, and aider --verbose:

Loading MCP servers from file: /home/<user>/.aider/mcp.json
Loading MCP server: git
Loaded 1 MCP servers from /home/<user>/.aider/mcp.json
[...]
MCP servers configured:
  - git
    - git_status: Shows the working tree status
    [...]

Testing with: Are there any uncommited changes in the repo?

USER Are there any uncommited changes in the repo?
kwargs:
{
    "model": "openai/qwen3-30b-tools",    
    "stream": false,
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "git_status",
                "description": "Shows the working tree status",
                [...]
ModelResponse(id='chatcmpl-88kLkZ5RRgnRYXsRHA0DaT75tvMval0J', created=1746473202, model='qwen3-30b-tools', object='chat.completion', [...] choices=[Choices(finish_reason='tool_calls', index=0, message=Message(content=None, role='assistant', tool_calls=[ChatCompletionMessageToolCall(function=Function(arguments='{"repo_path":"."}', name='git_status'), id='4BNEfmO4MmbLrLxTlEl1g4fnjWVXiJTn', type='function')], function_call=None, provider_specific_fields={'refusal': None}))], usage=Usage(completion_tokens=22, prompt_tokens=2640, total_tokens=2662, completion_tokens_details=None, prompt_tokens_details=None), service_tier=None, timings={'prompt_n': 2633, 'prompt_ms': 1689.764, 'prompt_per_token_ms': 0.6417637675655146, 'prompt_per_second': 1558.2057612779063, 'predicted_n': 22, 'predicted_ms': 281.688, 'predicted_per_token_ms': 12.804, 'predicted_per_second': 78.1005935645111})

Empty response received from LLM. Check your provider account?

It looks like it's attempting to send a tool_call for git_status with arg repo_path: ".", which is as expected.
tool_calls=[ChatCompletionMessageToolCall(function=Function(arguments='{"repo_path":"."}', name='git_status')

But Aider tells me Empty response received from LLM..

For completeness, without the --jinja flag I get this from Aider:

litellm.APIError: APIError: OpenAIException - tools param requires --jinja flag
Retrying in 1.0 seconds...

Using Ollama with model: "ollama_chat/qwen3:30b-a3b" looks similar:

USER Are there any uncommited changes in the repo?
kwargs:
{
    "model": "ollama_chat/qwen3:30b-a3b",
    "stream": false,
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "git_status",
                "description": "Shows the working tree status",
                [...]
ModelResponse(id='chatcmpl-47db188f-b040-4bcb-b405-26da9aff3e86', created=1746475327, model='ollama_chat/qwen3:30b-a3b', object='chat.completion', [...], choices=[Choices(finish_reason='stop', index=0, message=Message(content='', role='assistant', tool_calls=[ChatCompletionMessageToolCall(function=Function(arguments='{"repo_path": "."}', name='git_status'), id='84f31a11-f671-43e3-872e-75f7297c9b1f', type='function')], function_call=None, provider_specific_fields=None))], usage=Usage(completion_tokens=280, prompt_tokens=8913, total_tokens=9193, completion_tokens_details=None, prompt_tokens_details=None))

Empty response received from LLM. Check your provider account?

I still see:

tool_calls=[ChatCompletionMessageToolCall(function=Function(arguments='{"repo_path": "."}', name='git_status')

But Empty response received from LLM. Check your provider account? from Aider.

rawwerks commented May 5, 2025

FYI @ishaan-jaff & @krrishdholakia, I think you might find this cool.

Aider is a perfect test bed to put the LiteLLM MCP bridge to work!

@ishaan-jaff

Looks great, let us know if there's any way we can improve the LiteLLM MCP bridge.

quinlanjager (Author) commented May 5, 2025

@strawberrymelonpanda I ran into the same problem. I looked at the LiteLLM repo and there seems to be an issue with tool calling for Ollama models (BerriAI/litellm#7570). I'm not 100% sure this is the exact problem happening behind the scenes, but it does look like the tools are being passed to the completion, so it feels like this might be related. I was also able to get tool calling working with Ollama with these settings, based on this suggestion:

env OPENAI_API_BASE=<ollama-base-url>/v1 aider --model openai/<ollama-model>

This option does appear to have some limitations though, so YMMV.

strawberrymelonpanda commented May 5, 2025

@quinlanjager Thanks for the pointers, that indeed got it moving.

For anyone following along, I changed my .aider.conf.yaml to:

openai-api-base: "http://127.0.0.1:11434/v1"
model: "openai/qwen3:30b-a3b"

A few notes:

  1. The CTX issue is annoying, but not that big of a problem since you can change it in other ways. Ollama recently changed the default to 4096 "with plans to increase it further", but that's still way too low IMO. The bigger issue is that I don't normally use Ollama, and I would love to know what's going on on the Llama.cpp side here. Llama.cpp normally works fine with Aider.

@ishaan-jaff, since you're in the thread, any ideas what's happening here? Is the Llama.cpp tool-use --jinja flag incompatible with the LiteLLM MCP bridge?

  2. Back to Ollama: it does indeed work now, but the messaging makes for a poor user experience as-is. I'm still seeing Empty response received from LLM. Check your provider account? in yellow even when it proceeds to work just fine:
architect> Are there any uncommited changes?

Empty response received from LLM. Check your provider account?

Tokens: 288 sent, 0 received.
Running MCP tool: git_status from server git
Tool arguments: {"repo_path":"."}

There are no uncommitted changes in the repository. The working tree is clean, and all modifications are either staged or already committed. Let me know if you need further assistance!

Tokens: 283 sent, 112 received.
MCP servers configured:
  - git

It's probably worth trying to find a way to suppress this message.

  3. I've noticed there's no opportunity for user approval.
> Add uncommited changes to git and commit them in single-task commits.

Empty response received from LLM. Check your provider account?

Tokens: 295 sent, 0 received.
Running MCP tool: git_status from server git
Tool arguments: {"repo_path":"."}

Running MCP tool: git_diff_unstaged from server git
Tool arguments: {"repo_path":"."}

Empty response received from LLM. Check your provider account?

Tokens: 619 sent, 0 received.
Running MCP tool: git_add from server git
Tool arguments: {"files":["test"],"repo_path":"."}

Running MCP tool: git_commit from server git
Tool arguments: {"message":"Stage changes to test","repo_path":"."}

Everything worked and there's a new commit as expected, but as a user without always-yes flags set, I'd really want the opportunity to review the MCP tool calls before they're executed.

MCP is a must-have for Aider, so thanks for this!

strawberrymelonpanda commented May 5, 2025

Some other thoughts: I'd also love some new slash commands specific to MCP, if at all possible.
MCP-CLI has some good examples; I think some of the most important are:

/tools - Probably the same list a user gets at startup with the --verbose flag set:

MCP servers configured:
  - git
    - git_status: Shows the working tree status
    - git_diff_unstaged: Shows changes in the working directory that are not yet staged
    [...]

/servers - Add or disable MCP servers on the fly once Aider is started?

Finally, granular MCP endpoint support at the config level, as some MCP clients have.
E.g., for mcp-server-git I might want to enable git_status but not git_commit (see the hypothetical sketch below).
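One hypothetical shape for such a config; the allowedTools key is invented for illustration and is not part of this PR or the standard MCP schema:

    {
      "mcpServers": {
        "git": {
          "command": "uvx",
          "args": ["mcp-server-git"],
          "allowedTools": ["git_status", "git_diff_unstaged"]
        }
      }
    }

The client would then register only the listed tools with the model and ignore the rest.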

Mostly just nice-to-haves and food for thought.

quinlanjager (Author) commented May 5, 2025

@strawberrymelonpanda Thanks for the feedback. I think you're right that respecting the --always-yes config is necessary; I can include that in this PR to round out the feature. I'll start by asking for confirmation before using any tools (as with running a shell command).

As for the warning message, this happens because some models return None content with a tool call (rather than a string like "I'll use tools to find this out for you"). It makes sense to skip the warning when there are tool calls; I'll update my PR to include this.
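A minimal sketch of that check, assuming a litellm-style message object with .content and .tool_calls attributes (names are illustrative, not the actual diff):

    def is_truly_empty(message) -> bool:
        # Only treat the reply as empty when there is neither text nor a tool call;
        # a None-content reply that carries tool calls should not trigger the warning.
        return not message.content and not message.tool_calls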

I feel additional Aider commands are beyond the scope of this PR (though I think they would be good features for a follow-up). I want to focus this PR on the fundamental configuration and execution of MCP tools. Anything building on top of this platform I'd prefer to leave for follow-ups, as those benefit from their own discussions. These are great suggestions, though; I actually really like granular tool config.

raayu83 commented Jul 25, 2025

Hello,

I wonder how close this PR is to being merged.
Adding this would enable using Context7, allowing Aider to use up-to-date documentation for thousands of libraries as a reference.

morandalex commented Jul 30, 2025

I don't understand why this PR isn't moving ahead. MCP is essential for the AI scene.

solatis (Contributor) commented Jul 31, 2025

I stopped using and advocating for Aider at work because, despite so much community work and dedication, this issue has not progressed, and there has been no feedback from the maintainers. I am not usually the kind of person who writes negative comments like this, but this is just baffling to me. The competition is fierce, and this does not make any sense at all.

I understand we don't have a fundamental right to this project being maintained and moving forward, but perhaps some official statement, plan, and/or vision on how the project will move ahead in a time when Claude Code and Gemini CLI are gaining insane traction would be nice.

@morandalex

@paul-gauthier

strawberrymelonpanda commented Aug 1, 2025

some official statement, plan, and/or vision on how the project will move ahead

I tried asking once (#4149); the response was:

PRs that are marked priority will get more review and possibly get merged. I work through them as I am able to.

Personally speaking, until the priority PRs are merged, I'm considering Aider in a sort of "maintenance" mode and trying to adjust my expectations accordingly. 🤷‍♂️

In the meantime for MCP support, there's Roo-Code and Cline if you don't mind something VS Code-based, and the new Crush if you want something for the terminal.

Not to take anything away from this PR, of course - using it locally is also an option. It's just hard to see MCP in Aider advancing any further until this PR is merged or another approach is decided on.

dwash96 commented Aug 3, 2025

For those interested, I have a fork that incorporates this PR with the addition of the streamable HTTP transport type here:
https://github.com/dwash96/aider/tree/v0.86.1
https://github.com/dwash96/aider-ce

I'm not making a PR, for reasons I've detailed here:
#4390 (comment)

The fork branch also includes other PR merges/additions as well as some of my own smaller-scale fixes. Let me know if there are any other issues, but I feel the decision to largely outsource MCP to LiteLLM and the MCP library was the most correct/maintainable choice for integrating this into the project.

robbintt added a commit to robbintt/aiderX that referenced this pull request Aug 27, 2025
strawberrymelonpanda commented Aug 31, 2025

For those interested, I have a fork that incorporates this PR with the addition of the streamable HTTP transport type here:

Let me point out that if you're interested in the fork, you should use this URL instead:
Aider-CE (https://github.com/dwash96/aider-ce)

Same project, but the other link goes to a specific branch (now out of date), while this one goes to the latest changes. 👍

Fork Additions

Merged PRs

- MCP: #3937
  - MCP Multi Tool Response
- Navigator Mode: #3781
  - Navigator Mode Large File Count
- Qwen 3: #4383
- Fuzzy Search: #4366
- Map Cache Location Config: #2911
- Enhanced System Prompts: #3804
- Repo Map File Name Truncation Fix: #4320

Other Updates

- Added remote MCP tool calls with HTTP streaming
  - Enforce a single tool call at a time
  - Upgraded the MCP dep to 1.12.3 for remote MCP tool calls
  - Updated the base Python version to 3.12 to better support navigator mode (might consider undoing this, if the dependency list supports it)
- Suppress LiteLLM asyncio errors that clutter output
- Updated the Dockerfile build process
  - Manually install necessary Ubuntu dependencies
- .gitignore updates
- Experimental context compaction for longer-running generation tasks

Other Notes

MCP Configuration

rodion-m commented Sep 4, 2025

@paul-gauthier please merge it.

acsezen commented Sep 11, 2025

@paul-gauthier Out of curiosity, is there any specific reason to not merge this PR ?

Thanks!

@geraldoandradee

Please solve the PR conflicts and merge it. This will add so much value to Aider.

adrianlzt commented Sep 16, 2025 via email

strawberrymelonpanda commented Sep 16, 2025

It's already marked as "Priority" by the repo's maintainer, so if it were going to be merged, I'd say he's already convinced.

Personally, I do not think that is going to happen. The repo's owner has never commented here, or on many other priority PRs.

This is just my opinion, but I personally consider Aider to be in a maintenance mode, with minor updates that are largely assisted by AI. (Each release notes the percentage of code written by AI; for the last release it was 88%.) Not that I think there's anything wrong with that, but expectations should probably be adjusted.

That said, I'm not trying to discourage anyone from showing support.

@wladimiiir

If you prefer something in between, there is also AiderDesk.

@cantalupo555 (Contributor)

If you prefer something in between, there is also AiderDesk.

Interesting, I didn't know about your project.

cantalupo555 (Contributor) commented Sep 24, 2025

Definitely, Aider is dead.
I previously asked why it was taking so long to merge this pull request...
Then I replied with that message, and @paul-gauthier just ignored it. I waited a month before sharing it here.

At this point, I migrated to OpenCode, and now I can see how outdated Aider really is...

Screenshot from 2025-09-24 06-29-36

@RyanCarrier

The best thing about Aider was the "ai!" comments, but for terminal-based AI coding, OpenCode is what I now go to as well.

MatthewZMD referenced this pull request in MatthewZMD/aidermacs Oct 15, 2025
@mathstuf

This looks to work well for me (via aider-ce; thanks for the links), but it would be nice if not specifying args were interpreted as [] rather than needing to be explicit (I see examples of mcpServers elsewhere that do not specify it).

morandalex commented Oct 17, 2025

I switched to Cline yesterday, which for my use case is an equal replacement.

strawberrymelonpanda commented Oct 17, 2025

The best thing about Aider was the "ai!" comments,

@RyanCarrier For what it's worth, you can mimic this by leaving "// TODO: ..." comments at points of interest and telling an AI to "complete the TODOs", or similar. Not quite as seamless, but running aider --watch on big repos was also very slow at times for me.

MatanAvitan commented Nov 30, 2025

Why wasn't this PR approved and merged into main?

tarjeir commented Nov 30, 2025

Why wasn't this PR approved and merged into main?

Aider is a stale project now

@MatanAvitan

Why so? If the original author doesn't want to continue developing, step aside and let others work... I don't get why you'd let your project fall into the graveyard.

tarjeir commented Nov 30, 2025

The last release was in August. According to this thread, no particular reason has been stated: #4584

KUKARAF commented Jan 7, 2026

@imunique-ZJ Thank you for giving this a spin! 😄 I tested your config outside of a docker container. I was able to start and connect to the server. It sounds like you're running the project in a Docker container which might be the problem.

When using the MCP SDK's STDIO transport, Aider is responsible for starting and stopping the server processes. This means Aider has to have access to the executables and dependencies needed to start the server (in this case the docker process). So if you're running Aider inside of a Docker container and want to use MCP tools started with Docker you will have to configure some "Docker-in-Docker" solution. I think this might be beyond the scope of this PR though.

Would it not make more sense to mount the Docker socket as a volume? That's how other services like Cup or Watchtower handle this.
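For reference, the socket-mount pattern looks roughly like this (the image name is a placeholder; note that this hands the container control of the host Docker daemon and starts MCP servers as sibling containers on the host, rather than true Docker-in-Docker):

    docker run -it \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v "$(pwd)":/app \
      aider-image-with-docker-cli  # placeholder image that bundles the docker CLI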
