✨ Feature Description
The #runSubagent tool in Copilot Chat currently uses whichever model is selected for the main chat session.
This request is to allow users to explicitly specify which model is used when a subagent is executed via the #runSubagent tool, similar to how the model can be selected for the primary chat.
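To make the request concrete, here is a minimal, purely illustrative TypeScript sketch. The type names, fields, and model identifiers below are hypothetical and are not the actual Copilot Chat API; the sketch only shows how an optional per-invocation `model` field could fall back to the session model when omitted (today's behavior).

```typescript
// Hypothetical sketch only: names are illustrative, not the real Copilot Chat API.

// Simplified view of what the #runSubagent tool conceptually receives today.
interface RunSubagentInput {
  /** The task or prompt the subagent should carry out. */
  prompt: string;
}

// Proposed extension: an optional model identifier for this invocation.
interface RunSubagentInputWithModel extends RunSubagentInput {
  /** e.g. a small model for cheap, self-contained tasks; omit to inherit the chat model. */
  model?: string;
}

// Illustrative dispatch logic: resolve the model per call instead of
// always reusing the session-wide selection.
function resolveSubagentModel(
  input: RunSubagentInputWithModel,
  sessionModel: string,
): string {
  return input.model ?? sessionModel;
}

// Example: the main session runs a large reasoning model, while a
// self-contained subagent task is routed to a smaller one.
console.log(resolveSubagentModel({ prompt: "Summarize test failures", model: "gpt-mini" }, "claude-4.5"));
console.log(resolveSubagentModel({ prompt: "Plan the refactor" }, "claude-4.5")); // inherits the session model
```

Under this (assumed) shape, omitting `model` keeps current behavior, so the change would be backwards compatible.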
💡 Why This is Needed (Use Case)
In complex development workflows, particularly those involving custom-built Model Context Protocol (MCP) tools or specialized agents, different models offer varying strengths in context handling, speed, and token consumption.
For example:
- A larger, more context-aware model (e.g., Claude 4.5/5) might be needed for the main reasoning or issue-related tasks.
- A smaller, more efficient model (e.g., gpt-mini) could be used to execute the subagent (MCP tool) for a self-contained or quick task.
Current Problem: Running a large, "token-hungry" model for every subagent execution is inefficient and costly, even when the subagent's task is simple.
Proposed Benefit: Providing model flexibility for subagents would allow developers to optimize their workflows for cost, efficiency, and performance by matching the right model to the right tool/task.