feat: support for Moonshot LLMs (Kimi 2.5 etc) #6962
stebbins wants to merge 1 commit into block:main from
Conversation
Have you tested a longer session? I rebased your PR onto main and it seems to fail with the same error after multiple turns.

My guess is that Moonshot might not be echoing back the full reasoning chain. They may only send you back the most recent one. Would be hypocritical of them, but it would also explain the error. I don't know though, didn't look deeply.

@Tyler-Hardin thank you for testing, I will take a look
Thanks for all your work on this. It was already working well for me before your last commit. (Not to suggest the last commit was unneeded, just that the last edge case you fixed wasn't something I had encountered yet.)

By the way, fun fact: there's a bug introduced by 2bc3689 that causes text_editor view to return nothing. I thought it might be a bug in your patch, but it's actually in main, predating your changes.
@Tyler-Hardin cheers, and thanks for the heads-up. I've been running into more "edge cases" where the reasoning context is lost, so I'm trying a few different approaches.
… models

Add declarative Moonshot AI provider (moonshot.json) for direct API access to Kimi models via MOONSHOT_API_KEY.

Fix reasoning_content handling for thinking models (Kimi, DeepSeek, etc.):

- Preserve reasoning content on split tool-call messages in agent.rs (Goose splits parallel tool calls into separate messages, which was dropping reasoning_content that providers like Kimi require)
- Only include the reasoning_content field when non-empty (Kimi rejects the empty string "")
- Accumulate reasoning_content during streaming tool call processing
- Concatenate multiple reasoning blocks instead of overwriting
- Wire request_params into create_request() for thinking mode activation

Update Kimi model context limits to 262K (131K for older k2-0711).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Signed-off-by: Harrison <hcstebbins@gmail.com>
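The two message-building fixes in this commit — omitting reasoning_content when empty, and accumulating streamed reasoning instead of overwriting it — can be sketched roughly like this (hypothetical names and a plain struct for illustration; the actual Goose code builds serde_json request payloads in agent.rs):

```rust
// Sketch of an outgoing assistant message. A None reasoning_content
// means the field is omitted from the request JSON entirely; Kimi's
// API rejects reasoning_content set to the empty string "".
#[derive(Debug)]
struct AssistantMessage {
    content: String,
    reasoning_content: Option<String>,
}

// Hypothetical helper: attach reasoning_content only when there is
// actual reasoning text to send.
fn build_assistant_message(content: &str, reasoning_text: &str) -> AssistantMessage {
    AssistantMessage {
        content: content.to_string(),
        reasoning_content: (!reasoning_text.is_empty()).then(|| reasoning_text.to_string()),
    }
}

// Hypothetical streaming helper: concatenate each reasoning delta
// onto an accumulator rather than replacing it, so multiple
// reasoning blocks survive a streamed tool-call response.
fn accumulate_reasoning(acc: &mut String, delta: Option<&str>) {
    if let Some(chunk) = delta {
        acc.push_str(chunk);
    }
}
```

When the same pattern is expressed with serde, an equivalent approach is `#[serde(skip_serializing_if = "Option::is_none")]` on the optional field, which keeps the empty case out of the serialized request.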
@Tyler-Hardin I think we're good!
DOsinga left a comment
I think this looks good, but it would be good to make sure it still works with, like, DeepSeek
// Include reasoning_content only when non-empty.
// Kimi rejects empty reasoning_content (""), so we must omit it entirely
// when there's no reasoning to send.
if !reasoning_text.is_empty() {
You are removing the existing comment here, which makes me slightly worried - is this now compatible with both Kimi & DeepSeek?
Add support for Moonshot AI (Kimi) models:

- moonshot.json: kimi-latest, kimi-thinking-preview, kimi-k2-0711, kimi-k2, moonshot-v1-8k, moonshot-v1-32k
- kimi.json: kimi-for-coding, kimi-code (coding-optimized models)

Both providers use the Moonshot API (api.moonshot.cn) with MOONSHOT_API_KEY.

Combines work from PRs #6962 and #7222.
OK, I merged the other one; I think we can close this now. Let me know if not. And thanks for the contribution @stebbins
Summary
Preserves reasoning_content when these models use thinking mode + tool calls

Type of Change
AI Assistance
Testing
Related Issues
Closes #6902