6 changes: 6 additions & 0 deletions .changeset/add-minimax-m25.md
@@ -0,0 +1,6 @@
+---
+"roo-cline": patch
+"@roo-code/types": patch
+---
+
+Add MiniMax M2.5 model and set it as the default MiniMax model
17 changes: 16 additions & 1 deletion packages/types/src/providers/minimax.ts
@@ -5,9 +5,24 @@ import type { ModelInfo } from "../model.js"
// https://platform.minimax.io/docs/api-reference/text-openai-api
// https://platform.minimax.io/docs/api-reference/text-anthropic-api
export type MinimaxModelId = keyof typeof minimaxModels
-export const minimaxDefaultModelId: MinimaxModelId = "MiniMax-M2"
+export const minimaxDefaultModelId: MinimaxModelId = "MiniMax-M2.5"

export const minimaxModels = {
"MiniMax-M2.5": {
maxTokens: 16_384,
contextWindow: 192_000,
Review comment on lines +12 to +13:
The contextWindow and maxTokens values here are copied from M2 but don't match the M2.5 specs provided in #11457. The issue reporter states M2.5 has a 204,800 context window and 131,072 max output tokens. The PR description also claims these values, but the code still uses M2's 192,000 / 16,384. This will cap output at 16K tokens when the model supports 131K, and the context window calculation will be off by ~13K tokens. MINIMAX_DEFAULT_MAX_TOKENS on line 75 should also be updated to 131,072 to stay consistent with the new default.

Suggested change:
-		maxTokens: 16_384,
-		contextWindow: 192_000,
+		maxTokens: 131_072,
+		contextWindow: 204_800,

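The comment also points at a second spot: MINIMAX_DEFAULT_MAX_TOKENS on line 75 of minimax.ts. That constant is not shown in this diff, so the following is only a sketch of the follow-up it asks for; the constant name comes from the review comment and the surrounding code is assumed.

	// packages/types/src/providers/minimax.ts (sketch; not part of this diff)
	// Keep the provider-level fallback output cap in sync with the new default
	// model's 131,072-token limit so callers that omit maxTokens get the full budget.
	export const MINIMAX_DEFAULT_MAX_TOKENS = 131_072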

+		supportsImages: false,
+		supportsPromptCache: true,
+		includedTools: ["search_and_replace"],
+		excludedTools: ["apply_diff"],
+		preserveReasoning: true,
+		inputPrice: 0.3,
+		outputPrice: 1.2,
+		cacheWritesPrice: 0.375,
+		cacheReadsPrice: 0.03,
+		description:
+			"MiniMax M2.5, the latest MiniMax model with enhanced coding and agentic capabilities, building on the strengths of the M2 series.",
+	},
	"MiniMax-M2": {
		maxTokens: 16_384,
		contextWindow: 192_000,
20 changes: 18 additions & 2 deletions src/api/providers/__tests__/minimax.spec.ts
@@ -87,6 +87,22 @@ describe("MiniMaxHandler", () => {
		expect(model.info).toEqual(minimaxModels[testModelId])
	})

it("should return MiniMax-M2.5 model with correct configuration", () => {
const testModelId: MinimaxModelId = "MiniMax-M2.5"
const handlerWithModel = new MiniMaxHandler({
apiModelId: testModelId,
minimaxApiKey: "test-minimax-api-key",
})
const model = handlerWithModel.getModel()
expect(model.id).toBe(testModelId)
expect(model.info).toEqual(minimaxModels[testModelId])
expect(model.info.contextWindow).toBe(192_000)
expect(model.info.maxTokens).toBe(16_384)
expect(model.info.supportsPromptCache).toBe(true)
expect(model.info.cacheWritesPrice).toBe(0.375)
expect(model.info.cacheReadsPrice).toBe(0.03)
})
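Note that if the suggested 204,800 / 131,072 values from #11457 are applied to the model entry, the two assertions above that pin the copied M2 numbers would need to move with them. A sketch of the updated expectations, assuming the suggestion lands as written:

	expect(model.info.contextWindow).toBe(204_800)
	expect(model.info.maxTokens).toBe(131_072)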

it("should return MiniMax-M2 model with correct configuration", () => {
const testModelId: MinimaxModelId = "MiniMax-M2"
const handlerWithModel = new MiniMaxHandler({
Expand Down Expand Up @@ -175,10 +191,10 @@ describe("MiniMaxHandler", () => {
expect(model.info).toEqual(minimaxModels[minimaxDefaultModelId])
})

it("should default to MiniMax-M2 model", () => {
it("should default to MiniMax-M2.5 model", () => {
const handlerDefault = new MiniMaxHandler({ minimaxApiKey: "test-minimax-api-key" })
const model = handlerDefault.getModel()
expect(model.id).toBe("MiniMax-M2")
expect(model.id).toBe("MiniMax-M2.5")
})
})
