Merge pull request #1551 from qodo-ai/tr/custom_reasoning_model
feat: add support for custom reasoning models
mrT23 authored Feb 18, 2025
2 parents a5278bd + 35059ca commit 9de9b39
Showing 3 changed files with 6 additions and 2 deletions.
3 changes: 3 additions & 0 deletions docs/docs/usage-guide/changing_a_model.md
@@ -201,3 +201,6 @@ fallback_models=["custom_model_name"]
 custom_model_max_tokens= ...
 ```
 (3) Go to [litellm documentation](https://litellm.vercel.app/docs/proxy/quick_start#supported-llms), find the model you want to use, and set the relevant environment variables.
+
+(4) Most reasoning models do not support chat-style inputs (`system` and `user` messages) or temperature settings.
+To bypass chat templates and temperature controls, set `config.custom_reasoning_model = true` in your configuration file.
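
As a sketch, enabling this in a configuration file might look like the following (assuming the key belongs in the `[config]` section, alongside the defaults shown in the `configuration.toml` diff below):

```toml
[config]
# Combine system and user prompts into one user message and omit temperature
custom_reasoning_model = true
```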
4 changes: 2 additions & 2 deletions pr_agent/algo/ai_handlers/litellm_ai_handler.py
@@ -205,7 +205,7 @@ async def chat_completion(self, model: str, system: str, user: str, temperature:
                              {"type": "image_url", "image_url": {"url": img_path}}]
 
         # Currently, some models do not support separate system and user prompts
-        if model in self.user_message_only_models:
+        if model in self.user_message_only_models or get_settings().config.custom_reasoning_model:
             user = f"{system}\n\n\n{user}"
             system = ""
             get_logger().info(f"Using model {model}, combining system and user prompts")
@@ -227,7 +227,7 @@ async def chat_completion(self, model: str, system: str, user: str, temperature:
         }
 
         # Add temperature only if model supports it
-        if model not in self.no_support_temperature_models:
+        if model not in self.no_support_temperature_models and not get_settings().config.custom_reasoning_model:
             kwargs["temperature"] = temperature
 
         if get_settings().litellm.get("enable_callbacks", False):
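
For illustration, here is a minimal, self-contained sketch of what these two changes do together; `build_request` is a hypothetical helper, while the real logic lives in `LiteLLMAIHandler.chat_completion` and reads the flag via `get_settings().config.custom_reasoning_model`:

```python
# Hypothetical sketch: how custom_reasoning_model reshapes a chat request.
def build_request(model: str, system: str, user: str, temperature: float,
                  custom_reasoning_model: bool) -> dict:
    if custom_reasoning_model:
        # No chat template: fold the system prompt into the user prompt.
        user = f"{system}\n\n\n{user}"
        system = ""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": user})
    kwargs = {"model": model, "messages": messages}
    if not custom_reasoning_model:
        # Temperature is sent only when the model accepts it.
        kwargs["temperature"] = temperature
    return kwargs

# A reasoning model gets a single combined user message and no temperature.
print(build_request("my-reasoning-model", "You are a PR reviewer.",
                    "Review this diff.", 0.2, custom_reasoning_model=True))
```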
1 change: 1 addition & 0 deletions pr_agent/settings/configuration.toml
@@ -17,6 +17,7 @@ use_global_settings_file=true
 disable_auto_feedback = false
 ai_timeout=120 # 2 minutes
 skip_keys = []
+custom_reasoning_model = false # when true, disables system messages and temperature controls for models that don't support chat-style inputs
 # token limits
 max_description_tokens = 500
 max_commits_tokens = 500
