
Support reasoning effort via configuration #1561

Merged: 8 commits into qodo-ai:main on Feb 26, 2025

Conversation

@KennyDizi (Contributor) commented Feb 21, 2025

PR Type

Enhancement


Description

  • Added support for reasoning_effort configuration in AI models.

  • Introduced ReasoningEffort enum for reasoning effort levels.

  • Updated chat_completion to include reasoning_effort parameter.

  • Extended configuration file to allow setting reasoning_effort.


Changes walkthrough 📝

Relevant files

Enhancement

  • __init__.py (pr_agent/algo/__init__.py): Added support for reasoning effort models. (+5/-0)
      • Defined SUPPORT_REASONING_EFFORT_MODELS list for specific models.
  • litellm_ai_handler.py (pr_agent/algo/ai_handlers/litellm_ai_handler.py): Enhanced AI handler to support reasoning effort. (+12/-2)
      • Imported ReasoningEffort and SUPPORT_REASONING_EFFORT_MODELS.
      • Added support_reasoning_models attribute.
      • Updated chat_completion to handle the reasoning_effort parameter.
  • utils.py (pr_agent/algo/utils.py): Added ReasoningEffort enum for reasoning levels. (+5/-0)
      • Introduced ReasoningEffort enum with levels: HIGH, MEDIUM, LOW.

Configuration changes

  • configuration.toml (pr_agent/settings/configuration.toml): Extended configuration for reasoning effort setting. (+1/-0)
      • Added a reasoning_effort configuration option with a default value.
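
For orientation, a minimal sketch of the shape of these additions; the enum's string values and the entries in the model list are assumptions, since the diff bodies are not reproduced in this conversation:

    # pr_agent/algo/utils.py (sketch)
    from enum import Enum

    class ReasoningEffort(str, Enum):
        HIGH = "high"      # assumed lowercase string values
        MEDIUM = "medium"
        LOW = "low"

    # pr_agent/algo/__init__.py (sketch)
    # Models that accept the reasoning_effort parameter; the actual entries are an assumption.
    SUPPORT_REASONING_EFFORT_MODELS = [
        "o3-mini",
    ]

    # pr_agent/settings/configuration.toml adds a reasoning_effort option with a default
    # value (per the walkthrough); chat_completion compares the configured value against
    # the enum values before passing it to the model.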

    Need help?
  • Type /help how to ... in the comments thread for any questions about Qodo Merge usage.
  • Check out the documentation for more information.

    PR Reviewer Guide 🔍

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Error Handling

    The reasoning_effort parameter is added without any error handling if the setting is invalid. Consider adding error handling for unsupported reasoning_effort values.

    if (model in self.support_reasoning_models):
        supported_reasoning_efforts = [ReasoningEffort.HIGH.value, ReasoningEffort.MEDIUM.value, ReasoningEffort.LOW.value]
        reasoning_effort = get_settings().config.reasoning_effort if (get_settings().config.reasoning_effort in supported_reasoning_efforts) else ReasoningEffort.MEDIUM.value
        get_logger().info(f"Add reasoning_effort with value {reasoning_effort} to model {model}.")
        kwargs["reasoning_effort"] = reasoning_effort
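
    One possible tightening, sketched under the assumption that ReasoningEffort is a str-valued enum whose members map to the lowercase strings "high"/"medium"/"low" (the enum this PR adds in pr_agent/algo/utils.py); this illustrates failing loudly on bad values rather than coercing silently, and is not the merged code:

        import logging

        from pr_agent.algo.utils import ReasoningEffort  # enum introduced by this PR

        logger = logging.getLogger(__name__)

        def resolve_reasoning_effort(configured_value: str) -> str:
            """Return a valid reasoning_effort value, warning when the setting is unsupported."""
            try:
                # Enum lookup by value raises ValueError for anything outside the supported levels.
                return ReasoningEffort(configured_value).value
            except ValueError:
                logger.warning(
                    "Unsupported reasoning_effort %r; falling back to %s",
                    configured_value, ReasoningEffort.MEDIUM.value,
                )
                return ReasoningEffort.MEDIUM.value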


    qodo-merge-pro-for-open-source bot commented Feb 21, 2025

    PR Code Suggestions ✨

    Explore these optional code suggestions:

    Category: General
    Suggestion: Add error handling for unsupported models

    Add error handling for when reasoning_effort is used with an unsupported model.
    Currently, the code silently ignores reasoning_effort for unsupported models.

    pr_agent/algo/ai_handlers/litellm_ai_handler.py [237-242]

     # Add reasoning_effort if model supports it
    -if (model in self.support_reasoning_models):
    -    supported_reasoning_efforts = [ReasoningEffort.HIGH.value, ReasoningEffort.MEDIUM.value, ReasoningEffort.LOW.value]
    -    reasoning_effort = get_settings().config.reasoning_effort if (get_settings().config.reasoning_effort in supported_reasoning_efforts) else ReasoningEffort.MEDIUM.value
    -    get_logger().info(f"Add reasoning_effort with value {reasoning_effort} to model {model}.")
    -    kwargs["reasoning_effort"] = reasoning_effort
    +if get_settings().config.reasoning_effort:
    +    if model not in self.support_reasoning_models:
    +        get_logger().warning(f"Model {model} does not support reasoning_effort. Ignoring setting.")
    +    else:
    +        supported_reasoning_efforts = [ReasoningEffort.HIGH.value, ReasoningEffort.MEDIUM.value, ReasoningEffort.LOW.value]
    +        reasoning_effort = get_settings().config.reasoning_effort if (get_settings().config.reasoning_effort in supported_reasoning_efforts) else ReasoningEffort.MEDIUM.value
    +        get_logger().info(f"Add reasoning_effort with value {reasoning_effort} to model {model}.")
    +        kwargs["reasoning_effort"] = reasoning_effort

    [To ensure code accuracy, apply this suggestion manually]

    Suggestion importance[1-10]: 7

    Why: The suggestion improves error handling by adding a warning log when reasoning_effort is configured for unsupported models, making the system behavior more transparent and helping with debugging. The improved code also adds a check that the reasoning_effort configuration exists.

    Impact: Medium

    Category: Learned best practice
    Suggestion: Add proper error handling and null checks when accessing configuration settings to prevent potential runtime errors

    Add error handling around the settings access to handle cases where config or
    reasoning_effort settings might be missing or invalid. Also validate the model
    parameter to ensure it's not None.

    pr_agent/algo/ai_handlers/litellm_ai_handler.py [237-241]

    -if (model in self.support_reasoning_models):
    -    supported_reasoning_efforts = [ReasoningEffort.HIGH.value, ReasoningEffort.MEDIUM.value, ReasoningEffort.LOW.value]
    -    reasoning_effort = get_settings().config.reasoning_effort if (get_settings().config.reasoning_effort in supported_reasoning_efforts) else ReasoningEffort.MEDIUM.value
    -    get_logger().info(f"Add reasoning_effort with value {reasoning_effort} to model {model}.")
    -    kwargs["reasoning_effort"] = reasoning_effort
    +if model and model in self.support_reasoning_models:
    +    try:
    +        settings = get_settings()
    +        supported_reasoning_efforts = [ReasoningEffort.HIGH.value, ReasoningEffort.MEDIUM.value, ReasoningEffort.LOW.value]
    +        reasoning_effort = settings.config.reasoning_effort if (
    +            hasattr(settings, 'config') and 
    +            hasattr(settings.config, 'reasoning_effort') and 
    +            settings.config.reasoning_effort in supported_reasoning_efforts
    +        ) else ReasoningEffort.MEDIUM.value
    +        get_logger().info(f"Add reasoning_effort with value {reasoning_effort} to model {model}.")
    +        kwargs["reasoning_effort"] = reasoning_effort
    +    except Exception as e:
    +        get_logger().warning(f"Error setting reasoning_effort, using default: {e}")
    +        kwargs["reasoning_effort"] = ReasoningEffort.MEDIUM.value

    [To ensure code accuracy, apply this suggestion manually]

    Suggestion importance[1-10]: 6

    Impact: Low
    • Author self-review: I have reviewed the PR code suggestions, and addressed the relevant ones.

    @KennyDizi (Contributor, Author)

    I checked, and I suspect the pipelines are failing because of PR #1529.

    @KennyDizi (Contributor, Author)

    Through enum fields of Reasoning's delight,
    Where HIGH and LOW efforts take their flight 🚀
    In TOML's garden, configurations bloom,
    While LiteLLM handlers weave their loom,
    Such craftsmanship in each parameter true -
    A masterwork of code, both wise and new! 💫 🎭

    @KennyDizi (Contributor, Author)

    Would you please take a look at this PR, @mrT23?

    @mrT23 (Collaborator) commented Feb 24, 2025

    @KennyDizi
    Looks good.

    Can you add a short new section here: https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/
    to document these dedicated parameters? Otherwise I think they will be less accessible.

    something like:

    ## Dedicated parameters
    
    ### OpenAI models
    [config]
    reasoning_effort = ...
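
    Once documented, users would set reasoning_effort under [config]; inside the handler the value is forwarded to the completion call roughly along these lines (a simplified sketch, not the merged handler code):

        import litellm

        from pr_agent.algo import SUPPORT_REASONING_EFFORT_MODELS  # list added by this PR

        async def call_with_reasoning(model: str, messages: list[dict], reasoning_effort: str) -> str:
            kwargs = {}
            if model in SUPPORT_REASONING_EFFORT_MODELS:
                # LiteLLM forwards reasoning_effort to providers that accept it
                # (e.g. OpenAI o-series models); for other models it is omitted.
                kwargs["reasoning_effort"] = reasoning_effort
            response = await litellm.acompletion(model=model, messages=messages, **kwargs)
            return response.choices[0].message.content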
    

    @KennyDizi (Contributor, Author)

    > @KennyDizi looks good
    >
    > Can you add a short new section here: https://qodo-merge-docs.qodo.ai/usage-guide/changing_a_model/ to document these dedicated parameters? Otherwise I think they will be less accessible.
    >
    > something like:
    >
    >     ## Dedicated parameters
    >
    >     ### OpenAI models
    >     [config]
    >     reasoning_effort = ...


    Thanks for your advice, @mrT23; I will update it soon.

    @KennyDizi (Contributor, Author)

    @mrT23 I've added a concise documentation section for the OpenAI models' dedicated parameters.

    @mrT23 (Collaborator) commented Feb 26, 2025

    @KennyDizi
    Great.

    Claude Sonnet also introduced a thinking mechanism (with a different API ...), but let's save that for a different PR:
    https://docs.litellm.ai/docs/providers/anthropic#usage---thinking--reasoning_content
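
    For reference, based on the linked LiteLLM docs, the Anthropic-style API looks roughly like this (model name and token budget are illustrative values, not something this PR configures):

        import litellm

        # Anthropic "extended thinking" uses a `thinking` parameter instead of reasoning_effort.
        response = litellm.completion(
            model="anthropic/claude-3-7-sonnet-20250219",
            messages=[{"role": "user", "content": "What is 27 * 43?"}],
            thinking={"type": "enabled", "budget_tokens": 1024},
        )
        # The model's reasoning is returned separately from the final answer.
        print(response.choices[0].message.reasoning_content)
        print(response.choices[0].message.content)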

    @mrT23 merged commit 25ba941 into qodo-ai:main on Feb 26, 2025
    2 checks passed