
Add and document ability to use LiteLLM Logging Observability tools #1145

Merged: 3 commits, Aug 22, 2024

Conversation

@MarkRx (Contributor) commented Aug 14, 2024

User description

Adds out-of-the-box support for setting up a logging and observability tool with LiteLLM by configuring callbacks. It should work with any of the documented callbacks.

The code uses the Starlette context to track PR information, avoiding the complexity of passing it around everywhere. A basic context is used when running from the CLI. Doing so allows for code simplification.
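As a rough illustration of the approach (not the PR's actual code), a per-request context can be modeled with Python's stdlib `contextvars`; `starlette_context` provides the equivalent for ASGI apps. The function names here are hypothetical:

```python
from contextvars import ContextVar

# Hypothetical stand-in for starlette_context: a per-request mapping that
# lets handlers read PR details without threading them through every call.
_request_ctx: ContextVar[dict] = ContextVar("request_ctx", default={})

def set_pr_context(action: str, provider_id: str, pr_url: str) -> None:
    # Store the traceability fields for the current request/command.
    _request_ctx.set({"action": action, "provider_id": provider_id, "pr_url": pr_url})

def get_pr_context() -> dict:
    # Any code running in the same context (e.g. the AI handler) can read it.
    return _request_ctx.get()
```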

Sample LangSmith run: (screenshot omitted)

#1105

See also BerriAI/litellm#5179


PR Type

Enhancement, Documentation


Description

  • Implemented LiteLLM logging and observability support:
    • Added success, failure, and service callbacks configuration
    • Included metadata in chat completions for various observability platforms (e.g., LangFuse, LangSmith)
  • Enhanced context management throughout the application:
    • Added Starlette context for tracking PR information
    • Simplified git provider and repo settings context handling
  • Improved CLI functionality with request cycle context
  • Added documentation for integrating with logging observability platforms, including example configuration for LangSmith
  • Updated configuration.toml with new LiteLLM callback options (commented out by default)

Changes walkthrough 📝

Relevant files

Enhancement

pr_agent/agent/pr_agent.py: Add context information to PR handling
  • Imported context from starlette_context
  • Added context information for action, provider_id, and pr_url
  • +6/-0

pr_agent/algo/ai_handlers/litellm_ai_handler.py: Implement LiteLLM logging and observability
  • Added support for LiteLLM callbacks (success, failure, service)
  • Included metadata in chat completion for various observability platforms
  • +29/-1

pr_agent/cli.py: Enhance CLI with request cycle context
  • Added request_cycle_context to handle CLI mode
  • +9/-6

pr_agent/git_providers/__init__.py: Streamline git provider context management
  • Simplified git provider context handling
  • Removed try-except blocks for context operations
  • +7/-17

pr_agent/git_providers/utils.py: Simplify repo settings context management
  • Simplified repo settings context handling
  • +1/-4

Documentation

docs/docs/installation/index.md: Document logging observability integration
  • Added documentation for integrating with logging observability platforms
  • Provided example configuration for LangSmith integration
  • +24/-1

Configuration changes

pr_agent/settings/configuration.toml: Add LiteLLM callback configuration options
  • Added commented-out configuration options for LiteLLM callbacks
  • +4/-0

    💡 PR-Agent usage:
    Comment /help on the PR to get a list of all available PR-Agent tools and their descriptions

    @codiumai-pr-agent-pro codiumai-pr-agent-pro bot added documentation Improvements or additions to documentation enhancement New feature or request Review effort [1-5]: 3 labels Aug 14, 2024

    PR-Agent was enabled for this repository. To continue using it, please link your git user with your CodiumAI identity here.

    PR Reviewer Guide 🔍

    ⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
    🏅 Score: 85
    🧪 No relevant tests
    🔒 No security concerns identified
    🔀 Multiple PR themes

    Sub-PR theme: Implement LiteLLM logging and observability support

    Relevant files:

    • pr_agent/agent/pr_agent.py
    • pr_agent/algo/ai_handlers/litellm_ai_handler.py
    • pr_agent/cli.py

    Sub-PR theme: Refactor git provider context handling

    Relevant files:

• pr_agent/git_providers/__init__.py
    • pr_agent/git_providers/utils.py

    Sub-PR theme: Add documentation and configuration for LiteLLM callbacks

    Relevant files:

    • docs/docs/installation/index.md
    • pr_agent/settings/configuration.toml

    ⚡ Key issues to review

    Performance Concern
    The metadata dictionary is being created for every chat completion request, which could be inefficient for high-volume usage. Consider moving the static parts of this dictionary to a class-level or module-level constant.

    Code Smell
    The function get_git_provider_with_context is using a nested if-else structure that could be simplified for better readability.
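The performance concern above could be addressed roughly like this (a sketch with hypothetical names, not the PR's code): hoist the static parts of the metadata dict to module level and merge only the per-request fields at call time.

```python
# Static fields are built once at import time; per-request fields are merged in.
STATIC_METADATA = {"source": "pr-agent"}  # illustrative static content

def request_metadata(pr_url: str, action: str) -> dict:
    # Only the dynamic, per-request values are constructed on each call.
    return {**STATIC_METADATA, "trace_metadata": {"pr_url": pr_url}, "trace_name": action}
```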


    PR-Agent was enabled for this repository. To continue using it, please link your git user with your CodiumAI identity here.

    PR Code Suggestions ✨

Category | Suggestion | Score
    Possible bug
    Correct the condition for checking the existence of a git provider in the context

    The condition if context.get("git_provider", {}).get("pr_url", {}) is checking if
    pr_url is a dictionary, which seems incorrect. Consider changing it to check if
    pr_url exists and matches the input pr_url.

pr_agent/git_providers/__init__.py [41-45]

    -if context.get("git_provider", {}).get("pr_url", {}):
    -    git_provider = context["git_provider"]["pr_url"]
    +if pr_url in context.get("git_provider", {}):
    +    git_provider = context["git_provider"][pr_url]
         # possibly check if the git_provider is still valid, or if some reset is needed
         # ...
         return git_provider
     
    Suggestion importance[1-10]: 9

    Why: This suggestion fixes a potential bug in the logic for checking the existence of a git provider, which could lead to incorrect behavior.

    Enhancement
    Refactor callback setting logic to improve maintainability and extensibility

    Consider using a dictionary to map the callback types to their corresponding
    settings. This would make the code more maintainable and easier to extend in the
    future.

    pr_agent/algo/ai_handlers/litellm_ai_handler.py [47-52]

    -if get_settings().get("LITELLM.SUCCESS_CALLBACK", None):
    -    litellm.success_callback = get_settings().litellm.success_callback
    -if get_settings().get("LITELLM.FAILURE_CALLBACK", None):
    -    litellm.failure_callback = get_settings().litellm.failure_callback
    -if get_settings().get("LITELLM.SERVICE_CALLBACK", None):
    -    litellm.service_callback = get_settings().litellm.service_callback
    +callback_types = {
    +    "SUCCESS_CALLBACK": "success_callback",
    +    "FAILURE_CALLBACK": "failure_callback",
    +    "SERVICE_CALLBACK": "service_callback"
    +}
    +for setting, attr in callback_types.items():
    +    if callback := get_settings().get(f"LITELLM.{setting}", None):
    +        setattr(litellm, attr, callback)
     
    Suggestion importance[1-10]: 7

    Why: The suggestion improves code maintainability and extensibility by using a dictionary to map callback types, making it easier to add or modify callbacks in the future.

    Simplify the metadata dictionary structure for improved readability and maintainability

    Consider using a dictionary comprehension to create the metadata dictionary, which
    would make the code more concise and easier to maintain.

    pr_agent/algo/ai_handlers/litellm_ai_handler.py [143-161]

     "metadata": {
         "tags": [context["provider_id"], context["action"]],
    -
    -    # LangFuse
    -    "trace_metadata": {
    -        "pr_url": context["pr_url"],
    -    },
    +    "trace_metadata": {"pr_url": context["pr_url"]},
         "trace_name": context["action"],
    -
    -    # LangSmith
    -    "extra": {
    -        "metadata": {
    -            "pr_url": context["pr_url"],
    -        }
    -    },
    +    "extra": {"metadata": {"pr_url": context["pr_url"]}},
         "run_name": context["action"],
    -
    -    # Add additional fields for callbacks as needed
     }
     
    Suggestion importance[1-10]: 5

    Why: While the suggestion does improve readability slightly, the existing code is already clear and the change is mostly cosmetic.

    Best practice
    Move the CLI mode setting inside the request cycle context for consistency

    Consider moving the get_settings().set("CONFIG.CLI_MODE", True) call inside the
    request_cycle_context block to ensure it's set within the same context as other
    operations.

    pr_agent/cli.py [76-82]

    -get_settings().set("CONFIG.CLI_MODE", True)
     with request_cycle_context({"cli_mode": True}):
    +    get_settings().set("CONFIG.CLI_MODE", True)
         if args.issue_url:
             result = asyncio.run(PRAgent().handle_request(args.issue_url, [command] + args.rest))
         else:
             result = asyncio.run(PRAgent().handle_request(args.pr_url, [command] + args.rest))
         if not result:
             parser.print_help()
     
    Suggestion importance[1-10]: 6

    Why: The suggestion improves consistency by moving the CLI mode setting inside the context block, but it's a minor change that doesn't significantly impact functionality.


    @MarkRx MarkRx mentioned this pull request Aug 14, 2024
@hussam789 (Collaborator)

    @CodiumAI-Agent /review

@CodiumAI-Agent

    PR Reviewer Guide 🔍

    ⏱️ Estimated effort to review: 3 🔵🔵🔵⚪⚪
    🧪 No relevant tests
    🔒 No security concerns identified
    ⚡ Key issues to review

    Error Handling
    The new code adds callbacks for success, failure, and service, but there's no error handling or logging if these callbacks fail. Consider adding try-except blocks to handle potential errors in callback execution.

    Potential Bug
    The condition if context.get("git_provider", {}).get("pr_url", {}) might not work as intended. It's checking if the value is truthy, but an empty dict {} is considered falsy. This could lead to unexpected behavior.

    Exception Handling Removal
    The try-except block for setting context["repo_settings"] has been removed. This might lead to unhandled exceptions if the context is not available or writable.
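The error-handling point above (guarding callback configuration) could be sketched as follows; `set_callbacks` and its parameters are hypothetical, not the PR's actual code:

```python
import logging

def set_callbacks(litellm_module, success_cb, failure_cb):
    # Wrap callback configuration so a bad setting can't crash the request flow.
    try:
        if success_cb:
            litellm_module.success_callback = success_cb
        if failure_cb:
            litellm_module.failure_callback = failure_cb
    except Exception:
        logging.getLogger(__name__).exception("Failed to configure LiteLLM callbacks")
```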

@hussam789 (Collaborator)

    @CodiumAI-Agent /analyze

@mrT23 (Collaborator) commented Aug 15, 2024

    @MarkRx
    This PR cannot be merged as is.
    These are the needed changes:

(1) Remove any modification whatsoever to the context, or anything related to it. It should not be part of this PR.
This is a complicated variable that is used by async applications. Don't change it.

    (2) the instructions should move to:
    https://pr-agent-docs.codium.ai/usage-guide/additional_configurations/

    (3) By default, everything should be turned off

(4) As a general guideline, aim for the smallest needed footprint. This, for example, is a no-go; it's the default flow.

(screenshot omitted)

@MarkRx (Contributor, Author) commented Aug 15, 2024

I need a place to get contextual traceability information about the request without having to pass it around everywhere, and that also works for the CLI. If I don't put it in the middleware request context, where could I put it? It could be pulled off the loguru context (the "extra" field), but that seems awkward. I could maintain some kind of command context with ContextVar.

I don't have a clean way of handling the different observability platforms. For example, LangFuse, Helicone, and LangSmith use different fields, and others are either not documented or not mature in LiteLLM. I figured I could just send them all. It's important to set these fields, though, as without them observability records don't provide enough information to be useful. I suppose I could build the metadata dictionary based on which callbacks are being used?

Also, do there need to be 3 separate AI handlers? The additional abstraction is making this tricky.
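The idea floated above, building the metadata dictionary from the active callbacks, could look roughly like this. The field names follow the LangFuse/LangSmith conventions mentioned in the thread; the function itself is hypothetical:

```python
def build_metadata(active_callbacks: list, pr_url: str, action: str) -> dict:
    # Only emit the metadata fields that the configured platforms understand.
    metadata = {"tags": [action]}
    if "langfuse" in active_callbacks:
        metadata["trace_metadata"] = {"pr_url": pr_url}
        metadata["trace_name"] = action
    if "langsmith" in active_callbacks:
        metadata["extra"] = {"metadata": {"pr_url": pr_url}}
        metadata["run_name"] = action
    return metadata
```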

@mrT23 (Collaborator) commented Aug 15, 2024

    @MarkRx
I will not approve making these changes in the default mode, without an if that turns them on only when a flag is enabled. They are also too verbose, and should be extracted into a function (invoked when a flag, which is off by default, is on).

    I also don't like the context changes that you made. They are fragile, and require hacks for cli.

You could have passed parameters using get_settings(), or used, just in pr_agent.py, the existing methodology for app runs.

But all my concerns above need to be addressed. If you prefer your way, then it's suitable for a fork.


    @MarkRx MarkRx force-pushed the feature/litellm-logging-observability branch 2 times, most recently from e2e463b to 9e1bf3a Compare August 15, 2024 20:53
@MarkRx (Contributor, Author) commented Aug 16, 2024

    Updated with a different approach. Metadata is a dictionary to allow for different metadata per command.

    Metadata won't be passed to LiteLLM unless a callback is defined.
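The gating described here, attaching metadata only when a callback is configured, could be sketched as a small helper (hypothetical names, not the PR's actual code):

```python
def completion_kwargs(model, messages, callbacks, metadata_by_command, command):
    # Metadata is omitted entirely when no LiteLLM callback is configured,
    # so the default flow sends exactly what it sent before.
    kwargs = {"model": model, "messages": messages}
    if callbacks:
        kwargs["metadata"] = metadata_by_command.get(command, {})
    return kwargs
```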

@mrT23 (Collaborator) commented Aug 17, 2024

    @MarkRx

    • This PR is still too intrusive - you are editing every tool, just for a specific option that won't be used by 99.9% of pr-agent users. It's not scalable or manageable that way. To accommodate such options, the code impact needs to be small and currently it isn't
    • You are also ignoring the existing logger mechanism, which already conveys all the PR and flow data
    • In addition, it does not contain a single control parameter, turned off by default: litellm.enable_callbacks

    To address all the issues, you can start from this template with your branch, and edit the add_litellm_callbacks function to accommodate the wanted logic:
    https://github.com/Codium-ai/pr-agent/pull/1152/files#diff-ea1acaa0907f3410665530fbc4cda2ab524de2772e0bbe10bad4648b8be35dfeR113

With the template above, the entire code impact on the default flow is:

(screenshot omitted)

    That is how such changes should be made. Inside the add_litellm_callbacks function you can now freely implement any logic you want
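The pattern being asked for, one guarded call on the default flow, with all callback logic isolated in a single function, can be sketched like this. The function name `add_litellm_callbacks` follows the linked template; everything else here is illustrative:

```python
def add_litellm_callbacks(kwargs: dict) -> dict:
    # All observability logic lives here; edit freely without touching the flow.
    kwargs["metadata"] = {"tags": ["pr-agent"]}  # illustrative content
    return kwargs

def chat_completion(kwargs: dict, callbacks_enabled: bool) -> dict:
    # Single code-impact point on the default flow: one guarded call,
    # controlled by a flag that is off by default.
    if callbacks_enabled:
        kwargs = add_litellm_callbacks(kwargs)
    return kwargs
```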

    @MarkRx MarkRx force-pushed the feature/litellm-logging-observability branch from 9e1bf3a to 260b22b Compare August 19, 2024 19:44
    @MarkRx MarkRx force-pushed the feature/litellm-logging-observability branch from 260b22b to 8aa76a0 Compare August 19, 2024 19:45
@mrT23 (Collaborator) commented Aug 20, 2024

Looks good.

From now on you can easily edit the add_litellm_callbacks function and update its contents whenever you think it necessary.

    @mrT23 mrT23 merged commit 745e955 into Codium-ai:main Aug 22, 2024
    2 checks passed