
adding ollama #556

Merged: 8 commits into main on Jan 29, 2025
Conversation

@ohdearquant (Collaborator) commented on Jan 29, 2025
PR Description

This PR adds Ollama as a new provider integration, enabling local or self-hosted LLM usage under the same consistent interface as other providers. Below is a brief summary of changes:

  1. Ollama Integration

    • Introduced a new OllamaChatCompletionEndPoint class to handle local chat completions from an Ollama server.
    • Added logic to pull or list local models from Ollama. If a requested model isn’t installed, the system automatically downloads it (see the first sketch after this list).
    • The new endpoint is registered under match_endpoint.py so that specifying provider="ollama" routes requests to the Ollama endpoint.
  2. Configuration & Key Handling

    • Updated IMODEL constructor logic to accept provider="ollama" and skip the usual API-key verification (since Ollama is local).
    • This enables simpler usage patterns, e.g. model="ollama/mymodel" or provider="ollama", model="myModelName".
  3. Schema & Endpoint Adjustments

    • Minor refactoring in function_to_schema.py for better handling of function calls and JSON schema creation (the second sketch after this list illustrates the general idea).
    • Several improvements in base.py and imodel.py to unify endpoint initialization logic, so that the same base structures can handle remote or local providers.
  4. Version Bump

    • Bumped project version from 0.9.5 to 0.9.6 to reflect the new provider support.
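
As a rough illustration of the pull-if-missing behavior in item 1, here is a minimal sketch using the public `ollama` Python client. It is not the endpoint's actual code: the `ensure_model` helper and the `llama3.2` model name are placeholders.

```python
import ollama  # official Python client for a locally running Ollama server


def ensure_model(name: str) -> None:
    """Pull `name` from the Ollama registry if it is not installed locally."""
    try:
        ollama.show(name)   # succeeds only if the model is already installed
    except ollama.ResponseError:
        ollama.pull(name)   # otherwise download it (this can take a while)


# A local chat completion, mirroring what the endpoint does internally.
ensure_model("llama3.2")
reply = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Say hello from a local model."}],
)
print(reply["message"]["content"])
```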

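Item 3 touches function-calling schema generation. The sketch below shows only the general technique (inspect a Python signature and emit a JSON-schema-style dict); it is not the lion-agi `function_to_schema` implementation, and the `_TYPE_MAP` and `add` names are illustrative.

```python
import inspect

# Rough mapping from Python annotations to JSON-schema types (illustrative only).
_TYPE_MAP = {int: "integer", float: "number", str: "string", bool: "boolean"}


def function_to_schema(func) -> dict:
    """Build a minimal JSON-schema-style tool description for `func`."""
    sig = inspect.signature(func)
    properties, required = {}, []
    for name, param in sig.parameters.items():
        properties[name] = {"type": _TYPE_MAP.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)
    return {
        "name": func.__name__,
        "description": (func.__doc__ or "").strip(),
        "parameters": {"type": "object", "properties": properties, "required": required},
    }


def add(a: int, b: int) -> int:
    """Add two integers."""
    return a + b


print(function_to_schema(add))
```
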
Why Ollama?
Ollama provides a local LLM runtime for secure, faster experimentation without external API dependencies. This integration makes it easy to switch between remote providers (OpenAI, etc.) and local LLMs while reusing the same Lion-AGI code patterns.

Testing

  • Manually tested model pulls and local completions with ollama:latest.
  • Verified that the fallback to other providers is unchanged.

Impacts

  • No breaking changes for existing code.
  • A new ollama provider is now recognized, letting users specify {"provider": "ollama", "model": "<modelName>"} to run local inference.
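
For example, assuming the `iModel` constructor is importable from `lionagi` and a model such as `llama3.2` is installed locally (both are assumptions; adjust to your setup):

```python
from lionagi import iModel  # assumed import path; adjust if your version differs

# Local provider: no API key is required because Ollama runs on this machine.
local_model = iModel(provider="ollama", model="llama3.2")

# Equivalent shorthand using the combined "provider/model" string.
local_model_alt = iModel(model="ollama/llama3.2")
```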

Thanks for reviewing!

@ohdearquant changed the title from "Update openai chat" to "adding ollama" on Jan 29, 2025
@ohdearquant merged commit 470cceb into main on Jan 29, 2025
5 checks passed
@ohdearquant deleted the update-openai-chat branch on January 29, 2025 at 22:23