
Conversation

@x42en (Contributor) commented May 21, 2025

First, thank you for this project; most of the implementation is already done, and that's awesome!

Subject

Here is a proposal for handling the Ollama API using the OpenAI SDK, as specified in the documentation.
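
For context, a minimal sketch of the underlying idea, assuming a local Ollama server on its default port and an already-pulled model (the model name is illustrative): Ollama exposes an OpenAI-compatible endpoint, so the stock OpenAI SDK can talk to it directly.

```python
# Minimal sketch: point the OpenAI SDK at Ollama's OpenAI-compatible endpoint.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible API
    api_key="ollama",  # required by the SDK, but ignored by Ollama
)

response = client.chat.completions.create(
    model="llama3",  # illustrative; use any model you have pulled locally
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```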

Implementation

  • I've taken the liberty of using the OpenAIA2AServer and OpenAIA2AClient classes directly, as the implementation is based on the OpenAI SDK. The proposed files are, respectively, OllamaA2AServer and OllamaA2AClient. This PR does not support an Ollama API key, as that's how OpenAIA2AServer and OpenAIA2AClient detect whether they need to set up the OpenAI client. This proposal has pros and cons; feel free to request modifications if you disagree.
  • An Ollama-specific API call is defined in both implementations to retrieve the model list from the server (see the sketch after this list).
  • I've also integrated the Fix streaming task #51 PR, as it was mandatory for my tests to work.
  • The UI has been updated to support Ollama in the same way that OpenAI is set up (not under development). But I'm not comfortable enough with the UI to validate that everything is alright.
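
The model-list call could look something like the sketch below, which uses Ollama's native REST API (GET /api/tags lists locally available models). The helper name and shape here are hypothetical, not the PR's actual method:

```python
# Hypothetical sketch of a model-list helper against Ollama's native API.
# GET /api/tags returns the models pulled onto the local server.
import requests

def list_ollama_models(base_url: str = "http://localhost:11434") -> list[str]:
    resp = requests.get(f"{base_url}/api/tags", timeout=10)
    resp.raise_for_status()
    return [model["name"] for model in resp.json().get("models", [])]

print(list_ollama_models())  # e.g. ['llama3:latest', 'mistral:latest']
```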

Note: My linter made a lot of noise in the client/llm/openai.py and client/streaming.py files. Sorry for that, but I haven't been able to run your make format && make lint with the new poetry run command.

Thanks again!


x42en commented May 21, 2025

This might also /close #42 ... Wow, many people are looking for local models 😄

@themanojdesai themanojdesai merged commit 9f59f54 into themanojdesai:main May 22, 2025
@insanesac

Maybe provide an example of running models using Ollama?
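
A minimal sketch of what such an example could look like, assuming a local Ollama server with a model already pulled (e.g. via ollama pull llama3); it streams a chat completion through the OpenAI-compatible endpoint rather than through the PR's OllamaA2AClient, whose exact interface isn't shown in this thread:

```python
# Minimal streaming sketch against a local Ollama server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

stream = client.chat.completions.create(
    model="llama3",  # illustrative; use any locally pulled model
    messages=[{"role": "user", "content": "Explain A2A in one sentence."}],
    stream=True,  # yields incremental chunks instead of one response
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)
```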



Development

Successfully merging this pull request may close these issues.

Are there any plans to support open-source models?
