Add support for Ollama open source models #70
Conversation
Thanks for doing all the work to implement this! There are a few changes needed before merging it.
- Please refer to this PR where retries for failed Mint connections were added. Req tries to handle these, but it's currently not enough. That PR adds library-level retries: when we detect that the Mint connection pulled from the pool is already closed, we trigger a retry (a sketch of that pattern follows after this list).
- Having module-level docs and docs on important public functions is really helpful for other developers. For the module doc, assume the reader doesn't know what ChatOllamaAI is at all. A brief overview and links to other relevant external docs are helpful.
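For reference, here is a minimal sketch of that retry pattern. It assumes the request goes through `Req.post/2` and that a closed pooled connection surfaces as a `Mint.TransportError` with reason `:closed`; the module and function names are hypothetical and not the library's actual implementation.

```elixir
defmodule MyApp.OllamaRequest do
  @max_retries 3

  # Hypothetical helper: posts the chat payload and retries when the pooled
  # Mint connection turns out to be already closed.
  def post_with_retry(url, payload, retry_count \\ 0)

  def post_with_retry(_url, _payload, retry_count) when retry_count >= @max_retries do
    {:error, "Connection closed; retries exhausted"}
  end

  def post_with_retry(url, payload, retry_count) do
    case Req.post(url, json: payload) do
      {:ok, %Req.Response{} = response} ->
        {:ok, response}

      # The pooled connection was closed underneath us; grab a fresh one and retry.
      {:error, %Mint.TransportError{reason: :closed}} ->
        post_with_retry(url, payload, retry_count + 1)

      {:error, reason} ->
        {:error, reason}
    end
  end
end
```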
lib/chat_models/ollama_ai_fields.ex
Outdated
quote do
  @receive_timeout 60_000 * 5

  field :endpoint, :string, default: "http://localhost:11434/api/chat"
Is there any support for overriding this endpoint? This default value should be explained in ChatOllamaAI's module doc, along with information or an example of how to override it.
Will make a note in the module docs on this. The app within the image exposes port 11434, but there is a world where a client may want to deploy the image with a different port mapping, thus requiring an override.
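A hedged sketch of what such an override could look like, assuming the struct is built through the library's usual `new/1` constructor (the port `22222` is just an illustrative value, not anything from the PR):

```elixir
alias LangChain.ChatModels.ChatOllamaAI

# Assumed constructor-based override: point the client at a non-default
# port mapping instead of the default http://localhost:11434/api/chat.
{:ok, _chat_model} =
  ChatOllamaAI.new(%{
    model: "llama2",
    endpoint: "http://localhost:22222/api/chat"
  })
```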
lib/chat_models/ollama_ai_fields.ex
Outdated
defmodule LangChain.ChatModels.OllamaAIFields do
  defmacro fields do
    quote do
      @receive_timeout 60_000 * 5
Is this really a 5-minute timeout? Why so long? Is it really that slow to run these models locally? We might want the ability to override the timeout in that case.
No clue why. The only information I found about timeouts was here: ollama/ollama#1257
The client should be able to override it because it is a valid field in the struct.
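A short sketch of overriding the timeout at construction time, again assuming the `new/1`-style constructor and that the field is exposed as `receive_timeout` in milliseconds (both assumptions based on the diff and discussion above):

```elixir
alias LangChain.ChatModels.ChatOllamaAI

# Assumed field name `receive_timeout` (milliseconds); drop the default
# 5-minute wait down to 60 seconds for a fast local model.
{:ok, _chat_model} =
  ChatOllamaAI.new(%{
    model: "llama2",
    receive_timeout: 60_000
  })
```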
lib/chat_models/ollama_ai_fields.ex
Outdated
# Enable Mirostat sampling for controlling perplexity.
# (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0)
field :mirostat, :integer, default: 0
For all these fields with special docs, it would be helpful to include links to the docs where these come from.
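For illustration, a hedged example of setting one of these sampling fields when building the model, assuming they are plain struct fields accepted by the constructor as the diff suggests:

```elixir
alias LangChain.ChatModels.ChatOllamaAI

# Assumed usage: enable Mirostat 2.0 sampling via the struct field shown
# in the diff (0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0).
{:ok, _chat_model} =
  ChatOllamaAI.new(%{
    model: "llama2",
    mirostat: 2
  })
```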
@brainlid Addressed those comments. Let me know if I'm missing anything else.
Minor comment tweak suggested. Looks good!
Thanks for the work and your patience! ❤️💛💙💜
The Elixir LangChain Library now supports Ollama Chat with this [PR](brainlid/langchain#70)
Adds support for open source models with Ollama, which allows LLMs to be executed locally. Per the current version of the API, it doesn't seem like function calling is supported yet. See this issue to track the possible status of function calling support.
This PR should close #67
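As a rough illustration of the feature being added, here is a hedged end-to-end sketch of chatting with a locally running Ollama model through the library's chain API. The `LLMChain` and `Message` calls mirror how the other chat models are used; the exact ChatOllamaAI option names are assumptions based on the diffs above, and an Ollama server is assumed to be running locally with the "llama2" model already pulled.

```elixir
alias LangChain.Chains.LLMChain
alias LangChain.ChatModels.ChatOllamaAI
alias LangChain.Message

# Build a chat model pointed at the locally running Ollama server.
{:ok, chat_model} =
  ChatOllamaAI.new(%{
    model: "llama2",
    endpoint: "http://localhost:11434/api/chat"
  })

# Run a single user message through a chain and print the reply.
{:ok, _updated_chain, response} =
  LLMChain.new!(%{llm: chat_model})
  |> LLMChain.add_message(Message.new_user!("Why is the sky blue?"))
  |> LLMChain.run()

IO.puts(response.content)
```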