Implement tool calling for Ollama and include examples #441
base: main
Conversation
@CodiumAI-Agent /update_changelog

Changelog updates: 🔄 2025-10-25
Added
Tested
Force-pushed from 2e4d5e7 to e0b5aca
Hi @kalrasamridhi7,

Changelog updates: 🔄 Added

Hi @stellasia, thanks for getting back. Awaiting review!
stellasia left a comment
Very nice work, thank you for taking the time to contribute! I left a few comments; the use of `options` to set the model temperature is the most important one.
async def main() -> None:
    # Initialize the Ollama LLM
    llm = OllamaLLM(
        # model_name="gpt-4o",
Nitpicking, we can remove the commented line.
llm = OllamaLLM(
    # model_name="gpt-4o",
    model_name="mistral:latest",
    model_params={"temperature": 0},
The proper way to set temperature for OllamaLLM now is:
`model_params={"options": {"temperature": 0}}`
(in order to match the Ollama client parameters)
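For illustration, a minimal sketch of the corrected initialization (assuming OllamaLLM is importable from neo4j_graphrag.llm as in the library's other examples; the model name is taken from the diff above):

```python
from neo4j_graphrag.llm import OllamaLLM

# Temperature goes under "options" so that model_params mirrors the
# parameters accepted by the underlying Ollama client.
llm = OllamaLLM(
    model_name="mistral:latest",
    model_params={"options": {"temperature": 0}},
)
```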
    test_tool: Tool,
) -> None:
    # Set up json.loads to return a dictionary
    mock_json_loads.return_value = {"param1": "value1"}
Can you remind me why we need to mock json.loads if we are also mocking function return arguments (which I guess should be a string btw)?
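For illustration only, a sketch of the alternative being hinted at: if the mocked Ollama response carries the tool call arguments as a JSON string (as the reviewer suggests they should be), the real json.loads can parse them and does not need to be patched. The mock structure below is an assumption, not taken from this PR's test:

```python
import json
from unittest.mock import MagicMock

# Hypothetical mocked tool call from the Ollama chat response: arguments are
# a JSON string rather than a pre-built dict.
mock_tool_call = MagicMock()
mock_tool_call.function.name = "test_tool"
mock_tool_call.function.arguments = '{"param1": "value1"}'

# The real json.loads handles the string, so no json.loads mock is required.
assert json.loads(mock_tool_call.function.arguments) == {"param1": "value1"}
```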
- Added automatic rate limiting with retry logic and exponential backoff for all Embedding providers using tenacity. The `RateLimitHandler` interface allows for custom rate limiting strategies, including the ability to disable rate limiting entirely.
- JSON response returned to `SchemaFromTextExtractor` is cleansed of any markdown code blocks before being loaded.
- Tool calling support for Ollama in LLMInterface.
Suggested change:
"- Tool calling support for Ollama in LLMInterface." → "- Tool calling support for OllamaLLM."
Description
This PR implements the tool calling functionality of LLMInterface for Ollama and adds an example demonstrating its usage.
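For context, a rough usage sketch of what tool calling through OllamaLLM might look like. It assumes the invoke_with_tools method of LLMInterface and the Tool abstraction referenced in this PR's tests; the import paths, Tool constructor arguments, and response attributes below are assumptions for illustration, not taken verbatim from the diff:

```python
from neo4j_graphrag.llm import OllamaLLM
from neo4j_graphrag.tool import ObjectParameter, StringParameter, Tool

# Hypothetical tool definition; the exact constructor arguments are assumed.
weather_tool = Tool(
    name="get_weather",
    description="Get the current weather for a city.",
    parameters=ObjectParameter(
        description="Weather query parameters",
        properties={"city": StringParameter(description="City name")},
    ),
    execute_func=lambda city: f"Sunny in {city}",
)

llm = OllamaLLM(
    model_name="mistral:latest",
    model_params={"options": {"temperature": 0}},
)

# invoke_with_tools should return the tool call(s) the model decided to make.
response = llm.invoke_with_tools("What is the weather in Paris?", tools=[weather_tool])
for tool_call in response.tool_calls:
    print(tool_call.name, tool_call.arguments)
```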
Type of Change
Complexity
Complexity: Medium
How Has This Been Tested?
Checklist
The following requirements should have been met (depending on the changes in the branch):