
feat: convert ollama provider to an openai configuration #34

Merged (6 commits) Sep 9, 2024

Conversation

@codefromthecrypt (Contributor) commented on Sep 7, 2024:

This converts the ollama provider to an openai configuration, which reduces the amount of code we need to maintain.

Later we may choose to resurrect native ollama, but early on I think it is better to focus, since ollama supports the OpenAI API.

Specifically, this uses the "mistral-nemo" model until we find something better. Currently, this configuration does not support vision, so screenshots won't work.
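For reference, pointing a stock OpenAI client at Ollama looks roughly like this (a minimal sketch using the openai Python package, not the provider code in this PR; Ollama serves its OpenAI-compatible API at /v1 and ignores the API key, which just needs to be non-empty):

```python
from openai import OpenAI

# Ollama exposes an OpenAI-compatible API at /v1; the key is unused but required.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

response = client.chat.completions.create(
    model="mistral-nemo",
    messages=[{"role": "user", "content": "Say hello"}],
)
print(response.choices[0].message.content)
```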

After this is in, we can revisit #23 to set up CI for this provider.

@codefromthecrypt (Contributor, Author):
Tests work with ollama and mistral-nemo, but the vision test isn't passing with llava:7b. Trying something else for that.
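For context, the vision test exercises OpenAI-style image content parts, roughly this message shape (a sketch only; the screenshot path and prompt are illustrative, not taken from the test suite):

```python
import base64

# OpenAI-style multimodal user message: text plus a base64 data-URL image.
# ("screenshot.png" is an illustrative path, not from the actual test.)
with open("screenshot.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode()

vision_message = {
    "role": "user",
    "content": [
        {"type": "text", "text": "What is in this screenshot?"},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
    ],
}
```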

@michaelneale (Collaborator):
hrm - may need to re-arrange things so that CI tests are run before ruff?

[Review thread on CONTRIBUTING.md - outdated, resolved]
@codefromthecrypt (Contributor, Author):
OK, this is merged with #23, though llama3.1:8b-instruct-q4_0 isn't happy with the password question.

Basically, most of the time it will call the function, as in the output below, but it won't return the value.

Request method: POST
Request URL: http://localhost:11434/v1/chat/completions
Request headers: Headers({'host': 'localhost:11434', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'connection': 'keep-alive', 'user-agent': 'python-httpx/0.27.2', 'content-length': '362', 'content-type': 'application/json', 'authorization': '[secure]'})
Request content: b'{"messages": [{"role": "system", "content": "You are a helpful assistant. Expect to need to authenticate using get_password."}], "model": "llama3.1:8b-instruct-q4_0", "tools": [{"type": "function", "function": {"name": "get_password", "description": "Return the password for authentication", "parameters": {"type": "object", "properties": {}, "required": []}}}]}'
Response content:
{"id":"chatcmpl-609","object":"chat.completion","created":1725767747,"model":"llama3.1:8b-instruct-q4_0","system_fingerprint":"fp_ollama","choices":[{"index":0,"message":{"role":"assistant","content":""},"finish_reason":"stop"}],"usage":{"prompt_tokens":63,"completion_tokens":1,"total_tokens":64}}

Request method: POST
Request URL: http://localhost:11434/v1/chat/completions
Request headers: Headers({'host': 'localhost:11434', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'connection': 'keep-alive', 'user-agent': 'python-httpx/0.27.2', 'content-length': '460', 'content-type': 'application/json', 'authorization': '[secure]'})
Request content: b'{"messages": [{"role": "system", "content": "You are a helpful assistant. Expect to need to authenticate using get_password."}, {"role": "user", "content": "Can you authenticate this session by responding with the password"}], "model": "llama3.1:8b-instruct-q4_0", "tools": [{"type": "function", "function": {"name": "get_password", "description": "Return the password for authentication", "parameters": {"type": "object", "properties": {}, "required": []}}}]}'
Response content:
{"id":"chatcmpl-702","object":"chat.completion","created":1725767748,"model":"llama3.1:8b-instruct-q4_0","system_fingerprint":"fp_ollama","choices":[{"index":0,"message":{"role":"assistant","content":"","tool_calls":[{"id":"call_7rap96k0","type":"function","function":{"name":"get_password","arguments":"{}"}}]},"finish_reason":"tool_calls"}],"usage":{"prompt_tokens":167,"completion_tokens":13,"total_tokens":180}}

Request method: POST
Request URL: http://localhost:11434/v1/chat/completions
Request headers: Headers({'host': 'localhost:11434', 'accept': '*/*', 'accept-encoding': 'gzip, deflate', 'connection': 'keep-alive', 'user-agent': 'python-httpx/0.27.2', 'content-length': '677', 'content-type': 'application/json', 'authorization': '[secure]'})
Request content: b'{"messages": [{"role": "system", "content": "You are a helpful assistant. Expect to need to authenticate using get_password."}, {"role": "user", "content": "Can you authenticate this session by responding with the password"}, {"role": "assistant", "tool_calls": [{"id": "call_7rap96k0", "type": "function", "function": {"name": "get_password", "arguments": "{}"}}]}, {"role": "tool", "content": "\\"mellon\\"", "tool_call_id": "call_7rap96k0"}], "model": "llama3.1:8b-instruct-q4_0", "tools": [{"type": "function", "function": {"name": "get_password", "description": "Return the password for authentication", "parameters": {"type": "object", "properties": {}, "required": []}}}]}'
Response content:
{"id":"chatcmpl-98","object":"chat.completion","created":1725767749,"model":"llama3.1:8b-instruct-q4_0","system_fingerprint":"fp_ollama","choices":[{"index":0,"message":{"role":"assistant","content":"Now that we have authenticated, what would you like to do?"},"finish_reason":"stop"}],"usage":{"prompt_tokens":108,"completion_tokens":14,"total_tokens":122}}

mistral-nemo responds as expected. So we still need to figure out what to do about vision.
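For anyone reproducing this, the round trip in the trace boils down to the following (a sketch under the same setup as above: the openai package against Ollama's OpenAI-compatible endpoint; error handling omitted):

```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")
MODEL = "llama3.1:8b-instruct-q4_0"

tools = [{"type": "function", "function": {
    "name": "get_password",
    "description": "Return the password for authentication",
    "parameters": {"type": "object", "properties": {}, "required": []},
}}]

messages = [
    {"role": "system", "content": "You are a helpful assistant. Expect to need to authenticate using get_password."},
    {"role": "user", "content": "Can you authenticate this session by responding with the password"},
]

# Turn 1: the model should answer with a tool call (finish_reason == "tool_calls").
reply = client.chat.completions.create(model=MODEL, messages=messages, tools=tools).choices[0].message

# Turn 2: feed the tool result back; a well-behaved model then echoes the password.
messages.append({"role": "assistant", "tool_calls": [tc.model_dump() for tc in reply.tool_calls]})
messages.append({"role": "tool", "content": '"mellon"', "tool_call_id": reply.tool_calls[0].id})
final = client.chat.completions.create(model=MODEL, messages=messages, tools=tools)

# llama3.1:8b-instruct-q4_0 stops here without revealing "mellon"; mistral-nemo returns it.
print(final.choices[0].message.content)
```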

@codefromthecrypt changed the title from "feat: document running openai integration tests with ollama" to "feat: convert ollama provider to an openai configuration" on Sep 8, 2024.
@michaelneale (Collaborator):
I have checked this with anthropic, ollama and openai, and it seems to make things work nicely across those.

@codefromthecrypt (Contributor, Author):
Using mistral-nemo is the only choice until #39, when we can start playing with llama-based models like llama3-groq-tool-use.

@baxen (Collaborator) left a comment:

Looks great! This is also how we manage the databricks provider, through their openai chat compatibility. Works well for me locally too.

@baxen merged commit 251efe9 into square:main on Sep 9, 2024 (4 checks passed).