Ollama server resolves to host.docker.internal regardless of endpoint set in .env file #1877
Comments
Can you try this with the latest version?
Hi @sarahwooders, thanks! It seems the server is now communicating with the Ollama endpoint and I'm able to select my models. However, when attempting a chat I'm hitting a new error complaining about an unsupported authentication type (I have no auth on my Ollama server and haven't seen anything in the docs that references this). The error:
I'm getting the same error with the same Docker configuration:
letta_server-1 | Letta.letta.server.server - DEBUG - Got input messages: [Message(id='message-eb7b2183-e1d9-4e52-8bad-944bfbc0a271', role=<MessageRole.user: 'user'>, text='{\n "type": "user_message",\n "message": "Hello?",\n "time": "2024-10-20 06:44:17 PM UTC+0000"\n}', user_id='user-00000000', agent_id='agent-7ff38507-e691-4ed4-8a1a-7ebee17136ad', model=None, name='human', created_at=datetime.datetime(2024, 10, 20, 18, 44, 17, 526818, tzinfo=datetime.timezone.utc), tool_calls=None, tool_call_id=None)]
Same for me: Letta works and local models are available for selection, but chat messages yield an error in the log. Log output:
Experiencing the same issue, currently using container letta/letta:nightly (9a819e78d7b9).
For what it's worth, based on a suggestion from someone on the Discord server, this issue can be worked around by moving OLLAMA_BASE_URL from the .env file directly into the docker compose file (I hit a different issue with the UI after that). A sketch of the workaround is below.
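A minimal compose sketch of what that looks like, assuming the service is named letta_server; the service name, image tag, port, and LAN address are taken from this thread, not from the official compose file:

```yaml
services:
  letta_server:
    image: letta/letta:nightly
    ports:
      - "8283:8283"
    environment:
      # Set the Ollama URL here instead of in .env
      - OLLAMA_BASE_URL=http://192.168.xx.xx:11434
```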
The fix for the authentication type error should be in #2018 - we'll rebuild the image. If you're running Ollama locally, remember to use host.docker.internal instead of localhost in the base URL when running Letta in Docker, since localhost inside the container does not resolve to the host.
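To illustrate that advice, a hedged docker run sketch; the flags and image tag are assumptions for this setup, not copied from the Letta docs:

```sh
# Point the containerized Letta server at an Ollama instance on the Docker host.
# host.docker.internal resolves to the host on Docker Desktop; on Linux,
# --add-host=host.docker.internal:host-gateway provides the same alias.
docker run -d \
  --add-host=host.docker.internal:host-gateway \
  -p 8283:8283 \
  -e OLLAMA_BASE_URL="http://host.docker.internal:11434" \
  letta/letta:nightly
```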
This issue is stale because it has been open for 30 days with no activity. |
This issue was closed because it has been inactive for 14 days since being marked as stale. |
Describe the bug
I'm trying to run Letta in Docker, connected to an Ollama service running on the same host. I'm using an .env file with the following vars:
LETTA_LLM_ENDPOINT=http://192.168.xx.xx:11434
LETTA_LLM_ENDPOINT_TYPE=ollama
LETTA_LLM_MODEL=llama3.2:3b-instruct-q8_0
LETTA_LLM_CONTEXT_WINDOW=8192
LETTA_EMBEDDING_ENDPOINT=http://192.168.xx.xx:11434
LETTA_EMBEDDING_ENDPOINT_TYPE=ollama
LETTA_EMBEDDING_MODEL=mxbai-embed-large
LETTA_EMBEDDING_DIM=512
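(For context, the .env file is loaded into the container with Docker's --env-file flag; a minimal sketch, with the image tag and port assumed:)

```sh
# Load the LETTA_* variables above into the container's environment
docker run -d --env-file .env -p 8283:8283 letta/letta:nightly
```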
The server loads fine, and I configure an Agent and Persona via the web interface. Then, when attempting to start a chat with the agent, I receive the following error on the server:
For some reason it's trying to use host.docker.internal even though the URL is overridden in the .env file.
I also ran inspect on the running container and confirmed that the correct env settings have been applied.
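For reference, one way to verify this (a sketch, assuming the container is named letta_server):

```sh
# Print the container's environment and filter for the Letta overrides
docker inspect --format '{{range .Config.Env}}{{println .}}{{end}}' letta_server \
  | grep LETTA_
```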
Please describe your setup
If you're not using OpenAI, please provide additional information on your local LLM setup:
Local LLM details