
[Bug]: Running LibreChat against LiteLLM backed by Ollama #1026

Closed · 1 task done
mjtechguy opened this issue Oct 8, 2023 · 3 comments · Fixed by #1029
Labels
bug Something isn't working

Comments

@mjtechguy

Contact Details

mikejohnsonit@gmail.com

What happened?

This is a bit of an odd scenario: I have Ollama/codellama running behind the OpenAI-compatible frontend of LiteLLM.

I have confirmed that LiteLLM is responding correctly using Aider.

When pointing LibreChat at the local machine on port 8000 (the port LiteLLM is running on), or even at the local LAN IP, I get the following error from LibreChat in the docker compose logs (see logs section).

Any ideas?

Steps to Reproduce

  1. Install CodeLlama
  2. pip install litellm
  3. ollama pull codellama
  4. litellm --model ollama/codellama --api_base http://localhost:11434 --temperature 0.3 --max_tokens 2048
  5. Download LibreChat and update .env (the values were not included in the original report; see the sketch after this list)
  6. Try to ask a question using the default gpt-3.5-turbo model
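
A minimal .env sketch of the configuration step 5 refers to, assuming LibreChat's OPENAI_REVERSE_PROXY variable is what points the openAI endpoint at LiteLLM (the exact values were omitted from the report, so these are placeholders):

      # Hypothetical .env excerpt; values are assumptions, not from the report.
      # Point LibreChat's openAI endpoint at the LiteLLM proxy:
      OPENAI_API_KEY=sk-anything
      OPENAI_REVERSE_PROXY=http://localhost:8000

Note that a URL like this contains no v1 segment, which is what triggers the error below.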

What browsers are you seeing the problem on?

No response

Relevant log output

27.0.4 "GET /health HTTP/1.1" 200 22 "-" "undici" 0.001096
LibreChat         | 2023-10-08T02:37:44.260667211Z ask log
LibreChat         | 2023-10-08T02:37:44.260841003Z {
LibreChat         | 2023-10-08T02:37:44.260859794Z   text: 'hello',
LibreChat         | 2023-10-08T02:37:44.260864086Z   conversationId: null,
LibreChat         | 2023-10-08T02:37:44.260866878Z   endpointOption: {
LibreChat         | 2023-10-08T02:37:44.260869378Z     endpoint: 'openAI',
LibreChat         | 2023-10-08T02:37:44.260872003Z     chatGptLabel: null,
LibreChat         | 2023-10-08T02:37:44.260874253Z     promptPrefix: 'you are a helpful assistant',
LibreChat         | 2023-10-08T02:37:44.260877128Z     modelOptions: {
LibreChat         | 2023-10-08T02:37:44.260879794Z       model: 'gpt-3.5-turbo',
LibreChat         | 2023-10-08T02:37:44.260945878Z       temperature: 0,
LibreChat         | 2023-10-08T02:37:44.260953378Z       top_p: 1,
LibreChat         | 2023-10-08T02:37:44.260957586Z       presence_penalty: 0,
LibreChat         | 2023-10-08T02:37:44.260964044Z       frequency_penalty: 0
LibreChat         | 2023-10-08T02:37:44.260966628Z     }
LibreChat         | 2023-10-08T02:37:44.260969169Z   }
LibreChat         | 2023-10-08T02:37:44.260971836Z }
LibreChat         | 2023-10-08T02:37:44.265614378Z TypeError: Cannot read properties of null (reading '0')
LibreChat         | 2023-10-08T02:37:44.265647169Z     at OpenAIClient.setOptions (/app/api/app/clients/OpenAIClient.js:133:71)
LibreChat         | 2023-10-08T02:37:44.265654836Z     at new OpenAIClient (/app/api/app/clients/OpenAIClient.js:32:10)
LibreChat         | 2023-10-08T02:37:44.265657586Z     at initializeClient (/app/api/server/routes/endpoints/openAI/initializeClient.js:46:18)
LibreChat         | 2023-10-08T02:37:44.265660128Z     at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
LibreChat         | 2023-10-08T02:37:44.265662878Z     at async /app/api/server/routes/ask/openAI.js:95:24

Screenshots

(screenshot attached to the original issue; not reproduced here)

Code of Conduct

  • I agree to follow this project's Code of Conduct
mjtechguy added the bug label on Oct 8, 2023
@danny-avila
Owner

This is the offending line; .match(/.*v1/) returns null when the reverse proxy URL contains no v1 segment, so indexing it with [0] throws the TypeError shown above:

      this.langchainProxy = this.options.reverseProxyUrl.match(/.*v1/)[0];

Thanks for reporting this! I can fix it. I will also add a warning that your reverse proxy is incompatible with LangChain when it doesn't match OpenAI's v1 URL schema, as in this case.
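
A minimal sketch of that guard, assuming setOptions keeps the same match logic (an illustration only, not necessarily the change that landed in #1029):

      // Sketch only: String.prototype.match returns null when the reverse
      // proxy URL has no "v1" segment, so check before indexing with [0].
      const match = this.options.reverseProxyUrl.match(/.*v1/);
      if (match) {
        this.langchainProxy = match[0];
      } else {
        console.warn(
          'Reverse proxy URL does not match the OpenAI /v1 schema; LangChain-dependent features may not work.',
        );
      }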

@ishaan-jaff
Contributor

We've added /v1 endpoints to the LiteLLM server now

BerriAI/litellm@329d27d
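
With /v1 routes available, the reverse proxy can be pointed at a URL that satisfies the /.*v1/ match above (hypothetical value):

      # Hypothetical .env value once LiteLLM exposes /v1 routes:
      OPENAI_REVERSE_PROXY=http://localhost:8000/v1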

@danny-avila
Owner

> We've added /v1 endpoints to the LiteLLM server now
>
> BerriAI/litellm@329d27d

Awesome! Thank you for the update
