Add integration for LocalAI #5256

Closed
mudler opened this issue May 25, 2023 · 7 comments
Labels: 03 enhancement

Comments

@mudler
Contributor

mudler commented May 25, 2023

Feature request

Integration with LocalAI and with its extended endpoints to download models from the gallery.

Motivation

LocalAI is a self-hosted OpenAI drop-in replacement with support for multiple model families: https://github.com/go-skynet/LocalAI
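
For context, a minimal sketch of what the requested integration would talk to, assuming a LocalAI instance on localhost:8080. The gallery endpoint name (`/models/apply`) is taken from LocalAI's docs of the time and the model names are illustrative; verify both against your LocalAI version:

```python
import requests

LOCALAI_BASE = "http://localhost:8080"  # assumption: a local LocalAI instance

# LocalAI exposes OpenAI-compatible routes, so a chat completion request
# looks exactly like a call against api.openai.com:
resp = requests.post(
    f"{LOCALAI_BASE}/v1/chat/completions",
    json={
        "model": "ggml-gpt4all-j",  # illustrative: whatever model LocalAI serves
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.json()["choices"][0]["message"]["content"])

# The "extended endpoint" mentioned above: asking LocalAI to pull a model
# from its gallery (endpoint per LocalAI's docs; verify for your version).
requests.post(
    f"{LOCALAI_BASE}/models/apply",
    json={"id": "model-gallery@bert-embeddings"},  # illustrative gallery id
)
```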

Your contribution

Not a Python guru, so it might take me a few cycles here.

@dev2049 dev2049 added the 03 enhancement and llms labels May 26, 2023
@pratham-saraf

Hey, I would like to take on this task and would be willing to contribute.
Can this be assigned to me?
Regards

baskaryan added a commit that referenced this issue Jul 24, 2023
Description:

This PR adds embeddings for LocalAI (https://github.com/go-skynet/LocalAI), a self-hosted OpenAI drop-in replacement. As LocalAI can re-use OpenAI clients, it mostly follows the lines of the OpenAI embeddings; however, when embedding documents it sends plain strings instead of tokens, since sending tokens is best-effort depending on the model being used in LocalAI. Sending tokens is also tricky, as token IDs can mismatch with the model, so it's safer to just send strings in this case.

Partly related to: #5256

Dependencies: No new dependencies

Twitter: @mudler_it
---------

Signed-off-by: mudler <mudler@localai.io>
Co-authored-by: Bagatur <baskaryan@gmail.com>
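
A minimal usage sketch of the class this commit adds, assuming it is exposed as `LocalAIEmbeddings` (newer LangChain versions import it from `langchain_community.embeddings` instead) and a LocalAI server on localhost:8080:

```python
# Sketch under the assumptions above; parameter names mirror the OpenAI
# embeddings wrapper this class is modeled on.
from langchain.embeddings import LocalAIEmbeddings

embeddings = LocalAIEmbeddings(
    openai_api_base="http://localhost:8080",  # LocalAI, not api.openai.com
    openai_api_key="not-needed",              # LocalAI accepts any key by default
    model="text-embedding-ada-002",           # illustrative: a model LocalAI serves
)

# Per the description above, documents go out as plain strings rather than
# pre-tokenized ids, since token ids can mismatch the local model.
doc_vectors = embeddings.embed_documents(["first document", "second document"])
query_vector = embeddings.embed_query("a search query")
print(len(doc_vectors), len(query_vector))
```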
@dosubot

dosubot bot commented Sep 11, 2023

Hi, @mudler. I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, you opened this issue as a feature request to add integration with LocalAI. You mentioned that you are not very experienced with Python and may need assistance with the implementation. Pratham-saraf has expressed interest in taking on the task and contributing to the project.

Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let the LangChain team know by commenting on the issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain project. If you have any further questions or need assistance, please let us know.

@dosubot dosubot bot added the stale label Sep 11, 2023
@dosubot dosubot bot closed this as not planned (won't fix, can't repro, duplicate, stale) Sep 18, 2023
@dosubot dosubot bot removed the stale label Sep 18, 2023
@l4b4r4b4b4

I would highly appreciate this issue being picked up again!

The issue with the current state of affairs is that AzureChatOpenAI and ChatOpenAI both send requests to /v1/openai/deployments/MODEL_NAME/chat/completions but LocalAI expects completion requests to hit /v1/chat/completions.

@benm5678

benm5678 commented Jan 3, 2024

> I would highly appreciate this issue being picked up again!
>
> The issue with the current state of affairs is that AzureChatOpenAI and ChatOpenAI both send requests to /v1/openai/deployments/MODEL_NAME/chat/completions but LocalAI expects completion requests to hit /v1/chat/completions.

We have the same issue... is there some workaround to point it at LocalAI w/o too many code changes?

@l4b4r4b4b4

> I would highly appreciate this issue being picked up again!
> The issue with the current state of affairs is that AzureChatOpenAI and ChatOpenAI both send requests to /v1/openai/deployments/MODEL_NAME/chat/completions but LocalAI expects completion requests to hit /v1/chat/completions.
>
> We have the same issue... is there some workaround to point it at LocalAI w/o too many code changes?

use another backend 😅

@mudler
Contributor Author

mudler commented Jan 5, 2024

> I would highly appreciate this issue being picked up again!
>
> The issue with the current state of affairs is that AzureChatOpenAI and ChatOpenAI both send requests to /v1/openai/deployments/MODEL_NAME/chat/completions but LocalAI expects completion requests to hit /v1/chat/completions.

I'm wondering what openai client version you are using? /v1/chat/completions is actually what is used currently: https://platform.openai.com/docs/api-reference/chat/create
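
For anyone hitting the path mismatch above, a sketch of the usual workaround, assuming plain `ChatOpenAI` is acceptable: it targets `/v1/chat/completions` directly, whereas `AzureChatOpenAI` builds the Azure-style `/openai/deployments/<model>/...` path that LocalAI does not serve. Import path and parameter names are from the `langchain-openai` package current in early 2024:

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="http://localhost:8080/v1",  # LocalAI's OpenAI-compatible root
    api_key="not-needed",                 # LocalAI accepts any key by default
    model="ggml-gpt4all-j",               # illustrative model name
)

# Requests now hit http://localhost:8080/v1/chat/completions, the route
# LocalAI expects, with no further code changes.
print(llm.invoke("Say hello from LocalAI").content)
```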

@dosubot dosubot bot reopened this Jan 5, 2024

dosubot bot commented Jan 5, 2024

Hi @baskaryan, could you please assist with this issue? The user has provided additional context regarding the discrepancy in request endpoints between AzureChatOpenAI, ChatOpenAI, and LocalAI. Thank you!

@baskaryan baskaryan removed the llms label Jan 26, 2024
@dosubot dosubot bot added the stale label Apr 26, 2024
@dosubot dosubot bot closed this as not planned (won't fix, can't repro, duplicate, stale) May 3, 2024
@dosubot dosubot bot removed the stale label May 3, 2024