
Can I customize the API of the OpenAI embedding vector? #594

Open
Hwenyi opened this issue May 9, 2024 · 2 comments
Labels
enhancement New feature or request

Comments

@Hwenyi

Hwenyi commented May 9, 2024

On the one hand, it is extremely difficult for Chinese users to use the official OpenAI service. On the other hand, my computer is a thin-and-light notebook without a dedicated graphics card. Every time I open Obsidian and the local embedding model starts working, there are frequent freezes due to performance issues, especially when re-indexing or indexing a new vault. I have to wait a very long time, and during this period the freezing makes it inconvenient to do anything else on the computer.

If I could customize the OpenAI embedding API endpoint, users who cannot easily use the official service could move the embedding process off the local machine, which would be a big improvement.

Thank you for all your work on Smart Connections. I know this is not an urgent need, but I want to know whether it is currently possible to customize the embedding API. If not, is there a plan for it?

Thank you very much!

@brianpetro
Owner

Hi @Hwenyi

While I intend to implement a custom embedding configuration similar to that in the Smart Chat, I do not have a timeline.

If you're not using it already, Smart Connect improves local embedding speed and should prevent Obsidian from freezing during processing.

I hope that helps & thanks for the kind words 😊
🌴

@Xy2002

Xy2002 commented Sep 9, 2024

Hello @brianpetro
I strongly agree with what @Hwenyi mentioned. Due to certain limitations, some users are unable to use the API provided by OpenAI. Additionally, local models might not be viable due to performance or other reasons. In such cases, users are left with no option but to resort to alternative solutions, such as using Azure OpenAI or third-party services. Therefore, I kindly request the implementation of a custom embedding API feature to better accommodate these scenarios.

@christianngarcia christianngarcia added the enhancement New feature or request label Sep 14, 2024
4 participants