[Enhancement]: Can integrate locally deployed LLM #382
Comments
After implementing the project scope and a few other project-scope-related features (in one or two more versions), I will experiment with allowing third-party extensions to provide the suggestion and chat features. If it goes well, I think this can be implemented as an extension. If it doesn't go well, I will consider allowing users to use a completion model as the backend of the suggestion feature. But running an LLM inside the app will be out of scope. I just checked the link you posted; it looks like it's using the GitHub Copilot proxy to redirect requests to other LLMs. Copilot for Xcode supports a GitHub Copilot proxy (in the GitHub Copilot settings, not really tested), so you can give it a try and let me know whether it works.
Hi, regarding this issue, I've wanted to use a local LLM through Ollama and the litellm proxy (which takes the Ollama API and exposes it as an OpenAI-compatible API) for the Chat and Prompt to Code features of the plugin. I've sort of managed to get it working, but the Chat feature doesn't work quite right: the answer tokens streamed from the local litellm proxy show up in the CopilotForXcode chat box as separate single messages, making the output unreadable and unusable. I don't know whether it's a proxy problem, an Ollama problem, or the way the plugin makes requests to the proxy (using the OpenAI servers works as expected). I wanted to know if it's possible to make this work, or if it would be an interesting feature to include/implement. Here's an example video demonstrating the issue: Registrazione.schermo.2023-11-17.alle.10.48.42.mov
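For anyone trying to reproduce this setup, here is a minimal sketch of how the streaming endpoint of such a proxy can be inspected from Swift. The base URL, port (4000), model name (`ollama/llama2`), and API key are assumptions about a typical local litellm + Ollama configuration and will need to be adjusted; this is a debugging aid, not part of the plugin.

```swift
import Foundation

// Minimal sketch for inspecting what a local OpenAI-compatible proxy
// (e.g. litellm in front of Ollama) actually streams back.
// The URL, port, model name, and API key below are assumptions about a
// typical local setup; adjust them to match yours. Requires macOS 12+.
let url = URL(string: "http://localhost:4000/v1/chat/completions")!
var request = URLRequest(url: url)
request.httpMethod = "POST"
request.setValue("application/json", forHTTPHeaderField: "Content-Type")
request.setValue("Bearer sk-anything", forHTTPHeaderField: "Authorization")

let body: [String: Any] = [
    "model": "ollama/llama2",   // whatever model name the proxy exposes
    "stream": true,
    "messages": [["role": "user", "content": "Say hello"]],
]
request.httpBody = try! JSONSerialization.data(withJSONObject: body)

let done = DispatchSemaphore(value: 0)
Task.detached {
    defer { done.signal() }
    do {
        // The response is a server-sent event stream: one "data: {json}" line
        // per chunk. Check whether every chunk carries the same "id".
        let (bytes, _) = try await URLSession.shared.bytes(for: request)
        for try await line in bytes.lines where line.hasPrefix("data:") {
            print(line)
        }
    } catch {
        print("request failed:", error)
    }
}
done.wait()
```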
I think it happens because the service is returning each data chunk with a different ID. I will see what I can do about it in the hotfix releasing later today.
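To illustrate the failure mode described above: OpenAI-style streaming responses normally reuse one id across all chunks of a reply, and a client that groups chunks by that id renders them as a single growing message. The types and grouping logic below are a hypothetical sketch of that behaviour, not Copilot for Xcode's actual code.

```swift
// A simplified model of an OpenAI-style streaming chat chunk.
struct ChatChunk {
    let id: String            // chat completion id, normally identical for all chunks of one reply
    let deltaContent: String  // the piece of text carried by this chunk
}

// If a client groups chunks strictly by `id`, a proxy that assigns a fresh id
// to every chunk produces one "message" per chunk instead of one growing reply.
final class MessageAggregator {
    private(set) var messages: [String: String] = [:]  // id -> accumulated text

    func receive(_ chunk: ChatChunk) {
        messages[chunk.id, default: ""] += chunk.deltaContent
    }
}

// The same reply streamed with a stable id vs. a fresh id per chunk.
let stableID = [
    ChatChunk(id: "chatcmpl-1", deltaContent: "Hello"),
    ChatChunk(id: "chatcmpl-1", deltaContent: ", world"),
]
let freshIDs = [
    ChatChunk(id: "chatcmpl-1", deltaContent: "Hello"),
    ChatChunk(id: "chatcmpl-2", deltaContent: ", world"),
]

let a = MessageAggregator()
stableID.forEach(a.receive)
print(a.messages)   // ["chatcmpl-1": "Hello, world"] -> rendered as one message

let b = MessageAggregator()
freshIDs.forEach(b.receive)
print(b.messages)   // two separate fragments -> rendered as separate chat bubbles
```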
Thanks for the quick response! I've also discovered that with the Prompt to Code feature the messages actually behave as expected, which is something to consider. Registrazione.schermo.2023-11-17.alle.11.01.43.mp4
@mirko-milovanovic-vidiemme Can you post the response body here?
This is what I managed to log for the Prompt to Code request that seems to work (though the log is pretty ugly and hard to read).
@mirko-milovanovic-vidiemme please give 0.27.1 a try.
Yup, it seems to be working as expected now! Thanks for the quick fix.
This issue is stale because it has been open for 30 days with no activity. |
Now that CopilotForXcodeKit is available, as mentioned before, I will make an extension that provides the suggestion service via local models.
The suggestion feature with a locally running model is released.
Before Requesting
What feature do you want?
Can integrate locally deployed LLM
Something like this:
https://github.com/danielgross/localpilot